A family feud over a southern Oregon winery has become a cautionary tale about the dangers of artificial intelligence in legal practice, resulting in what’s believed to be the largest sanctions ever imposed for AI-related misconduct.
Inheritance Battle
The inheritance dispute centered on Valley View Winery in Jacksonville, Ore., where four siblings clashed over control of the family business following their mother’s death. Brothers Mark and Michael Wisnovsky, who had operated the winery for decades, found themselves facing a $12 million lawsuit from their sister, Joanne Couvrette, who claimed their mother intended for the land and business to go to her and their older brother, Robert.
According to the brothers’ attorney, Kendra Gustitus, the mother changed her estate plan in 2016 to ensure her sons could continue running the winery. Rather than splitting the land equally among the four children, the revised plan allowed the two brothers operating the winery to buy out their siblings’ interests in the property.
In 2019, Joanne and her mother executed a new estate plan that would give the winery to her and Robert. In 2021, Joanne filed the aforementioned lawsuit, accusing her brothers of having manipulated their mother into the earlier estate agreements. The brothers countersued the same year. Their mother, who the brothers said was showing signs of dementia, died in 2023.
The AI Citation Scandal
The textbook case of an estate probate turned family feud took a dramatic turn when the brothers’ legal team discovered that court filings submitted by Joanne’s attorneys contained fabricated case law and incorrect quotations attributed to real court opinions. What at first glance might have passed for a small slip-up (perhaps one of the attorneys enjoyed a glass or two of the fine wine before writing the briefs) turned out to be more than a dozen fake citations, which kept appearing even after the brothers’ team flagged the earlier ones.
Over a five-month period, across three separate briefs on cross-motions for summary judgment, the plaintiffs’ legal team submitted documents containing 15 AI-generated fake case citations and eight fabricated quotations. Some filings included completely non-existent cases, while others contained incorrect quotes falsely attributed to legitimate court opinions.
The kicker, based on what the judge referred to as “persuasive” evidence: not only was Joanne’s legal team unapologetic about the false information, but it may have been Joanne herself who drafted the briefs using AI software and then handed them to her attorneys.
The Sanctions
Unamused, Judge Clarke dismissed Joanne’s claims with prejudice and imposed over $100,000 in fines and attorney fees against the two lawyers involved in the case.
The six-figure penalty is potentially the largest ever imposed for AI-related errors, though that can’t be said with certainty because penalty amounts aren’t always disclosed.
“Judges are sending a clear message, and I think it comes down to the sanctity of the profession. Attorneys are officers of the court. We are held to a standard of candor and competence that the public and the judiciary rely on. The size of this sanction, believed to be the highest ever imposed for AI hallucinations in U.S. legal history, reflects how seriously courts are taking this,” said Ashlee Difuntorum, attorney at Kinsella Holley Iser Kump Steinsapir LLP.
Joanne is expected to appeal the decision.
Broader Implications for the Legal Profession
The case highlights a growing issue: more legal professionals are relying on AI tools in their work. A database tracking judicial reprimands for the misuse of AI in court filings found that there have been over 1,300 cases involving what’s been dubbed “AI hallucinations,” such as fake citations or arguments.
Lawyers are ultimately responsible for verifying the accuracy of any filings they submit to the court, regardless of whether the information comes from AI tools, paralegals, or other sources.
While most AI hallucination sanctions have amounted to a slap on the wrist, the judge presiding over this case decided to set an example, punishing the attorneys’ lack of candor and their failure to promptly acknowledge and address the problem. The ruling also sends a message about the dangers of relying on AI-generated legal research without verifying it through conventional research methods.
We shouldn’t shun the benefits of adding AI to the practice, however. “Used correctly, AI can be a powerful and helpful tool for legal research,” said Difuntorum. “Yes, the risks are real. AI can hallucinate citations, misstate holdings or generate cases that simply don’t exist. But that’s why it must be used thoughtfully, not a reason to avoid it altogether,” she explained.
According to Difuntorum, much of the talk around AI in law forgets that the underlying issue isn’t new. Attorneys have long relied on sample briefs, form documents and templates, and a competent attorney knows to verify every citation: checking that the case is still good law, that it actually supports the proposition being argued, and that nothing else in the opinion hurts the client’s position. The same discipline applies to AI-generated research.
“This case is a reminder of what happens when that discipline breaks down, not an argument against AI itself,” she added.


