Key points:
- Court filings in Texas and Pennsylvania cited fictitious cases likely generated by AI tools.
- Judges demanded proof of the cited cases or sanctioned the attorneys who submitted them.
- Legal experts warn that improper use of AI tools is eroding trust in technology-assisted legal research.
Courts in Texas and Pennsylvania are confronting a recurring legal ethics issue: the submission of fictitious case citations, apparently generated by artificial intelligence tools. In two recent cases, attorneys cited nonexistent precedents in their briefs, drawing judicial rebukes and raising broader concerns over the role of generative AI in legal practice.
In Texas, Justice Nancy Kennedy of the Dallas Court of Appeals ordered an attorney to prove the existence of four cited cases that neither the court nor opposing counsel could locate. The questionable citations appeared in a May 2024 brief filed in a contract dispute, Lauren Rochon-Eidsvig and Heidi Rochon Hafer v. JGB Collateral, LLC, case number 5-24-00123-CV. The court gave counsel 10 days to submit the cases for verification, as reported by Texas Lawyer.
Meanwhile, in a Pennsylvania federal court, attorney Nicholas L. Palazzo of Defino Law Associates was sanctioned after submitting briefs containing erroneous and apparently fabricated citations in a product liability case, Patricia Bevins v. Colgate-Palmolive Co. and BJ’s Wholesale Club, case number 2:25-cv-00576. The court determined that the referenced cases either did not exist or contained significant factual errors.
Legal professionals increasingly point to AI-generated “hallucinations,” fabricated content produced by platforms like ChatGPT, as the source of such missteps. “Attorneys are referencing cases that don’t exist and typically what’s happening is that these individuals are using AI platforms that really aren’t meant for legal research,” said Frank Ramos, a partner at Goldberg Segalla, who posted about both incidents on LinkedIn.
Ramos stressed the need for traditional verification methods: “Even if you’re using LEXIS or Westlaw, there was a study last summer that showed they get it wrong about 30% of the time.” He added, “Most people are treating AI as the last step in legal research and really it should be step one or two of 10 steps.”
This is not the first time courts have encountered AI-fabricated citations. A 2023 New York case, Mata v. Avianca, became the first widely known federal proceeding in which fake case law sourced from ChatGPT was submitted in a legal filing. Since then, concerns about AI’s reliability in legal drafting have only grown.
Doug Gladden, an attorney with the Harris County Public Defender’s Office in Houston, posted Justice Kennedy’s ruling to the social media platform X, further fueling discussion. He declined to comment for this article.
Ramos believes stronger disciplinary measures may be needed to deter future misuse. “The fake cases being submitted into court will continue until the sanctions go beyond fines and include suspensions or other actions by State Bar associations,” he said.
While AI has the potential to expand access to justice, Ramos warned that its misuse undermines trust. “Each time this happens, it sets back the adoption of AI in law,” he said. “Used correctly, AI can be powerful—but it must be verified at every step.”