Key Points:
- A Supreme Court advocate used generative AI to recreate and improve his own oral argument performance.
- The experiment showed AI can generate persuasive and creative responses, including relevant case law not in the briefs.
- Experts warn that fluency and confidence do not equal legal accuracy or judgment, and misuse carries malpractice risk.
For decades, popular culture imagined machines taking over the courtroom. Now, generative AI has moved from fiction into serious discussion among appellate lawyers. In a recent experiment posted on his Substack newsletter, Supreme Court advocate Adam Unikowski tested whether a large language model could deliver stronger oral argument answers than he had in a real case. The experiment was also discussed in video coverage on YouTube.
Unikowski, a partner at Jenner & Block who has argued before the Supreme Court 13 times, described a familiar frustration: replaying oral arguments and dwelling on answers he wished he had delivered differently. Curious whether artificial intelligence could improve on his performance, he uploaded briefs from a Supreme Court case he argued into a large language model and prompted it with the same questions posed by the justices.
He instructed the system to respond as a seasoned Supreme Court advocate. He then created an AI-generated voice clone of himself to deliver the responses. The results, he said, were striking. The AI produced calm, direct answers and, in at least one instance, cited a relevant Supreme Court precedent that had not been referenced in the briefs. In a particularly challenging exchange over a statute-of-limitations question, the AI's response was crisp and grounded in authority.
The experiment drew attention, including from members of the Court. Unikowski concluded that in some respects the model performed better than he had. But outside observers urged caution. Cornell Tech and Cornell Law School professor James Grimmelmann noted that persuasive delivery is not a proxy for correctness. A convincing answer can still rest on faulty reasoning or incomplete understanding.
The distinction matters in appellate advocacy, where accuracy, context and legal judgment are essential. Large language models generate outputs based on patterns in training data. Sometimes those patterns align with reality. At other times, they produce hallucinations or confidently stated errors. Courts have already sanctioned lawyers who submitted AI-generated briefs containing fabricated case citations.
Grimmelmann warned that in complex areas such as commercial law, an AI could generate arguments that sound plausible yet misread statutory provisions, potentially persuading a generalist judge unfamiliar with the field. The risk lies not in tone but in substance. Lawyers using AI must treat its outputs with skepticism and independently verify citations and reasoning.
Unikowski acknowledged those risks but argued that generative AI can serve as a creative partner. In one test, he asked the model to connect the Twenty-First Amendment to a civil rights case where it had no apparent relevance. After pushing the system to complete the thought experiment, it produced a novel, if imperfect, argument. The value, he suggested, lies in the speed and range of ideas that AI can surface.
Current Supreme Court rules do not permit an AI to advocate before the justices. Few expect that to change soon. For now, generative AI is more likely to function as an internal tool rather than a courtroom representative.
For corporate legal departments and law firms, the lesson is not that machines will replace advocates. It is that AI can augment preparation, stress-test arguments and surface alternative reasoning paths. Used carefully, it may sharpen advocacy. Used carelessly, it can introduce risk.
As one commentator in the discussion put it, generative AI resembles a powerful chainsaw. In skilled hands, it can clear dense underbrush quickly. Without discipline, it can cause serious damage. In appellate practice, precision remains the difference between innovation and error.