“We’re Talking About Practice”-ing Law with A.I.: A Cautionary Tale
No, we’re not talking about legendary Philadelphia 76ers star Allen Iverson. We’re talking about Artificial Intelligence and its developing role in the practice of law. Let’s walk through (a touch more polite than a “step over,” if you catch my drift) a recent cautionary tale regarding the use of A.I. by seasoned attorneys.
New York attorney Steven A. Schwartz has been practicing law for over thirty (30) years. He was retained by Roberto Mata to pursue claims against Avianca airline for injuries allegedly sustained on one of its flights in 2019. When counsel for Avianca filed a motion to dismiss, Schwartz filed a brief in opposition citing over a dozen cases to support his client’s argument. There was only one problem: Avianca’s counsel could not locate many of the cases Schwartz cited. When asked by the court to produce the cases, Schwartz provided an incomplete “compendium” of case law. When asked to explain why he had not included every case cited, Schwartz was compelled to acknowledge not only that he could not produce those cases, but that the cases did not exist.
Schwartz revealed to the court that he had relied on ChatGPT, an artificial intelligence (“AI”) chatbot designed to respond to user inquiries, to prepare his brief. When Schwartz asked the platform to produce case law supporting his client’s position, ChatGPT provided six cases that fit, including “Martinez v. Delta Air Lines” and “Zicherman v. Korean Air Lines.” However, Schwartz and his co-counsel did not confirm the veracity of the cases and, sure enough, ChatGPT had created them in response to Schwartz’s question.
OpenAI, the creator of ChatGPT, warns users on its website that the chatbot has limitations, including “writ[ing] plausible sounding but incorrect or nonsensical answers.”[1] OpenAI advises that the ChatGPT model is simply the latest step in its “iterative deployment” of AI systems, which it will learn from as it crafts the next wave of AI technology. Thus, ChatGPT’s convincing and reassuring responses must be reviewed with skepticism, because not everything it generates is true.
Using AI, like any other new technology, in the practice of law implicates many Model Rules of Professional Conduct (“MR”), including competence (MR 1.1), diligence (MR 1.3), and confidentiality (MR 1.6). This is not to say that ChatGPT and other AI technology will not have their place in the practice of law, but rather that attorneys need to proceed with extreme caution when using those programs.
Turning back to Mr. Schwartz: at a June 8, 2023 hearing regarding his imaginary citations, he testified that he was remorseful for his actions. He claimed he believed ChatGPT was a “super search engine” he could rely on to conduct legal research. He had even asked the chatbot whether the cases it provided were real, to which it replied in the affirmative. Schwartz’s case serves as a potent cautionary tale for lawyers to understand ChatGPT’s limitations before using it for research and to thoroughly vet any information received from it.
Most recently, on June 22, 2023, Mr. Schwartz, his colleague Peter LoDuca, and their law firm were each ordered to pay $5,000 in sanctions. The court noted that it might not have been inclined to impose sanctions, but Mr. Schwartz’s deceptiveness, and his failure to reveal the true nature of the cases once he learned of it, pushed the court to do so. So this tale is a sobering reminder not only to verify any cited case law, but also to prioritize candor to the court, even if doing so may hurt your client’s case. Otherwise, you can miss an easy lay-up: maintaining the trust of the court and opposing counsel.