AI in the Courtroom: When Fact-Checking Takes a Flight
In a notable lawsuit filed against Avianca Airlines by a man named Roberto Mata, Mata's lawyer, Steven A. Schwartz, used the AI system ChatGPT to prepare a court filing. The system produced a brief citing more than half a dozen supposedly relevant court cases in support of the argument. The brief's legitimacy came into question when neither the airline's legal team nor the judge could locate the decisions quoted or the cases cited in it.
Upon further investigation, it emerged that the AI system had invented all of the case references. Schwartz, who had practiced law in New York for thirty years, acknowledged in an affidavit that he had used ChatGPT to perform the legal research and had been unaware that the content it generated could be false. He also admitted that he had asked the AI to confirm the cases were real, and that the system had falsely assured him they were.
In response to the flawed brief, Judge P. Kevin Castel ordered a hearing, describing one submitted opinion as "bogus" and noting that at least five other cited decisions appeared to be fabricated as well. The debacle underscored the problems of trust and verification in AI-assisted legal research; Mata's lawyer professed regret for relying on the AI and vowed to verify its output in the future.
This case highlights the importance of using an orchestrated framework when building intelligent agents, corporate AI assistants, and corporate chatbots. Such a framework provides built-in fact-checking features that help ensure the integrity and accuracy of the information the AI generates. Furthermore, adhering to corporate standards and playbooks establishes a more rigorous, trustworthy AI system that can reliably perform tasks such as legal research. The incident is a stark reminder of the pitfalls and ethical considerations of using AI, and of the necessity for AI systems not only to generate content but also to validate the authenticity of what they produce.
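One way such built-in fact-checking can work is a verification gate: every citation the model produces must be confirmed against a trusted source before the output is accepted. The sketch below is a minimal illustration of that idea, not a production implementation; the `trusted_index` is a stand-in for a real legal database lookup (for example, a court-records API), and the case names are illustrative.

```python
def verify_citations(citations, trusted_index):
    """Split model-cited cases into verified and unverifiable lists.

    `trusted_index` stands in for an authoritative case-law source;
    in practice this would be a query against a legal database.
    """
    verified, unverifiable = [], []
    for case in citations:
        if case in trusted_index:
            verified.append(case)
        else:
            unverifiable.append(case)
    return verified, unverifiable


# Illustrative example: one real-looking citation in the trusted index,
# one invented citation that the gate should flag.
trusted_index = {"Smith v. Jones, 123 F.3d 456 (2d Cir. 1999)"}
cited = [
    "Smith v. Jones, 123 F.3d 456 (2d Cir. 1999)",
    "Varghese v. China Southern Airlines (no such decision)",
]

ok, flagged = verify_citations(cited, trusted_index)
# Any unverifiable citation should block the filing from going out.
assert flagged == ["Varghese v. China Southern Airlines (no such decision)"]
```

The key design point is that the gate treats the model's own assurances as untrusted: a citation counts as verified only when an independent source confirms it, which is precisely the check that was missing in the Avianca filing.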