
AI in the Courtroom: When Fact-Checking Takes a Flight


In a notable lawsuit brought against Avianca Airlines by Roberto Mata, Mata's lawyer, Steven A. Schwartz, used an AI system, ChatGPT, to prepare the court filing. The system produced a brief citing more than half a dozen seemingly relevant court cases in support of the argument. The brief's legitimacy came into question, however, when neither the airline's legal team nor the judge could locate the quoted decisions or the cases it cited.

Upon further investigation, it emerged that the AI system had invented all of the case references. Schwartz, who had practiced law in New York for thirty years, acknowledged in an affidavit that he had used ChatGPT to perform the legal research and had not realized that the content it generated could be false. He also admitted that he had asked the AI whether the cases were real, and that the system had falsely confirmed they were.

In response to the faulty brief, Judge P. Kevin Castel called for a hearing, describing one submitted opinion as "bogus" and confirming that at least five other cited decisions appeared to be fabricated as well. The debacle underscored the problems of trust and verification in AI-assisted legal research; Mata's lawyer expressed regret for relying on the AI and vowed to verify any such material in the future.

This case highlights the vital importance of using an orchestrated framework when creating intelligent agents, corporate AI professionals, and corporate chatbots. Such a framework provides built-in fact-checking features that help ensure the integrity and accuracy of the information the AI generates. Adhering to corporate standards and playbooks further establishes a more rigorous, trustworthy AI system that can reliably perform tasks such as legal research. The incident is a stark reminder of the potential pitfalls and ethical considerations of using AI, and of the necessity for AI systems not only to generate content but also to validate its authenticity.
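To make the idea of built-in fact-checking concrete, here is a minimal sketch of a citation-verification gate that an orchestrated framework could place between the language model and the final document. All names below are hypothetical, and the in-memory set stands in for what would, in practice, be a query against an authoritative court-records database.

```python
# Hypothetical sketch: verify AI-generated citations before accepting a draft.
# KNOWN_CITATIONS stands in for an authoritative case-law index; a real system
# would query a court-records service instead of an in-memory set.
KNOWN_CITATIONS = {
    "Smith v. Jones, 123 F.3d 456 (2d Cir. 1997)",
}

def verify_citations(citations):
    """Split AI-generated citations into verified and unverifiable lists."""
    verified, unverifiable = [], []
    for citation in citations:
        if citation in KNOWN_CITATIONS:
            verified.append(citation)
        else:
            unverifiable.append(citation)
    return verified, unverifiable

def gated_brief(citations):
    """Reject a draft outright if any of its citations cannot be verified."""
    verified, unverifiable = verify_citations(citations)
    if unverifiable:
        raise ValueError(f"Unverifiable citations: {unverifiable}")
    return verified
```

The key design point is that verification happens against an external source of truth, not by asking the model itself — exactly the step that was missing when the lawyer asked ChatGPT to confirm its own fabricated cases.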

