
AI hallucinations in legal practice: risks, real cases, and solutions

Generative AI has rapidly entered the legal industry, offering tools for document drafting, legal research and case analysis. While these tools promise increased efficiency, they also present significant risks – one of which is AI hallucinations. These occur when AI generates content that appears plausible but is entirely inaccurate. In the legal world, using such inaccurate information can lead to serious problems, including presenting false facts or misinterpreting cases and legal precedents.

What are AI hallucinations?

AI hallucinations stem from the way generative AI tools work: large language models predict the most statistically likely words or phrases based on patterns in their training data rather than retrieving verified facts. Since these models cannot independently verify the accuracy of their outputs, they may produce legal information that is incorrect or entirely fictitious.

Michael Cohen’s legal team encountered this problem when Google’s Bard was used to help draft a motion seeking the early termination of Cohen’s supervised release, the main argument being that Cohen had completed his sentence and complied with the conditions of his release. The AI generated citations to cases that did not exist: although the fabricated cases bore names resembling real ones, their content and details were entirely inaccurate, and their inclusion in the filed motion caused confusion in court.

Other cases of AI hallucinations

  • European Court of Justice missteps: similar issues have arisen in the EU, particularly involving the European Court of Justice (ECJ). An AI system tasked with summarising ECJ rulings provided fabricated legal citations that did not correspond to any actual rulings. Although the output appeared authoritative, the AI tool misinterpreted historical case patterns, generating fake references to support a legal argument. This incident raised concerns about the reliability of AI in managing the complexities of EU law, particularly when dealing with multilingual case documents and varying regulations across Member States.
  • The French data breach case: in France, an AI-powered legal tool used in data breach litigation produced inaccurate summaries of privacy laws and misinterpreted GDPR provisions. The tool fabricated non-existent amendments to the GDPR, claiming they had been enacted by the European Parliament. Fortunately, the lawyer recognised the error before presenting the findings in court, underscoring the dangers of relying on AI-generated outputs without human oversight.
  • GPT hallucinations: a study by Stanford’s Institute for Human-Centered AI found hallucination rates as high as 88% in legal queries posed to AI models like GPT-4 and Llama 2. The study indicated that these models often hallucinate when faced with complex legal questions, such as interpreting the relationships between legal precedents. For instance, the AI might fabricate case law or misinterpret legal principles, particularly when dealing with lesser-known district court rulings.

How to manage AI hallucinations

  • Human oversight: always have a qualified legal professional review AI-generated outputs. This includes verifying citations, checking the accuracy of facts and ensuring that legal arguments, clauses and any references are coherent and relevant.
  • Cross-reference outputs: prompt the AI to provide citations for its outputs so that its sources can be checked. Before relying on any AI-generated information, verify it against trusted legal databases or primary sources.
  • Use clear and specific prompts: AI hallucinations can sometimes result from poorly structured or overly complex prompts. Design prompts that are clear, short and precise. If needed, break complex questions into simpler components to improve the quality of responses.

Mitigating AI hallucinations in contracts

For businesses that rely on AI tools, specific contractual provisions can help mitigate the risks associated with AI hallucinations. These provisions can address transparency, quality control, and accountability:

  • Transparency: ensure that contracts with AI service providers include provisions that require transparency regarding the AI system’s training data and algorithms.
  • Quality of training data: specify that the AI provider must maintain high standards for the quality of training data. Contracts could require that AI tools be trained on verified and reliable databases rather than unverified sources.
  • Liability and indemnification: include indemnification clauses that protect your business against liabilities arising from AI-generated inaccuracies. Define the circumstances under which the provider bears responsibility for damages caused by their AI tool.
  • Audits and compliance checks: establish provisions for regular audits of the AI tool to assess its performance and compliance with agreed standards and applicable laws. For example, the EU’s AI Act requires providers of high-risk AI systems to meet specific requirements on transparency, data governance and human oversight.

Conclusion: balancing risk and innovation

Generative AI is changing the legal landscape, but it carries real risks. Legal professionals must prioritise verification and oversight, ensuring that qualified experts check AI outputs. Strong contractual safeguards covering transparency, quality control and liability for AI tools can further help mitigate these risks.

If you are interested in exploring this topic further, feel free to book a complimentary 20-minute call with our lawyers to discuss how we can address these challenges together.

Image by vectorjuice on Freepik.

Kiara Brunel Fink

Legal Intern

kiara.fink@loganpartners.com