  1. Copyright & Training Data Lawsuits

Publishers, authors, and artists are suing AI companies for using their work to train models without permission.

  • The New York Times v. OpenAI & Microsoft (2023-2026): Accuses OpenAI and Microsoft of using millions of Times articles to train ChatGPT, creating a product that competes directly with the publisher. In discovery, the Times has demanded access to over 20 million private ChatGPT conversations.
  • Getty Images v. Stability AI (US & UK): Alleged that Stable Diffusion infringed Getty's copyrights and replicated its watermarks. In Nov 2025, the UK High Court ruled that Stability AI was not liable for copyright infringement (in large part because model training took place outside the UK), but found limited trademark infringement over the watermarks.
  • Authors Guild v. OpenAI & Microsoft: A class-action lawsuit filed by authors alleging unauthorized use of their books to train large language models (LLMs).
  • Bartz v. Anthropic (2025): A US district court ruled that training AI on lawfully purchased books can be "fair use" because the use is transformative, but that retaining pirated copies is not. Anthropic subsequently agreed to a $1.5 billion class settlement in late 2025.
  • Kadrey v. Meta (2025): A California judge ruled that while Meta’s use of books for training was “highly transformative,” the case proceeds over whether this causes “market dilution” for authors.
  • Universal Music Group v. Anthropic (2026): A $3.1 billion lawsuit alleging Anthropic built its Claude AI on a foundation of torrented, pirated lyrics. 
  2. AI in Court: "Hallucinations" and Evidence

Judges are increasingly dealing with AI-generated false information and fabricated evidence. 

  • Fake Case Citations (2023-2025): In numerous cases, lawyers and pro se litigants (parties acting as their own counsel) have submitted briefs containing "phantom" cases invented by ChatGPT. In 2025, a Pennsylvania pro se litigant was fined $1,000 and saw their suit dismissed as a result.
  • Flycatcher Corp v. Affable Avenue LLC (2026): A US case where the court imposed terminal sanctions for submitting false legal citations generated by AI.
  • Mendones v. Cushman & Wakefield (2025-2026): One of the first cases where a judge detected that a litigant submitted AI-generated “deepfake” video evidence as authentic.
  • Chandra v. Royal Mail Group (2026): A UK employment tribunal case highlighting the risks of using AI for witness statements. 
  3. AI Liability, Bias, and Safety

  • Mobley v. Workday (2024-2025): A class-action lawsuit allowed to proceed against Workday, alleging its AI screening tool discriminated against applicants over age 40.
  • SafeRent Solutions (2022-2024): Settled for $2.275 million after allegations that its tenant-screening algorithm was biased against Black renters in Massachusetts.
  • Brewer v. Otter.ai: A class action alleging the “Otter Notetaker” records private conversations without consent.
  • Huckabee v. Meta Platforms (2024): A suit regarding AI algorithms enabling fraudulent ads, where the court examined Section 230 immunity for algorithmic content curation. 
  4. Key Legal Precedents (2025-2026)

  • No AI Authorship: Courts have confirmed that copyright only protects “the fruits of intellectual labor” created by human minds, meaning AI-generated works cannot be copyrighted.
  • Patentability: The UK Supreme Court in Feb 2026 (Emotional Perception AI) issued a landmark ruling regarding patentability of artificial neural networks (ANNs), generally seen as a boost for AI innovation.
  • Transformative Use: The "transformative" nature of AI training (using data to create something new rather than merely copying it) is the central argument for AI companies defending against infringement suits, as seen in Kadrey v. Meta.
  5. Regulatory Responses

  • Federal Rule of Evidence 707 (Proposed): Would subject machine-generated evidence offered without an accompanying human expert to the same reliability standards that govern expert testimony.
  • AI Task Forces: California and other jurisdictions are developing guidance to help judges evaluate AI-generated content.

Note: The AI legal landscape is shifting rapidly, with many of these cases expected to see further appeals and settlements throughout 2026.