The AI Copyright Wars
From artists to Hollywood studios, every creative industry has sued. None have won decisively.
The question of whether AI models trained on copyrighted works constitute infringement was theoretical until it wasn't. Generative AI crossed the line from research curiosity to commercial product in 2022, and the lawsuits followed almost immediately.
The core tension is structural. Large language models and image diffusion models require vast training datasets. Those datasets inevitably contain copyrighted material: books, images, music, journalism. The companies building these models argue that training constitutes fair use, a transformative process that produces new works rather than copying existing ones. Rights holders counter that the outputs directly compete with their original works, generated from a foundation of unpaid labor.
What makes the AI copyright question different from previous technology disputes like Napster, YouTube, and Google Books is scale. A single model can absorb millions of works, and its outputs can substitute for any of them. The economic displacement is not theoretical.
The lawsuit, Andersen v. Stability AI, was filed in the Northern District of California, alleging copyright infringement through the use of artists' work in training data for image-generation models. It established the legal theory that subsequent cases would press: that training on copyrighted works without permission is infringement, not fair use.
Getty's case was significant because it involved a single rights holder with a clear catalog of registered works — making damages easier to quantify than in the class-action cases. The parallel UK filing tested whether AI training constituted infringement under a different legal framework.
The Times' lawsuit became the most closely watched case because it pitted the world's most prominent newspaper against the world's most prominent AI company. The complaint included examples of ChatGPT reproducing near-verbatim excerpts of Times reporting, raising questions about memorization versus generation.
The RIAA lawsuits represented the music industry's coordinated response to AI-generated music. Filed on the same day against the two dominant platforms, the cases alleged that Suno and Udio trained on copyrighted recordings without permission, producing outputs that could substitute for licensed music.
The Perplexity lawsuits extended the copyright battle from generative AI to AI-powered search. The complaint alleged not just training on copyrighted content but active reproduction of paywalled articles in search results, with evidence that Perplexity ignored robots.txt restrictions.
The UMG–Udio settlement marked a turning point from pure litigation to negotiated coexistence. Rather than seeking to shut down AI music generation, UMG opted to shape it — licensing its catalog for AI training in exchange for revenue sharing and creative controls. The deal became a template for how other rights holders might approach the same question.
Seedance 2.0 generated videos featuring Marvel, Star Wars, SpongeBob, and South Park characters, among others. Disney accused ByteDance of a "virtual smash-and-grab" of its IP. The MPA called it "unauthorized use of copyrighted works on a massive scale." The backlash was the first time Hollywood studios, talent unions, and trade associations acted in unison against a single AI tool.
Japan's Minister for AI Strategy Norimi Onoda announced the investigation at a press conference, stating: "If existing copyrighted works are being utilized without the permission of the rights holders, this is not something that can be overlooked." The probe tested Japan's AI Act enforcement mechanisms for the first time.
Aftermath
The legal campaign has escalated across every creative medium.
Visual art was the first battleground. In January 2023, artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a class-action lawsuit against Stability AI, Midjourney, and DeviantArt. Three weeks later, Getty Images sued Stability AI in both US and UK courts, alleging infringement of over 12 million photographs.
Journalism followed at the end of 2023. The New York Times filed what became the highest-profile case: a federal lawsuit against OpenAI and Microsoft alleging that millions of its articles were used without permission to train ChatGPT and Bing Chat. In October 2024, Dow Jones and the New York Post sued Perplexity for reproducing paywalled content with minimal attribution. By December 2025, the Times had also sued Perplexity.
Music entered the arena in June 2024, when the RIAA filed simultaneous lawsuits against Suno and Udio, the two leading AI music generators, alleging mass copyright infringement in training data. In October 2025, Universal Music Group settled with Udio, agreeing to jointly develop a licensed AI music creation service. It was the first major settlement in the AI copyright wars.
Film and video became the latest front in February 2026, when ByteDance's Seedance 2.0 video generator went viral, producing clips featuring Disney characters, Marvel heroes, and anime icons like Ultraman and Detective Conan without authorization. Disney issued a cease-and-desist letter accusing ByteDance of a "virtual smash-and-grab." SAG-AFTRA and the MPA condemned the tool. Japan's Cabinet Office opened a formal copyright investigation. ByteDance pledged to strengthen safeguards.
Industry Impact
As of February 2026, no US court has issued a definitive ruling on whether AI training constitutes fair use. The cases are proceeding through discovery and motions. But the legal uncertainty has already reshaped the industry.