Generative AI models are trained on vast datasets—trillions of words, images, and songs scraped from the internet. Creators argue this is infringement without compensation. Developers argue it is "fair use"—a transformative process akin to a human artist being inspired by a gallery. The legal system is scrambling to catch up.

The most complex disputes center on two questions:

1. Input Infringement: Did the act of training the model on copyrighted data break the law?
2. Output Infringement: Does the image or text produced by the AI infringe on any specific work?

A recent landmark UK High Court case provided some clarity, ruling that the Stable Diffusion AI model itself was not an 'infringing copy' because it does not store the original works. This decision shifts the battleground from the training data to the outputs—specifically, whether the AI reproduces trademarks (such as watermarks).

Governments worldwide are now racing to legislate, often debating a "text and data mining exception" to copyright law. The fundamental challenge is finding a sustainable legal standard that protects the human creators whose work is the foundational fuel, while simultaneously allowing the generative technology to flourish. Until global IP law is redefined, the crisis of ownership will continue to shape the creative economy.