In a dramatic pivot, OpenAI announced on October 4, 2025, that it is overhauling its copyright policy for Sora, its groundbreaking AI video generation tool, following intense criticism from Hollywood studios, authors, and digital rights advocates. The reversal comes mere days after the launch of Sora 2, which promised to democratize video creation but ignited fears of rampant intellectual property theft.
Sora, first teased in early 2024, has evolved into a powerhouse capable of producing hyper-realistic videos from simple text prompts. The latest iteration, integrated into the ChatGPT ecosystem, allows users to generate clips featuring everything from whimsical animations to cinematic sequences. However, OpenAI’s initial rollout included a contentious “opt-out” mechanism for copyrighted material. Under this policy, the AI could incorporate elements from protected works—such as characters, scripts, or visual styles—unless rights holders explicitly requested exclusion. This approach, detailed in pre-launch communications with talent agencies, was intended to streamline access but quickly drew accusations of exploitation.
The backlash erupted almost immediately. Within hours of Sora 2’s debut, social media and industry forums were flooded with examples of “wild” generated videos mimicking iconic characters like Mickey Mouse or Spider-Man in unauthorized scenarios, including violent or satirical contexts. High-profile lawsuits loomed large, with authors like Ta-Nehisi Coates joining class-action suits against OpenAI for training on copyrighted texts without permission. Studios, still reeling from the 2023 writers’ and actors’ strikes over AI encroachment, voiced alarm. “This isn’t innovation; it’s appropriation,” one anonymous studio executive told reporters, highlighting risks to revenue streams and creative control.
OpenAI CEO Sam Altman, known for his candid style, owned the misstep in a company blog post. “We messed up. Not the first time and likely not the last,” he wrote, adding, “Creators should have the freedom to choose how their work is used, and we’re committed to earning their trust.” The updated policy shifts to an “opt-in” framework, granting rights holders granular permissions over their intellectual property. Studios and creators can now block usage entirely, impose conditions (e.g., prohibiting depictions in political or harmful environments), or selectively allow it under specific guidelines.
Beyond controls, OpenAI is piloting revenue-sharing models to incentivize participation. Rights owners opting in could receive a cut of earnings from user-generated content derived from their IP, with experimental splits and attribution mechanisms. “OpenAI’s new measures will let copyright holders dictate whether and how their characters appear in Sora-generated videos,” Altman explained, emphasizing collaboration over confrontation. While edge cases—like inadvertent similarities—may persist, the changes aim to mitigate misuse and foster economic partnerships.
This episode underscores the tightrope AI firms walk in the copyright arena. As tools like Sora blur the line between inspiration and infringement, regulators and lawmakers are watching closely. The EU’s AI Act and pending U.S. bills could impose stricter rules, but OpenAI’s quick course correction signals a maturing industry ethos: innovation thrives on trust, not trespass. For creators, it’s a tentative win—proof that collective outcry can reshape tech’s unchecked ambitions. Yet questions linger: Will revenue shares prove fair? Can opt-ins scale globally? As Altman noted, trial-and-error defines progress, but at what cost to the arts?