OpenAI Enhances Sora 2 Protections Following Bryan Cranston’s Concern

OpenAI has strengthened protections within its Sora 2 platform following concerns raised by actor Bryan Cranston regarding unauthorized content replication. The updates come in response to issues highlighted during the launch of Sora 2 on September 30, 2025.
OpenAI’s New Protections Against Unauthorized Content
Sora 2, OpenAI’s advanced text-to-video model, faced scrutiny when users began creating videos that replicated the likenesses of real individuals, including Cranston. OpenAI initially stated that the platform would block depictions of real people unless they granted consent through its opt-in “cameo” feature. However, multiple unauthorized videos featuring Cranston surfaced soon after its release.
Collaboration with SAG-AFTRA and Talent Agencies
In response to the rapid emergence of such content, Cranston sought assistance from SAG-AFTRA, the union representing over 150,000 performers. This led to a partnership between OpenAI and various talent agencies aimed at enhancing voice and likeness protections in Sora 2. A joint statement issued by the parties emphasized their commitment to these safeguards.
- Key Stakeholders: Bryan Cranston, SAG-AFTRA, talent agency CAA.
- New Features: Opt-in protections for likeness and voice use.
Policy Updates and Market Concerns
After the backlash, OpenAI pledged to reinforce its guidelines to prevent future unauthorized content generation. These updates come amid ongoing tensions within Hollywood regarding the rise of artificial intelligence. Many in the entertainment industry remain wary of AI tools that could jeopardize their work.
Prior to these changes, Sora 2 allowed the creation of content featuring famous fictional characters, leading to the generation of a variety of copyrighted material. Requests for such content now trigger an error message, part of the company’s guardrails against unauthorized use of third-party likenesses.
Support for Legislative Action Against Deepfakes
The joint statement also highlighted support for the NO FAKES Act, a legislative proposal aimed at holding entities accountable for unauthorized deepfake content. The bill, reintroduced in the Senate in April 2025, has yet to progress through Congress.
Industry Reactions and Future Outlook
Sam Altman, CEO of OpenAI, affirmed the company’s dedication to protecting performers. “We will always stand behind the rights of performers,” he remarked, underlining the necessity for responsible use of AI technologies.
Cranston expressed gratitude for the updated safeguards and emphasized the importance of managing how voice and likeness rights are handled in the evolving landscape of AI content creation.
The recent events highlight a growing need for clear guidelines and regulations in the rapidly advancing field of artificial intelligence, especially in its intersection with the entertainment industry.