Bryan Cranston, CAA, UTA Applaud OpenAI’s New Sora 2 Safety Features

OpenAI’s new generative AI video platform, Sora 2, has sparked significant attention from artists and industry organizations. Initially criticized for using the voices and likenesses of performers without consent, the platform has since strengthened its policies to address these concerns.

Bryan Cranston’s Response to Sora 2

Actor Bryan Cranston publicly expressed his concerns after discovering his likeness was used in Sora 2 without permission. He highlighted the potential for misuse, not only for himself but for the entire community of performers. In a statement released by SAG-AFTRA, he commended OpenAI for enhancing its policies regarding consent.

Improvements in Consent Policies

  • OpenAI has committed to requiring opt-in consent for the use of voice and likeness.
  • Guardrails have been strengthened to prevent unauthorized replication.
  • SAG-AFTRA and OpenAI issued public statements acknowledging previous lapses.

These improvements were a direct response to concerns raised by Cranston and the talent agencies involved. SAG-AFTRA confirmed that during the initial launch of Sora 2, unauthorized generations of voices and likenesses occurred, prompting the actor to address the issue with his union.

The Role of CAA and UTA

The involvement of major talent agencies, Creative Artists Agency (CAA) and United Talent Agency (UTA), has been crucial. These organizations were among the first to alert members about the risks presented by Sora 2. Now, they are highlighting their “productive collaboration” with OpenAI to ensure that artists retain the rights to their voices and images.

Continuing Challenges for Artists

Despite advancements, concerns remain. Sean Astin, the recently elected SAG-AFTRA president, emphasized that Cranston’s case is indicative of broader issues facing many performers. The need for durable opt-in protocols is critical for securing artist rights in the face of evolving AI technologies.

Legislative Measures: The NO FAKES Act

A legislative proposal, the NO FAKES Act, currently in Congress, aims to safeguard performers’ likenesses. The act mandates explicit consent for any AI-generated replicas. OpenAI has expressed support for this bill, reinforcing its commitment to performer rights.

Broader Implications for AI Use

The conversation around Sora 2 reflects ongoing discussions about the ethical use of AI. Previous incidents involving the estate of Martin Luther King Jr. have highlighted the need for responsible practices in AI-generated content. OpenAI is taking steps to mitigate issues, assuring the public of its dedication to protecting individual rights as it navigates this complex landscape.

In conclusion, while OpenAI’s enhancements to Sora 2’s policies represent a step forward in addressing performers’ concerns, much work remains to establish a secure and ethical framework for the use of AI technology. Collaboration among artists, unions, and agencies will play a pivotal role in shaping the future landscape of digital rights.