Scarlett Johansson Accuses OpenAI of Mimicking Her Voice for ChatGPT

2024-05-21

Scarlett Johansson claims OpenAI used a voice similar to hers for ChatGPT without consent, highlighting ethical concerns over AI-generated voices.

The Controversy Unfolds

Scarlett Johansson’s accusation against OpenAI has ignited a significant debate over the ethical use of AI in commercial applications. The issue began when Johansson declined an invitation from OpenAI CEO Sam Altman last September to voice the new ChatGPT system. Despite her refusal, OpenAI proceeded to unveil a new conversational interface for ChatGPT featuring a synthetic voice eerily similar to Johansson’s. This development has raised substantial concerns about the misuse of AI-generated voices, especially when used without explicit consent from the individuals they resemble.

OpenAI’s Response

In response to Johansson’s claims, OpenAI has taken steps to address the controversy. The company paused use of the synthetic voice, named Sky, which Johansson argued sounded too much like her own. OpenAI CEO Sam Altman stated that the voice actor behind Sky had been hired before the company reached out to Johansson, and that the voice was never intended to mimic hers. Johansson’s legal team has nevertheless demanded a detailed account of how the voice was created, and Sky remains paused while the dispute is resolved.

The Broader Implications

This incident is not just a legal battle but also a stark reminder of the broader implications of AI technology in the digital age. Generative AI has made it possible to create highly realistic synthetic voices, which can be used in various applications from customer service to entertainment. However, the ethical considerations of such technology cannot be overlooked. The misuse of AI to replicate someone’s voice without consent can lead to significant legal and personal ramifications, as seen in Johansson’s case.

The Technology Behind AI Voices

OpenAI’s voice technology is highly sophisticated: its Voice Engine model, previewed in 2024, can generate a realistic synthetic voice from as little as a 15-second audio sample. That capability has enormous potential in fields ranging from accessibility tools to personalized virtual assistants, but the potential for misuse is equally significant. OpenAI, based in San Francisco, developed the technology to make interaction with AI systems more natural, yet the ethical boundaries of the innovation remain a point of contention.
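
The 15-second voice-cloning model itself is not publicly available, but OpenAI does expose a set of preset synthetic voices through its public text-to-speech API. As a rough illustration of how synthetic speech is produced in practice, the sketch below uses the openai Python SDK to turn a line of text into audio. It relies only on the standard public presets (here the voice "nova"), not the Sky voice at issue in this dispute, and assumes an API key is set in the environment.

    # Minimal sketch: generating speech with a preset synthetic voice via
    # OpenAI's public text-to-speech API (openai Python SDK v1.x).
    # The 15-second voice-cloning capability discussed above is NOT exposed
    # here; only a fixed set of preset voices is available.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.audio.speech.create(
        model="tts-1",   # standard text-to-speech model
        voice="nova",    # one of the preset voices (alloy, echo, nova, ...)
        input="Hello, this is a synthetic voice generated from text.",
    )

    # Write the returned MP3 bytes to disk.
    with open("speech.mp3", "wb") as f:
        f.write(response.content)

The point of the sketch is simply that producing convincing synthetic speech now takes a handful of lines of code, which is precisely why questions of consent and likeness have become so urgent.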

The Legal Landscape

The legal landscape surrounding AI-generated voices is still evolving. Johansson’s case underscores the need for clearer regulations and guidelines to protect individuals’ likenesses and voices. The actress has emphasized the importance of transparency and legislation to safeguard personal identities against unauthorized use. The ongoing disputes between OpenAI and various artists and creatives highlight the urgent need for legal frameworks that address these issues comprehensively.

Future Directions

As AI technology continues to advance, companies like OpenAI must navigate the fine line between innovation and ethical responsibility. The controversy with Scarlett Johansson serves as a pivotal case that could shape future policies and practices in the AI industry. Ensuring that AI developments respect individual rights and consent will be crucial in maintaining public trust and fostering responsible innovation. OpenAI’s commitment to pausing the use of the controversial voice and engaging in dialogue with affected parties is a step towards addressing these complex issues.
