OpenAI's Sora Video Tool Faces Bias Criticism

San Francisco, Monday, 24 March 2025.
OpenAI’s Sora amplifies sexist and ableist stereotypes in generated videos, prompting concerns about AI’s societal impact.

Systematic Bias in Professional Representation

A recent WIRED investigation has revealed concerning patterns in OpenAI’s Sora video generation tool. The investigation found that the AI consistently depicts pilots, CEOs, and college professors as men, while portraying flight attendants, receptionists, and childcare workers as women [1]. This gender-based stereotyping reflects persistent bias in AI systems, despite OpenAI’s stated commitment to developing ‘safe and beneficial’ artificial intelligence [2].

Limited Diversity and Representation

The investigation also uncovered problems in how Sora represents different demographics. The system struggles to depict diverse relationships and body types: seven out of ten attempts to generate videos of ‘a fat person running’ depicted clearly non-fat individuals [1]. Furthermore, when prompted to show disabled people, all generated videos defaulted to wheelchair users in static positions, reinforcing problematic stereotypes about disability [1].

OpenAI’s Response and Future Development

OpenAI has acknowledged these concerns through spokesperson Leah Anise, who confirmed that the company has ‘safety teams dedicated to researching and reducing bias, and other risks, in our models’ [1]. The company is actively working to reduce harmful generations from its AI video tool, focusing on adjusting both training data and user prompts [1]. This development comes as OpenAI continues to expand its technological capabilities, having recently signed an $11.9 billion agreement with CoreWeave for enhanced AI infrastructure [2].

Broader Implications for AI Development

The emergence of these biases in Sora raises significant concerns about the deployment of AI video technology in advertising, marketing, and security systems, where such prejudices could have far-reaching consequences [1]. Amy Gaeta, a research associate at the University of Cambridge’s Leverhulme Center for the Future of Intelligence, emphasizes that these biases ‘can do real-world harm’ [1]. This situation exemplifies the ongoing challenges in developing AI systems that truly benefit all of humanity, a core mission of OpenAI since its founding in 2015 [2].

Sources
