OpenAI Co-Founder Ilya Sutskever Launches New AI Company

2024-06-20

Ilya Sutskever, co-founder of OpenAI, has left the company to create Safe Superintelligence Inc., a venture dedicated to developing safe superintelligent AI, focused solely on safety and insulated from commercial pressures.

The Genesis of Safe Superintelligence Inc.

Sutskever’s departure from OpenAI marks a pivotal moment in the AI industry. Having co-founded OpenAI with the mission to ensure artificial intelligence benefits all of humanity, Sutskever became increasingly concerned with the commercial pressures influencing AI development. This concern led to his decision to establish Safe Superintelligence Inc. (SSI), a company with a singular focus on creating safe, superintelligent AI devoid of short-term commercial distractions.

Building Safe Superintelligence

The core premise of SSI is to develop AI technologies that surpass human intelligence while keeping safety and ethical considerations paramount. Unlike typical tech startups, SSI will not release any products until its superintelligent AI is fully developed and verified as safe. This approach aims to mitigate the risks of deploying powerful AI systems prematurely.

Strategic Leadership and Focus

Joining Sutskever in this ambitious venture are Daniel Gross, a former partner at Y Combinator, and Daniel Levy, an ex-engineer from OpenAI. Together, they bring a wealth of experience and a shared vision of AI safety. Lulu Cheng Meservey, the spokeswoman for SSI, emphasized that the company’s sole focus on safety sets it apart from other AI initiatives, which often juggle multiple projects and commercial interests.

Operational Hubs and Recruitment

SSI is headquartered in Palo Alto, California, with an additional office in Tel Aviv, Israel. These locations are strategic, leveraging the rich talent pools and innovation ecosystems in both regions. The company is actively recruiting technical staff, aiming to build a team capable of tackling the complex challenges of developing safe superintelligent AI.

The Vision and Future Prospects

Sutskever’s vision for SSI is clear: to create a superintelligence that not only exceeds human capabilities but also operates within a framework of stringent safety protocols. By insulating the company from the pressures of immediate commercial success, SSI hopes to advance AI in a manner that prioritizes long-term benefits and minimizes potential risks. As interest in AI continues to grow, SSI is expected to attract significant investment, driven by its unique focus and the credibility of its founding team.
