Unveiling the Novel Safe Superintelligence Start-up by Ilya Sutskever, Co-founder of OpenAI

Ilya Sutskever, co-founder of OpenAI, recently announced his departure from the company to establish a new AI venture called Safe Superintelligence, or SSI for short. As its name suggests, the company aims to develop safe superintelligent AI as its sole focus and only product.

According to Sutskever, this singular goal means that management will not be distracted by unrelated products or pressured by arbitrary release cycles. The clear focus is intended to keep the mission of building safe artificial intelligence insulated from short-term commercial pressures.

SSI will operate as an American company with offices in Palo Alto and Tel Aviv, locations chosen to attract skilled engineers and researchers to work exclusively on safe artificial intelligence.

As OpenAI's former Chief Scientist, Sutskever played a pivotal role in the board's ousting of CEO Sam Altman. He later reversed course and backed Altman's return, a move that diminished his standing within OpenAI and ultimately led to his departure.

TLDR: Ilya Sutskever has left OpenAI to establish Safe Superintelligence, a new AI company dedicated to creating safe and focused artificial intelligence products.

