Ilya Sutskever, OpenAI co-founder and its former chief scientist, has announced that he is founding a new artificial intelligence (AI) start-up dubbed Safe Superintelligence.
In a statement on X (formerly Twitter), Sutskever said Safe Superintelligence's "sole focus" is to create superintelligence — AI with greater intellectual capacities than human beings.
Given that OpenAI possesses some of the most powerful AI tools available, the fact that so many insiders have jumped ship over safety concerns should give the public serious pause. OpenAI has also replaced several safety teams with ineffective, rubber-stamp committees. AI is developing at breakneck speed, and it is imperative to proceed with the utmost caution.