Former OpenAI Safety Lead Joins Rival Startup

The Facts

  • Jan Leike, a former lead safety researcher at OpenAI who resigned earlier this month, announced Tuesday that he would join Anthropic, a rival artificial intelligence (AI) startup.

  • Leike, who previously led OpenAI's superalignment team with company co-founder Ilya Sutskever, said he is eager "to continue the superalignment mission" at Anthropic. Superalignment is research into long-term AI safety, aimed at ensuring that "superintelligence," a hypothetical future form of AI smarter than humans, can be controlled and will act in accordance with human values.


The Spin

Narrative A

OpenAI prioritizes short-term commercial and societal success over long-term AI safety. That's why Leike and other high-profile researchers are leaving the company for employers where they can build more safety-conscious AI models.

Narrative B

OpenAI has formed a new safety and security team to address the challenges associated with the technology. With or without its co-founders and former executives, the company will continue to evaluate and strengthen AI safeguards as it works on its next model.

