OpenAI Whistleblowers Allege NDAs Prevent Airing of Safety Concerns


The Spin

Narrative A

The concerns raised by this letter are disturbing, to say the least. Other reports trickling out of OpenAI claim the company rushed its safety screening to meet a May release date, letting safety take a backseat to financial interest. OpenAI cannot be trusted to fulfill its mandate of altruistic AI development and must be held to account.

Narrative B

No one has a better understanding of AI safety than the undisputed industry leader, and OpenAI has always made safety paramount. The company has a new safety committee preparing a report on its upcoming model and already has strong whistleblower protections. There's no evidence that OpenAI is engaged in any nefarious activity or suppression of dissent.

Metaculus Prediction

There's an 88% chance that an AI will be able to reliably construct bug-free code of more than 10,000 lines before 2030, according to the Metaculus prediction community.
