In a statement released by the Center for AI Safety on Tuesday, artificial intelligence (AI) experts and tech CEOs — including OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and "AI Godfather" Geoff Hinton — have issued a new warning about the severe risks posed by AI to humanity, including extinction.
Signed by AI experts and public figures, the one-sentence Statement on AI Risk asserts, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
These warnings should not be dismissed, as once AI systems reach a certain level of sophistication, it may become impossible to control their actions. By likening the threat posed by AI to nuclear war, these renowned AI experts want policymakers to focus on the technology's safety, which remains neglected. It is vital to pressure industry leaders and policymakers to establish guidelines and regulations for the responsible deployment of AI.
AI is the future, and trying to set back its development won't solve any problems. AI offers a revolutionary means to address some of the world's biggest challenges, including inequity and even climate change, and it must be kept on its current track. Rather than trying to rein it in, the tricky areas of the technology simply need to be identified so work can be done to improve them while AI continues to develop at its current pace.
Artificial intelligence experts continue to overstate the technology's risks and engage in unrealistic fear-mongering. Although current technology is impressive, artificial general intelligence — the true concern — is still a long way off, if attainable at all. By consistently releasing warnings about its far-fetched consequences, tech experts miss the mark and distract the public from debating the existing and near-term harms and economic realities of AI.