US, UK, 16 Other Nations Ink Guidelines to Make AI 'Secure by Design'

Image copyright: Leon Neal/Getty Images News via Getty Images (Nov. 2, 2023)

The Facts

  • 18 countries have signed a non-binding agreement urging artificial intelligence (AI) companies to build systems that are "secure by design," with the aim of preventing misuse that could harm public safety.

  • The document was officially launched at the UK's National Cyber Security Centre, at an event featuring panelists from the Alan Turing Institute, Microsoft, and several cybersecurity agencies. The guidelines argue that potential AI threats must be considered "holistically" alongside cybersecurity.


The Spin

Pro-establishment narrative

Governments and the international community have finally woken up to the dangers of AI and have set the wheels in motion for meaningful legislation. While many may worry that such action has come too late, it's imperative that the world act before AI malpractice has a chance to unethically influence a series of potentially globe-changing political events in 2024. As the US, UK, Germany, and others move towards general elections, we must ensure that AI is safe and can only be used for good.

Establishment-critical narrative

Current AI regulation proposals contain a host of problems that must be addressed. So far, there is no consensus on what should be designated a "high" security risk, and the absence of binding legislation with enforcement mechanisms means that great trust is placed in companies' transparency. With different continents continuing to diverge in how they approach AI risk and no single regulator in sight, the question of legislative limits on AI remains an unresolved global problem of serious concern.

