On Friday, the White House announced that seven leading AI companies, including Amazon, Google, Meta, and Microsoft, have agreed to meet a set of AI safeguards brokered by US President Joe Biden's administration.
The voluntary commitments are meant to ensure the companies' AI products are safe before release, with the firms agreeing to third-party oversight, though the agreements don't specify who will regulate the technology or hold the companies accountable.
A vague, closed-door meeting with corporate executives that yields only voluntary commitments, with no mechanism to hold the companies accountable, isn't enough. Wide-ranging public deliberation on the risks AI may pose needs to take place, and more stringent regulations need to be enacted.
While there's still a way to go, these voluntary commitments are an important step toward managing the enormous promise and risks of AI technology. The White House and these technology companies are committed to laying a regulatory foundation that keeps AI's benefits ahead of its risks.
While these commitments may seem like a promising step toward governing this newly developing technology, they may also entrench a monopoly: deep-pocketed tech giants can afford to meet strict regulatory requirements, while smaller startups may struggle to comply.