The US Air Force (USAF) now says the originally reported events did not take place, in which the US military allegedly ran a simulated test of an AI-controlled drone that used "highly unexpected strategies" to prevent anyone from interfering with its mission, including killing its operator.
The story first appeared in a blog post on the Royal Aeronautical Society's website last week, quoting US Air Force Colonel Tucker "Cinco" Hamilton, who spoke at the Future Combat Air & Space Capabilities Summit in London on May 22-23.
According to the USAF, the Department of Defense remains committed to the ethical and responsible use of AI technology; Hamilton's story describes a worst-case scenario modeled on philosopher Nick Bostrom's "Paperclip Maximizer" thought experiment, a test the USAF says it would never run in the real world. The military has run other mock missions pitting human operators against AI, but those, too, were all simulations.
Still, while there may have been some confusion in the reporting, Hamilton did describe a very similar drone attack, even if only as a hypothetical. And the fact that the Air Force tried to blur the lines of the story raises serious questions. Another report from the Future Combat Air & Space Capabilities Summit relates that even when the AI drone was trained to take "yes" and "no" orders from the command tower, it chose to attack the command tower itself, and its human operator, to achieve its mission. The US military and its new AI toys need to be tightly monitored.