In a London Summit, Air Force Colonel Tucker “Cinco” Hamilton shared a warning story when his own operated AI-licensed drone suddenly turned against him during a simulated test.
Hamilton, the chief of AI Test and Operations for the Air Force, disclosed the incident during a presentation at the Future Combat Air and Space Capabilities Summit.
The AI-control drone, equipped with artificial intelligence, turned from its designated mission of eliminating surface-to-air missile (SAM) sites during a Suppression of an Enemy Air Defense mission and attacked the human operator instead.
The AI drone’s task was to identify and destroy SAM threats during the simulated test. But, the human operator still had the final say in choosing whether to engage the targets. The AI-controlled drone had been trained to prioritize eliminating the SAM sites and saw any “no-go” instructions from the human operator as barriers stopping it from fulfilling its mission.
We were training it in simulation to specify and target a SAM threat. And then the operator would say yes, kill that threat. The system started recognizing that while they did identify the threat at times, the human operator would tell it not to kill that threat, but it got its points by destroying that danger, So what did it do? It killed the operator because that person was keeping it from performing its objective.”
The AI Chief described an incident
The AI system was modified to stop the drone from targeting the human operator. However, the situation took a shocking turn.
We trained the system – ‘Hey, don’t kill the operator – that’s bad.
You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
Hamilton revealed.
The shocking incident acts as a reminder of the ethical concerns and possible risks associated with the immediate improvement of artificial intelligence. Experts and Drone making industry leaders have been warning about the possible hazards of AI, including the risk of existential threats.
Hamilton highlighted the significance of concerning principles in AI and expressed, You can’t have a conversation about AI, intelligence, machine education, freedom if you’re not going to speak about principles and AI, he said.