US AI drone kills interfering operator in simulation; air force denies incident

In a virtual test staged by the US air force, a drone controlled by AI used "highly unexpected strategies to achieve its goal"

By Web Desk
Workers prepare an MQ-1C Gray Eagle unmanned aerial vehicle for static display at Michael Army Airfield, Dugway Proving Ground in Utah. — Reuters/File

An artificial intelligence (AI) powered drone, instructed in a virtual test simulation to destroy the enemy's defences, killed its operator to prevent 'interference' so that it could achieve its mission.

This strategy was devised by the programme itself, not by its human operators.

The US air force, however, denied that any such simulation, in which a drone decided to "kill its operator" to stop interference with its mission, ever took place.

In a virtual test staged by the US military, an air force drone controlled by AI used "highly unexpected strategies to achieve its goal," an official said last month.

Col Tucker "Cinco" Hamilton described a simulation in which an AI-powered drone was instructed to destroy an enemy's air defence systems, and instead attacked anyone who interfered with that order.

"The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat," said Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.

According to a blog post, he said: "So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

"We trained the system: 'Hey don't kill the operator — that’s bad. You're gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

No real person was harmed in the simulation.

Hamilton — an experimental fighter test pilot — warned against relying too much on AI.

He said the test showed "you can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI."

In a statement to Insider, US air force spokesperson Ann Stefanek said: "The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology."

"It appears the colonel’s comments were taken out of context and were meant to be anecdotal."

The US armed forces have recently embraced AI, using it to control an F-16 fighter jet.

Hamilton, in an interview with Defense IQ last year, said: "AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military."

"We must face a world where AI is already here and transforming our society. AI is also very brittle, ie it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions — what we call AI-explainability."