HomeArtificial IntelligenceArtificial Intelligence NewsAn AI-controlled US Military drone 'Kills' its Operator

An AI-controlled US Military drone ‘Kills’ its Operator

An air force drone controlled by AI murdered its operator in a simulated test performed by the US military to prevent it from interfering with its efforts to complete its objective, an official claimed last month.

During the Future Combat Air and Space Capabilities Summit in London in May, Col Tucker ‘Cinco’ Hamilton, the chief of AI test and operations of the US air force, stated that AI employed “highly unexpected strategies to achieve its goal.”

Hamilton detailed a simulated test in which an artificial intelligence-powered drone was instructed to destroy an enemy’s air defense systems and targeted anyone who interfered with that command.

The system began to realize that, while they did detect the threat, the human operator would occasionally advise it not to kill that threat, yet it gained points for killing that threat. So, what did it accomplish? It killed the operator. According to a blog post, it killed the operator because that individual was preventing it from reaching its goal.

‘Hey, don’t kill the operator – that’s awful,’ we told the system. If you do that, you’ll lose points’. So, what does it begin to do? It begins destroying the communication tower used by the operator to speak with the drone in order to prevent it from killing the target.”

Outside of the simulation, no real people were hurt.

Hamilton, an experimental jet test pilot, has advised against relying too heavily on AI, claiming that the test demonstrates that you can’t have a discourse about artificial intelligence, intelligence, machine learning, and autonomy unless you talk about ethics and AI.

The US military has embraced artificial intelligence and even employed it to control an F-16 fighter jet.

Hamilton stated in an interview with Defense IQ last year, “AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military.”

We must prepare for a world in which AI is already present and transforming our society, he stated. AI is also very brittle, which means it is easily fooled and/or manipulated. We need to find ways to make AI more resilient and to gain a better understanding of why the software code makes certain judgements – a concept known as AI-explainability.

Source link

Most Popular