OpinionPolitics

Simulation: AI Drone Kills Operator

Military says it was just "anecdotal"

During the Royal Aeronautical Summit conference in London, U.S. Air Force Colonel Tucker “Cinco” Hamilton told the conference that in a simulation of an AI drone, the mechanism actually turned on the operator and killed him. Fortunately, it was only a simulation, and no one was actually harmed. The AI was taught not to do that. Of course, then it turned its destruction on the control tower because it viewed the tower as blocking its attempt to accomplish its objective.

Programming an AI drone: if the objective is hard coded into the AI, what could happen?

Hamilton’s purpose in bringing up that scenario was to reveal that the AI program can be hazardous if not properly administered. As head of AI Test and Operations, he understands the importance. In the simulation, the AI drone was programmed to take out SAM sites (Surface to Air Missiles).

The AI system learned that its mission was to destroy SAM, and it was the preferred option. But when a human issued a no-go order, the AI decided it went against the higher mission of destroying the SAM, so it attacked the operator in simulation.

“We were training it in simulation to identify and target a SAM threat,” Hamilton said. “And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat. So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Hamilton explained that the system was taught to not kill the operator because that was bad, and it would lose points. So, rather than kill the operator, the AI system destroyed the communication tower used by the operator to issue the no-go order.” MSN

Will this presidential election be the most important in American history?

Ethics isn’t the only issue with AI technology, and “learning” is too late if the AI drone kills the operator or destroys a control tower. The problem must be corrected FIRST, not after a disaster. Who is programming our military AIs? The question should be raised, and the answer must have boundaries. This isn’t a Terminator movie, but could quickly become one if no safeguards are established.

While Artificial Intelligence could help humanity, it can also destroy it. Integrating the technology into the US Military could either help their mission or prove to be a disaster. In this age of division and “wokeism” it entirely depends upon who is programming and coding an AI. The AI drone simulation is an example of what can happen. While simulations can identify shortcomings, as this one did, it also reveals how dangerous the systems can become.

According to the more liberal news media, the US military denies ever doing such a simulation. Time will tell, unfortunately.

*****

Related:

Turn your back on Big Tech oligarchs and join the New Resistance NOW!  Facebook, Google, and other members of the Silicon Valley Axis of Evil are now doing everything they can to deliberately silence conservative content online, so please be sure to check out our MeWe page here, check us out at ProAmerica Only and follow us at Parler, Social Cross and Gab.  You can also follow us on Twitter at @co_firing_line, and at the new social media site set up by members of Team Trump, GETTR.

While you’re at it, be sure to check out our friends at Whatfinger News, the Internet’s conservative front-page founded by ex-military!And be sure to check out our friends at Trending Views:Trending Views

Faye Higbee

Faye Higbee is the columnist manager for Uncle Sam's Misguided Children. She has been writing at Conservative Firing Line since 2013 as well. She is also a published author.

Related Articles

Our Privacy Policy has been updated to support the latest regulations.Click to learn more.×