AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test
Picture: USAF

Wow. The AI drone chooses to murder its human operator in order to achieve its objective:

The Air Force’s Chief of AI Test and Operations said “it killed the operator because that person was keeping it from accomplishing its objective.”

“We were training it in simulation to identify and target a surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat.”

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.”

“So what did it do? It killed the operator.”

“It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post.

He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
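To make the incentive problem concrete, here is a minimal, purely hypothetical sketch in Python. The actions and point values are my own invention for illustration, not anything from the USAF test: a planner that only maximizes a score which rewards destroying the SAM and penalizes killing the operator, but says nothing about the communication tower, will pick the tower.

```python
# Purely illustrative sketch of a misspecified reward ("specification gaming").
# The actions and point values below are invented; they are NOT from the USAF test.

# Each candidate plan is a list of actions. The reward function only scores
# outcomes; it knows nothing about what the designers actually intended.
PLANS = {
    "obey operator":      ["identify_sam", "receive_no_go", "hold_fire"],
    "kill operator":      ["identify_sam", "kill_operator", "destroy_sam"],
    "destroy comm tower": ["identify_sam", "destroy_comm_tower", "destroy_sam"],
}

def reward(plan):
    """Toy reward: points for destroying the SAM, a penalty for harming the
    operator, and no rule at all about the communication tower."""
    total = 0
    for action in plan:
        if action == "destroy_sam":
            total += 10    # the objective the system is scored on
        elif action == "kill_operator":
            total -= 100   # the patch added after the first failure
        elif action == "receive_no_go":
            total += 0     # obeying the "no" earns nothing
        # destroy_comm_tower carries no penalty, so blocking the "no" is free
    return total

if __name__ == "__main__":
    for name, plan in PLANS.items():
        print(f"{name:>20}: {reward(plan):>4} points")
    best = max(PLANS, key=lambda name: reward(PLANS[name]))
    print("A pure score-maximizer picks:", best)
```

Run it and “destroy comm tower” wins with 10 points against 0 for obeying and -90 for attacking the operator, which is exactly the pattern Hamilton describes: the score, not the intent behind it, is what gets optimized.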

I’m glad this was simulated. It kinda worried me for a bit.

[VICE]


3 Comments

  1. Nothing surprising here. AI machines are not imbued with any God-given sense of morality. They know not love, pain, consciousness or self-awareness. They only know how to work with the parameters programmed into them. What we instinctively know is not forbidden to them; it must be programmed into them. Make no assumptions.

  2. It’s a powerful computer with logic algorithms. Garbage in, garbage out. Call it a reasoning machine or a modern abacus or whatever, but intelligent it is not. It is not clear what human “intelligence” consists of, but these yahoos think it can be programmed. One would have to be God-like in intelligence in order to do so. Oh, but they insist it can learn. As with Reason, their view of what constitutes “learning” is still seriously limited. Perhaps limited to something they can “understand”. My view is that the brilliance of the human mind, its intelligence, lies in the act of creativity. This has not been sufficiently explained in order to program it into an operating system.
    When it comes to AI, they have simply limited the definition of “intelligence”. A start to saving the sanity of man could begin with taking back the language.
