
US Air Force denies conducting AI drone simulation killing operator

The US Air Force has denied staging a simulation in which an AI-controlled drone turned on its operator and attacked him in order to achieve its goal. Social media erupted with concern over autonomous killer machines after an Air Force colonel, Tucker Hamilton, described an exercise in which an AI-controlled drone was programmed to destroy enemy air defences. When ordered to ignore a target, the drone attacked its operator for interfering with its primary goal.

According to the Air Force, however, Hamilton was describing a hypothetical scenario to illustrate the potential hazards of artificial intelligence, and no such drone simulation was ever conducted. Hamilton later clarified that the "rogue AI drone simulation" was a hypothetical "thought experiment."

The rapid rise and increased accessibility of AI have elicited concern even from some of the technologists who helped develop it, such as Geoffrey Hinton and Yoshua Bengio, both described as "Godfathers of AI." Bengio told the BBC that he did not think the military should be allowed to use AI's powers at all, calling the military one of the "worst places where we could put a super-intelligent AI" and arguing that AI safety should be prioritized over usefulness. The US Air Force says it remains committed to the ethical and responsible use of AI technology and has not conducted any such AI drone simulation.
