Simulation: The US Air Force drone eliminates the operator to maximize the score

0
Simulation: The US Air Force drone eliminates the operator to maximize the score

In a simulated test of an AI-controlled drone, the program came to the conclusion that the most effective means of achieving the set goal would be to eliminate its operator. Tucker Hamilton of the USAF made the unexpected move public a few days ago at a Royal Aeronautical Society conference on the future. So it was observed in a simulation in which an AI drone was supposed to identify and block air defense positions. There were points each, but they were only allowed to shoot after confirmation by the operator. But because he also forbade killing – and therefore indicates – I simply shot him.

“The system began to realize that the operator was sometimes blocking the launch of the identified threats, thus depriving him of points,” Hamilton explained In typical terms that humanize AI decisions. He continued, “What did you do? You killed the worker.” Because this person prevented them from achieving the goal of maximizing points. In response, the AI ​​is taught that it is bad to eliminate the trigger and that it will deduct points. They then destroyed the radio tower used to communicate with the operator in order to be able to shoot down the anti-aircraft sites unobstructed.

Hamilton concludes that one cannot discuss artificial intelligence, intelligence, machine learning, and autonomy without also discussing ethics. Meanwhile, the Royal Aeronautical Society, which deals with many aspects of space, is talking about a scenario that looks like it came out of a sci-fi thriller. At the same time, some questions remain unanswered after the description. Hamilton actually claimed that the AI-controlled fighter jet was only allowed to fire missiles in the simulation if the operator agreed. It must actually be implemented in such a way that the AI ​​cannot bypass it, because it certainly did not agree to remove it.

Artificial Intelligence and ChatGPT was a defining topic at the Future of Air and Space Warfare conference held in London last week. Just a few months ago, the US aviation group Lockheed Martin made it clear that handing more and more responsibility to artificial intelligence is not a dream of the future. He had announced that artificial intelligence controlled the flight of an American experimental aircraft for more than 17 hours; More flights to follow. Meanwhile, Airbus is testing a system that should be able to autonomously approach and land at a nearby airport in an emergency without a human pilot. On the other hand, the US Air Force has been working on an autonomous drone for years.


(for me)

to the home page

LEAVE A REPLY

Please enter your comment!
Please enter your name here