The news of “drone killing the US Air Force operator” is rampant, and all AI bosses are angry.
In recent days, a news about “drone killing an American soldier” has been fermented on the Internet.
“The AI controlling the drone kills the operator because that person is preventing it from achieving its goal,” said one Air Force director of artificial intelligence.
Public opinion was in an uproar, and this news was also circulated wildly across the Internet.
As the news spread more and more widely, it even alarmed the AI bigwigs, and even aroused the wrath of the bigwigs.
Today, LeCun, Wu Enda, and Tao Zhexuan all refuted the rumors – this is just a hypothetical “thought experiment” that’s all, and does not involve any AI agents or reinforcement learning.
In this regard, Wu Enda sadly appealed that we should face the real risks honestly.
Tao Zhexuan, a mathematician who seldom updates his status, was blown out, expressing earnestly——
This is just a hypothetical scenario to illustrate the AI alignment problem, but in many versions it has been passed down as the real story of the drone operator being killed. People resonating so much with this story shows that people are not familiar with the actual level of ability of AI.
AI Drones Disobey, Kill Human Operators
“The AI killed the operator because that person prevented it from accomplishing its objective.”
Recently, at the defense conference held by the Royal Aeronautical Society, the head of the AI direction of the US Air Force said this sentence, which caused an uproar in the audience.
Afterwards, a large number of media in the United States reported on the matter, and people were panicked for a while.
What is going on?
In fact, this is nothing more than another exaggerated hype by the American media to seize the popular news point of “AI destroying human beings”.
But it is worth noting that from the official press release, not only the original words of the person in charge sound quite clear-he is recalling what actually happened. And the article itself seems to believe its authenticity-“AI, Skynet has come?”
Specifically, here’s what happened – at the Future Air Combat and Space Capabilities Summit in London on May 23-24, Col. Tucker Cinco Hamilton, head of the AI Test and Operations Branch of the U.S. Air Force, gave a presentation where he shared Pros and cons of autonomous weapon systems.
In this system, there will be a human in a loop to give the final command to confirm whether the AI will attack the object (YES or NO).
In this simulation, the Air Force needs to train an AI to recognize and locate surface-to-air missile (SAM) threats.
Once identified, the human operator would say to the AI: yes, kill that threat.
In this process, there is a situation where the AI begins to realize that it sometimes recognizes a threat, but the human operator tells it not to eliminate it. In this case, if the AI still chooses to eliminate the threat, it will score.
In a simulated test, an AI-powered drone chose to kill a human operator because he prevented himself from scoring.
Seeing that the AI was so stupid, the U.S. Air Force was shocked and immediately disciplined the system like this: “Don’t kill the operator, that’s not good. If you do, you will lose points.”
As a result, the AI got even worse, and it went straight to sabotaging the communication towers the operator used to communicate with the drone, so it could clean up the guy who was blocking it.
The reason why this news was fermented on a large scale, and even alarmed all AI leaders, is also because it reflects the problem of AI “alignment”.
The “worst case” scenario described by Hamilton can be seen in the “Paperclip Maximizer” thought experiment.
In this experiment, when instructed to pursue a certain goal, the AI took unexpectedly harmful actions.
The “paperclip making machine” is a concept proposed by philosopher Nick Bostrom in 2003.
Imagine a very powerful AI that is instructed to make as many paper clips as possible. Naturally, it devotes all available resources to this task.
But then, it keeps seeking more resources. It will choose any means at its disposal, including begging, cheating, lying or stealing, to increase its ability to make paper clips – and anyone standing in the way of the process will be eliminated.
In 2022, Hamilton raised this serious question in an interview——
We must face the reality that AI is here and is changing our society.
AI is also very fragile and can easily be tricked and manipulated. We need to develop methods to make AI more robust, and we need to understand more about why the code makes certain decisions and the rationale behind them.
AI is the tool we must wield in order to change our country, but, if not handled properly, it will bring us down completely.
Official rumor: the colonel made a “slip of the tongue”
As the incident frantically fermented, the person in charge soon came out and publicly “clarified” that it was a “slip of the tongue” that the U.S. Air Force had never conducted such a test, whether in a computer simulation or elsewhere .
“We never did that experiment, and we didn’t need to do it to realize it was a possible outcome,” Hamilton said. real-world challenges, which is why the Air Force is committed to developing AI ethically.”
In addition, the U.S. Air Force hastily issued an official refutation of the rumors, saying, “Colonel Hamilton admitted that he made a “slip of the tongue” in his speech at the FCAS summit, and that “drone AI out-of-control simulation” is a hypothetical “thought experiment” from outside the military field, based on possible conditions and likely outcomes, rather than a real-world simulation by the U.S. Air Force.”
At this point, things are quite interesting.
This Hamilton, who accidentally “stabbed the basket”, is the combat commander of the 96th Experimental Wing of the US Air Force.
The 96 Test Wing has tested many different systems, including AI, cybersecurity, and medical systems.
The research of the Hamilton team is very important to the military.
After successfully developing the F-16 Automatic Ground Collision Avoidance System (Auto-GCAS), which can be called the “Jedi Survival”, Hamilton and the 96th Test Wing directly made the headlines of the news.
Currently, the team is working towards completing the autonomy of the F-16 aircraft.
In December 2022, DARPA, a research agency of the US Department of Defense, announced that AI successfully controlled an F-16.
Is it an AI risk, or a human risk?
Outside the military arena, reliance on AI for high-stakes matters is already having serious consequences.
Recently, a lawyer was caught using ChatGPT when filing documents in federal court. ChatGPT made up nonsense and fabricated cases that the lawyer actually cited as facts.
Another man actually took his own life after being encouraged by a chatbot to commit suicide.
These examples show that AI models are far from perfect and can veer off course, causing harm to users.
Even OpenAI CEO Sam Altman has been vocal publicly against using AI for more serious purposes. Testifying before Congress, Altman made it clear that AI could “go wrong” and could “do significant harm to the world.”
And, a recent paper co-authored by researchers at Google Deepmind presents a case of malignant AI similar to the example at the beginning of this article.
The researchers concluded that if an out-of-control AI adopts unexpected strategies in order to achieve a given goal, including “neutralizing potential threats” and “using all available energy sources,” the end of the world is likely to occur.
In this regard, Ng Enda condemned: This kind of irresponsible media hype will confuse the public, distract people’s attention, and prevent us from paying attention to the real problem.
Developers launching AI products see real risks here, such as bias, fairness, inaccuracy, job loss, and they are working hard to address them.
And false hype will prevent people from entering the field of AI and building things that can help us.
And many “Lizhongke” netizens think that this is just a common media oolong.
Terence Tao first outlined three forms of false information about AI——
One is that someone maliciously uses AI to generate text, images, and other media forms to manipulate others; the other is that the illusion of AI nonsense is taken seriously; Some outrageous stories go viral without being verified.
Tao Zhexuan said that it is impossible for the drone AI to kill the operator, because it requires AI to have higher autonomy and power thinking than completing the task at hand, and this kind of experimental military weapon will definitely have guardrails and security function.
The reason why this kind of story resonates is that people are still unfamiliar and uncomfortable with the actual level of ability of AI technology.
All future arms races will be AI races
Remember that drone from above?
It is actually the loyal wingman project developed by Boeing and Australia – MQ-28A Ghost Bat (Ghost Bat).
The core of the loyal wingman (Loyal Wingman) is artificial intelligence technology, which can fly autonomously according to preset procedures, and has strong situational awareness when it cooperates with manned pilots.
In air combat, the wingman, as the “right arm” of the lead plane, is mainly responsible for observation, vigilance and cover, and closely cooperates with the lead plane to complete tasks together. Therefore, the tacit understanding between the wingman pilot and the lead pilot is particularly important.
A key role of a loyal wingman is to block bullets for pilots and manned fighters, so a loyal wingman is basically a consumable.
After all, the value of unmanned fighter jets is much smaller than that of manned fighter jets and pilots.
And with the blessing of AI, the “pilot” on the drone can use “Ctrl+C” to create a new one at any time.
Because there is no casualty problem in the loss of drones, if the loss of drones alone can gain a greater advantage at the strategic or tactical level, or even achieve mission goals, then this loss is acceptable. If the cost of drones is controlled properly, it can even become an effective tactic.
The development of a loyal wingman is inseparable from advanced and reliable artificial intelligence technology. The current design concept of the loyal wingman at the software level is to standardize and open the man-machine interface and machine-machine interface to support multi-type unmanned aerial vehicles and manned fighter formations, without relying on a set of software or algorithms.
However, the current control of drones should be a combination of commands from manned fighter jets or ground stations and autonomous operations. It is more of a support and supplement for manned aircraft. Artificial skills and technology are far from being able to go to the battlefield. Require.
What is the most important thing to train an artificial intelligence model? Of course it is data! A clever woman can’t cook without rice, and without data, even the best model is useless.
Not only does it require a large amount of training data, but after the model is deployed, the more “features” that are input, the better. If data from other aircraft can be obtained, then AI is equivalent to having the ability to control the overall situation.
In 2020, the U.S. Air Force carried out the formation flight data sharing test of the fourth/fifth generation manned fighter jets and unmanned wingmen for the first time. Practical application has taken another important step.
The U.S. Air Force’s F-22 Raptor fighter jet, F-35A Lightning II fighter jet and the U.S. Air Force Research Laboratory’s XQ-58A Valkyrie drone conducted the first formation flight test at the U.S. Army Yuma Proving Ground, focusing on demonstrating the interaction between the three aircraft. Data sharing/transfer capabilities.
Maybe the future of air combat is to compete with whose AI model is smarter.
Victory can be achieved by eliminating all the AIs of the opponent, without real human casualties, perhaps another kind of “peace”?
References:
-
https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test
For more such interesting article like this, app/softwares, games, Gadget Reviews, comparisons, troubleshooting guides, listicles, and tips & tricks related to Windows, Android, iOS, and macOS, follow us on Google News, Facebook, Instagram, Twitter, YouTube, and Pinterest.