The United States government is strongly considering the use of AI-controlled drones that can make autonomous choices on whether to kill human targets, according to a report by the New York Times.
Countries such as the United States, China, and Israel are developing lethal autonomous weapons that can use AI to choose targets, the Times noted.
According to critics, the deployment of “killer robots” would be a frightening development, entrusting life and death battlefield choices to machines with little or no human oversight.
Several countries are pressing the United Nations for a binding resolution prohibiting the use of AI killer drones; however, the US is among a handful of nations, including Russia, Australia, and Israel, that oppose any such move.
“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”
According to a notice released earlier this year, the Pentagon is working on deploying swarms of thousands of AI-enabled drones.
In an August address, US Deputy Secretary of Defense Kathleen Hicks stated that technologies such as AI-controlled drone swarms will allow the US to offset the Chinese People’s Liberation Army’s (PLA) numerical superiority in weapons and manpower.
“We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat,” she said, Reuters reported.
The Air Force secretary, Frank Kendall, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.
“Individual decisions versus not doing individual decisions is the difference between winning and losing — and you’re not going to lose,” he said.
“I don’t think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves.”
According to New Scientist, Ukraine used AI-controlled drones in its conflict with Russia in October, although it is unknown whether the drones caused human casualties.
Stuart Russell, a senior AI scientist at the University of California, Berkeley, and others will screen a video on autonomous weapons on Monday during an event held by the Campaign to Stop Killer Robots at the United Nations Convention on Conventional Weapons.
The campaign warns: “Machines don’t see us as people, just another piece of code to be processed and sorted. From smart homes to the use of robot dogs by law enforcement, A.I. technologies and automated decision-making are now playing a significant role in our lives. At the extreme end of the spectrum of automation lie killer robots.”
“Killer robots don’t just appear – we create them,” the campaign added. “If we allow this dehumanisation we will struggle to protect ourselves from machine decision-making in other areas of our lives. We need to prohibit autonomous weapons systems that would be used against people, to prevent this slide to digital dehumanisation.”
The creation and deployment of autonomous weapons, such as drones, tanks, and automatic machine guns, would be disastrous for human security and freedom, according to Russell, and the window for halting their development is rapidly closing.
“The technology illustrated in the film is simply an integration of existing capabilities. It is not science fiction. In fact, it is easier to achieve than self-driving cars, which require far higher standards of performance,” Russell said.
The Campaign to Stop Killer Robots also points out that because AI-powered machines are relatively cheap to manufacture, “critics fear that autonomous weapons could be mass produced and fall into the hands of rogue nations or terrorists who could use them to suppress populations and wreak havoc, as the movie portrays.”
“A treaty banning autonomous weapons would prevent large-scale manufacturing of the technology,” the campaign notes. “It would also provide a framework to police nations working on the technology, and the spread of dual-use devices and software such as quadcopters and target recognition algorithms.”
“Professional codes of ethics should also disallow the development of machines that can decide to kill a human,” Russell said.
In a 2017 episode of “Black Mirror,” killer robot dogs roam the earth, relentlessly pursuing human beings the machines detect as a ‘threat.’
The creator of the memorable episode, Charlie Brooker, explained his thinking behind his vision of a robopocalypse in an interview with Entertainment Weekly.
Brooker was asked about his inspiration behind the story, which the interviewer framed as a cross between the “Boston Dynamics videos on YouTube crossed with Night of the Living Dead.”
“That’s actually scarily correct,” Brooker said. “It was from watching Boston Dynamics videos, but crossed with — have you seen the film All Is Lost? I wanted to do a story where there was almost no dialogue. And with those videos, there’s something very creepy watching them where they get knocked over, and they look sort of pathetic laying there, but then they slowly manage to get back up.”
Last week, the Los Angeles Police Department used a robot dog to end an armed standoff.
A SWAT squad member remotely controlled the robot as it approached the bus and entered through the door. It then moved around the bus, sending officers live video feeds.
The robot’s speaker was also used to talk with the man and encourage him to surrender. After approximately an hour and 45 minutes, the man woke up and exited the bus, where he was apprehended by police.
Artificial intelligence is also being used in aerial drones. DroneSense is a public safety drone software platform that transforms raw data gathered by drones into actionable intelligence for police, fire, and other emergency personnel. The DroneSense OpsCenter allows several drone users to work together, examine what each drone observes, and even track a drone’s flight path in real time.
Hundreds of teams have utilized the DroneSense public safety platform to address a range of public safety situations. The AI-powered program helps SWAT teams acquire scene intelligence, analyze damage after storms and tornadoes, and even uses thermal imagery to identify missing people.
Neurala develops deep learning neural network software that helps drones search for and identify people of interest in crowds. It can also inspect massive industrial equipment such as telephone towers and produce real-time damage reports. The startup says its AI-powered software takes just 20 minutes to learn a person’s image in order to scan crowds for that individual, rather than the industry-standard hours or days.
Scale uses AI and machine learning to help train drones for aerial imaging. Its machine learning software helps drones identify, label, and map everything from residences in a community to particular objects such as vehicles.
AeroVironment makes autonomous, AI-powered military drones for a variety of applications. The company’s aircraft range from a three-foot-long, hard-to-detect spy plane to the Switchblade, which is outfitted with a precision attack payload for military operations.
The firm’s UAVs are also used in agriculture to map field acreage, detect crop health concerns, and assess irrigation problems.
While artificial intelligence is obviously a useful tool that can immensely benefit mankind, it is also a dangerous weapon that can be used to impose a totalitarian regime on a populace.
While nuclear weapons were the existential threat to humanity that had to be contained during the Cold War, artificial intelligence is the emerging threat of the Information Age.
It will take a growing awareness of the pros and cons of AI, as well as the development of new institutions to ensure transparency and accountability, to prevent AI from being abused by politicians to impose oppressive regimes upon humanity.