For decades, writers, artists, and scientists have wondered how robots and artificial intelligence would one day alter human life. Would they serve as an existential threat? Change the face of warfare? Eliminate menial labor? Or even act as friends and spouses? Today it seems one of the most fundamental fears regarding artificial intelligence has been confirmed: scientists in South Korea are allegedly working with defense contractors to create “killer robots,” potentially accelerating the arms race and changing warfare forever.
Killer Robots in South Korea?
Last month, more than 50 scientists specializing in artificial intelligence called for a boycott of the Korea Advanced Institute of Science and Technology (KAIST) over allegations that scientists within the institution are working with defense contractors to create killer robots. The university has denied claims that its scientists are looking “to accelerate the arms race to develop such weapons” — that is, killer robots.
A statement issued by the Australia-based Centre on Impact of AI and Robotics (CIAIR) read:
As researchers and engineers working on artificial intelligence and robotics, we are greatly concerned by the opening of a “Research Center for the Convergence of National Defense and Artificial Intelligence” at KAIST in collaboration with Hanwha Systems, South Korea’s leading arms company. It has been reported that the goals of this Center are to “develop artificial intelligence (AI) technologies to be applied to military weapons, joining the global competition to develop autonomous arms.”
CIAIR refers to a statement issued in February by KAIST announcing that the university would open an AI research center in collaboration with the defense company Hanwha Systems. That statement has since been deleted from the university’s website. A South Korean newspaper, the Korea Times, had reported earlier this year that the university planned to study killer robots and how advanced technology could allow unmanned drones, submarines, and missiles to make their own decisions in battle.
The statement from CIAIR goes on to declare a boycott of the university:
We therefore publicly declare that we will boycott all collaborations with any part of KAIST until such time as the President of KAIST provides assurances, which we have sought but not received, that the Center will not develop autonomous weapons lacking meaningful human control.
The statement from CIAIR also outlined the dangers of robotics and AI being used in war:
If developed, autonomous weapons will be the third revolution in warfare. They will permit war to be fought faster and at a scale greater than ever before. They have the potential to be weapons of terror. Despots and terrorists could use them against innocent populations, removing any ethical restraints. This Pandora’s box will be hard to close if it is opened. As with other technologies banned in the past like blinding lasers, we can simply decide not to develop them. We urge KAIST to follow this path, and work instead on uses of AI to improve and not harm human lives.
The president of KAIST, Shin Sung-chul, released a statement Wednesday claiming the university is not working to develop “lethal autonomous weapons systems or killer robots” and assuring that the university values “human rights and ethical standards to a very high degree.”
The organizer of the boycott, Toby Walsh, told CNN he was happy with the statement but was not yet convinced the boycott should be ended: “I still have a few question marks about what they intend to do but broadly speaking they have responded appropriately.”
The Threats of Killer Robots
Walsh points out that the situation between North and South Korea is already tense enough without adding killer robots to the mix. “Developing autonomous weapons would make the security situation on the Korean peninsula worse, not better,” he told the Guardian. “If these weapons get made anywhere, eventually they would certainly turn up in North Korea and they would have no qualms about using them against the South.”
In the past, warfare required sending thousands of people into battle, which in turn demanded thousands of pounds of supplies and complicated supply routes. It was fought face-to-face, where the loss of human life and culture could be measured by any onlooker or soldier. After the Vietnam War, when the devastating effects of war could be broadcast directly into any living room, the antiwar movement grew exponentially. In short, war was expensive, complicated, high in casualties, and not exactly popular with the general population.
In a world of AI weapons, war could be waged at the click of a button, removing many of the factors that have hindered war to date. Walsh underscores this point: “Previously if you wanted to do harm you needed an army to do as you needed, hundreds of people to follow your orders, now you just need one programmer.” Some of these effects have already been seen with drone attacks in the Middle East. Suddenly, killer robots and AI-driven warfare, once the fantastical fears of dystopian writers and the fanciful plots of Will Smith movies, now seem like an unavoidable reality.
As Walsh points out, AI can be used to improve human lives. Artificial intelligence and robotics could be developed to bring humanitarian aid to populations besieged by war. In Syria, hundreds are trapped beneath rubble and in areas far too dangerous for humanitarian aid to reach. Could AI instead be developed to reduce civilian casualties in war?
The UN has called for a ban on autonomous weapons and killer robots. On Monday, the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems will meet to discuss the threat of AI weapons and the potential for a worldwide ban on the development of autonomous weapons. Representatives of 123 nations will take part in the meeting in Vienna.