Science, and more specifically artificial intelligence, could be used to combat religious extremism which aims to drag us back into the dark ages.
Writing in The Diplomat, Lucas Bento sets out the case for science disrupting religious belief systems and providing tactical benefits in current armed conflicts. He argues that science can “challenge extremist religious beliefs that do not hold up to observable experiments” and therefore reduce the recruiting power of terrorist groups.
Autonomous lethal technology raises ethical concerns
Right now science cannot penetrate these extremist groups, who use religious superstition to control others. It may be decades before this situation changes, but in the short term science could provide tactical advantages in the struggle against terrorism.
One way in which this could become a reality is through the use of killer robots, machines which use artificial intelligence to track and eliminate targets without the need for human intervention.
Some have raised concerns over killer robots, and scientists, NGOs and states have proposed a preemptive ban on their development and use. The fear is that by removing the danger to human life, killer robots could encourage conflict and alter the course of history.
Technological advances have provoked opposition throughout history
Worries over the seemingly unstoppable march of technology have been around for years. English writer Samuel Butler wrote in 1863 that “the machines will hold the real supremacy over the world and its inhabitants,” recommending that humans return to the “primeval condition of the race” as a precaution.
Throughout history the introduction of revolutionary technologies has caused technophobic attitudes to surge. In 19th century England, a group known as the Luddites smashed mechanical looms that they believed would cause unemployment, and 17th century Japan refused to use firearms.
Artificial intelligence is different, it must be said, because it puts control and accountability beyond human hands. Even so, one possible use for the technology is in hunting terrorists such as those of ISIS.
ISIS ripping apart cultural fabric of Middle East
ISIS is conducting a brutal war against civilian populations using archaic tactics such as rape, beheadings and torture. The group has taken to destroying artifacts of human culture in addition to humans themselves, razing World Heritage Sites to the ground in an effort to enforce their extreme religious beliefs.
In particular, the destruction of the ancient city of Palmyra raised international ire, and was called a “war crime” by UNESCO. Irina Bokova, chief of the organization, said that ISIS had carried out the “most brutal, systematic” destruction of human heritage since World War II.
ISIS seeks not only to destroy the current members of communities but also the foundations on which those communities were built. This desire to erase culture as a source of social identity makes the group particularly dangerous.
By destroying the history of art, music, knowledge and religion, ISIS threatens to return humanity to a less-developed state, just as Butler proposed. In response, Italian Minister of Cultural Heritage and Tourism Dario Franceschini proposed that the UN form an armed force to protect cultural sites.
Killer robots could be used to fight terrorist groups such as ISIS
While it is unlikely that Western powers will send ground troops, this would be the perfect situation in which to use killer robots. Bento believes that artificial intelligence could be used offensively, to hunt terrorists down, but also defensively, to protect convoys, refugee camps, schools, hospitals and museums.
He says that first-generation robots are likely to involve some level of human supervision, perhaps to approve the firing of weapons. But once later generations develop capabilities which satisfy international humanitarian law and operational safety it is possible that they could operate completely independently.
Although ISIS may be defeated with conventional weapons before killer robots become a reality, the group is a good example of a situation in which autonomous lethal technology could be deployed. Properly deployed killer robots capable of distinguishing between targets and non-targets could prove useful in the quest for world peace.
Killer robots may raise important ethical questions, but they would also reduce the political costs associated with putting soldiers into conflict zones and putting human lives at risk. Science could prove to be a powerful force for peace, or at least another weapon in the fight against terror.
The difficulty surely arises in deciding which conflicts killer robots should be involved in, and in controlling their deployment for other purposes. Science fiction is not short of stories of dystopian worlds run by highly intelligent machines, and it is important that mankind act responsibly.