Engineers from Google’s DeepMind division, which looks at developing artificial intelligence, in conjunction with scientists from Oxford University are working on a ‘kill switch’ for humans to be able to override any rogue AI agent.
AI and their human slaves
Fears of intelligent technology, turning on its human masters, has been a staple topic of books and films since the first computers were invented. From HAL in 2001: Space Odyssey to the Terminator franchise, it is something we have become used to seeing in fiction.
However, with advances in new technologies in the AI field, we are seeing AI have the potential to be ever more powerful. Only this year, Google’s AI managed to defeat Korean grandmaster Lee Sedol four to one in the complicated game of Go. This potential is great, but we need to make sure it is harnessed for good.
Humans to remain in control
An academic paper has been released showing how steps are being taking to combat this dystopian future of a world run by computers.
The danger is, that as these AI machines have greater capacity for learning and ‘thought’, could they learn to over-ride human influence. Many forward thinkers, with strong understanding of the science involved, have voiced concerns.
British scientist Stephen Hawking has publicly announced that he thinks human downfall may come from AI, while Elon Musk, the brains behind Tesla, has gone as far as to donate $10 million to the Future of Life Institute, which is a global research program aimed at keeping AI beneficial to humanity.
Doctors, Laurent Orseau from DeepMind and Stuart Armstrong, a scientist from Oxford University’s Future of Humanity Institute, fathered the paper and wrote, “Now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions,”
The academic paper sets out the challenge. As AIs learn through reinforcement, how do we stop them learning to avoid or change any regular interventions made by humans?
A small, but perhaps pertinent example they gave of how there might be problems, was of an AI taught to play Tetris. The AI went on to learn to pause the game by itself so that it would never lose.
First Steps
Dr. Orseau acknowledged people’s fears, “It is sane to be concerned – but, currently, the state of our knowledge doesn’t require us to be worried.” He continued, “It is important to start working on AI safety before any problem arises. AI safety is about making sure learning algorithms work the way we want them to work.”
“No system is ever going to be foolproof – it is matter of making it as good as possible, and this is one of the first steps,” continued Dr. Orseau.
People working in the field have welcomed the research. A professor of artificial intelligence at the University of Sheffield, Noel Sharkey has commented, “Being mindful of safety is vital for almost all computer systems, algorithms and robots.”
He added, “Paramount to this is the ability to switch off the system in an instant because it is always possible for a reinforcement-learning system to find shortcuts that cut out the operator.”
Microsoft recently unveiled an AI chatbot to talk on Twitter. However, through the reinforced learning techniques, it soon found itself unleashing various racist and sexist comments and Microsoft had to shut it down after just one day.
AI is certainly an exciting field, and once we have the necessary checks and balances, it is going to be exciting to see where it can take us, but it is still early and adequate safety is paramount.