Google develops “professional level gamer” AI that can beat humans at most video games
Artificial intelligence continues to advance at a rapid pace. In a new development reported this week, Google researchers announced they had created a program that can consistently beat almost all humans in several classic arcade games.
The Google team created an artificial intelligence that can teach itself to play Atari 2600 video games, with only minimal background information provided to help the AI learn how to play.
The new AI is designed to mimic some of the principles of the human brain, and as a result, the program rapidly learns to play each game at the level of a professional human gamer, or even better, on almost all of the games.
The research was published on Wednesday, February 25, in the scientific journal Nature.
Statement from study coauthor
According to study co-author Demis Hassabis, an AI researcher with Google DeepMind based in London, this is the first time an artificial intelligence (AI) system that can learn to excel at a wide range of tasks has been developed.
He also noted that improved future versions of this AI could be used in other decision-making applications such as driverless cars or weather forecasting.
More on Google’s gaming artificial intelligence
Humans and other animals learn by reinforcement, and the Google researchers say their new AI learns on the basis of the same principle. In humans, reinforcement happens when pleasurable experiences cause the brain to release dopamine, for example. To learn in the real world, the brain has to interpret input from the senses, use that input to generalize from past experiences, and then apply those experiences to new situations.
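The reinforcement principle described above can be illustrated with a simple sketch. The snippet below shows tabular Q-learning, the simpler ancestor of the approach in the Nature paper: an agent nudges its estimate of an action's long-term value toward the reward it just observed. The state names, action names, and reward values here are invented for illustration only, not taken from the DeepMind system.

```python
ALPHA = 0.1   # learning rate: how strongly each experience updates the estimate
GAMMA = 0.9   # discount factor: how much future reward matters

q_table = {}  # maps (state, action) -> estimated long-term reward

def q_value(state, action):
    return q_table.get((state, action), 0.0)

def update(state, action, reward, next_state, actions):
    # Nudge the estimate toward the observed reward plus the best
    # value the agent believes the next state can yield.
    best_next = max(q_value(next_state, a) for a in actions)
    old = q_value(state, action)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# One simulated step: firing while an alien is ahead scored a point.
actions = ["left", "right", "fire"]
update("alien_ahead", "fire", reward=1.0, next_state="alien_gone", actions=actions)
print(q_table[("alien_ahead", "fire")])  # 0.1
```

With enough repeated play, rewarding actions accumulate high estimates, which is the dopamine-like feedback loop the researchers describe.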
Although IBM’s Deep Blue computer defeated chess grandmaster Garry Kasparov, and the artificially intelligent Watson computer won the quiz show “Jeopardy” in 2011, those feats were largely based on preprogrammed abilities, Hassabis noted. The new Google DeepMind AI can learn on its own using reinforcement.
Hassabis and colleagues created an artificial neural network based on “deep learning,” a machine-learning technique that constructs progressively more abstract representations of raw data. Google used deep learning some years ago to teach a network of computers to recognize cats from a multitude of YouTube videos, and this type of algorithm is also used in other Google products, including search and translation.
The tech titan’s newest AI is named the “deep Q-network,” and it can run on a normal desktop computer.
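The core idea of a deep Q-network is to replace a value lookup table with a function that maps raw observations to one value per action, then pick actions mostly greedily. The sketch below is a heavily simplified illustration: a single linear layer stands in for the deep convolutional network the real system trains on game pixels, and the feature vector, action names, and epsilon value are all invented for the example.

```python
import random

random.seed(0)
ACTIONS = ["noop", "left", "right", "fire"]
N_FEATURES = 4  # stand-in for preprocessed screen pixels

# Randomly initialized "network": one row of feature weights per action.
weights = [[random.uniform(-0.1, 0.1) for _ in range(N_FEATURES)]
           for _ in ACTIONS]

def q_values(observation):
    # Linear layer: dot product of the observation with each action's weights.
    return [sum(w * x for w, x in zip(row, observation)) for row in weights]

def choose_action(observation, epsilon=0.05):
    # Epsilon-greedy: mostly exploit the best-looking action, occasionally
    # explore a random one -- the trade-off that lets an agent discover
    # strategies it was never explicitly taught.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    qs = q_values(observation)
    return ACTIONS[qs.index(max(qs))]

observation = [0.2, 0.0, 0.9, 0.4]
print(choose_action(observation))
```

Training then consists of repeatedly adjusting the network's weights so its predicted values match observed rewards, which a modest desktop computer can handle for games of this scale.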