AI Teaches Itself Chess in 72 Hours, Plays at International Master Level


Neural networks are widely considered one of the most powerful and promising artificial intelligence technologies available today. That’s why it’s big news that Matthew Lai at Imperial College London has developed an impressive new artificial intelligence called Giraffe based on neural networks.

With the September 4th publication of his dissertation, Lai announced that Giraffe had learned to play chess by evaluating positions much as humans do, rather than in the manner of conventional chess engines.

Notably, after just 72 hours of learning to play chess, the new AI already played at close to the level of the best conventional chess engines, most of which have been developed and tuned over many years. Giraffe’s three days of training brought it to the level of a FIDE International Master, placing it in the top 2.2% of tournament chess players worldwide.


Neural networks process information in a way inspired by the human brain. A network consists of several layers of nodes whose connections change as the system learns. Training involves analyzing millions of examples to fine-tune those connections until the network produces the desired output for a given input, such as recognizing a face in a picture or finding the best chess move in a particular position.
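To make the idea of fine-tuning connections concrete, here is a minimal sketch (not Giraffe's actual code, which is far larger): a single sigmoid "neuron" learning the logical AND function by gradient descent, adjusting its connection weights after each example.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# training examples: (inputs, target output) for logical AND
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # connection weights
b = 0.0                                        # bias
lr = 1.0                                       # learning rate

for epoch in range(2000):
    for (x1, x2), target in examples:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = out - target
        # gradient step: nudge each connection to reduce the error
        grad = err * out * (1 - out)
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b    -= lr * grad

for (x1, x2), target in examples:
    print((x1, x2), round(sigmoid(w[0] * x1 + w[1] * x2 + b)))
```

A real deep network has millions of such weights spread across many layers, but the principle is the same: repeated small corrections driven by example data.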

Chess – More on neural networks and Giraffe

Neural networks have become much more powerful over the last few years thanks to two advances. The first is a better understanding of how to fine-tune networks as they learn, an area where faster processors make a big difference. The second is the growing availability of massive annotated datasets on a range of subjects that can be used to train the networks.

This means that computer scientists can now train much bigger networks organized into many more layers. These “deep neural networks” are now among the most capable pattern-recognition systems available and routinely outperform humans in tasks such as face recognition and handwriting recognition.

Notably, Lai’s network comprises four layers that examine each position on the board in three different ways.

The first perspective captures the global state of the game, such as the number and type of pieces on each side, whose turn it is, castling rights, and the like. The second focuses on piece-centric features such as the location of each piece on each side, while the third maps the squares that each piece can attack or defend.
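The three perspectives can be sketched on a toy position. This is purely illustrative (a hypothetical representation, not Lai's actual feature extractor): squares are (file, rank) tuples, and only knight moves are generated as a stand-in for full move generation in the third perspective.

```python
# A tiny hand-rolled position: square -> (side, piece type)
position = {
    ("e", 1): ("white", "king"),
    ("e", 8): ("black", "king"),
    ("d", 4): ("white", "queen"),
    ("f", 6): ("black", "knight"),
}
side_to_move = "white"

# Perspective 1 -- global state: whose turn, material counts, etc.
def global_features(position, side_to_move):
    counts = {}
    for side, kind in position.values():
        counts[(side, kind)] = counts.get((side, kind), 0) + 1
    return {"turn": side_to_move, "material": counts}

# Perspective 2 -- piece-centric: where each piece stands.
def piece_features(position):
    return [(side, kind, square) for square, (side, kind) in position.items()]

# Perspective 3 -- square-centric: squares a piece attacks or defends
# (knight geometry only, as a simplified example).
def knight_targets(square):
    files = "abcdefgh"
    f, r = files.index(square[0]), square[1]
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return [(files[f + df], r + dr) for df, dr in deltas
            if 0 <= f + df < 8 and 1 <= r + dr <= 8]

print(global_features(position, side_to_move))
print(piece_features(position))
print(knight_targets(("f", 6)))  # squares the black knight covers
```

In Giraffe these three views are encoded as numeric input vectors and fed into the network together, so the layers can combine global, piece-level, and square-level information when evaluating a position.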

Lai trains his network with data taken from real chess games. Assembling the dataset is a challenge: it must reflect a complete and correct distribution of positions, and it must include many examples of unequal positions beyond those typically seen in high-level chess matches.

Furthermore, the dataset must be comprehensive. The huge number of connections inside a neural network can only be trained with a very large dataset; if the dataset is too small, the network will not learn to recognize the wide variety of patterns that actually occur.

Lai generated his data by randomly choosing five million positions from a pre-existing database of computer chess games. He then added a random legal move to each position for greater variety before using it for training. His dataset included 175 million positions generated this way.
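The sampling-and-perturbing idea can be sketched as follows. This is a hedged illustration, not Lai's pipeline: `legal_moves` and `apply_move` are hypothetical stand-ins for a real chess move generator, and the database is a list of placeholder strings.

```python
import random

def legal_moves(position):
    # placeholder: a real engine would generate the actual legal moves
    return ["move_a", "move_b", "move_c"]

def apply_move(position, move):
    # placeholder: returns the position after playing the move
    return position + "/" + move

def build_training_set(database, n_samples, seed=0):
    rng = random.Random(seed)
    # 1. randomly sample positions from the game database
    samples = [rng.choice(database) for _ in range(n_samples)]
    # 2. perturb each sampled position with one random legal move,
    #    introducing positions not typical of strong engine play
    return [apply_move(p, rng.choice(legal_moves(p))) for p in samples]

database = ["pos%d" % i for i in range(100)]  # stand-in for real games
training = build_training_set(database, 10)
print(len(training))
```

The random extra move is what pushes the dataset beyond the narrow band of balanced positions that strong engines produce, giving the network examples of unequal positions as well.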
