Michael Mauboussin is considered an expert in the field of behavioral finance and has written several well-known books on the topic, including Think Twice: Harnessing the Power of Counterintuition and More Than You Know: Finding Financial Wisdom in Unconventional Places. See his latest piece from Credit Suisse below.
Michael Mauboussin: Lessons from Freestyle Chess
Merging Fundamental and Quantitative Analysis
“Weak human + machine + superior process was greater than a strong computer and, remarkably, greater than a strong human + machine with an inferior process.” – Garry Kasparov
- In the late 1990s, machine beat man in the game of chess. Software programs can now outplay humans in most board and card games, with the exception of poker and Go.
- In freestyle chess, humans are allowed to use computers to augment their play. Currently, man plus machine is better than man or machine.
- While chess and investing have important differences, they also have useful similarities.
- The question is whether a melding of fundamental and quantitative methods can improve on either approach by itself.
- Fundamental analysts can leverage the computer’s ability to gather data and crunch numbers.
- Quantitative analysts can leverage the analyst’s ability to sort causality and detect patterns.
Michael Mauboussin: Machine + Man > Machine or Man
You can mark May 11, 1997, as the date that machine beat man in chess. On that Sunday, Garry Kasparov, the world champion, lost the decisive last game to Deep Blue, a computer that IBM built. With that, Deep Blue defeated Kasparov in the six-game match 3 ½ to 2 ½. Kasparov, who was the number one player for an astounding 20 years and is perhaps the greatest player of all time, called that final showdown “the worst game of my career.”
Kasparov’s willingness to face IBM’s best demonstrated that he embraced machine play, and he has been a great ambassador for the game. But he has lingering misgivings about Deep Blue’s victory. “I don’t have any proof of foul play,” he wrote, but “I live in doubt.”2 There’s little doubt that the win gave IBM a boost: The stock’s advance the next day, net of the market’s move, added $1.7 billion to the company’s market capitalization.
Notwithstanding Kasparov’s reservations about IBM’s tactics in that match, it is now well established that machines can beat humans in chess. One way to measure the progress of computers is with the Elo rating system, a method for calculating the relative skill of players in head-to-head competition. Today’s best computer programs have Elo ratings of about 3,200, more than 300 points higher than those of the world’s greatest players. That advantage suggests that the stronger player is expected to win close to 90 percent of the points in a match.3 To add some context, a bright beginner would have a rating of about 600, and a grandmaster must reach the level of 2,500.
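The expected-score figure follows from the standard Elo formula, which converts a rating gap into the share of points the stronger player should take. As a minimal sketch (this is the textbook Elo formula, not something stated in the piece itself; the 2,900-rated opponent is a hypothetical stand-in for a top human player):

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected share of points for player A against player B
    under the standard Elo model (logistic curve, 400-point scale)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 3,200-rated program against a hypothetical 2,900-rated human:
# a 300-point gap implies roughly an 85 percent expected score,
# and a 400-point gap pushes it past 90 percent.
print(round(elo_expected_score(3200, 2900), 3))  # ~0.849
print(round(elo_expected_score(3200, 2800), 3))  # ~0.909
```

This is why "more than 300 points higher" translates into winning "close to 90 percent of the points": the Elo curve is steep enough that each additional 100 points of rating gap adds several percentage points of expected score.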
Chess, which the renowned German writer Goethe reportedly called “a touchstone of the intellect,” was the gold standard for machine intelligence from an early date.4 But computers were beating humans in other games well before Deep Blue’s success. Exhibit 1 shows the date at which computers achieved superhuman status in a number of games over the past couple of decades. Most of these games are largely computational, which plays to the computer’s strength.
The victory of Watson, a “cognitive technology” also created by International Business Machines Corp. (NYSE:IBM), over champions of the game of Jeopardy! was especially striking because Watson had to be able to handle complex language as well as vast amounts of information.
Go is also notable in that software programs have yet to beat the best players. Go has different features than chess, including a larger board, fewer restrictions on moves, and the fact that pieces get added, not removed, as the game progresses. Still, artificial intelligence researchers expect computer programs to beat the world champion in about a decade’s time.