Michael Mauboussin – What Being Wrong Can Teach Us About Being Right

Michael Mauboussin is the author of The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing (Harvard Business Review Press, 2012), Think Twice: Harnessing the Power of Counterintuition (Harvard Business Press, 2009) and More Than You Know: Finding Financial Wisdom in Unconventional Places-Updated and Expanded (New York: Columbia Business School Publishing, 2008). More Than You Know was named one of “The 100 Best Business Books of All Time” by 800-CEO-READ, one of the best business books by BusinessWeek (2006) and best economics book by Strategy+Business (2006). He is also co-author, with Alfred Rappaport, of Expectations Investing: Reading Stock Prices for Better Returns (Harvard Business School Press, 2001).

Visit his site at: michaelmauboussin.com/

Introduction

Information and circumstances change constantly in the worlds of investing and business. As a consequence, we have to constantly think about what we believe, how well those beliefs reflect the world, and what tools we can use to sharpen our decisions. Because we operate in a world where we can succeed only with a certain probability, we have to learn from our mistakes. Hence, the theme for the Thought Leader Forum in 2016 was “What Being Wrong Can Teach You About Being Right.”

This year’s forum featured a venture capitalist, a computer scientist, an economist who focuses on decisions, and a leading sports executive. Each explored an area of how our thinking and decisions can come up short of the ideal. We heard about how assumptions deeply shape how you assess a company’s potential and how well-intentioned incentive systems can go awry. There was an exploration of how computers, through machine learning, can serve as a new source of knowledge, complementing evolution, experience, and culture. Notwithstanding the potential benefits of augmenting our intelligence through computers, we discussed why we humans have an aversion to algorithms and how to overcome it. And then there is the issue of the old and new guard: how can we convince some who have been successful in an old regime to accept new and better ways of doing things?

The theme of “what being wrong can teach you about being right” has lessons to teach us about naïve realism, man versus machine, and the role of change. Naïve realism is the sense that our view of the world is the correct one. But when confronted with reality, we need to revisit our beliefs.

For example, when we encounter someone whose beliefs differ from ours, we tend to adopt one of three attitudes so that we can preserve our own position. First, we might assume the other person is merely unequipped with the facts, so simply sharing them will swing that person to our side. Second, we believe that even with the facts, the other person lacks the mental capacity to see the consequences as we do. We can write off those people. Finally, there may be people who understand the facts as we do but turn their backs on what we perceive to be the truth. We categorize those people as evil.

Machine learning and artificial intelligence are again hot terms. Google DeepMind’s AlphaGo program, which beat a human champion in the board game of Go much sooner than most experts had predicted, is emblematic. The question is how we divide the cognitive work between machines and human judgment. If you are in the information business—and the chances are good this is true if you are reading this—then you must consider carefully how you might integrate computers and humans.

All of this implies change, something we are loath to do. Changing your mind takes time, effort, and humility. This is especially pertinent when you have been successful in your domain. Strategy in sports is a good analogy. There are traditional ways to do things, and often those ways are effective. But more careful analysis has revealed strategies that fly in the face of conventional wisdom yet are clearly better. Defensive shifts in baseball are but one example. Convincing the old guard to change—and eventually, we are all part of the old guard—is a difficult hurdle.

The following transcripts not only document the proceedings, they also provide insights into how you can improve your own ability to learn from mistakes and improve your odds of being right in the future. Bill Gurley suggested that the combination of high valuations for some technology startups (so-called “unicorns”) and low levels of liquidity is not tenable. Pedro Domingos explained how computers might be able to complete tasks that are beyond the grasp of humans. Cade Massey showed that we don’t readily embrace algorithms but that there is a way to overcome this aversion and improve decisions. And Paul DePodesta suggested that the bias against change has less to do with the game you are playing and more to do with how we humans think.

Michael Mauboussin - What Being Wrong Can Teach Us About Being Right

Good morning. For those of you whom I haven’t met, my name is Michael Mauboussin, and I am head of Global Financial Strategies at Credit Suisse. On behalf of all of my colleagues at Credit Suisse, I want to wish you a warm welcome to the 2016 Thought Leader Forum. For those who joined us last night, I hope you had a wonderful evening. We are very excited about our lineup for today.

I’d like to do a couple of things this morning before I hand it off to our speakers. First I want to highlight the levels at which you might consider today’s discussion about the idea of how being wrong can inform you about being right. I then want to discuss the forum itself, including what you can do to contribute to its success.

You might listen to today’s discussion at three different levels. Some of the points will span multiple levels, but these are some of the ideas that we’ll hear about throughout the day.

The first relates to the idea of naïve realism. In psychology, this is the human tendency to believe that we see the world around us objectively and that people who disagree with us must be uninformed, irrational, or biased.

The second is man versus machine. This is a theme that is popping up everywhere. What are algorithms good at and what are humans good at? How do we use algorithms to augment our performance? Why do we struggle to defer to algorithms in many settings?

The final is the issue of change. Organizational inertia is a huge issue in many firms. How can firms keep up? How do we integrate new information? What is the psychology of change?

Let’s start with naïve realism. Here’s a cartoon I love: as you can see, there are two armies preparing to square off, and the caption reads: “There can be no peace until they renounce their Rabbit God and accept our Duck God.” The picture shows that the flags of the competing armies are identical. The cartoon is based on the rabbit-duck illusion, an ambiguous picture that can be interpreted either as a rabbit or as a duck.

The idea of naïve realism in psychology is that we all believe we see the world objectively. As a consequence, we have a hard time accepting that others have different points of view. So we all walk around with beliefs that we think are true. Otherwise we wouldn’t hold onto those beliefs. Things become interesting when those beliefs confront the world.

Here’s a well-known experiment that demonstrates this point. A psychologist named Elizabeth Newton ran an experiment with “tappers” and “listeners.” The tappers were given a list of 25 well-known songs, such as “Happy Birthday to You,” and were asked to tap the rhythm of the song on the table. The task of the listener was to identify the song based on the taps.

She ran 125 trials. The listeners were able to identify only 3 of the songs, a success rate of about 2.5 percent. But when Newton asked the tappers what percentage they thought the listeners would identify correctly, the answer was 50 percent! This is related to the curse of knowledge, which is also a huge impediment to communication. Again, we struggle to understand that others don’t see the world as we do.

So if you see the world one way and others see it a different way, you have to reconcile the views. And as we do so, we tend to assume one of three things. The first is that the other person simply doesn’t know the facts that you do, and hence is ignorant. The answer is simply to inform them so that they will then see your point of view. The second is that the person has the facts, but they are just too stupid to understand them properly. The last assumption is that people know the facts and can comprehend them, but they just turn their backs on the truth. Unbelievers in religion are an example.

Now consider how you assess people who don’t agree with you. Do you evoke one of these assumptions to reconcile their beliefs with yours?

We now turn to a theme that will run throughout the day. I am calling it man versus machine, but it may be just as accurate to say humans versus algorithms. The first point I want to make refers to what I call “the expert squeeze.”

The way to think of it is as a continuum. On one side there are problems that are rules-based and consistent. Here, experts are often proficient but computers are quicker, cheaper, and more reliable. Today, of course, you have to point to the success of AlphaGo—Google DeepMind’s program that beat a champion in Go.

At the other side of the continuum are problems that are probabilistic and in domains that change constantly. Here, the evidence shows that collectives do better than experts under certain conditions. Making sure those conditions are in place is crucial for a decision maker.

I’m now going to steal a bit of thunder from our second speaker and introduce various approaches to machine learning. But my point of emphasis is somewhat different. If your organization relies on fundamental research, do any of these approaches seem familiar?

For example, lots of investors like to appeal to analogies: this investment is like that investment from the past. The interesting question then becomes: what can we, as fundamental analysts, learn from what’s going on in machine learning? The next step is considering how we can integrate machine learning techniques into a decision-making process. If you are relying on quantitative methods, how do you think about the biases built into the algorithms?
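
To make the parallel concrete: reasoning by analogy has a close machine-learning counterpart in nearest-neighbor methods, which formalize “this investment is like that investment from the past” as proximity in a feature space. Here is a minimal sketch; the company names, feature values, and outcomes are hypothetical, purely for illustration:

```python
import numpy as np

# Hypothetical past investments, each described by two features:
# [revenue growth rate, operating margin]. All values are made up.
history = {
    "PastCo A": np.array([0.40, 0.05]),
    "PastCo B": np.array([0.10, 0.25]),
    "PastCo C": np.array([0.35, -0.02]),
}
outcomes = {"PastCo A": "succeeded", "PastCo B": "succeeded", "PastCo C": "failed"}

# A new opportunity we want to reason about by analogy.
candidate = np.array([0.38, 0.04])

# Nearest neighbor: the "analogy" is the past case closest in feature space.
nearest = min(history, key=lambda name: np.linalg.norm(history[name] - candidate))
print(f"Closest analogy: {nearest}, which {outcomes[nearest]}")
```

The sketch also hints at where biases creep in: the choice of features, the distance metric, and the sample of past cases all shape which analogy the algorithm (or the analyst) retrieves.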

The final issue I’ll mention for man versus machine is that we as humans tend to be uneasy letting our fate be decided by an algorithm, even if there’s abundant evidence that the algorithm is better than a human.

This scene from Moneyball captures the tone: the old timers have a difficult time grasping the signal from the statistical analysis. This is true for a few reasons. They generalize from their own experience. They overemphasize recent performance. And they rely on what they see rather than on underlying cause and effect. We’ll talk today about how to overcome algorithm aversion, but it’s a huge issue.

The final topic is that of change, which is hard. The first impediment is organizational inertia. Back in the day, I was a food industry analyst, and I recall a story that captured this well.

When David Johnson took over as CEO of Campbell Soup about 25 years ago, the performance of the company lagged its peers. So he did a full review to understand how to improve operations.

He noticed that the firm ran a huge promotion of tomato soup every fall. Tomato soup was one of its largest and most profitable products. When he asked an executive why they did it, the executive responded, “I don’t know, we’ve always done it.”

In World War I, Campbell’s strategy was to grow its own tomatoes, harvest them, and then convert them to canned soup. With inventory up and the soup season still months ahead, Campbell used a promotion to clear its inventory. But of course the company long ago moved to year-round suppliers, eliminating the post-harvest spike in supply. This evokes a quote from Peter Drucker: “If we did not do this already, would we go into it now, knowing what we now know?”

Perhaps the most challenging thing to do is to update your beliefs when you receive new information.

Here’s a famous example from Thinking, Fast and Slow by Daniel Kahneman [page 166].

“A cab was involved in a hit-and-run accident at night. Two cab companies, the Green and the Blue, operate in the city. You are given the following data:

  • 85% of the cabs in the city are Green and 15% are Blue.
  • A witness identified the cab as Blue. The court tested the reliability of the witness under the circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors 80% of the time and failed 20% of the time.

What is the probability that the cab involved in the accident was Blue rather than Green?”

The most common response is 80 percent, based on the reliability of the witness. But the correct answer is just a little over 41 percent. In Phil Tetlock’s terrific book, Superforecasting, he has a great line: “Beliefs are hypotheses to be tested, not treasures to be guarded.” This is really easy to say and very difficult to do in practice. Changing our minds takes time, effort, in some cases technical skills, and can be embarrassing. Most of us would prefer to keep believing what we believe.
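
To see where that 41 percent comes from, here is a minimal Python sketch of the Bayes’ rule arithmetic, using only the probabilities given in the problem:

```python
# Bayes' rule for the taxicab problem.
# Priors: the base rates of cab colors in the city.
p_blue = 0.15   # 15% of cabs are Blue
p_green = 0.85  # 85% of cabs are Green

# Likelihoods: the witness identifies colors correctly 80% of the time.
p_says_blue_given_blue = 0.80   # correctly calls a Blue cab "Blue"
p_says_blue_given_green = 0.20  # mistakenly calls a Green cab "Blue"

# Posterior: P(Blue | witness says "Blue")
numerator = p_says_blue_given_blue * p_blue
evidence = numerator + p_says_blue_given_green * p_green
posterior = numerator / evidence

print(f"P(Blue | witness says Blue) = {posterior:.3f}")  # 0.414, a bit over 41%
```

The base rate dominates: even a fairly reliable witness cannot overcome the fact that Green cabs outnumber Blue cabs by more than five to one.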

My final thought is on loss aversion. Everyone in this room deals with decisions that work only with some probability. We suffer losses more than we enjoy comparable gains. So we tend to stick to conventional ways of doing things because if we fail, we have lots of company.

There are lots of instances of this in sports. One example is the decision to go for it on fourth down in football. Most coaches prefer the more conservative route even if it gives them a lower probability of winning, because the potential pain of getting stopped on fourth down is a lot worse than the upside of a fresh set of downs.
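
A back-of-the-envelope expected-value comparison shows how small the statistical edge can be relative to how large the psychological cost feels. The win probabilities below are hypothetical placeholders, not actual football analytics:

```python
# Hypothetical win probabilities for a fourth-and-short decision.
# All numbers are illustrative placeholders, not real football data.
p_convert = 0.55        # chance of converting the fourth down
win_if_convert = 0.60   # win probability with a fresh set of downs
win_if_stopped = 0.35   # win probability after being stopped
win_if_punt = 0.48      # win probability after the conservative punt

win_if_go = p_convert * win_if_convert + (1 - p_convert) * win_if_stopped
print(f"Go for it: {win_if_go:.4f}  Punt: {win_if_punt:.4f}")
# 0.55 * 0.60 + 0.45 * 0.35 = 0.4875 > 0.48: going for it wins slightly
# more often, but the vivid downside of getting stopped looms larger.
```

Under these made-up numbers the aggressive call is better in expectation, yet loss aversion makes the conservative punt feel safer.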

Before I speak about the goal of the forum, I want to mention the talented folks from Ink Factory.

Dusty and Ryan will be graphically recording all of our presenters today. This means they will be synthesizing the words of our speakers into images and text to capture the key concepts. Their slogan is “you talk. we draw. it's awesome.” And we think you will agree. Please feel free to take pictures of the artwork and to tweet the images. And we encourage you to ask them questions—after they are done drawing of course!

Let me end by highlighting what our goals are for the day. First, we want to provide you access to speakers whom you may not encounter in your day-to-day interactions but who are nonetheless capable of provoking thought and dialogue. Second, we want to encourage a free exchange of ideas. Note that our speaking slots are longer than normal. This is in large part because we want to leave time for back-and-forth.

Indeed, we purposefully call this a “forum” instead of a “conference” precisely for this reason. We want to encourage an environment of inquiry, challenge, and exchange.

Finally, we want this to be a wonderful experience for you, so please don’t hesitate to ask anyone on the Credit Suisse team for anything. We will do our best to accommodate you.
