LEGG MASON: Thought Leaders Forum 2011: Daniel Kahneman on Probability, Behavioral Finance, and Luck


Daniel Kahneman is the Eugene Higgins Professor of Psychology at Princeton University and Professor of Public Affairs at the Woodrow Wilson School of Public and International Affairs. He was the winner of the 2002 Nobel Prize in Economic Sciences for his pioneering work integrating insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty. Kahneman is the coauthor of several academic works, which include Heuristics and Biases: The Psychology of Intuitive Judgment; Choices, Values, and Frames; Judgment under Uncertainty: Heuristics and Biases; and Well-Being: The Foundations of Hedonic Psychology. He is the author of Thinking, Fast and Slow.

Professor Kahneman was born in Tel Aviv but spent his childhood years in Paris, France, before returning to Palestine in 1946. He received his bachelor’s degree in psychology (with a minor in mathematics) from the Hebrew University in Jerusalem, and in 1954 he was drafted into the Israel Defense Forces, serving principally in its psychology branch. In 1958, he came to the United States and earned his Ph.D. in psychology from the University of California, Berkeley, in 1961.

Kahneman is a member of the American Academy of Arts and Sciences and the National Academy of Sciences. He is a fellow of the American Psychological Association, the American Psychological Society, the Society of Experimental Psychologists, and the Econometric Society. He has been the recipient of numerous awards, among them the Distinguished Scientific Contribution Award of the American Psychological Association, the Warren Medal of the Society of Experimental Psychologists, the Hilgard Award for Career Contributions to General Psychology, and the Award for Lifetime Contributions to Psychology from the American Psychological Association (2007).

Link to podcast: http://www.thoughtleaderforum.com/default.asp?P=909655&S=945705

Michael Mauboussin: Well, I hope you all enjoyed the lunch session. It’s my honor to introduce our final speaker
of the day, Danny Kahneman. Danny is the Eugene Higgins Professor of Psychology at Princeton University and a
recipient of the 2002 Nobel Prize in Economic Sciences.
In the last couple of decades there’s been a burgeoning area of work in behavioral economics or behavioral
finance, and this whole movement can be traced directly back to the seminal work done by Professor Kahneman
and his collaborator, Amos Tversky, from the 1970s.
Kahneman and Tversky laid the groundwork for what is now known as the heuristics and biases camp, which is
essentially the study of the limits of judgment and decision-making under uncertainty. This work has been
extraordinary and has earned Professor Kahneman numerous awards and honors – too many for me to list, but
obviously the most visible of those being the Nobel Prize.
As I was writing my last book, Think Twice, I had to do a great deal of research, and what struck me as I moved
from topic to topic was that I kept running into the unbelievable contributions from Professor Kahneman. He’s truly
a towering figure in the world of psychology and certainly one of my intellectual heroes.
Professor Kahneman is the co-author of several academic works, including Heuristics and Biases and Judgment
Under Uncertainty, and he has a new book that will be out shortly called Thinking, Fast and Slow, and I certainly
have pre-ordered it and I highly recommend it.
Please join me in welcoming Professor Danny Kahneman.
[applause]

Professor Daniel Kahneman: Thank you. Well, there is a growing agreement, I think, and it’s been very clear in
the talks today, that we don’t understand the world very well. Nassim Taleb, who’s been mentioned a lot and is one
of my heroes, is writing a book now, and what I really like is the subtitle of the book, and the subtitle is How to Live in a World That We Do Not Understand. A very good question.
We systematically underestimate the amount of uncertainty to which we’re exposed, and we are wired to
underestimate the amount of uncertainty to which we are exposed. It is actually extremely difficult to accept how
much uncertainty there is. You can do an exercise on yourself. When you think about Harry Potter, you still think it must be exceptional. When you think of Mozart, was it luck that Mozart is what Mozart is, or could it have been Salieri?
What we really learned today, what we could have learned from Matthew Salganik’s presentation, was that there
are hundreds of books that could have been just as important as Harry Potter. There is nothing special about Harry
Potter within the class of books that are not failures. And the choice, and this is what Matthew was telling us, the
choice is random, it is unpredictable. There is no system to it, there is no logic to it, that’s just the way it happens.
Very difficult to accept.
And part of the difficulty of understanding how much luck, the role that luck plays in our lives and in the
determination of these events, is that as soon as something happens, we understand why it happened. And this is one of the things that Nassim went into. He has learned quite a bit of psychology, actually, and that is a very
important bit of psychology, which is that we are really not as surprised as we ought to be by surprises.
And the reason we are not as surprised is that as soon as something happens that we really had not anticipated,
we understand it. We work it out. That’s a mistake we’ll never make again. Our view of the world immediately
changes, and furthermore we are systematically mistaken about what we used to think earlier.
A very simple thought experiment will convince you of that. There are two football teams, and it’s the beginning of
the season. Make them college teams. So far as you know, they’re well-matched.
Now, they play a game and one of them destroys the other. Now they’re no longer equal. Now one of them is much
stronger than the other. You will not be able to undo in your mind the thought that one of them is stronger, and
somehow that you knew it was stronger. You will forget that you thought they were even. You will forget that there
was no particular reason.
So now that it was stronger, the fact that it won by so much is no longer surprising. That’s the mechanism. So the
mechanism is that by wiping out the surprises as we go along, we create an illusion of the world that is much more
orderly than it actually is.
One of the major influences on my thinking in that domain is Phil Tetlock. And there is the question of why those pundits and CIA analysts do so badly. And notice that we are inclined to think that the CIA analysts do badly.
We’re inclined to think that the television stations or chains that rejected American Idol missed something. They
made a mistake. If you think that, you have not assimilated the lesson of this morning.
It’s not that the pundits do badly. It’s not that the television chains made a mistake. They didn’t make a mistake.
The world is incomprehensible. It’s not the fault of the pundits. It’s the fault of the world. It’s just too complicated to
predict. It’s too complicated, and luck plays an enormously important role.
In thinking about Phil’s research, I came up with a thought experiment that sort of, for me at least, dramatized the
amount of luck there is.
Now, think of Adolf Hitler and his role in the history of the 20th century. That was an important figure in the history of the 20th century. At the moment of conception, it could have been Ms. Hitler. There was a fifty-fifty chance that that fertilized egg would be female, and the same was true for Stalin and Mao. Looking back, then, there is a one-eighth probability of a 20th century that doesn’t have Hitler in it, or Stalin, or Mao. That wouldn’t be the same 20th century.
So you can see the role that sheer luck plays. And, you know, few things are more lottery-like than the
fertilization of an egg. Sheer luck plays an enormous role, and we can’t accept it. We cannot accept the extent to
which luck is a determinant.
Our mental machinery is designed to make sense of the world. Our mental machinery is designed to tell us stories,
and those are stories we believe, and the stories tend to be simple. They tend to be causal, and yet, internally
coherent.
And the quality of those stories plays a very significant role in our mental life. So I’ll talk about that a bit. I’ll talk
about the difficulty in integrating statistics with thinking about single cases, and I will talk of the phenomenon of
overconfidence, which I think is an important phenomenon in the psychology of judgment.
Let me tell you a story that I’ve often told before, but it’s a useful story because it brings together several themes
that are important, to me at least. Many years ago, when I was still teaching at Hebrew University, but already
working on the topic of judgment under uncertainty, I had the idea of developing a curriculum for judgment and
decision-making under uncertainty for high schools. And it was to be without mathematics.

So we had that idea, talked to the Ministry of Education, which provided a small grant so that we could work on
that. I made up a team, and we went to work. And it really went quite well for a while, I think probably for about a
year before the incident that I’m going to tell you about. You know, we developed a few chapters. We had an
outline.
We’d given one or two practice lessons, and it was a team that… I was leading it, but there was another important
professor there, who was the dean of the School of Education and an expert on curriculums. And there were some
teachers and some of my graduate students. And on a Friday afternoon, I don’t know what possessed me, I had
the idea of doing an exercise of the kind of forecasting exercises that we were thinking about.
And the exercise was considering how long it was going to take us to complete our book. It seemed like a good
topic. Now, we hadn’t thought of that before, which is rather, you know, I’m not proud of it, but we had not asked
ourselves that question. And it looked like an interesting question to ask. Now, I conducted that, and I think that’s
about the only thing I did right, actually, but I conducted that meeting properly.
And the proper way to deal with a question like that is not to have a debate. The proper way is to ask everybody to
write down their answer on a slip of paper. That’s how you get independent judgments, and the quality of the
average of these judgments is going to be probably better than the quality of what comes out of a discussion. And
then anyway, you can discuss it, but you learn a lot by doing it that way. So we did it that way.
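A minimal sketch of that protocol, with invented numbers: collect the written estimates before any discussion, then aggregate them.

```python
# Sketch of the "independent judgments first, then aggregate" protocol.
# The estimates (in months) are invented for illustration.
estimates = [18, 22, 24, 27, 30]            # each written down privately, before any debate
average = sum(estimates) / len(estimates)   # aggregate only after collecting them
print(average)                              # 24.2 months
```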
And I put down the answers on the board, and we had a distribution, and it was all between 18 months and 30
months, between one-and-a-half and two-and-a-half years. All of us, myself included, and Seymour, the dean of the School of Education, were in that range, too.
And then, I had an idea. I asked Seymour whether he knew of other teams like ours that had developed a
curriculum where none existed before, a truly original curriculum. And he said he knew several. This was a period
of intense ferment, actually, in the world of education. Lots of people were generating new curricula in the late ’70s.
And so, I asked him, “Do you know enough about these teams so that you could locate them, approximately where
we are, in terms of the progress that they have made?” And he said yes, he could. It was obvious what to do next,
so I asked him, “Well, how long did it take them to complete the book?”
It took him a while, actually, to generate the answer because he was embarrassed by the answer. He said, “You
know, actually, it had never occurred to me before, but not all of them completed the book.” He said about 40
percent never finished. Now, that was a completely new thought for us, the idea that we would fail just hadn’t
occurred to us. And it was something that was under control, clearly a manageable task. We were going to do it.
Now, 40 percent of teams had failed.
Then, I asked him, “Well, and those who finished? How long did it take them?” And he said, “I can’t think of any that
took less than seven years. And I can’t think of any that persevered more than 10. So, somewhere between seven
and 10 years.”
Now, this is a very rich story, in terms of what you can pull out of it. But one of the things you can pull out of it is
about Seymour. First of all, he had the information in his head about the statistics. And he didn’t use it. It’s not that
he decided not to use it, it didn’t seem relevant.
When he tried to forecast how long it would take us to complete our task, he did what people do. He extrapolated,
you know, he imagined, he looked into his crystal ball. It was all about us and about our project. He was dealing
specifically with our project. I call that the inside view, looking at our problem, at our forecasting problem from the
inside.

What I had him do, by asking him the question of the other team, was to take what I call the outside view, that is,
view our case as a specific instance of a broader category. This is statistical thinking. It’s a very different kind of
thinking. And there are several observations to be made about that.
The first is, the inside view comes much more naturally. The natural way to think about things is what Seymour did
and what all of us did. We thought about how long it would take us to complete the work and we imagined, we used
our imagination to forecast the outcome. Turns out, that’s a miserable way of doing it. The outside view is clearly
the correct approach to a case like that.
I should add, in case you are curious, there was a book. The book was finished. The book was finished eight years
later. I was no longer there. Nobody could have forecast all the vicissitudes that caused that miserable project to
take eight years. Furthermore, by the time the book was finished, the Ministry of Education had lost interest in the
project. It was never used. So, it was a complete waste of time.
But the point is that all the other teams that had been in Seymour’s mind, I’m sure they all made the same mistake.
They didn’t know the odds they were facing. They didn’t know that their probability of failing was 40 percent and
that it might take them between seven and 10 years if they were lucky and successful.
They had no idea, because they were all thinking the way we were thinking. They had a plan, they had an idea of
how things should work, and they were using their plan as an anchor to make their forecasts. We call that the
planning fallacy, by the way. The planning fallacy, which is endemic, is this: you have a plan, which tends to be, by and large, a best-case scenario, and then you adjust it.
What happens is that you have to think about how plans fail, and the failures of plans are not predictable. I mean, it’s
clear that something will go wrong, but you don’t know, usually, what will go wrong. There are many, many reasons
that can cause a project like that to take eight years. You can’t anticipate all these reasons. To some extent, you
could think of them as luck, they are noise in the system. They are unpredictable.
Well, let me first tell you that there have been developments. The outside view – that practice in which you have a
forecasting problem, and you look at the statistics – that now has a name, an official name, it’s called “reference
class forecasting.” It’s got a champion. His name is Bent Flyvbjerg and he is a professor at Oxford. And it is now, actually, recommended practice by, I think, the American Planning Association – there is such a thing – which passed a resolution endorsing reference class forecasting.
That is, when you make a plan, try to take the outside view into consideration, and see if the plan has any realism
to it. Flyvbjerg has collected a lot of information about plans and their realization. And of course, we are not surprised to hear that the plans are typically wildly optimistic. And we now have numerical information about certain classes of plans. He has studied, in particular, transportation plans. And the forecasts of utilization, cost, and time are systematically wrong.
Now, this is not always innocent. I mean, some people deliberately make promises or make optimistic plans in
order to suck the resources of the organization. But even when people do their best, they are going to
underestimate the role of luck and uncertainty in their outcomes.
The story suggests a way of making predictions, which sometimes can be useful.
Oh, I forgot the detail, an important detail, in my story about Seymour. When he had told us the bad news, seven
years, 10 years, and so on, I was grasping at straws. And so, I asked him, “When you think of us, in comparison to
the other teams that you just told us about, how do we stack up? Are we stronger, are we weaker?” And that I will
never forget, because he was very quick this time. He said, “I would say we’re below average. But not by much.”

Now, I hope you won’t ask me why we continued, but we went on with the project, which, obviously, we should have stopped that day. I can’t help but tell you, this is truly the most idiotic part of the story. Of course we should have quit. And to try to explain why we didn’t quit is, again, to go back to this illusion
of understanding and to the difficulty we have with statistical thinking.
I mean, I believed it. It’s not that I didn’t believe what he told me. It just didn’t seem all that urgent to quit the project
just because of some statistical facts. You have the sense that the statistical facts are not germane, are not
pertinent to you. Base rates just don’t matter.
Now, if you want to make a prediction, and there are many cases where this is going to apply, you can take the
outside view. And what the outside view tells you, if you do it right, is it generates what I would call a baseline
prediction. The baseline prediction is what you would say if you only knew that the case belongs to that category,
and nothing else. So, that’s the baseline. And in this case, clearly, the baseline was closer to the mark than our
best estimate.
Once you have the baseline, adjust. So, if we had been a lot stronger or a lot weaker, the rational estimate would
have moved a bit from the parameters that he proposed. And that notion of a baseline forecast is a very important
notion, in which we try to ignore the information we have about the case, because it is intrusive, because we are
likely to overweight the information that we have. That’s a very important part of the story. The information we have
makes a story. And as I indicated earlier, the brain is wired to make up stories and believe them.
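A rough sketch of that baseline-and-adjust procedure, using hypothetical reference-class numbers rather than anything from the curriculum story:

```python
# Sketch of reference class forecasting: start from the reference class
# (the outside view), then adjust modestly for case-specific information.
# The numbers and the size of the adjustment are hypothetical.

def baseline_prediction(reference_durations):
    """Baseline: what you would predict knowing only the category."""
    return sum(reference_durations) / len(reference_durations)

def adjusted_prediction(baseline, strength_vs_average, max_shift_years=1.0):
    """Shift a little from the baseline for what you know about this case,
    not all the way to the inside-view estimate."""
    return baseline - strength_vs_average * max_shift_years

completion_years = [7, 8, 9, 10]                   # comparable teams that finished
baseline = baseline_prediction(completion_years)   # 8.5 years
# "Below average, but not by much" becomes a small negative strength score.
forecast = adjusted_prediction(baseline, strength_vs_average=-0.2)
print(baseline, forecast)                          # 8.5 8.7
```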
The statistics do not make a good story. And for an interesting reason, by the way. The statistics are not causal.
You know, it’s just numbers. A story is causal, there are causes, there are effects, there are things that cause, bring
about other things. That’s how we make sense of the world.
Statistics just doesn’t compete. And so, I look back at that incident and ask how could we ignore what he was
telling us, but we did. We actually ignored it. And it was because our sense of progress, the way we felt about that
particular incident seemed so much more compelling than cold statistics, that we couldn’t bring ourselves to follow
the statistics.
And this happens a lot, in the difficulty that people have in integrating statistics with causal stories.
OK. I will tell you a riddle. It’s better if I project it, but I think you can follow me. And anyway, you’re not supposed to
get it right, so it really doesn’t matter. There is a town in which 85 percent of the cabs are green and 15 percent are
blue. And there was a hit-and-run accident involving a cab at night, and there was a witness. Conditions of visibility were so-so, and the witness basically said, “I’m 80 percent sure that it was a blue cab, one from the smaller company.”
People are presented with that problem and asked: what is the probability that the cab involved in the accident was blue? Hundreds of people have been asked this question, and the most frequent answer is 80
percent. You know, there was a witness there. He was tested, and actually, you know, we say that under the
visibility conditions, he was 80 percent accurate when he said green or when he said blue.
They go with the witness. Actually, this is the wrong answer. The correct answer is slightly less than 50 percent that
it’s blue, because the base rate continues to be relevant. The number of cabs continues to be relevant. Very hard
to see it. You don’t see it. I’m not going to explain it now. I mean, some of you do, but that’s because you studied
Bayes’ theorem somewhere else, but if you didn’t have the mathematics, why not trust the witness?
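For readers who want the arithmetic he skips, here is a minimal sketch of the Bayes’-theorem calculation using the numbers in the riddle:

```python
# Bayes' theorem on the cab riddle: the posterior combines the base rate
# of blue cabs with the witness's stated accuracy.
p_blue, p_green = 0.15, 0.85        # base rates of cabs in the town
p_say_blue_given_blue = 0.80        # witness accuracy when the cab really is blue
p_say_blue_given_green = 0.20       # witness error when the cab is green

posterior_blue = (p_say_blue_given_blue * p_blue) / (
    p_say_blue_given_blue * p_blue + p_say_blue_given_green * p_green)
print(round(posterior_blue, 2))     # 0.41, well below the 80 percent people report
```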
Now, let me tell you a variation on that story, and all of you, I think, will immediately feel the difference. There are
two cab companies in the city: 50 percent of the cabs are green and 50 percent of the cabs are blue. But 85
percent of the accidents involve green cabs. Now, there was a witness, and the rest of the story is the same. Do you feel the difference? Nobody wants to ignore the fact that 85 percent of the accidents are caused by green
cabs. I mean, the drivers in that company must be insane. This is the way that people see it.
You immediately infer a causal propensity. You make a causal inference from that statistic, and now that is used.
When people combine that with a witness, they get roughly the correct answer. So there is a real profound
difference between the way our mind deals with arbitrary statistics and with causal stories. And sometimes
statistics enable us to make a causal story, because, as in this case, you immediately felt that something must be
wrong with the green drivers, and then you use it.
Now, let me talk about an error of prediction, a way of making predictions. I’ll give you a detail, a fact. So, it’s about Julie,
she’s a graduating senior. I’m going to ask you to guess her GPA, and I’m going to tell you one fact about her. She
read fluently when she was four years old. That’s all I’m going to tell you. All of you have a GPA in mind, and I
could do magic, not only Apollo Robbins. I know what GPA you have in mind. I mean, it’s high. It’s really quite high.
It’s probably a 3.7, somewhere around there.
There is not much variability either, because I think I also know how you do it, and this is how you do it. You take
the information that she read fluently at age four, and because of the capability of our intuitive mind, that immediately translates, because we know the world, into a sense of how extreme it is, how precocious it is to read fluently at age four. Where does that put her, Julie, on the distribution? What percentile is she at, for precociousness of reading?
And you have an idea. Furthermore, my guess is that your idea is pretty good, because this is something we do
learn about the world. We learn frequencies. We’re quite good at learning frequencies, so we know the age at
which children learn to read. We know how exceptional it is, not as exceptional as reading fluently at age two-and-a-half, but it’s good to read fluently at age four.
Now, what did you do to get the GPA? Very easy. You picked a GPA that is as extreme as her reading ability.
That’s what people do. There’s a lot of evidence that this is what people do. This is crazy. This is absolutely wrong.
That’s intuition. You know, you didn’t deliberately do it. There was a GPA that came to your mind when I told you
about Julie. And that GPA is the GPA that matches, because there is another facility in our brain, and in mine,
which is… I call that intensity matching.
I’m not the first one to deal with it, but the idea is that you can take any dimension, which is an intensity dimension,
and sort of match how intense it is to almost any other dimension. So I could ask you, among the incomes of
teachers, how high an income is as extreme as Julie’s reading ability? You’ll give me an answer. I mean,
something will come to your mind.
Among children who are 10 years old, how tall would a child have to be? You can do it. People really do it quite well.
They match across intensity dimensions, which enables you sometimes to answer a question with a lot of
confidence, when it’s the wrong question. That is, you’ve been asked about a GPA, but you really answered a
question about her precociousness, without knowing that you had switched from one to the other.
Now, this is too extreme. You’re not supposed to predict, to make extreme predictions on the basis of weak
evidence, and this is weak evidence. I mean, the correlation between reading fluency at age four and GPA, at best
it’s 0.3 or 0.4. My guess is it would be lower, but people predict as if the correlation were perfect.
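One standard way to express the contrast, with made-up numbers for the GPA distribution and the correlation: match the extremeness, as intuition does, then shrink toward the mean in proportion to the correlation.

```python
# Intuitive "intensity matching" versus a properly regressive prediction.
# The GPA distribution and the correlation are assumed, not from the talk.
mean_gpa = 3.1
matched_gpa = 3.7        # the GPA "as extreme" as reading fluently at age four
correlation = 0.3        # assumed link between early reading and college GPA

intuitive_prediction = matched_gpa   # predicts as if the correlation were perfect
regressive_prediction = mean_gpa + correlation * (matched_gpa - mean_gpa)
print(intuitive_prediction, round(regressive_prediction, 2))   # 3.7 3.28
```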
And that is another characteristic of the stories that we tell ourselves. The quality of the story determines how we
predict, and it determines the confidence we have in our predictions. We judge it by the quality of the story, but you
can build a wonderful story on the basis of evidence that is false or unreliable or very, very sparse.

It doesn’t take a lot of information to create a story. You created one out of the fact that Julie read fluently at age four.
It’s something that happens automatically in our brain. And the best you could do was to match it to the question
that was asked and that’s the answer that people give.
We are not wired properly for statistics. We’re really good at telling stories, but we’re not wired properly for
statistics. Now, I’ve been going on a theme of confidence and the confidence that people have. And it’s obvious
from what I say that we can expect people to be way overconfident, because they have that ability to tell good
stories, and because the quality of the stories is what determines their confidence. The extent of that
overconfidence is actually quite remarkable.
There was a study reported by my friend, Dick Thaler, in a column, I think, in the New York Times, but you may not
know it. The business school at Duke University conducts a survey of the CFOs of Fortune 500 companies, and
they have a substantial sample. And they ask them every year to state their confidence interval, an 80 percent
confidence interval for, I think, the S&P 500 Index for the next year. So they state their confidence interval.
Now, that goes back to something that Phil was talking about, to the issue of calibration. If they were properly
calibrated, then 80 percent of the time the true value would fall inside their confidence interval, and 20 percent of
the time it would fall outside their confidence interval. That’s not what happened. Actually, instead of 80 percent
falling inside the confidence interval and 20 percent outside, it’s 36 percent inside and 64 percent outside. The confidence intervals are ridiculously narrow, if you compare them to what people know.
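To make the calibration check concrete, here is a small sketch with invented intervals and outcomes; only the 80 percent target and the roughly 36 percent hit rate come from the talk.

```python
# What calibration means for the CFO survey: how often the realized return
# falls inside the stated 80 percent confidence interval. Data invented.
stated_intervals = [(-0.02, 0.08), (0.00, 0.10), (0.01, 0.12)]   # (low, high)
realized_returns = [0.15, -0.30, 0.05]

hits = sum(low <= r <= high
           for (low, high), r in zip(stated_intervals, realized_returns))
print(hits / len(realized_returns))   # well calibrated would be near 0.80;
                                      # the survey Kahneman cites found about 0.36
```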
Now, that’s not the only kind of information we have. We also know that the CFOs have no idea what they’re talking
about. When you look at the correlation between their predictions about the S&P 500 and what actually happens to
the S&P 500, the correlation is actually not zero. It’s slightly negative. I mean, they’re a little worse than chance, not
by much. They don’t know a thing.
So I asked the people who did that study to carry out a computation and to figure out the correct confidence interval
that you should give about the S&P 500 when you know as much as these CFOs do. And the answer is there is an
80 percent probability that the S&P 500 outcome will be between minus 10 percent and plus 30 percent.
You are meant to smile when I say this, because this is ridiculous. I mean, a CFO who would say that would be
kicked out of the room. You’re supposed to say something, to say a little more than that. I find it astonishing, you
know, the width of that confidence interval. Clearly, it is much, much wider than I expect, and I’ve been studying
that problem for a long time.
So there is a vast amount of overconfidence. CFOs have it. All of us have it. And it’s related to our ability, to a
storytelling ability, which in turn is related to the belief that we have that the world can be understood and that
outcomes can be forecast. What I really think is a question you should ask yourself is this: what did you really learn about Harry Potter today? What did you really learn about Mozart?
I can’t believe that Harry Potter is not exceptional. I don’t know about you, but I can’t believe it. I can’t believe that
Mozart is not exceptional, but Mozart, I know that Mozart is exceptional. I’ve listened to him so much and I love him
so much that the idea that it might have been Salieri is scandalous to me. I find it shocking.
The world retrospectively makes more sense than it should given how little we understand it. I found it remarkable,
actually, the extent to which we seem, all of us today, the last three speakers, to converge on this conclusion. To
my surprise, Phil seems to be the most optimistic among us. He’s searching for a pattern.
I, under his influence, turned into a radical pessimist. I think you probably can forecast short and medium term.
Long term, I think, is completely hopeless because long-term I think the world is chaotic and random. Many
phenomena that look to us highly regular are, in fact, chaotic and random.
