Wharton’s Cade Massey and Joseph Simmons discuss how to overcome ‘algorithm aversion’

Mathematical models have been used to augment or replace human decision-making since the invention of the calculator, bolstered by the notion that a machine won’t make mistakes. Yet many people are averse to using algorithms, preferring instead to rely on their instincts when it comes to a variety of decisions. New research from Cade Massey and Joseph Simmons, professors in Wharton’s department of operations, information and decisions, and Berkeley J. Dietvorst from the University of Chicago finds that control is at the core of the matter. If you give decision-makers a measure of control over the model, they are more likely to use it. Massey and Simmons spoke to Knowledge@Wharton about the implications of their research.


An edited transcript of the conversation follows.

Knowledge@Wharton: Could you give us a brief summary of this research? This paper is a follow-up to something you had done recently.

Joseph Simmons: We’re studying a phenomenon called “algorithm aversion,” which is the tendency for people to not want to follow specific evidence-based rules when they make decisions, even though a lot of the research shows that’s exactly the way you should be making judgments and forecasts. A lot of people just want to rely on their gut or go by the seat of their pants. They don’t want to rely on consistent, evidence-based rules — and they should.

We’ve been studying, for a couple of years now, why and under what circumstances people don’t want to rely on these algorithms. Our second paper is about how to get people to be more likely to rely on algorithms. We basically found that if you tell people, “You can go with an algorithm that is going to give you some advice, or you can go with your own opinion,” and you ask them, “What do you want to do?” — they’re actually OK with saying, “I’ll use the algorithm.”

However, once you give them some practice and let them see how their algorithm performs, all of a sudden they don’t want to use it anymore. That’s because they see the algorithm make mistakes. Once people see an algorithm or computer make mistakes, they don’t want to use it anymore, even though the algorithm or computer is going to make smaller or less frequent mistakes than they are going to make.

“A lot of people just want to rely on their gut or go by the seat of their pants. They don’t want to rely on consistent, evidence-based rules — and they should.”–Joseph Simmons

Knowledge@Wharton: The algorithms are supposed to be perfect.

Simmons: Right. People want algorithms to be perfect and expect them to be perfect, even though what we really want is for them to simply be a little better than the humans. Our first paper is kind of pessimistic and shows that once people see the algorithms do their thing, they don’t want to use them. Our second paper shows that you can get people to use algorithms as long as you give them a little bit of control over them. You say, “The algorithm tells you that this person is going to have a GPA of 3.2. What do you think their GPA is going to be?” They don’t want to just go with the 3.2 that the algorithm [predicts]. But if you say, “You can adjust it by .1,” then their response is: “OK, I’m fine to use the algorithm.” We find as long as you give people a little bit of control over these things, they’re more likely to use them. And that’s pretty good news.

Cade Massey: We operationalize this in an experimental context, but we’re motivated by real-world contexts. Some of the early ideas for this research came from working with companies where we would go in with models for decision-making about hiring and recruiting new employees. Based on many years’ worth of data and some pretty good analytics, we were sure that we had the best advice going. Yet those organizations were reluctant to use [these models] because they wanted to rely on just their intuition.

It’s very common in hiring and in performance evaluation, and it’s increasingly common even in fields that automate decision-making, like managing a hedge fund or setting the sales forecast for some product. Those are all places where automatically generated forecasts or advice are increasingly available. We call it an algorithm. And the final decision-maker has discretion over whether to listen to that advice, use their own [instincts] or use a blend.

Knowledge@Wharton: Your key takeaway was that people are less averse to using algorithms if they have some control. But there is a conclusion that surprised you in how much control you had to give them to make them feel better. Tell us about that.

Massey: We were agnostic on how much control would be necessary to get them to buy in. The downside to giving them control is they start degrading the algorithm. In most domains, they’re not as good as the model. The more of their opinion is in there, the worse it performs. In some sense, you’d like to give them as little control as possible and still have them buy in. We didn’t know what the answer to that would be. We got early evidence that it wasn’t going to be very much; then we started testing the limits of it and found that we could give them just a little bit of control. You know, move something around 5% or so and they would be much more interested in using the algorithm. If you give them more, it doesn’t increase the lift at all. Give them a little bit, and it’s about the same as giving them moderate influence.

Simmons: What’s nice about that is when they adjust the algorithms, they make them worse. But if they can only adjust it [a little bit], they can only make it that much worse. And since they are more likely to use it in that case, their final judgments will wind up being correlated with the algorithm close to perfectly. We can’t get people to use algorithms 100%, but we can get them to use algorithms 99%, and that massively improves their judgments.
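The constrained-adjustment design the researchers describe can be sketched as a simple clamp: the forecast starts at the algorithm’s prediction, and the human’s override is capped at a small bound. This is a minimal illustration in Python; the function name and the specific numbers are ours, not from the study.

```python
def constrained_forecast(algo_prediction, human_estimate, max_adjust):
    """Start from the algorithm's prediction and let the human
    move it by at most max_adjust in either direction."""
    adjustment = human_estimate - algo_prediction
    # Clamp the human's adjustment to the allowed band
    adjustment = max(-max_adjust, min(max_adjust, adjustment))
    return algo_prediction + adjustment

# GPA example from the interview: the algorithm predicts 3.2, the
# human believes 3.6, but the adjustment is capped at 0.1,
# so the final forecast lands at 3.3.
final = constrained_forecast(3.2, 3.6, 0.1)
print(round(final, 2))
```

Because the override is bounded, the final judgment can never drift more than `max_adjust` from the algorithm, which is why the resulting forecasts stay almost perfectly correlated with the model even when people exercise their control.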

“We can’t get people to use algorithms 100%, but we can get them to use algorithms 99%, and that massively improves their judgments.”–Joseph Simmons

Knowledge@Wharton: If I’m a business owner or someone who is going to be charged with using one of these algorithms, how might I apply this research in real life?

Massey: The overarching lesson would be that you don’t simply impose a monolithic model or black-box model and say, “This is how you use judgment. This is how you should codify your decision-making.” People will fight that. You want to let them have discretion. That’s going to look different in different places. Consider a graduate school making admissions decisions — they rank their applicants and, at some point, they draw a line and make exceptions. They move people around. You can automate some of that process. Even if you
