Why Algorithms Lock In Inequality


Depending on whom you ask, inequality is driven by globalization, tax policies, crony capitalism, or some other macroeconomic force. But what if something more sinister is preventing poor people from advancing?


“[I]f a prosecutor attempted to tar a defendant by mentioning his brother’s criminal record or the high crime rate in his neighborhood, a decent defense attorney would roar, ‘Objection, Your Honor!’ And a serious judge would sustain it. This is the basis of our legal system. We are judged by what we do, not who we are.”

So asserts Cathy O’Neil, author of the recent book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. But when a recidivism algorithm evaluates whether or not a prisoner is a risk to society and should be paroled, it does precisely what is not allowed in court. It evaluates the subject based on his background, his family, and the neighborhood he grew up in. And it does this in an opaque manner, using computer code that is “proprietary” and inaccessible to the prisoner or anyone else, because it belongs to for-profit companies.

Recidivism algorithms are just one of the many examples in O’Neil’s book of what she calls Weapons of Math Destruction, or WMDs. Another is auto insurance algorithms that determine how much a driver will be charged based on his credit rating rather than on his driving record. Why? Partly because credit ratings are so easily accessible, but also because low credit ratings correlate well with stress-induced behavior; people with low credit ratings are usually under stress. Whether a low credit rating actually predicts a driver’s future driving record better than the driver’s past record does – which seems unlikely – we don’t know, because the algorithm and the reasons for its construction are not transparent.

The chief message of O’Neil’s book is that these algorithms – which are becoming increasingly standardized and widely used – tend to lock poorer people into poverty. For example, when a person has to pay a higher insurance rate because of a low credit rating, or is denied a job – because credit ratings are used in algorithms that rate potential employees too – that person becomes poorer, their credit rating falls further, and poverty becomes still harder to climb out of.
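To make that feedback loop concrete, here is a minimal sketch in Python – my own toy model, not anything from O’Neil’s book – in which an insurer prices a premium off a credit score and the resulting financial stress then feeds back into the score. Every number and update rule below is an invented assumption, chosen only to show how two drivers with identical records can diverge.

```python
# Toy illustration (my own sketch, not from O'Neil's book) of a feedback loop:
# an insurance premium keyed to a credit score erodes disposable income,
# which in turn drags the credit score down further. The pricing rule,
# thresholds, and score updates are all invented for illustration.

def premium(credit_score: float) -> float:
    """Hypothetical pricing rule: lower scores pay steeply higher premiums."""
    return 1000 + max(0.0, 700 - credit_score) * 20

def simulate(credit_score: float, income: float, years: int = 5) -> None:
    for year in range(1, years + 1):
        cost = premium(credit_score)
        disposable = income - cost
        # Assume a thin disposable margin creates financial stress that lowers
        # the score, while a comfortable margin slowly repairs it.
        credit_score += 10 if disposable > 28000 else -15
        print(f"Year {year}: premium=${cost:,.0f}, "
              f"disposable=${disposable:,.0f}, score={credit_score:.0f}")

# Two drivers with identical driving records but different starting scores.
simulate(credit_score=750, income=30000)
print("---")
simulate(credit_score=620, income=30000)
```

Run as written, the higher-scoring driver’s score drifts up while the lower-scoring driver’s score ratchets down year after year, even though their driving records are identical – the shape of the lock-in O’Neil describes.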

And if a person fights back against an algorithm – as some teachers fought back when the algorithm that evaluated them produced obviously erroneous results – the evidence that the algorithm was wrong may be dismissed as “soft,” even when it is perfectly clear to everyone that the evaluation was wrong. By contrast, the computer algorithm is considered “objective” and thus infallible by assumption. “The human victims of WMDs, we’ll see time and time again, are held to a far higher standard of evidence than the algorithms themselves,” O’Neil says – algorithms whose details are unavailable to anyone, least of all to their victims.

O’Neil admits that when potential employees, or applicants for loans, or candidates for parole, were evaluated in the past, they may have been evaluated by the subjective judgments of biased individuals. But at least those judgments, she says, were diverse across different employers, parole officers, and so on. That diversity might give an individual a way around the bias against them, either by finding an unbiased human evaluator or by finding one who happens to be biased in the individual’s favor. But when the bias is set in concrete in standardized, opaque algorithms used universally for their supposed “objective” merit, a person who – for example – has a low credit rating will be unable to find a way around it.

By Michael Edesess, read the full article here.
