
The Foundations of Algorithmic Bias


We might hope that algorithmic decision making would be free of the biases that afflict human judgment. But increasingly, the public is starting to realize that machine learning systems can exhibit these same biases, and more. In this post, we look at precisely how that happens.



Reposted with permission from approximatelycorrect.com

This morning, millions of people woke up and impulsively checked Facebook. They were greeted immediately by content curated by Facebook’s newsfeed algorithms. To some degree, this content might have influenced their perceptions of the day’s news, the economy’s outlook, and the state of the election. Every year, millions of people apply for jobs. Increasingly, their success might lie, in part, in the hands of computer programs tasked with matching applications to job openings. And every year, roughly 12 million people are arrested. Throughout the criminal justice system, computer-generated risk assessments are used to determine which arrestees should be set free. In all these situations, algorithms are tasked with making decisions.


Algorithmic decision-making mediates more and more of our interactions, influencing our social experiences, the news we see, our finances, and our career opportunities. We task computer programs with approving lines of credit, curating news, and filtering job applicants. Courts even deploy computerized algorithms to predict “risk of recidivism”, the probability that an individual relapses into criminal behavior. It seems likely that this trend will only accelerate as breakthroughs in artificial intelligence rapidly broaden the capabilities of software.

Turning decision-making over to algorithms naturally raises worries about our ability to assess and enforce the neutrality of these new decision makers. How can we be sure that algorithmically curated news doesn’t carry a partisan bias, or that job listings don’t reflect a gender or racial bias? What other biases might our automated processes be exhibiting that we wouldn’t even know to look for?

The rise of machine learning complicates these concerns. Traditional software is typically composed of simple, hand-coded logic rules: IF condition X holds THEN perform action Y. But machine learning relies on complex statistical models to discover patterns in large datasets. Take loan approval, for instance. Given years of credit history and other side information, a machine learning algorithm might output a probability that the applicant will default. The logic behind this assessment wouldn’t be coded by hand. Instead, the model would extrapolate from the records of thousands or millions of other customers.
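To make the contrast concrete, here is a minimal sketch in Python. The feature names, the credit-score cutoff, and the toy records are invented purely for illustration; they are not drawn from any real lending system.

from sklearn.linear_model import LogisticRegression

# Hand-coded logic: IF condition X holds THEN perform action Y.
def approve_by_rule(credit_score):
    return credit_score >= 650  # hypothetical cutoff

# Learned logic: extrapolate from past customers' records.
# Each row: [credit_score, years_of_history, late_payments]; label 1 = defaulted.
X = [[720, 10, 0], [580, 2, 4], [690, 6, 1], [540, 1, 6], [710, 8, 0], [600, 3, 3]]
y = [0, 1, 0, 1, 0, 1]
model = LogisticRegression().fit(X, y)

applicant = [[640, 4, 2]]
print("Rule-based decision:", approve_by_rule(640))
print("Learned default probability:", model.predict_proba(applicant)[0][1])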

On highly specialized problems, and given enough data, machine learning algorithms can often make predictions with near-human or super-human accuracy. But it’s often hard to say precisely why a decision was made. So how can we ensure that these decisions don’t encode bias? How can we ensure that giving these algorithms decision-making power doesn’t amount to a breach of ethics? The potential for prejudice hasn’t gone unnoticed. In the last year alone, MIT Technology Review [1], the Guardian [2], and the New York Times [3] all published thought pieces cautioning against algorithmic bias. Some of the best coverage has come from ProPublica, which quantitatively studied racial bias in a widely used criminal risk-assessment score [4].

Each article counters the notion that algorithms are necessarily objective. Technology Review invokes Fred Benenson’s assertion that we are susceptible to ‘mathwashing’. That is, we tend (misguidedly) to assume that any system built with complex mathematics at its core must somehow be objective, devoid of the biases that plague human decision-making.

Alas, the public discourse rarely throws light on the precise mechanisms by which bias actually enters algorithmic decision-making processes. Tech Review, for example, points to the abundance of men working in computer science without explaining how this might alter the behavior of their algorithms. You might think that the bias seeped through via the air filtration system. The Guardian makes a compelling argument that the “recidivism” predictor encodes racial bias, producing evidence to support the claim. But they never discuss how this came to be, describing the algorithms simply as black boxes. Similarly, the New York Times piece calls attention to bias and to the opacity of Facebook’s algorithms for news curation, but doesn’t elucidate the precise mechanisms by which undesirable outcomes manifest. Admirably, in the ProPublica piece, author Julia Angwin sought the risk-assessment algorithm itself, but software company Northpointe would not share the precise proprietary formula.

It’s encouraging that these pieces have helped to spark a global conversation about the responsibilities of programmatic decision-makers. However, the mystical quality of the discussion threatens to stymie progress. If we don’t know how algorithms can become biased, how can we know when to suspect them? Moreover, without this understanding, how can we hope to counteract the bias?

To bring some rigor to the dialogue, let’s first run through a crash course on what algorithms are, how they make decisions, and where machine learning enters the picture. Armed with this information, we’ll then introduce a catalogue of fundamental ways that things can go wrong.

[ALGORITHMS]

To start, let’s briefly explain algorithms. Algorithms are the instructions that tell your computer precisely how to accomplish some task. Typically, this means taking some input and producing some output. The software that takes two addresses on a map and returns the shortest route between them is an algorithm. So is the method that doctors use to calculate cardiac risk. This particular algorithm takes age, blood pressure, smoking status, and a few other inputs, combines them according to a precise formula, and outputs the risk of a cardiovascular event.
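For instance, a fixed-formula risk calculator might look like the toy Python sketch below. The coefficients are made up for demonstration and do not correspond to any real clinical score.

import math

def cardiac_risk(age, systolic_bp, is_smoker):
    # Combine the inputs with fixed, hand-chosen weights (hypothetical values).
    score = 0.05 * age + 0.02 * systolic_bp + (0.8 if is_smoker else 0.0) - 7.0
    # Squash the score into a probability-like number between 0 and 1.
    return 1.0 / (1.0 + math.exp(-score))

print(cardiac_risk(age=55, systolic_bp=140, is_smoker=True))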

Compared to these simple examples, many of the algorithms at the heart of technologies like self-driving cars and recommender systems are considerably more complex, containing many instructions, advanced mathematical operations, and complicated logic. Sometimes, the line between an algorithm and what might better be described as a complex software system can become blurred.

Consider the algorithms behind Google’s search service. From the outside it might appear to be monolithic, but it’s actually a complex software system, encompassing multiple sub-algorithms, each of which may be maintained by large teams of engineers and scientists and consist of millions of lines of code.

There’s little that can be said universally about algorithms. Collectively, they’re neither racist nor neutral, fast nor slow, sentient nor insensate. If you could simulate your brain with a computer program, perfectly capturing the behavior of each neuron, that program would itself be an algorithm. So, in an important sense, there’s nothing fundamentally special about algorithmic decisions. In any situation in which human decisions might exhibit bias, so might those made by computerized algorithms. One important difference between human and algorithmic bias might be that for humans, we know to suspect bias, and we have some intuition for what sorts of bias to expect.

To dispense with any doubt that an algorithm might encode bias, consider the following rule for extending a line of credit: If race=white THEN approve loan ELSE deny. This program, however simple, constitutes an algorithm and yet reflects an obvious bias. Of course, this explicit racism might be easy to detect and straightforward to challenge legally. Deciphering its logic doesn’t require formidable expertise.
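Written out as running code (Python here, purely to make the point concrete), the rule is a one-liner:

def approve_loan(applicant_race):
    # IF race = white THEN approve ELSE deny -- a plainly discriminatory rule.
    return applicant_race == "white"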

But today’s large-scale software and machine-learning systems can grow opaque. Even the programmer of a system might struggle to say precisely why it makes any individual decision. For complex algorithms, biases may exist, but detecting the bias, identifying its cause, and correcting it may not always be straightforward. Nevertheless, there exist some common patterns for how bias can creep into systems. Understanding these patterns may prove vital to guarding against preventable problems.

[MACHINE LEARNING]

Now let’s review the basics of machine learning. Machine learning refers to a powerful set of techniques for building algorithms that improve as a function of experience. The field addresses a broad class of problems and algorithmic solutions, but we’re going to focus on supervised learning, the kind directly concerned with pattern recognition and predictive modelling.

Most machine learning in the wild today consists of supervised learning. When Facebook recognizes your face in a photograph, when your mailbox filters spam, and when your bank predicts default risk – these are all examples of supervised machine learning in action.

We use machine learning because sometimes it’s impossible to specify a good enough program a priori. Let’s say you wanted to build a spam filter. You might be tempted to implement a rule-based system with a blacklist of particularly spammy words. Is any email referring to “Western Union” spam? Perhaps. But even so, that only describes a small percentage of spam. There are still the solicitations from illegal drug companies, pornographic sites, and the legendary Nigerian prince who wants to wire you millions of dollars.
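A rule-based filter of this sort might look like the short Python sketch below; the blacklisted phrases are only examples.

BLACKLIST = ["western union", "wire transfer", "miracle pills", "you have won"]

def is_spam_by_rules(email_text):
    text = email_text.lower()
    return any(phrase in text for phrase in BLACKLIST)

print(is_spam_by_rules("Claim your prize via Western Union today!"))  # True
print(is_spam_by_rules("Lunch tomorrow at noon?"))                    # False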

Suppose now that through herculean effort you produced a perfect spam filter, cobbling together thousands of consistent rules to cover all the known cases of spam while letting all legitimate email pass through. As soon as you’d completed this far-fetched feat and secured some well-deserved sleep, you’d wake up to find that the spam filter no longer worked as well. The spammers would have invented new varieties of spam, invalidating all your hard work.

Machine learning proposes an alternative way to deal with these problems. Even if we can’t specify precisely what constitutes spam, we might know it when we see it. Instead of coming up with the exact solution ourselves by enumerating rules, we can compile a large dataset containing emails known either to be spam or to be safe. The dataset might consist of millions of emails, each of which would be characterized by a large number of attributes and annotated according to whether a human believes it actually to be spam. Typical attributes might include the words themselves, the time the email was sent, the email address, server, and domain from which it was sent, and statistics about previous correspondence with this address.
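For comparison, here is a minimal sketch of the supervised-learning approach, using a bag-of-words representation and a naive Bayes classifier. The four-email “corpus” stands in for the millions of annotated messages described above, and the particular model is our choice for illustration, not something prescribed here.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "You have won a prize, wire money via Western Union",
    "Cheap pills, no prescription needed",
    "Meeting notes attached, see you tomorrow",
    "Your invoice for last month is attached",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate, as annotated by a human

vectorizer = CountVectorizer()             # word-count features for each email
X = vectorizer.fit_transform(emails)
classifier = MultinomialNB().fit(X, labels)

new_email = ["Congratulations, you have won, send a wire transfer"]
print(classifier.predict(vectorizer.transform(new_email)))  # likely [1]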