Big Data Ethics: racially biased training data versus machine learning

Writing in Slate, Cathy "Weapons of Math Destruction" O'Neil, a skeptical data scientist, describes the ways that Big Data intersects with ethical considerations.

O'Neil recounts an exercise to improve services for homeless families in New York City, in which data analysis was used to identify risk factors for long-term homelessness. The problem, O'Neil explains, was that many of the factors in the existing data on homelessness were entangled with race and its proxies, like ZIP codes, which map extensively to race in heavily segregated cities like New York. Using data that reflects the racism already in the system to train a machine-learning algorithm whose conclusions can't be readily interrogated risks embedding that racism in a new set of policies, this time scrubbed clean of the appearance of bias by the application of objective-seeming mathematics.
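To make the proxy problem concrete, here's a minimal sketch (entirely synthetic data and hypothetical ZIP codes, not O'Neil's actual analysis) showing how a model that is never shown race can still recover it from a ZIP-code feature in a segregated city:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic, segregated city: each hypothetical ZIP code is dominated
# by one group, so ZIP alone nearly determines group membership.
n = 10_000
zip_code = rng.integers(0, 50, size=n)       # 50 made-up ZIP codes
group = (zip_code < 25).astype(int)          # segregation: group tracks ZIP
group = group ^ (rng.random(n) < 0.05).astype(int)  # 5% mixing

# A model never given `group` as an input can still recover it
# from the proxy feature.
X = zip_code.reshape(-1, 1).astype(float)
X_train, X_test, y_train, y_test = train_test_split(X, group, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"group recovered from ZIP alone: {clf.score(X_test, y_test):.0%} accuracy")
```

Any downstream model that uses ZIP code as a feature is, in a city this segregated, quietly using race as a feature too.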

We talk a lot about algorithms in the context of Big Data, but the algorithms themselves are well understood and pretty universal: they're the same ones used in mass surveillance and ad targeting. The training data, though, is subject to the same problems every science faces when it tries to draw a good, random sample for its analysis. Just as bad sampling can blow up a medical trial or a psych experiment, it can confound Big Data. Rather than calling for algorithmic transparency, we should be calling for data transparency, methodological transparency, and sampling transparency.
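As a toy illustration of that sampling problem (synthetic numbers, a hypothetical two-subgroup population), here's how a convenience sample that over-represents one subgroup skews an estimate that a random sample gets right:

```python
import numpy as np

rng = np.random.default_rng(1)

# A population where the outcome differs across two subgroups.
population = np.concatenate([
    rng.normal(10, 2, 70_000),   # subgroup A: 70% of the population
    rng.normal(20, 2, 30_000),   # subgroup B: 30% of the population
])
true_mean = population.mean()

# A random sample estimates the population mean well...
random_sample = rng.choice(population, 1_000, replace=False)

# ...but a convenience sample drawn only from subgroup A does not,
# and any model trained on it inherits the same distortion.
biased_sample = rng.choice(population[:70_000], 1_000, replace=False)

print(f"true mean:          {true_mean:5.2f}")
print(f"random sample:      {random_sample.mean():5.2f}")
print(f"convenience sample: {biased_sample.mean():5.2f}")
```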

The ethical data scientist would strive to improve the world, not repeat it. That would mean deploying tools to explicitly construct fair processes. As long as our world is not perfect, and as long as data is being collected on that world, we will not be building models that are improvements on our past unless we specifically set out to do so.

At the very least it would require us to build an auditing system for algorithms. This would be not unlike the modern sociological experiment in which job applications sent to various workplaces differ only by the race of the applicant—are black job seekers unfairly turned away? That same kind of experiment can be done directly to algorithms; see the work of Latanya Sweeney, who ran experiments to look into possible racist Google ad results. It can even be done transparently and repeatedly, and in this way the algorithm itself can be tested.
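Here's a hedged sketch of what such a paired audit might look like in code, with a hypothetical black_box_score function standing in for the system under test; in a real audit, as in Sweeney's ad experiments, the deployed system would be queried as an opaque service:

```python
import numpy as np

# Hypothetical stand-in for an opaque scoring model. The auditor
# would not see this code, only the scores it returns.
def black_box_score(application: dict) -> float:
    score = 0.5 + 0.02 * application["years_experience"]
    if application["zip_code"] < 25:      # hidden proxy penalty
        score -= 0.15
    return score

# Paired audit: submit applications identical except for the audited
# attribute (here a ZIP-code proxy) and compare the scores.
rng = np.random.default_rng(2)
gaps = []
for _ in range(1_000):
    base = {"years_experience": int(rng.integers(0, 20))}
    a = {**base, "zip_code": int(rng.integers(0, 25))}    # one side of town
    b = {**base, "zip_code": int(rng.integers(25, 50))}   # the other side
    gaps.append(black_box_score(b) - black_box_score(a))

print(f"mean score gap between matched pairs: {np.mean(gaps):+.3f}")
```

Because the paired inputs differ in only one attribute, a persistent score gap is direct evidence of disparate treatment, and the whole test can be re-run whenever the model changes.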

The ethics around algorithms is a topic that lives only partly in a technical realm, of course. A data scientist doesn’t have to be an expert on the social impact of algorithms; instead, she should see herself as a facilitator of ethical conversations and a translator of the resulting ethical decisions into formal code. In other words, she wouldn’t make all the ethical choices herself, but rather raise the questions with a larger and hopefully receptive group.

The Ethical Data Scientist
[Cathy O'Neil/Slate]


(Image: Cluster sampling, Dan Kernler, CC-BY-SA)