“Undercover Economist” Tim Harford (previously) has a new book out, Messy, which makes a fascinating and compelling case that we are in real danger from the seductive neatness of computers, which put our messes out of sight, where they grow into great catastrophes.
In a long excerpt from the book in The Guardian, Harford describes the catastrophe of Air France Flight 447, which crashed into the Atlantic on June 1, 2009, en route from Rio to Paris: a trio of pilots experienced a disastrous combination of well-understood failures, each feeding into the next, all attributable to the way automation in aviation has left pilots ill-equipped to manage disasters.
The problem is that planes fly themselves, until they don’t. No human being can stay attentive to things that never happen. And skills acquired through practice atrophy without that practice: your brain is zero-sum, and the capacity devoted to disused skills gets recruited for the skills you actually use.
Pilots do sometimes practice flying without automation, but only when it is absolutely safe, which means they never get to practice hand-flying in scary, disastrous situations.
This has immediate application: we are entering an era of increasingly automated cars (whether all cars eventually become fully self-driving is another matter; each year’s models ship with more automated safety features that correct driver errors). That means drivers will grow less and less attentive to their cars, and less and less equipped to maneuver them correctly when the automation encounters a situation it can’t cope with.
Harford’s future is one where every highway hosts many Flight 447s every day; that may be less lethal than the status quo, but it is still frightening. And it’s not just cars: automation in sentencing, policing, and other domains will allow an “inexpert operator to function for a long time before his lack of skill becomes apparent”; it will render experts incompetent, because “automatic systems erode their skills by removing the need for practice”; and it will fail at the worst moments, because “automatic systems tend to fail either in unusual situations or in ways that produce unusual situations, requiring a particularly skilful response.”
Harford distinguishes between systems that handle the routine cases so that we can pay attention to the difficult ones, and systems that handle so many routine cases that we simply pay attention to something else, until it’s too late. He proposes an elegant solution, “reverse the role of computer and human”: rather than letting the computer fly the plane with the human poised to take over when the computer cannot cope, perhaps it would be better to have the human fly the plane with the computer monitoring the situation, ready to intervene.
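To make that role reversal concrete, here is a minimal sketch in code. Everything in it (the state class, the envelope thresholds, the recovery values) is a hypothetical illustration, not real avionics logic or anything from Harford’s book; the point is only that the human’s input passes through untouched in the common case, and the computer overrides only at the edges of a safe envelope.

```python
# A minimal sketch of Harford's "reverse the role of computer and human":
# the human flies continuously; the computer only monitors and intervenes
# at the edges of a safe envelope. All names, ranges, and recovery values
# are hypothetical illustrations, not real avionics logic.

from dataclasses import dataclass

@dataclass
class State:
    airspeed_knots: float
    pitch_degrees: float

AIRSPEED_RANGE = (130.0, 350.0)  # hypothetical safe envelope
PITCH_RANGE = (-10.0, 15.0)

def control(state: State, human_input: float) -> float:
    """Return the pitch command to apply: the human's input in the common
    case, an override only when the aircraft leaves the envelope."""
    if state.airspeed_knots < AIRSPEED_RANGE[0]:
        return -5.0  # computer intervenes: pitch down to regain airspeed
    if state.airspeed_knots > AIRSPEED_RANGE[1]:
        return 2.0   # computer intervenes: pitch up gently to bleed speed
    if not PITCH_RANGE[0] <= state.pitch_degrees <= PITCH_RANGE[1]:
        return 0.0   # computer intervenes: level off
    return human_input  # the common case, so the pilot's skills stay fresh

# Routine flight: the pilot's command passes straight through.
assert control(State(250.0, 2.0), human_input=1.5) == 1.5
# A stall-like state: the monitor overrides.
assert control(State(110.0, 12.0), human_input=8.0) == -5.0
```

Because the common case is the human flying, practice never stops; the computer’s job is reduced to the rare exceptions it is actually good at catching.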
The rarer the exception gets, as with fly-by-wire, the less gracefully we are likely to deal with it. We assume that the computer is always right, and when someone says the computer made a mistake, we assume they are wrong or lying. What happens when private security guards throw you out of your local shopping centre because a computer has mistaken your face for that of a known shoplifter? (This technology is now being modified to allow retailers to single out particular customers for special offers the moment they walk into the store.) When your face, or name, is on a “criminal” list, how easy is it to get it taken off?
We are now on more lists than ever before, and computers have turned filing cabinets full of paper into instantly searchable, instantly actionable banks of data. Increasingly, computers are managing these databases, with no need for humans to get involved or even to understand what is happening. And the computers are often unaccountable: an algorithm that rates teachers and schools, Uber drivers or businesses on Google’s search, will typically be commercially confidential. Whatever errors or preconceptions have been programmed into the algorithm from the start, it is safe from scrutiny: those errors and preconceptions will be hard to challenge.
For all the power and the genuine usefulness of data, perhaps we have not yet acknowledged how imperfectly a tidy database maps on to a messy world. We fail to see that a computer that is a hundred times more accurate than a human, and a million times faster, will make 10,000 times as many mistakes. This is not to say that we should call for death to the databases and algorithms. There is at least some legitimate role for computerised attempts to investigate criminal suspects, and keep traffic flowing. But the database and the algorithm, like the autopilot, should be there to support human decision-making. If we rely on computers completely, disaster awaits.
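That 10,000 figure is simple proportion (a back-of-envelope reading of the quote, not a calculation Harford spells out): if mistakes scale with the number of decisions made, a computer making 1,000,000 times as many decisions at 1/100th the error rate makes 1,000,000 ÷ 100 = 10,000 times as many mistakes.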
Gary Klein, a psychologist who specialises in the study of expert and intuitive decision-making, summarises the problem: “When the algorithms are making the decisions, people often stop working to get better. The algorithms can make it hard to diagnose reasons for failures. As people become more dependent on algorithms, their judgment may erode, making them depend even more on the algorithms. That process sets up a vicious cycle. People get passive and less vigilant when algorithms make the decisions.”
Decision experts such as Klein complain that many software engineers make the problem worse by deliberately designing systems to supplant human expertise by default; if we wish instead to use them to support human expertise, we need to wrestle with the system. GPS devices, for example, could provide all sorts of decision support, allowing a human driver to explore options, view maps and alter a route. But these functions tend to be buried deeper in the app. They take effort, whereas it is very easy to hit “Start navigation” and trust the computer to do the rest.
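Here is roughly what “decision support” rather than “decision replacement” might look like in a navigation app. Every name in the sketch is hypothetical; it does not describe any real mapping API, only the design stance of surfacing options instead of silently choosing one.

```python
# Sketch of a navigation app designed as decision support rather than
# decision replacement: it surfaces options and trade-offs and the human
# chooses. Every name here is hypothetical; this is not a real mapping API.

from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: int
    notes: str

def candidate_routes(origin: str, destination: str) -> list[Route]:
    # A real app would query a routing engine; stubbed for illustration.
    return [
        Route("Motorway", 42, "fastest; congestion reported ahead"),
        Route("Coast road", 55, "scenic; avoids the motorway"),
        Route("Back roads", 61, "no tolls; many turns"),
    ]

def present_options(routes: list[Route]) -> None:
    """Instead of silently starting the 'best' route, list every option
    with its trade-offs so the driver decides and stays engaged."""
    for i, route in enumerate(routes):
        print(f"[{i}] {route.name}: {route.minutes} min ({route.notes})")

present_options(candidate_routes("home", "work"))
```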
Messy [Tim Harford/Riverhead]
Crash: how computers are setting us up for disaster [Tim Harford/The Guardian]