Rob Smith is an eminent computer scientist and machine learning pioneer whose work on genetic algorithms has been influential in both industry and the academy; now, in his first book for a general audience, Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All, Smith expertly draws connections between AI, neoliberalism, human bias, eugenics and far-right populism, and shows how the biases of computer science and its corporate paymasters have distorted our whole society.
Smith’s book weaves together the history of science and mathematics’ contributions to our understanding of probability and uncertainty; the philosophical quest to understand the true nature of reality and its relationship to our perceptions and ruminations; the inextricable stories of evolutionary theory and eugenics; and the long project to design a thinking machine. Together, these strands show how the imperatives of neoliberalism, and its way of valuing (and discounting) people, combined with some of computer science’s most ill-advised and habitual simplifications to produce a form of statistical tyranny: one that tries to force humans to simplify their behaviors to suit the models, rather than adapting the models to suit the humans.
Along the way, Smith shows how the parts of machine learning that do work refute some of the uglier philosophical ideas that have gained currency as algorithms have taken over our society: just as the Victorians had their “watchmaker” arguments for a designed order, the rise of evolutionary algorithms has given a new lease on life to eugenic theories about survival of the fittest and the need to purify and protect the “best” among us.
Smith’s own work in genetic algorithms, woven into a memoir about growing up white in the post-Jim Crow South, where racial prejudice was thick on the ground, has shown that diversity is always more robust than monoculture, a theme that recurs in everything from ecology to epidemiology to cell biology.
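(For the algorithmically curious, here’s a toy Python sketch of what that claim means inside a genetic algorithm. This is my own invention, not code from Smith’s book: the fitness landscape, parameters, and function names are all made up for illustration. It contrasts plain selection, which tends to collapse a population onto a single peak, with “fitness sharing,” a classic diversity-preserving technique from the genetic algorithms literature Smith works in.)

```python
import math
import random

# Hypothetical illustration -- not code from Smith's book. A tiny genetic
# algorithm on a one-dimensional landscape with three equally good peaks.
# With plain fitness-proportional selection, drift typically collapses the
# population onto one peak (a monoculture). "Fitness sharing" divides an
# individual's fitness by how crowded its neighbourhood is, so the
# population keeps covering several peaks at once.

PEAKS = (0.1, 0.5, 0.9)

def fitness(x):
    # Three Gaussian peaks of equal height.
    return sum(math.exp(-((x - p) / 0.05) ** 2) for p in PEAKS)

def shared_fitness(pop, i, radius=0.1):
    # Triangular sharing function: nearby individuals count against you.
    xi = pop[i]
    niche = sum(1 - abs(xi - x) / radius for x in pop if abs(xi - x) < radius)
    return fitness(xi) / max(niche, 1.0)

def evolve(pop, share, generations=200):
    for _ in range(generations):
        scores = [shared_fitness(pop, i) if share else fitness(x)
                  for i, x in enumerate(pop)]
        # Fitness-proportional selection plus a little Gaussian mutation.
        pop = [min(1.0, max(0.0,
                   random.choices(pop, weights=scores)[0] + random.gauss(0, 0.02)))
               for _ in pop]
    return pop

def covered(pop):
    # How many of the three peaks still have individuals within 0.1 of them?
    return sum(any(abs(x - p) < 0.1 for x in pop) for p in PEAKS)

random.seed(1)
start = [random.random() for _ in range(60)]
print("peaks covered, plain selection:", covered(evolve(list(start), share=False)))
print("peaks covered, fitness sharing:", covered(evolve(list(start), share=True)))
```

The sharing radius acts like an ecological niche: solutions only compete with their near neighbours, so no single variant can take over the whole population. That’s the genetic-algorithm-scale version of the diversity argument above.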
But, as Smith demonstrates, we are being penned in and stampeded by algorithms, whose designers have concluded that the best way to scale up their statistical prediction systems is to make us all more predictable — that is, to punish us when we stray from the options that the algorithms can gracefully cope with. In a way, it’s the old story of computing: forcing humans to adapt themselves to machines, rather than the other way around — but the machines that Smith is critiquing are sold on the basis that they do adapt to us.
This is a vital addition to the debate on algorithmic decision-making, machine learning, and late-stage platform capitalism, and it’s got important things to say about what makes us human, what our computers can do to enhance our lives, and how to have a critical discourse about algorithms that does not subordinate human ethics to statistical convenience.
Smith’s book comes out in the UK this week, and it’ll be out in the USA in August.
(Image: Cryteria, CC-BY)