Maciej Cegłowski (previously) gave this talk, “Superintelligence: The Idea That Eats Smart People,” at Web Camp Zagreb last October, spending 45 minutes delving into the origin of the idea that computers are going to become apocalyptic, self-programming, superintelligent basilisks that end all life on Earth (and variations on this theme), and then explaining why this fundamentally evidence-free, fuzzy idea has colonized so many otherwise brilliant people, including the likes of Stephen Hawking, and why it’s an irrational and potentially harmful belief system.
As a science fiction writer, I’ve spent a fair bit of time noodling with these ideas in both stories and essays: True Names, the novella I wrote with Ben Rosenbaum on the subject, was nominated for a Hugo Award; Charlie Stross and I wrote a novel on the theme, Rapture of the Nerds; and there’s my essay on the Singularity as a spiritual belief system that can pass for a scientific prediction.
One thing I’m keenly aware of is that the aesthetic appeal of futuristic Singularity predictions is firmly rooted in the here-and-now: it’s nice to think that there is a thing called “progress,” and that we’re in the midst of it; it’s nice to think that when progress outstrips your capacity to make sense of it, it’s because it’s transcended human comprehension (and not, say, because your time has passed and you are becoming irrelevant to a field and discourse you once dominated); it’s nice to think that the privilege you enjoy in the midst of great deprivation is in the service of a better future for all humanity, and not a fundamentally unfair situation that you would rise up in fury over if the roles were reversed.
Cegłowski’s expert puncturing of the arguments for “AI Alarmism” was prompted by philosopher Nick Bostrom’s bestselling book Superintelligence: Paths, Dangers, Strategies, which is a fun read but which also palms a lot of cards in the construction of its arguments.
I believe the greater social meaning of AI Alarmism lies in twin phenomena: the worry of an ever-larger class of have-nots whose lives are upended by the uneven economic returns of technological disruption (in other words, the problem isn’t that only some of us have to clean toilets while all of us have to use them; it’s that the dividends from self-cleaning toilets never accrue to the toilet-cleaners they displace); and the blithe dismissal of this worry by an ever-smaller, ever-richer 1%, who use the story of AI as a spiritual belief system that declares this division to be natural, inevitable, and, ultimately, beneficial.
At one point, Bostrom outlines what he believes to be at stake:
“If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.”
That’s a heavy thing to lay on the shoulders of a twenty-year-old developer!
There’s a parlor trick, too, where by multiplying such astronomical numbers by tiny probabilities, you can convince yourself that you need to do some weird stuff.
This business about saving all of future humanity is a cop-out. We had the same exact arguments used against us under communism, to explain why everything was always broken and people couldn’t have a basic level of material comfort.
We were going to fix the world, and once that was done, happiness would trickle down to the point where everyday life would change for the better for everyone. But it was vital to fix the world first.
I live in California, which has the highest poverty rate in the United States, even though it’s home to Silicon Valley. I see my rich industry doing nothing to improve the lives of everyday people and indigent people around us.
But if you’re committed to the idea of superintelligence, AI research is the most important thing you could do on the planet right now. It’s more important than politics, malaria, starving children, war, global warming, anything you can think of.
Because what hangs in the balance is trillions and trillions of beings, the entire population of future humanity, simulated and real, integrated over all future time.
In such conditions, it’s not rational to work on any other problem.
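The “parlor trick” Cegłowski mentions is naive expected-value arithmetic: multiply an astronomical payoff by a vanishingly small probability and the product still dwarfs every ordinary concern. A minimal sketch of the move, using illustrative figures of my own rather than Bostrom’s:

```latex
% Naive expected-value arithmetic behind the "parlor trick."
% Both numbers below are illustrative assumptions, not Bostrom's:
% even a wildly uncertain 1-in-10^10 chance that your AI-safety work
% matters, multiplied by 10^52 hypothetical future lives, "outweighs"
% saving billions of real people with certainty.
\[
  \mathbb{E}[\text{lives saved}]
    = \underbrace{10^{-10}}_{\text{probability your work matters}}
      \times
      \underbrace{10^{52}}_{\text{future lives at stake}}
    = 10^{42}.
\]
```

Because the payoff term can be inflated without bound while the probability term is unfalsifiable, the same arithmetic can be tuned to justify nearly any conclusion, which is why it reads as a trick rather than a calculation.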
Superintelligence: The Idea That Eats Smart People [Maciej Cegłowski/Idlewords]