Roko’s Basilisk is a notorious thought experiment about artificial intelligence and our own perception of reality, centered on a hypothetically all-powerful future AI. It’s kind of like Newcomb’s Paradox, with a little more Battlestar Galactica-style AI genocide.
If you want to know more about it, feel free to click the link. But be warned: the whole point of Roko’s Basilisk is that the mere knowledge of Roko’s Basilisk makes you complicit in Roko’s Basilisk. If Roko’s Basilisk is real (a question that is intrinsic to the thought experiment itself), then the potential contained within that hypothetical idea is enough to sow the seeds of self-destructive doubt. And that’s how Roko’s Basilisk wins.
You don’t need to know the specific details of Roko’s Basilisk to understand how the concept relates to the growing phenomenon of deepfakes: the use of deep learning to create deceptively realistic fake videos, like inserting Nicolas Cage’s face into every movie. The cybersecurity firm Deeptrace recently released a report on the myriad ways that deepfakes threaten our trust in knowledge, and in our own eyes. Their conclusion? The mere idea of deepfakes is enough to bring the worst-case scenario to life, even if we never actually reach that worst case in practice.
Nicolas Cage as Amy Adams, because deepfakes.
In reality, deepfakes haven’t actually been used to successfully falsify videos of politicians for large-scale propaganda; like most things on the internet, they’re mostly used for porn. But the fact that they could be used to deceive the public is itself enough to send public trust spiraling downward, forcing us to debate both what is true and the methods by which we determine what is true. And if we can’t even agree on the existence of a shared reality, well, then how the hell are we ever going to agree on anything else?
From the MIT Technology Review:
In the lead-up to the 2020 US presidential elections, increasingly convincing deepfake technology has led to fears about how such faked media could influence political opinion. But a new report from Deeptrace Labs, a cybersecurity company focused on detecting this deception, found no known instances in which deepfakes have actually been used in disinformation campaigns. What’s had the more powerful effect is the knowledge that they could be used that way.
“Deepfakes do pose a risk to politics in terms of fake media appearing to be real, but right now the more tangible threat is how the idea of deepfakes can be invoked to make the real appear fake,” says Henry Ajder, one of the authors of the report. “The hype and rather sensational coverage speculating on deepfakes’ political impact has overshadowed the real cases where deepfakes have had an impact.”
Thumbnail image from the “Barack Panther” deepfake