In Do Androids Dream of Electric Sheep?, the Philip K. Dick novel that inspired the film Blade Runner, a bounty hunter pursues a group of androids who have been posing as human beings. He is eventually arrested and accused of being an android himself. The officers bring him to what turns out to be a counterfeit police station run entirely by androids, not all of whom are aware that they aren't human.
"What do you do," one of the robocops asks him, "roam around killing people and telling yourself they're androids?"
It's a complicated situation. But then, androids play a complicated role in Dick's fiction. On the most obvious level, they represent the inhuman and the mechanical: People have empathy and will, while robots are rigid and soulless. It's a familiar division in science fiction, though some storytellers prefer to put other monsters in the androids' place. Think of the aliens in Don Siegel's movie Invasion of the Body Snatchers. "You can't love or be loved! Am I right?" the film's hero asks one of the pod people. "You say it as if it were terrible," comes the reply.
But few things are so simple in a Philip K. Dick story. One moment recurs throughout Dick's fiction: a character who thinks he's human discovers he's really a machine. If Dick had written Invasion of the Body Snatchers, the protagonist would probably learn halfway through the picture that he's a pod person who somehow forgot his origins. He doesn't feel like an inhuman menace: He genuinely loves his wife, he remembers the life of the man he replaced, and he doesn't want the world to be overrun by invaders any more than his friends do. Meanwhile, the cold, cruel authority figure that the hero was sure was an alien would turn out to be a regular Earthman who just happens to be a soulless thug. Who's human now?
So Dick's androids have a second role, not as our opposites but as our alter egos. As the author put it in a 1972 speech:
Someday a human being, named perhaps Fred White, may shoot a robot named Pete Something-or-Other, which has come out of a General Electrics factory, and to his surprise see it weep and bleed. And the dying robot may shoot back and, to its surprise, see a wisp of gray smoke arise from the electric pump that it supposed was Mr. White's beating heart. It would be rather a great moment of truth for both of them.
Dick wasn't afraid of our synthetic environment; he saw the possibility of humanity and life there. His fear was focused on a "kind of pseudo-human behavior exhibited by what were once living men." The "production of such inauthentic human activity," he warned in that speech,
has become a science of government and such-like agencies now. The reduction of humans to mere use – men made into machines, serving a purpose which although "good" in an abstract sense has, for its accomplishment, employed what I regard as the greatest evil imaginable: the placing on what was a free man who laughed and cried and made mistakes and wandered off into foolishness and play a restriction that limits him, despite what he may imagine or think, to the fulfilling of an aim outside of his own personal – however puny – destiny.
The deep problem, for Dick, wasn't that mechanisms might become more manlike. It was that men might be reduced to mechanisms.
In 1954, the French sociologist Jacques Ellul wrote The Technological Society, one of those books that manage to be illuminating even though they're deeply wrong. "Wherever a technical factor exists," Ellul argued, "it results, almost inevitably, in mechanization: technique transforms everything it touches into a machine." Ellul believed such transformations were spreading across 20th-century society, creating something new and ugly. When "technique enters into every area of life, including the human, it ceases to be external to man and becomes his very substance," he wrote. "It is no longer face to face with man but is integrated with him, and it progressively absorbs him."
That's one portrait of our relationship with technology. In 2003, the cognitive scientist Andy Clark offered another. In a wonderful book called Natural-Born Cyborgs, Clark argued that our brains are special precisely because of "their ability to enter into deep and complex relationships with nonbiological constructs, props, and aids." From cars to cochlear implants, our tools are extensions of our selves, our technology an extension of our humanity. Where Ellul warned that technology was absorbing us, Clark described it radiating from us.
Broadly speaking, I think Clark is right. But history, particularly its most recent century, is filled with episodes that look more like Ellul's image: times when powerful institutions have made tools of human beings. That's what a hierarchy is: a system that treats individuals as appendages. In those hierarchies' hands, technology has entrenched centralized power in countless ways, making it easier for institutions to spy, track, coerce, incarcerate, and kill.
It's just that those hierarchies don't have a monopoly on technology. It's in the hands of figures outside those institutions too, and it's in the hands of underlings who don't want to be treated as appendages anymore. Tools can extend or restrict our autonomy, depending on who controls them and how they're used. The result is not the dystopia Ellul predicted – what he called a "monolithic technical world" – but something much more chaotic and interesting.
The prophet of that chaos is Philip K. Dick, the writer who imagined both a man with a pump for a heart and a weeping, bleeding robot. In that same 1972 speech, he offered a prescient vision of a future that is both totalitarian and anarchistic, a world of both universal surveillance from above and casual monkey-wrenching from below. And because Dick was always more likely to give his readers a wry dark comedy than a heroic saga, his human resistance owed as much to our selfishness and stupidity as it did to our love and ingenuity:
The totalitarian society envisioned by George Orwell in 1984 should have arrived by now. The electronic gadgets are here. The government is here, ready to do what Orwell anticipated. So the power exists, the motive, and the electronic hardware. But these mean nothing, because, progressively more and more so, no one is listening. The new youth that I see is too stupid to read, too restless and bored to watch, too preoccupied to remember. The collective voice of the authorities is wasted on him; he rebels. But rebels not out of theoretical, ideological considerations, only out of what might be called pure selfishness. Plus a careless lack of regard for the dread consequences the authorities promise him if he fails to obey. He cannot be bribed because what he wants he can build, steal, or in some curious, intricate way acquire for himself. He cannot be intimidated because on the streets and in his home he has seen and participated in so much violence that it fails to cow him. He merely gets out of its way when it threatens, or, if he can't escape, he fights back. When the locked police van comes to carry him off to the concentration camp the guards will discover that while loading the van they have failed to note that another equally hopeless juvenile has slashed the tires. The van is out of commission. And while the tires are being replaced, the other youth siphons out all the gas from the gas tank for his souped-up Chevrolet Impala and has sped off long ago.
If Dick had lived longer, he might have imagined the delinquent taking off in one of Google's driverless cars instead. But that wouldn't change his point at all. The kid would be riding a robot, but the tool would still be answering to a human being.