What Blade Runner Tells Us About Modern AI
Pope Francis recently devoted his monthly prayer intention to the safe and beneficial development of artificial intelligence. Though the prayer was expressed primarily in terms of reducing inequality, a theme this Pope has touched on often, its final plea had an uncanny resonance: “Let us pray that the progress of robotics and artificial intelligence may always serve humankind… we could say, may it ‘be human.’”
The question of what it is to “be human,” as distinct from the rest of Creation, has concerned the Church for centuries. It was a driving force behind the philosophy of Descartes, whose strong mind-body dualism was in part an attempt to reconcile Church doctrine with the dawning rationalist tradition. Though Descartes was primarily concerned with distinguishing us from animals, which he considered mere automata, the specter of a fully mechanical account of humanity looms over his work. Even over the last century, as AI has become less a fantasy and more a research program, this fundamental ambivalence has remained. We strive to build machines in our own image, and to conceive of ourselves in mechanistic terms, and yet each step we take in this direction unnerves us. The prospect of closing the gap entirely, though it is the stated aim of both AI and cognitive science, strikes most people as horrifying.
Nowhere is this ambivalence better expressed than in Blade Runner, Ridley Scott’s 1982 sci-fi masterpiece. The film takes place in an imagined 2019, and though it may have overshot the mark in some of its technological details (we still have no flying cars), it could not be sharper with respect to the anxieties that define our age. Scott imagined a world controlled by a few large corporations that have become enormously profitable through the development of intelligent machines. These humanoid robots, known as “replicants,” are primarily consigned to narrow, routine jobs, but there is a pervasive fear that they will infiltrate other areas of human life. The film tells the story of Deckard (a deliberate near-homophone of Descartes), a so-called “blade runner” charged with hunting down a group of replicants that have escaped from an off-world colony. Deckard disdains replicants, but in his pursuit he unwittingly falls in love with one and confronts the possibility that he might be a replicant himself.
This fear of mistaken identity is distilled, in the popular consciousness, in the image of the Turing Test. Alan Turing originally proposed it as a test of whether machines could think, but its connotations have shifted in response to technological development. At its core, the meaning of the test is existential. As Brian Christian writes:
“The Turing test attempts to discern whether computers are, to put it most simply, ‘like us’ or ‘unlike us’: humans have always been preoccupied with their place among the rest of creation. The development of the computer in the twentieth century may represent the first time that this place has changed. The story of the Turing test, of the speculation and enthusiasm and unease over artificial intelligence in general, is, then, the story of our speculation and enthusiasm and unease over ourselves. What are our abilities? What are we good at? What makes us special?”
Blade Runner features a Turing Test analog known as the Voight-Kampff Test, the purpose of which is to weed out replicants posing as human beings. The theoretical basis for the test is never quite made explicit; we know only that it has something to do with conversation and various physiological responses. But in the failed responses of one escaped replicant, Leon, we recognize shortcomings that still plague AI systems today.
Modern machine learning systems face a tradeoff: learn too little from the training data and the model will miss the underlying pattern, but hew too closely to the training data and its performance will not generalize to new examples. In the jargon of the discipline, the former kind of error is called “underfitting”; the latter, “overfitting.” Usually, engineers seek out a sweet spot between these two with respect to a particular narrow problem. By comparison with the goal of a truly flexible intelligence, though, all modern AI systems are drastically overfitted. They are highly specialized to a particular task and brittle in their application. Leon, too, was designed for a particular task: we’re told he can “lift atomic loads all day and night.” When Holden, the test administrator, begins to take him beyond the scope of that task into a hypothetical, he overfits, seeking too much specificity (a small numerical sketch of the tradeoff follows the exchange below).
“You’re in a desert walking along the sand…”
“What one?”
“What?”
“What desert?”
“It doesn’t make any difference what desert. It’s completely hypothetical.”
“But how come I’d be there?”
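The tradeoff is easy to see in miniature. Here is a minimal sketch, assuming nothing beyond numpy; the sine curve, noise level, and polynomial degrees are all invented for illustration. Polynomials of increasing degree are fit to noisy data: too low a degree misses the pattern entirely, while too high a degree memorizes the noise and fails on fresh examples.

```python
# Under- vs. overfitting in miniature: fit polynomials of increasing
# degree to noisy samples of a sine curve, then compare error on the
# training points with error on held-out points the model never saw.
import numpy as np

rng = np.random.default_rng(0)

def noisy_sine(n):
    x = rng.uniform(0, 1, n)
    return x, np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)

x_train, y_train = noisy_sine(20)
x_test, y_test = noisy_sine(200)   # the "new examples"

for degree in (1, 4, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")

# Typically, degree 1 underfits (high error everywhere), degree 4 sits
# near the sweet spot, and degree 15 overfits: near-zero training error
# but worse error on the points it never saw.
```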
Modern AI systems can often perform exceedingly well within their domain, giving an illusion of generality. A system from DeepMind, for example, learned to play various Atari games at a superhuman level. But a demonstration from the robotics startup Vicarious showed that the system’s performance collapsed when elements of the game were shifted by just a few pixels. This sensitivity to slight perturbations makes deep learning systems too vulnerable for many real-world applications; a toy version of the failure is sketched after the exchange below. Leon, too, is highly sensitive to slight novelty.
“You look down and you see a tortoise, Leon. It’s crawling towards you.”
“Tortoise? What’s that?”
“You know what a turtle is?”
“Of course.”
“Same thing.”
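To see why knowledge keyed to exact pixel positions is brittle, consider a deliberately crude stand-in for such a system: a classifier that has simply memorized pixel patterns. The sprites and the matcher below are made up for illustration; this is not Vicarious’s actual experiment, only the failure mode it exposed, in miniature.

```python
# A "model" that memorized exact pixel layouts misclassifies the very
# same object once it is nudged a few pixels to the side.
import numpy as np

# Two 8x8 "game sprites" standing in for frames the system trained on.
paddle = np.zeros((8, 8)); paddle[6, 0:4] = 1.0   # bar on the left
wall   = np.zeros((8, 8)); wall[6, 4:8] = 1.0     # bar on the right
templates = {"paddle": paddle, "wall": wall}

def predict(image):
    # Nearest-template classifier over raw pixels.
    return min(templates, key=lambda name: np.sum((image - templates[name]) ** 2))

print(predict(paddle))                  # "paddle": pixels match exactly
shifted = np.roll(paddle, 3, axis=1)    # same paddle, moved 3 pixels right
print(predict(shifted))                 # "wall": a small shift, a wrong answer
```

A convolutional network is far less naive than this matcher, but Vicarious’s demonstration suggests that what such networks learn about a game is closer to memorized pixel statistics than to concepts like “paddle” or “ball.”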
The ability to flexibly adapt to novel circumstances remains, for the time being, uniquely human. We reflect, in our cognitive capacities, the bottomless complexity of the physical and social world to which we are adapted. And indeed there is evidence that even roboticists pursuing the purely pragmatic goal of robust behavior in real-world environments can best achieve it by borrowing the biological strategies of evolution and development (a toy version of the evolutionary loop is sketched below). To give machines our abilities, it seems, we have to give them our histories.
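What “evolution” means as an engineering strategy can be stated compactly. The sketch below is a minimal evolutionary loop with the “robot” reduced to a made-up fitness function; every name and number in it is illustrative rather than drawn from any actual robotics system.

```python
# A minimal evolutionary loop: mutate a controller's parameters, keep
# whichever variant behaves best, and repeat.
import numpy as np

rng = np.random.default_rng(1)

def fitness(params):
    # Stand-in for "how well the controller behaves in the world":
    # performance peaks at one particular parameter setting.
    target = np.array([0.5, -1.2, 2.0])
    return -np.sum((params - target) ** 2)

parent = rng.normal(size=3)                        # a random initial controller
for generation in range(200):
    # Variation: spawn noisy offspring of the current best controller.
    offspring = parent + rng.normal(0, 0.1, size=(8, 3))
    # Selection: keep whichever candidate behaves best.
    parent = max(np.vstack([parent, offspring]), key=fitness)

print(parent)  # ends up close to the target parameters
```

Real evolutionary robotics replaces the toy fitness function with physics simulations or hardware trials, but the loop of variation and selection is the same: in effect, a way of giving a machine a compressed history.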
It is no surprise, then, that the question that drives Leon over the edge and leads him to shoot Holden is one about his past.
“Describe in single words only the good things that come into your mind about your mother.”
Unlike the newer replicant models in the film, Leon has not been given memories, and so has no personal history to speak of. The philosopher John Locke believed that the continuity of our memories was the seat of our selfhood, because it is only by virtue of this continuity that we know ourselves to be the same person from one moment to the next. And indeed, part of what makes interacting with even a cutting-edge AI system like GPT-3 so uncanny is the lack of a unitary identity. Paradoxically, this multifarious identity is part of what helps such systems behave intelligently. As Brian Christian writes, “[t]o be human is to be a human, a specific person with a life history and idiosyncrasy and point of view; artificial intelligence suggests that the line between intelligent machines and people blurs most when a purée is made of that identity.” Here, the division between intelligence and humanity, whatever the latter may turn out to mean, becomes especially stark. A deep neural network trained on hundreds of thousands of conversations may be intelligent, but it cannot give us the comforting cues that tell us we are interacting with a particular person.
As Blade Runner makes clear, though, the various cues to our humanity are just that: cues. A replicant with memories is a person in every sense that counts. And as long as we don’t believe in a mystical ghost in the machine, we are forced to concede that there are no guarantees about which capacities and traits will remain uniquely human. Consciousness may persist as an unsolvable mystery, but when robots begin to imitate all the outward signs we use to attribute consciousness to our fellow humans, this mystery will lose its salience. As Sam Harris and Paul Bloom point out in a 2018 op-ed, “Anything that looks and acts like the hosts on ‘Westworld’ [or, indeed, the replicants in Blade Runner] will appear conscious to us, whether or not we understand how consciousness emerges in physical systems.” This future may be far off, but the questions it poses are already with us. With each new development, we’ll have to ask ourselves: how human is human enough?