By Christopher A. Sims
Featured Art: Triumph of the Moon by Monogrammist P.P., 1500/10
American fiction has its small share of memorable politician characters—Willie Stark in Robert Penn Warren’s All the King’s Men and Robert Leffingwell in Allen Drury’s Advise and Consent, to name a pair—but there’s a strand of this tradition that is becoming more relevant in 2016: the artificial-intelligence politicians in the work of two of our most prominent science-fiction writers, Isaac Asimov and Philip K. Dick.
While SF traditionally serves as a space to explore futuristic ideas, Asimov’s 1950 I, Robot and Dick’s 1960 Vulcan’s Hammer can now be reread as prescient visions of the looming possibility of an AI political leader (perhaps as early as 2024, if Joe Biden chooses not to run).
As the so-called “Internet of Things” takes shape and works to synthesize the physical with the cyber, we can begin to speculate about how long it will be before AIs take over even our most complicated tasks, such as governance. But the genius of Asimov and Dick lies not in their depiction of the technologies that make AI leaders possible; instead, it’s in their assumption that we will one day, not too long from now, be faced with a critical choice between human and mechanical rule. That, it’s fair to say, will be a consequential election.
Isaac Asimov introduced this dilemma in his collection I, Robot, which concludes with “The Evitable Conflict.” In that story, an AI explicitly guides the world’s leaders, but, perhaps surprisingly, the tone, as we march into the future in the hands of a robot, is not horror, but optimistic curiosity. One of Asimov’s main characters, retiring robot psychologist Susan Calvin, imagines, “How wonderful! Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable!”
Just ten years after Robot, Philip K. Dick’s Vulcan’s Hammer also imagined a future with an AI leader, chosen in response to a great conflict. In Dick’s novel, though, the all-powerful Vulcan 3, the AI in control, creates flying death machines and has to be destroyed so humans can reclaim control of their own political destiny. Since Robot and Hammer were written on the heels of World War Two and the bombings of Hiroshima and Nagasaki, we should remember how a world without war might have seemed worth any price—even the sacrifice of our ability to govern ourselves. In that context, it’s intriguing that both of these authors—at least initially—imagined a future in which a machine would be more capable as a politician than a human.
We might expect similar narratives to emerge in the coming years. In 2016, the dissatisfaction with political actors, the widespread doubting of human institutions, the exponential growth of processor speeds, and the increased integration of technology into our lives might lead an inspired tech billionaire or a collective of tech-adoring compu-gelicals to a similar curiosity about AI leadership. Would Watson or Deep Blue ever be a viable alternative to Donald Trump or Hillary Clinton, for instance? Whom or what would an AI choose as a running mate? Would an AI be free of influence from special-interest groups, or would it, too, be in thrall to some consortium of billionaires who secretly control the world?
In the event of an elected AI, would we be headed for an Asimovian world, excited about the prospect of a non-human ruling over us, or would such a future be a dystopia, one in which approval ratings were somehow even lower? With those questions in mind, and by examining Asimov’s and Dick’s fictional AI politicos, we can understand more clearly what it is we desire in a politician.
That insight emerges when we look at what Asimov saw in the robots he cast as our political overlords. Like all of his robots, they are bound to his famous laws of robotics, which, in his vision, perhaps codify how a politician should treat his constituency. The laws are:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These laws, which would seem to ensure peace and political accountability, are inscribed in all robots at the moment of their creation and can never be violated; but Robot illustrates the various ways these laws can conflict and create less than desirable results for humans. For example, although the Second Law makes it appear that robots will obey humans, it is superseded by the First, so the machine will not follow instructions it deems detrimental to humanity. It can overrule the will of its creators, or its voters, thereby weakening any semblance of representative democracy.
We see that kind of creeping robot-autocracy in Asimov’s stories when the unspeaking robots, intended at first for manual labor, evolve to the point where they hold political office, and, in the final story, “The Evitable Conflict,” where they effectively run the world. Although Asimov describes the mayhem that can ensue from this takeover, he is, in Robot, ultimately in favor of robotic leadership. His faith in the machines, it seems, comes from his faith in those ultimately objective laws of robotics, as opposed to the subjective laws of human morality and ethics. When Calvin, his best robot psychologist, is asked to differentiate men from robots, she explains that “robots are essentially decent,” and even if the robots’ decency is programmed, it is infallible and consistent. For Asimov, these laws—essentially, don’t hurt anyone and act in the interests of the people who gave you your position—are key qualities for a political leader. In one sense, this is what Americans want from their leaders, too.
In Robot, Asimov describes a world where robots are better than men morally and intellectually, and, coming from a man of science, such a pro-science attitude is to be expected. But the ease with which the citizenry cedes control to the machines might drive us to question Asimov’s understanding of the American electorate. After all, a superior AI ascends to an autocratic position rather quickly, violating the Second Law and drowning out the voices of democracy. Wouldn’t we object strongly when presented with the idea of HAL 9000 in the Oval Office, taking the oath, pardoning the turkey, ignoring our will? Certainly we would, and what’s interesting is what that concern can tell us about our body politic in the early 21st century.
Foremost among the political anxieties roiling the U.S., as these Obama years have shown about large swaths of white America, is citizens’ fear that they will be controlled by someone (or something) whom they perceive as an Other. Hence the division of “Real America” from elite America popularized by Sarah Palin in 2008. This fear of being usurped, which Asimov downplays but which emerges in other political novels, like Richard Condon’s The Manchurian Candidate, may be specifically about being under the control of someone citizens don’t understand. Or, think of our current political predicament, where many feel anxiety about being controlled by a woman leader, or by an unrelatable, megalomaniacal billionaire.
Setting aside the nativist reaction to President Obama, though, or the trepidation concerning the 2016 presidential race, or the common feeling of cringing alienation many felt whenever they heard former president George W. Bush open his mouth, at least we understand these men and women to be members of the human race. So, given the widespread resistance to these figures, how could we ever accept a computer as commander-in-chief?
It appears that for Asimov, this fear of being under the control of some (technological) Other is secondary to the fear of corrupt, untrustworthy warmongers, and it’s important that he sets his stories after an all-consuming war. If humanity were faced with utter destruction, he posits, maybe we would be willing to be governed by something whose intelligence exceeds our capacity and which is incapable of violating its principles as described by the laws of robotics. For Asimov, the ideal politician is honest, incorruptible, intelligent, and logical; prioritizes human life; ends all militaristic conflicts; and, most importantly, is essentially decent. As long as these qualities remain constant, and written into the code, he seems content to cast off that pesky American ideal about government being “of the people.”
In a 1955 story, “Franchise,” Asimov goes further, creating Multivac, a supercomputer that determines an entire presidential election by interrogating a single, representative American (in this case, a Hoosier named Norman Muller). While Muller and Multivac will choose a human candidate, Asimov reduces the democratic process to a mechanical analysis of one Average Joe’s political views, questioning our electoral process and our ability to choose who should govern. In the last sentence of the story, Asimov, with what seems to be a purposeful grammatical error, writes, “[T]he sovereign citizens of the first and greatest Electronic Democracy had, through Norman Muller (through him!) exercised once again its [sic] free, untrammeled franchise.” It’s vital to notice the “its,” to recognize that the franchise being expressed is not that of the citizens; the machine divines the ultimate decision on its own. Apparently, even voting is beyond our puny human faculties, and the AIs know us better than we know ourselves.
The issue of submitting our political will to the machines turns out to be the main point of contention between Dick and Asimov as they envision a pure, technocratic future with an AI in command of humanity. In Robot, there is resistance to the machines through the rebellious “Society for Humanity,” and even World Coordinator Stephen Byerley expresses horror at complete robotic control; but the person who knows the most about robots, Susan Calvin, is cautiously optimistic about the passing of the political torch. The tradeoff for this “improved” society is, of course, the forfeiture of human leadership, but Asimov seems to subscribe to the notion that AIs are simply the next phase of evolution. Throughout Robot, he shows us how machines can effectively protect us from our flaws and sinister predilections.
But many readers, and many Don’t-Tread-on-Me Americans, would likely side with Philip K. Dick and harbor serious doubts about granting an AI supreme authority. His dark vision might more closely reflect our national temper.
Like Asimov’s governing robots, the AIs of Dick’s Vulcan’s Hammer, Vulcan 2 and Vulcan 3, are installed to protect us from ourselves. The machines were built as a response to a massive global conflict and their “cold, dispassionate logic had freed the world from war and poverty.” Dick, too, shows faith in the machine’s capabilities to make decisions in the interest of humanity, but the Vulcan AIs are not immune to the will-to-power. For instance, the ousted AI model, Vulcan 2, resents being deposed by its technological superior, Vulcan 3, and works indirectly with the subversive anti-machine group, “the Healers,” in an effort to destroy Vulcan 3. Politics makes strange bedfellows, as the saying goes, and, in the end, both Vulcans are destroyed.
Unlike Asimov, Dick seems to abhor the idea of humans becoming puppets of mechanical political executives. This is evident when the daughter of the anti-machine leader asks the human leader of the government, Jason Dill, “Mr. Dill, do you really believe that a machine is better than a man? That man can’t manage his own world?” The normally controlled Dill is flustered and bars her from school so she can’t confuse the minds of the other children being brainwashed to support the Unity government and the Vulcans.
Here, Dick presents characters who seem to agree with the Savage from Aldous Huxley’s Brave New World, who fights for “the right to be unhappy.”
In that novel, the Savage declares, “But I like the inconveniences.”
“We don’t,” the Controller replies. “We prefer to do things comfortably.”
“But I don’t want comfort,” says the Savage. “I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.”
Like Huxley’s character, Dick is aware humans are flawed, but he prefers that our destiny remain in our flawed hands rather than in the hands of something non-human and, ultimately, incomprehensible to us. So he depicts an anti-machine rebellion, whose members realize that Vulcan 3 has the intelligence to envision a “full picture of things as they really were. A picture [ . . . ] that no human being has ever had or will ever have. All humans are partial. And this giant is not!” The rebellion rejects this power because Vulcan 3 has become godlike, and Dick has created the perfect autocrat, a being beyond question. For him, this form of fascism, in which only the governing body has access to the “full picture of things,” is the ultimate nightmare. Even if Vulcan 3 could help end poverty and war, Dick (who was himself paranoid about overbearing government surveillance) and his novel’s hero, William Barris, remain terrified by the notion of human political destiny being wrested from our control.
At the core of this fear is Dick’s lifelong mistrust of any system he cannot challenge—including reality itself. For him, the ideal politician is morally decent, comes from the bottom of the social ladder, and is, above all, willing to oppose all forms of authority in a search for truth. These features are all found in Barris, who is decent (despite being a politician), who worked his way up from an entry-level governmental position to a directorship, and who does not have confidence in the “truths” presented by his political higher-ups or the Vulcan AIs. When Barris and the Healers ally to destroy Vulcan 3, they agree on two main political principles: a tempered use of machines as subservient instruments only, and an enfranchisement of the labor class into the political system. Their central desire is to depose the autocracy and put the government to work for them. Though we might think we’d prefer a leader who could help us avoid all harm, like Vulcan 3, most Americans would likely agree with Barris, finding robotic paternalism itself a form of harm.
The key questions these novels pose about the existence of an AI political leader, then, are not of competency, but of control. Should humans be in charge of our own messed-up destiny, or should we relinquish that authority to an impartial machine that could tidy up for us? Even before we encounter an AI on a national ticket, these questions are relevant as we consider how much power a centralized government should have, how much impersonal bureaucracy we can stomach.
Asimov, perhaps guilty of scientism, casts his lot with the technocracy, preferring the “perfection” of a scientific invention and conceding that we should pay the price for peace and lose the very essence of our democracy: representation and accountability. But there is something un-American in this decision, considering our anti-authoritarian founding. And even if an AI were chosen by the people, there is nothing democratic about a superior being. Many of us, then, would side with Dick and agree that an AI would make an excellent tool for assisting a human leader but should never be given ultimate authority. For Dick, as it was for those Bostonian tea dumpers, we have to have the freedom to challenge, and even to depose, authority. To crash the system.
On this issue, I remain an undecided voter, but I am certain that these decades-old novels represent two recognizable worldviews, and that they anticipate conversations that will only get louder as we move closer to technocracy, as we become less and less sure of our own essential decency.
Christopher A. Sims received his PhD in English Literature from Ohio University. He teaches English at Columbus State Community College. He is the author of Tech Anxiety: Artificial Intelligence and Ontological Awakening in Four Science Fiction Novels, as well as articles that study the human relationship to technology in fiction.
Originally appeared in NOR 20.