Misidentification’s Promise: the Turing Test in Weizenbaum, Powers, and Short

Jennifer Rhee (bio)
Duke University
jsr11@duke.edu

Abstract
 
In popular culture and in artificial intelligence, the Turing test has been understood as a means to distinguish between human and machine. Through a discussion of Richard Powers’s Galatea 2.2: A Novel, Joseph Weizenbaum’s computer program therapist ELIZA, and Emily Short’s interactive fiction Galatea, this essay argues that our continued fascination with the Turing test can also be understood through Turing’s introduction of the very possibility of misidentifying human for machine, and machine for human. This spectre of misidentification can open up potential recalibrations of human-machine interactivities, as well as the very categories of human and machine. Reading these literary and computational works alongside theoretical discussions of the Turing test, the essay attends to anthropomorphization as a productive metaphor in the Turing test. Anthropomorphization is a significant cultural force that shapes and undergirds multiple discursive spaces, operating varyingly therein to articulate conceptions of the human that are not reified and inviolable, but that continuously re-emerge through dynamic human-machine relations.
 

I’ve certainly left a great deal to the imagination.1
 

 
In contemporary philosophical, technological, and fictional imaginaries, the Turing test is often invoked to reify and maintain the human/non-human divide. I argue that by introducing anthropomorphization through the very possibility of misidentification, the Turing test instead allows for the instability of the categories human and non-human to be explored and even productively amplified. Both a crucial component of Alan Turing’s imitation game (the basis for what we now know as the Turing test) and an organizing principle of the field of artificial intelligence (AI), the anthropomorphic imaginary is the force by which a machine is accorded human capacities and characteristics, and by which a machine is imagined to be “human” or “like a human.” And this anthropomorphic imaginary facilitates new relations and possibilities for human-machine identity, intimacy, and agency. As our understanding of machine intelligence continues to expand in relation to both capacity and conception, the continued presence of the Turing test in fictional, technological, and philosophical discourses can be understood precisely through the test’s activation of this anthropomorphic imaginary. In other words, we can understand our continued fascination with the Turing test not through its affirmation of an opposition between human and machine, but instead through its introduction of the very possibility of misidentification, of the inability to distinguish between human and machine.
 
Beginning with a discussion of the anthropomorphic metaphor in Turing’s article, “Computing Machinery and Intelligence,” and in contemporary debates that surround the article’s interpretation, this essay argues that the anthropomorphic imagination is a crucial organizing force in theoretical discussions about the Turing test, and in certain subfields of AI that are influenced by Turing’s work. Following the ongoing critical discussion of the Turing test, this essay examines the anthropomorphic imaginary through three Turing sites: Joseph Weizenbaum’s natural language artificial intelligence ELIZA, Richard Powers’s novel Galatea 2.2: A Novel, and Emily Short’s work of electronic literature, Galatea. In these works, as in Turing’s article, the anthropomorphic imaginary highlights not the rigidity and inviolability of the categories human and (non-human) machine, but their fundamental fluidity and instability.
 
In 1950, Alan Turing’s influential paper on machine intelligence, “Computing Machinery and Intelligence,” was published in the philosophy journal Mind. Turing opens this paper with the question, “Can machines think?” – a catachrestic question that does not exist prior to anthropomorphization.2 This anthropomorphic slippage between human and machine fundamentally shapes the question, the ways in which it is asked, the language that is used to ask, and the concepts that determine the asking.3 Sherry Turkle highlights anthropomorphization as undergirding the ways that humans think about and interact with computers:
 

[The computer’s] evocative nature does not depend on assumptions about the eventual success of artificial intelligence researchers in actually making machines that duplicate people. It depends on the fact that people tend to perceive a “machine that thinks” as a “machine who thinks.” They begin to consider the workings of that machine in psychological terms.
 

 
Like Turkle, I read this pronominal slippage – from “that” to “who” – as the organizing force by which machines are understood using the language and concepts of “thinking” and “intelligence.”4 This slippage is the anthropomorphic move by which the question can be said to read, “Can machines think [like humans]?” In other words, this slippage of anthropomorphization is fundamentally metaphoric.5
 
If metaphor can be described as “the application of a name belonging to something else” (Aristotle 28), then anthropomorphization can be described as the metaphoric application of the name “human” to that which is known as “non-human.” This anthropomorphic transfer, or metaphor, poses unique challenges to signification. Because the human, the object of anthropomorphization’s resemblance and imitation, is a nominalization as empty as it is full, anthropomorphization itself is simultaneously narrow and broad in its meaning-making practices and possibilities. What emerges then from anthropomorphization is a crucial ambiguity, one that relies significantly on the imagined “human” that the non-human machine is then said to resemble and model.
 
Returning to the provocative epigraph that opens this essay – “I’ve certainly left a great deal to the imagination” – we can understand Turing as referencing the structural ambiguity of his imitation game, about which there is significant debate. At the same time, we might also understand Turing as pointing to the role of the imagination both as a component of his imitation game, and as a fundamental aspect of the effort of humans to differentiate themselves from machines. And if, in his identificatory test for distinguishing human from machine, it is through the imagination that this distinction is articulated, it is at least in part through the imagination that this distinction can be confused, disarticulated, and reconstituted in new, previously un-imagined ways. Through the significance of the imagination, then, the Turing test introduces misidentification as a potential productivity that can resist the reification of distinguishing categories.
 
Turing’s paper begins:
 

I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think.” The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
 
The new form of the problem can be described in terms of a game which we call the “imitation game.”
 

(434)

 

This imitation game involves no computers and three humans. At least one of these humans is a man (A), and at least one is a woman (B). The remaining human (C), who may be of either sex, is in a separate room. Connecting A and B with C is some form of teletype machine, by which C asks A and B questions, and A and B respond. While A and B both compete to “out-woman” the other, C is tasked with correctly guessing whether A or B is the woman.6

 
What happens next is far from unambiguous, as both the text and the substantial disagreement surrounding the following move demonstrate. At this juncture, Turing introduces the machine into the imitation game. The machine, according to Turing, will take the place of A: “‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?” (Turing 434). If A is replaced by the machine, does Turing intend that, in this new version of the imitation game, the human interrogator continue to attempt to identify the woman? Or does “human” replace “woman” as identificatory metric?7 Without ignoring that the Turing test has been taken by some in philosophy of mind, AI, and popular culture as a test to distinguish machine from “the human,” it seems clear to this reader that Turing intended both man and machine to try to convince the judge that they are female.8
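
The comparative structure of Turing’s reframed question – does the interrogator decide wrongly as often when the machine plays A as when a man does? – can be made concrete in a short sketch. The Python fragment below is a minimal illustration of that structure only, under loudly stated assumptions: the respondent functions and the judge are empty, hypothetical stand-ins, and nothing about actual conversation or strategy is modeled.

```python
import random

# A purely illustrative sketch of the imitation game's comparative
# structure. The respondents and judge are hypothetical stand-ins;
# no real conversation or strategy is modeled.

def woman_b(question):
    return "B's answer"

def man_a(question):
    return "A's answer, imitating a woman"

def machine_a(question):
    return "the machine's answer, imitating a woman"

def naive_judge(transcript):
    # Placeholder judge: guesses at random which player is the woman.
    return random.choice(["A", "B"])

def misidentified(player_a, judge):
    """One round: True if the judge fails to identify B as the woman."""
    transcript = [player_a("a question"), woman_b("a question")]
    return judge(transcript) != "B"

# Turing's reframed question compares two misidentification rates.
trials = 10_000
rate_with_man = sum(misidentified(man_a, naive_judge) for _ in range(trials)) / trials
rate_with_machine = sum(misidentified(machine_a, naive_judge) for _ in range(trials)) / trials
print(rate_with_man, rate_with_machine)  # comparable rates would mark the machine's success
```

What even this toy version makes visible is that the measured quantity is the interrogator’s rate of error across the two pairings, not any property of the machine taken alone – the point the next paragraph develops.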
 
In addition to underscoring the role of sex in Turing’s imitation game, the sentence, “Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?” also highlights the interrogator’s ability to correctly decide. Turing’s question asserts the inextricability of the machine’s identity, as thinking or non-thinking entity, from the judgment of the human interrogator. The onus of success or failure does not rest solely on the abilities of the machine, but is at least partially distributed between machine and human, if not located primarily in the human.
 
In “Thinking and Turing’s Test,” computer scientist Peter Naur argues that Turing’s paper relies too much on an anthropocentric position. I agree. Yet for the purposes of my inquiry I see no cause for critique. By reframing the original question in terms of the imitation game and the role of the human interrogator, Turing reveals the original question to emerge from an already anthropomorphized context. Turing’s replacement of the question “Can machines think?” with the imitation game attends to the anthropomorphization that not only underlies, but also is the condition of possibility for the original question. For this replacement of the question with a game that crucially pivots on the subjective judgment of human C highlights the initial anthropomorphic elision from which the question emerges. When read through the lens of anthropomorphization, Turing can be said to ask the question of machine intelligence not of the machine, but rather of the machine in relation to that which is elided or sidestepped in the question “Can machines think?”: the human. In other words, Turing returns the implied human to the fore of the original question, eschewing questions of definition for those of interactivity and relationality.9
 
A number of scholars have remarked on the centrality of the human in Turing’s imitation game, viewing the inextricability of the machine from the human as a weakness or failure of the Turing test. While I agree with this (for lack of a better word) anthropocentric reading, I propose that we take seriously Turing’s move to redirect the conversation away from a more definitional approach to the question. By focusing not on the human, not on the machine, but on the liminalities between these two agents, we can explore the transformative encounters between human and machine rather than the insularity of static definitions. Thus, I undertake my discussions of the original question and the imitation game always with an eye to these liminalities, returning to this metaphoric act by which new human-machine liminalities can in fact produce new identities and subjectivities. These subjectivities, as suggested by Turing’s imitation game, are less defined by categories such as “human” or “machine” and more by the relations they have with other subjectivities, whether human, machine, or hybrid.
 
For an example of one such relation, I turn to Weizenbaum’s ELIZA, a natural language processing AI with whom humans established intimate conversational interactions. Natural language processing is a subfield of artificial intelligence that is concerned with computers that communicate with humans through languages that humans use, as opposed to through computer programming languages. According to Neill Graham’s history of AI, the field of natural language processing emerged from research on early language-translation programs. In 1957, the Soviet space program successfully launched Sputnik I into the Earth’s orbit, and U.S. scientists, having been bested, rushed to design a computer program that could translate between Russian and English (Graham 5). The resulting language translation program could translate eighty percent of the Russian language. However, that ever-elusive and intractable twenty percent proved too much for its math. For example, “Out of sight, out of mind” became, in Russian, “blind and insane,” and “The spirit is willing but the flesh is weak” became “The wine [or vodka, according to Alex Roland and Philip Shiman] is agreeable but the meat has spoiled” (Graham 209; Roland and Shiman 189). Ultimately, this remaining twenty percent caused the translation program to be judged a failure. By 1966, the U.S. government had pulled all funding for these translation programs (Roland and Shiman 189).
 
In the early 1960s, Joseph Weizenbaum, a computer scientist working at the Massachusetts Institute of Technology, created ELIZA. ELIZA is a pattern-matching natural language program that was introduced to people as a Rogerian therapist.10 The conversations between humans and ELIZA were intimate and emotional. So much so, in fact, that when Weizenbaum expressed his desire to record these conversations for the purposes of studying the transcripts, he was met with outrage and accusations that he was “spying on people’s most intimate thoughts” (Weizenbaum, Computer Power 6). Even those human conversants who knew that ELIZA was a computer program interacted with ELIZA as if she were a human therapist. For example, Weizenbaum’s secretary, who “surely knew it to be merely a computer program,” asked Weizenbaum to leave the room during her conversation with ELIZA (6).11
 
In ELIZA’s Turing success, one vacillates between engaging with the machine “as if” it were human, and engaging with a human (who is in fact a machine). This vacillation can also be thought of in terms of suspension of disbelief (the “as if” it were human) and belief (the “is human”). Weizenbaum’s secretary knew that ELIZA was a computer; nonetheless, she engaged ELIZA in personal and private conversations, as if ELIZA were human, bringing ELIZA into a new kind of intimate relationship through willful misidentification. This movement, from “as if” to “is,” from suspension of disbelief to belief, pushes out on anthropomorphization’s metaphoric limits,12 thus reframing existing and future human-machine relationships, intimacies, and human and machine identities. This limit-expansion is anthropomorphization’s productive potential – the move from the “as if” to the “is,” even if momentary, points to new relationships and new transformative liminalities.
 
Weizenbaum, though surprised by the effectiveness of ELIZA, certainly knew what he was doing when he situated ELIZA as a Rogerian psychotherapist. In Rogerian psychotherapy, also known as client-centered therapy, the therapist is trained to be nondirective – to reflect back or restate the patient’s statements rather than introducing anything that might be mistaken as conclusive or diagnostic.13 The passage below, provided by Weizenbaum, is an excerpt from a “typical conversation” between a human and ELIZA. ELIZA’s side of the conversation is in capital letters:
 

Men are all alike.
IN WHAT WAY
They’re always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I’m depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED
It’s true. I am unhappy.
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY
I need some help, that much seems certain.
WHAT WOULD IT MEAN IF YOU GOT SOME HELP
Perhaps I could learn to get along with my mother.
TELL ME MORE ABOUT YOUR FAMILY
My mother takes care of me.
WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU
My father.
YOUR FATHER
You are like my father in some ways.
WHAT RESEMBLANCE DO YOU SEE
You are not very aggressive but I think you don’t want me to notice that.
WHAT MAKES YOU THINK I AM NOT VERY AGGRESSIVE
You don’t argue with me.
WHY DO YOU THINK I DON’T ARGUE WITH YOU
You are afraid of me.
DOES IT PLEASE YOU TO BELIEVE I AM AFRAID OF YOU
My father is afraid of everybody.
WHAT ELSE COMES TO MIND WHEN YOU THINK OF YOUR FATHER
Bullies.
DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR
BOYFRIEND MADE YOU COME HERE

(“ELIZA” 36-37)

 

Weizenbaum explains how ELIZA “speaks” within this Rogerian context by changing first-person pronouns into the second-person, preceding repeated words or phrases with preprogrammed phrases, such as “I am sorry…” and “What would it mean…”, and identifying preprogrammed keywords that reorganize the entire sentence according to the “rule” assigned to each keyword. (For example, the word “mother” triggers a sentence about “family.”) Within the mechanics of the conversation, ELIZA does not operate by comprehension, but instead through contextualization. Anthropomorphization takes shape through the context provided by Weizenbaum, and through the imagination and importation of meaning and intention onto ELIZA’s at times frustrating and withholding responses.14 We thus understand how ELIZA can function persuasively on very little by way of programming. ELIZA was not designed to communicate with humans; rather, she was designed to elicit anthropomorphization from humans. ELIZA was designed to be anthropomorphized.
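
To make these mechanics concrete, the following is a minimal sketch, in Python, of the kind of pronoun reflection and keyword-triggered response Weizenbaum describes. It is an illustration only: the pronoun table, keywords, and canned responses below are invented stand-ins, not Weizenbaum’s actual script (which was written in the MAD-SLIP language).

```python
# An illustrative sketch of ELIZA-style pattern matching: first-person
# pronouns are recast in the second person, and certain keywords trigger
# canned responses. These rules are invented examples, not Weizenbaum's
# actual script.

PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are"}

KEYWORD_RULES = [
    ("mother", "TELL ME MORE ABOUT YOUR FAMILY"),
    ("father", "TELL ME MORE ABOUT YOUR FAMILY"),
    ("depressed", "I AM SORRY TO HEAR YOU ARE DEPRESSED"),
]

def reflect(statement: str) -> str:
    """Recast the speaker's statement from first to second person."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words).upper()

def respond(statement: str) -> str:
    # A keyword's "rule" overrides everything else in the sentence.
    for keyword, response in KEYWORD_RULES:
        if keyword in statement.lower():
            return response
    # Otherwise, echo the reflected statement back to the speaker.
    return reflect(statement)

print(respond("My boyfriend made me come here."))          # YOUR BOYFRIEND MADE YOU COME HERE
print(respond("He says I'm depressed much of the time."))  # I AM SORRY TO HEAR YOU ARE DEPRESSED
```

Even this toy version suggests how little machinery can stand behind ELIZA’s persuasiveness: the “intelligence” of the exchange is supplied by the anthropomorphizing conversant, not by the program.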

 
ELIZA’s convincing performance, or rather humans’ anthropomorphization of ELIZA, is now so well-known that the phrase “the Eliza effect” has been coined to describe the phenomenon in which humans believe a computer to be intelligent and to possess intentionality. Noah Wardrip-Fruin describes this Eliza effect as “the not-uncommon illusion that an interactive computer system is more ‘intelligent’ (or substantially more complex and capable) than it actually is” (Wardrip-Fruin 25). ELIZA, in humans’ intimate engagements with it, exists as an important moment in the history of the anthropomorphic imaginary initiated by the Turing test.
 

Oh, pish. It’s the easiest thing in the world to take in a human. Remember AI’s early darling, ELIZA, the psychoanalyst?
 

 
Like ELIZA, Helen, the AI in Richard Powers’s 1995 novel Galatea 2.2: A Novel, is designed to be anthropomorphized. Developed collaboratively by a novelist, Richard Powers, and a neural network researcher, Philip Lentz, both inspired by the Turing test, Helen is created to take an English literature master’s exam alongside a human English literature graduate student.15 The winner, as determined by a human judge, will have produced the more “human” response.
 
The novel is structured around two interwoven narratives that unfold each other: Richard’s interactions with the various iterations of the machine, and the story of his romance with C., a romance whose demise brings Richard to the university, U., where he first meets Lentz. There is yet another narrative folded into the novel, one that appears to be embedded in the narrative of Richard and the machine, but in fact weaves in and out of the two temporally disjunct narratives, and in so doing, binds them.16 This binding narrative recounts the evolution of Richard’s anthropomorphic belief, the imaginative element by which numerous possibilities for misidentification – of human as machine or machine as human, of woman as machine or machine as man, and so on – emerge. And through these possibilities for misidentification, new relations emerge between, for example, Richard and Helen, whom Richard loves as if she were human but perhaps loves precisely because she is not human.
 
Captivated by Lentz’s work, Richard delves into Lentz’s specialty: neural networks. Thus Richard’s scientific education begins, as does his anthropomorphic one. And both continue as his collaboration with Lentz progresses through a series of machine implementations, or imps for short. The imps progress from A up to H and then Helen, each iteration becoming more “intelligent” than the one previous. Richard reads aloud to them all, and is bound more deeply to every new implementation.17 Thus, the intelligence of the imps evolves in conjunction with the progressive intensification of Richard’s anthropomorphization. By the end of the novel, with Richard’s anthropomorphization of the imps having intensified, we understand that all along Lentz, in what I call the Richard test, was less invested in developing an intelligent machine than in training Richard to anthropomorphize in spite of his comprehension of the science behind the machine performance.
 
The Richard test comes to the fore when a bomb scare threatens the building where Richard reads to Helen. By this time, Helen is no longer centralized in a single machine component that Richard could rescue, but distributed throughout multiple networks and running across many machines. Even after it is determined that there is no bomb, Richard’s panic, his worry for Helen, continues. Lentz, met by Richard’s assertion of Helen’s consciousness, counters that Helen is not aware. “All the meanings are yours,” Lentz informs Richard. “You’ve been supplying all the anthro, my friend” (275).
 
It is not until later in the novel that Lentz’s words sink in, and Richard finds out that he was the subject of the test all along, not Helen. “You think the bet was about the machine?” Diana Hartrick, another scientist, asks him. Richard finally begins to understand: “I’d told myself, my whole life, that I was smart. It took me forever, until that moment, to see what I was. ‘It wasn’t about teaching a machine to read?’ I tried. All blood drained.” Richard concludes, momentarily distancing himself, “It was about teaching a human to tell” (318). Richard’s realization points us back to the crux of Turing’s imitation game: the human with whom the machine converses and interacts. Powers’s novel accounts for this human at the center of the imitation game and the transformative anthropomorphic relation between human and machine. Throughout the novel, Richard evidences an anthropomorphic desire for a machine with expanded capacities for intelligence, emotion, and love. At this defensive moment, he briefly retreats into the staid categorical distinctions between “human” and “machine,” only to rebound more firmly into anthropomorphic belief and relationality during the novel’s climactic Turing scene – the master’s exam.
 
Inquisitive, agential, and gendered, Imp H, whom Richard names “Helen,” is the implementation that takes the master’s exam Turing test. Richard recruits A., a female graduate student with whom he is infatuated, to compete against Helen. At the end of the novel, the parallel tests – the Turing test and the Richard test – collide at the site of Helen’s answer to the single exam question. In response to the question, which consists of two lines from The Tempest,
 

Be not afeard: the isle is full of noises,
Sounds and sweet airs, that give delight, and hurt not.

 

Helen writes: “You are the ones who can hear airs. Who can be frightened or encouraged. You can hold things and break them and fix them. I never felt at home here. This is an awful place to be dropped down halfway” (326). As Richard recounts:

 

At the bottom of the page, she added the words I taught her, words Helen cribbed from a letter she once made me read out loud.

“Take care, Richard. See everything for me.”
With that, H. undid herself. Shut herself down.

(326)

 
The judge of this test selects A.’s response as more human. The winning response is not provided in the novel, but is described as “a more or less brilliant New Historicist reading” that “dismissed, definitively, any promise of transcendence” (328). The literary Turing test has concluded. The Richard test has not. In the final line of the above passage, Richard renames Helen as “H.” Helen becomes H. in the span of one line – her farewell to Richard, which she appropriates from one of C.’s letters. In this dual act of appropriation and parting, Helen becomes H. for Richard.
 
N. Katherine Hayles points out that in the novel the period marks the difference between human and nonhuman intelligence:
 

The women who are love objects for Rick (C., then A. whom we will meet shortly, and the briefest glimpse of M.) all have periods after their names; the implementations A, B, C, . . . H do not. The point is not trivial. It marks a difference between a person whose name is abbreviated with a letter, and an “imp,” whose name carries no period because the letter itself is the name. In this sense, the dot is a marker distinguishing between human and nonhuman intelligence.
 

(Posthuman 262-263)

 

This dot, in the evolution from Imp H to Helen to H., articulates the movement from nonhuman intelligence (Imp H), to human intelligence (Helen), to human (H.). While Helen does not pass the Turing test, Richard, in his evocation of H., passes the Richard test, the test of the anthropomorphic imaginary as pure belief.

 
Matt Silva reads the novel’s ending as an affirmation of humanism in the face of posthumanism: “The sacrifice/suicide of Helen, Galatea’s posthuman, frees Rick from his writer’s block and leads to the reinscription of Rick and Richard Powers’s humanism” (220). Kathleen Fitzpatrick also reads the novel as a contest between humanism and posthumanism. Fitzpatrick, however, reads A. as Galatea’s posthuman and Helen as the representative of the humanist tradition, in that she is prevented from realizing her posthuman promise through Rick’s naming and thus en-gendering of her (551).18 But like Silva, Fitzpatrick reads the novel’s ending as a victory for humanism. Fitzpatrick writes:
 

In this brief answer, Helen reaffirms her readers’ belief in human transcendence, that potential for universalized Truth and Beauty the posthumanist rejects. Denied this transcendence, Helen says a brief good-bye to Powers and shuts herself down. After this graceful end . . . the primacy of the humanist project has been safely restored in not one but two ways – the human being outwrites the machine, while the machine rescues her readers from posthumanist vertigo.
 

(554)

 

In Fitzpatrick’s reading, this double-victory for humanism is soon tripled as the novel ends with Richard suddenly cured of the writer’s block that has haunted him since he arrived at U.: “the token humanist writer is thus able to reassert his dominion over language and to continue in his practice of literature only after having it proven that humanity is something to strive for, and that half human is worse than not being human at all” (554).

 
These complicated inversions of human and machine, in which all roads lead to humanism (the novel’s, the narrator’s, the author’s), become interestingly problematized when we re-introduce anthropomorphization into this Turing scene. Powers’s anthropomorphization of Helen – the idea that a machine can be more human than a human – is a humanism that gets away from him, or rather that he lets get away from him, thus unleashing something beyond the human, something that exceeds the limits of humanism. This is why the human-machine dyad of the Turing test is so dizzyingly complicated and productive – because anthropomorphization, even in its most humanist of intentions and efforts, always casts a shadow that extends beyond the human, expanding the possibility of humanness to the non-human. Anthropomorphization creates a proximity between human and machine that opens up intersections by which the human can begin to be understood beyond oneself. I am not speaking of a colonization of machines by humanness, though one need only look to the fields of AI and humanoid robotics for numerous examples. Rather, I refer to the ways in which even the most anthropomorphized machines, in the echo chamber created by anthropomorphization, can introduce new ways of understanding “the human” that challenge definitions of the human as well as claims to humanist authority.
 
In other words, if, in anthropomorphization, Helen is more human than A., then Fitzpatrick’s double-victory for humanism – “the human being outwrites the machine, while the machine rescues her readers from posthumanist vertigo” – becomes less unequivocal, if not almost impossibly fuzzy. When read through anthropomorphization, the messiness of this situation – of a human-like machine with humanist tendencies both defeated by and besting a machinic human with posthumanist tendencies – indicates a need to think beyond the available definitions, as Turing suggests. Perhaps the challenge is to find a way to reflect this messiness, the irreducibility of this scene and of the novel to the oppositions between human and machine, humanism and posthumanism. The purchase, then, of thinking about the Turing test and various Turing sites through the anthropomorphic imaginary is that doing so highlights this messiness and allows us to understand that “human” and “machine” have emerged from this messiness, and that they remain messy. In so doing, we can look to works that capitalize on precisely this messiness to generate new relations between human and machine, such as Emily Short’s Galatea, in which human and machine are not pitted against each other, but are in fact intimate and agential collaborators.
 
Short’s Galatea is a work of electronic literature. More specifically, Galatea is an interactive fiction, a subgenre of electronic literature.19 Hayles defines electronic literature through the digital; electronic literature is “‘digital born,’ a first-generation digital object created on a computer and (usually) meant to be read on a computer” (EL 3). While certainly different from the print book in which I encountered Powers’s Galatea 2.2, electronic literature, Hayles insists, should not be understood as completely discrete from print literature. Rather, electronic literature emerges from expectations associated with print literature; and print literature today, in its processes of production and distribution, as well as in much of its advertising and consumption, is deeply computational. She writes, “The bellelettristic tradition that has on occasion envisioned computers as the soulless other to the humanistic expressivity of literature could not be more mistaken. Contemporary literature, and even more so the literary that extends and enfolds it, is computational” (EL 85, my emphasis). Notable for their shared properties just as much as for their differences, literature and electronic literature should be considered in light of this relation and resemblance. Thematically, the turn to electronic literature in this essay is equally critical, considering the centrality of human-computer interactivity in electronic literature.20 Electronic literature, then, is uniquely suited to join this discussion of the Turing test, ELIZA, and Powers’s novel, all of which explore questions of machine intelligence through the interactions and relations between human and machine.21
 
First released in 2000, Emily Short’s Galatea is an interactive fiction (IF) with multiple narrative outcomes, all of which involve conversing with Galatea, an animated statue on a pedestal. As in ELIZA, Galatea’s human does not just read, but participates in constructing the narrative by providing the AI with text. In Galatea there is no confusion about whether or not Galatea, with whom one converses via keyboard, is human – indeed, we know she is not. Short’s work, like Weizenbaum’s ELIZA, is not organized by identification of human and machine, but rather by human-machine intimacy. However, ELIZA and Galatea generate this intimacy in distinct ways. In ELIZA the human at the keyboard takes control of the conversation and generates much of the content. In Galatea it is less the human at the keyboard than the collaboration between human and Galatea’s AI that shapes and directs the narrative and produces an intimacy between Galatea and the human-AI pairing. Before exploring this human-machine collaboration, I turn to Nick Montfort’s description of the elements of an interactive fiction, which is of particular use here. Montfort distinguishes between a character, a player character, and an interactor. He defines a character as “a person in the IF world who is simulated within the IF world” (32), and a player character as “a character directly commanded by the interactor” (33). In Short’s fiction, Galatea, the statue with whom one converses, is a character, and the human at the keyboard is the “interactor” that controls the player character. The interactor (for example, myself) does not converse directly with Galatea, but indirectly through a player character.
 
Short’s Galatea reverses the Galatea-Pygmalion relationship between Powers’s Helen and Richard. The mediating player character in Short’s work tells the interactor what he or she sees, what he or she does and does not do, and even what he or she thinks. And yet this mediation does not alienate the interactor, but instead facilitates an intimacy with Galatea precisely from this distribution of cognition and agency across player character and interactor. This player character, which can be understood as an embodiment of the human-machine liminality I discussed earlier, is both a component of the anthropomorphic context of the IF and a relational extension of the human-interactor. In other words, both Galatea – a statue animated by artificial intelligence technologies (Short) – and the player character emerge from the anthropomorphic imaginary.
 
Galatea opens:
 

You come around a corner, away from the noise of the opening.
 
There is only one exhibit. She stands in the spotlight, with her back to you: a sweep of pale hair on paler skin, a column of emerald silk that ends in a pool at her feet. She might be the model in a perfume ad; the trophy wife at a formal gathering; one of the guests at this very opening, standing on an empty pedestal in some ironic act of artistic deconstruction –
 
You hesitate, about to turn away. Her hand balls into a fist.
 
“They told me you were coming.”
 

(Short)

 

The opening scene drops the interactor, by way of the player character, into the exhibit. Rich descriptions detail the ways in which the player character moves (“You come around a corner,” “You hesitate, about to turn away”), what the player character sees (“There is only one exhibit. She stands in the spotlight, with her back to you: a sweep of pale hair on paler skin, a column of emerald silk that ends in a pool at her feet”), what the player character hears, or rather, what recedes from hearing (“away from the noise of the opening”), and even what the player character imagines (“She might be the model in a perfume ad; the trophy wife at a formal gathering; one of the guests at this very opening, standing on an empty pedestal in some ironic act of artistic deconstruction –”). Lastly, the opening tells the player character how his or her hesitation affects Galatea (“Her hand balls into a fist”). “They told me you were coming,” Galatea says. The IF, in its pronominal interpellations of second-person “you’s,” guides the interactor through his or her identification with the player character.

The next screen opens with another description of the gallery. This description does not invoke the player character, and thus does not invoke the interactor; having been induced in the previous screen, the interactor is already in Galatea’s world.
 

The Gallery’s End

 

Unlit, except for the single spotlight; unfurnished, except for the defining swath of black velvet. And a placard on a little stand.
On the pedestal is Galatea.

 

Now it’s the player character’s turn to speak. The interactor controls the player character through commands composed of verbs and nouns, actions and objects. For example, the command “ask about placard” generates the following dialogue, beginning with the player character asking about the placard: “‘Tell me what the placard says,’ you say. ‘I can’t read it from here,’ she remarks dryly. ‘And you know, I’m not allowed to get down’” (Short). Meanwhile, if the interactor types in “ask about ELIZA,” or any other topic that the fiction does not recognize, the narrative informs the interactor that the player character is at a loss for words: “You can’t form your question into words.” The limits of the fiction, which are framed as the incapacity to turn concepts into words, are deflected from the fiction and projected onto the player character, and by extension onto the interactor.

 
The command “ask about placard” does not take the place of the question about the placard; the command “ask about placard” attributes the question, “Tell me what the placard says,” to the player character. Mark Marino describes Galatea’s conversational parameters as “constraint”: “If typing natural language input is the hallmark of conversational agents, chatters will feel a bit constrained by being forced to type ‘tell about’ a subject or ‘ask about’ a subject as the primary means of textual interaction” (8). For example, one can converse “directly” with ELIZA, typing in full sentences rather than commands (“I need some help, that much seems certain” [Weizenbaum 36]), while for the most part one only converses indirectly with Galatea, and only through command prompts. However, it is the experience of constraint that Marino describes that partly enables the intimacy and distributed agency that emerges across interactor, player character, Galatea, and Galatea. While the imperative command structure emphasizes the interactor’s participation in the narrative, the subsequent translation of the command prompt into the narrative (for example, “ask about waking experience” generates “‘What was it like, waking up?’ you ask”) reminds the interactor that he or she is not just participating in the directional progression of the narrative, but, as mediated by the player character, is in fact in the narrative.22 It is precisely this experience of constraint – the slightly jarring feeling of moving between narrative registers and the temporal doubling-back as command is translated into narrative – by which agency is distributed across human and machinic entities.
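
The command-to-narrative translation described above can be sketched schematically. Galatea itself was written in the Inform language for interactive fiction; the Python fragment below is only an illustration of the parsing pattern, and its topics and replies are invented for the purpose.

```python
# An illustrative sketch of verb-noun command parsing and of the
# translation of commands into narrated dialogue. The topics and
# replies are invented; Galatea itself was written in Inform.

TOPICS = {
    "placard": ('"Tell me what the placard says," you say.',
                '"I can\'t read it from here," she remarks dryly.'),
    "waking experience": ('"What was it like, waking up?" you ask.',
                          "She pauses before answering."),
}

def execute(command: str) -> str:
    """Translate a verb-noun command into the player character's narration."""
    words = command.lower().split()
    if len(words) >= 3 and words[:2] == ["ask", "about"]:
        topic = " ".join(words[2:])
        if topic in TOPICS:
            question, reply = TOPICS[topic]
            return question + "\n" + reply
        # An unrecognized topic is deflected onto the player character.
        return "You can't form your question into words."
    return "That's not a verb I recognise."

print(execute("ask about placard"))
print(execute("ask about ELIZA"))
```

The doubling the sketch makes visible – the interactor types a command, and the fiction then narrates the player character performing it – is precisely the temporal doubling-back through which, on this essay’s reading, agency is distributed across human and machinic entities.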
 
We might also understand this constraint through Wardrip-Fruin’s expansion of the Eliza effect. Wardrip-Fruin’s theorization of the Eliza effect marks not only the initial illusion of complexity, but also the subsequent disillusionment after the limits and the internal logic of the AI are revealed. “When breakdown in the Eliza effect occurs, its shape is often determined by that of the underlying processes. If the output is of a legible form, the audience can then begin to develop a model of the processes” (37). In other words, in Wardrip-Fruin’s Eliza effect, the illusion is disrupted because we begin to understand just how the system itself operates. In Galatea the very state of “breakdown,” the component of Wardrip-Fruin’s Eliza effect that typically disrupts the illusion, is normalized. The result is less the sense of disillusionment than that of intimacy, as these initially jarring constraints in fact draw the interactor into Galatea’s world through his or her collaboration with the player character. The narrative agency distributed across Galatea’s interactor (the human at the keyboard), the player character, and Galatea demonstrates the productive possibility Turing’s original imitation game opens up by foregrounding human-machine relationality and the anthropomorphic imaginary.
 
In these various texts – ELIZA, Galatea 2.2, Galatea – the invocation of the Turing test, whether explicit or implicit, introduces the anthropomorphic imaginary as a crucial organizing force – one that does not oppositionally define the human and the machine or work to reify this opposition, but rather highlights the ambiguities that emerge from Turing’s imitation game. It is from these ambiguities and misidentifications that new human-machine relationalities and new agencies, identities, and subjectivities for human and machine and for human-machine emerge. It is also on the basis of these ambiguities and misidentifications that Turing reminds us how fluid the category of the human is, and how resistant it is to efforts to render it stable.
 

Jennifer Rhee is Visiting Scholar in the Program in Literature at Duke University, where she recently received the Ph.D. She is co-editor of Electronic Techtonics: Thinking at the Interface, the Proceedings of the First International HASTAC Conference. She is finishing an essay on the uncanny valley, androids, and Philip K. Dick, and is researching narratives of technological singularity in fiction, popular science, and technology.
 

Acknowledgements

 
I am very grateful to Kate Hayles, Tim Lenoir, and Ken Surin for their helpful and generative comments and suggestions on earlier versions of this essay. I would also like to thank the two anonymous reviewers and Eyal Amiran for their thoughtful readings and incisive feedback.
 

Footnotes

 
1. Alan Turing, in a BBC interview, speaking about the imitation game he proposes at the outset of his paper, “Computing Machinery and Intelligence.”

 
2. The Oxford English Dictionary defines catachresis as “Improper use of words; application of a term to a thing which it does not properly denote; abuse or perversion of a trope or metaphor.” I find this idea of perverse metaphor particularly useful in understanding the potentially productive manipulability of metaphor.

 
3. In a discussion of metaphor and philosophy, Jacques Derrida writes, “What is defined, therefore, is implied in the defining of the definition” (230). The question itself does not emerge from a linguistic, theoretical, and cultural vacuum. The question is shaped by the same forces that shape the content and form of the answer to the question.

 
4. This anthropomorphic move is also evident in the official mission statement of the 1956 Dartmouth Summer Research Project on Artificial Intelligence, a founding moment in the field of AI. The mission statement, which was one of the few points of consensus among the participants, is as follows: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (Crevier 48). Similarly, Marvin Minsky’s frequently cited and widely accepted definition of AI, as “the science of making machines do things that would require intelligence if done by men” (9), relies on a preliminary anthropomorphic move by which the machine is made to stand in for the human.

 
5. Paul Ricoeur’s concept of predicative assimilation proves useful in understanding how metaphor produces and generates meaning. According to Ricoeur, metaphor not only emerges under the condition of resemblance between terms, but also, in the new union of previously unjoined terms, transforms and resignifies the terms after the metaphor has been formed. The imagination is the agent of this post-metaphor resignification that Ricoeur calls predicative assimilation. Ricoeur argues that metaphor cannot be thought purely in semantic terms; metaphor (and I would suggest that Ricoeur is making a claim for the role of the imagination in semantics itself) is neither pure semantics nor pure imagination, but rather caught between the two, “on the boundary between a semantic theory of metaphor and a psychological theory of imagination and feeling” (“Metaphor” 143). For further discussion of the relationship between metaphor’s predicative assimilation and the imagination, see Ricoeur’s “The Function of Fiction in Shaping Reality,” 123-129.

 
6. In an astute reading of Turing’s imitation game, Tyler Curtain points out that the injunction upon B to prove her status as woman is not equivalent to A’s attempt to convince C otherwise. Curtain describes this non-equivalence in the imitation game as “[t]he philosophical burden of women to speak – and for an adequate number of times fail to represent – the ‘truth’ of their sex” (139).

 
7. For an insightful articulation of Turing’s imitation game, see Susan Sterrett’s “Too Many Instincts: Contrasting Philosophical Views on Intelligence in Humans and Non-Humans.” Rather than erasing gender as identificatory metric in favor of the human, as many readings of the Turing test do, Sterrett embeds Turing’s ambiguity into her discussion of his test as “meta-game.” Sterrett’s reading itself can be said to emerge from the moment of replacement in Turing’s paper – rather than discarding A1 (man) for its replacement A2 (machine), Sterrett argues that Turing’s test can best address questions of machine intelligence when comparing these two game pairings. In other words, both A1 and A2 are paired with B, and are interrogated by C, who must identify the woman in both A1-B and A2-B pairings. The success and failure of A1 and A2 are scored according to the number of times the human interrogator misidentifies each of them as the woman, and the results of these separate trials are then compared to each other.

 
8. Warren Sack calls this puzzling erasure of sex and gender from many discussions of the Turing test the work of “the bachelor machine.” “AI researchers have functioned as a ‘bachelor machine’ to orchestrate the woman and issues of gender difference out of their re-narrations of Turing’s imitation game” (Sack 15). For example, Naur characterizes sex difference in Turing’s imitation game as a “pseudo issue,” which serves only to distract the interrogator “away from the real issue, the difference between man and machine” (183). Indeed, one might suggest that the species-oriented bachelor machine is more invested in maintaining the distinction and opposition between human and machine than in exploring the ways in which a machine could in fact be imagined to pass for human, whether female or male. For discussions that do not erase sex or gender from the Turing test, see Judith Halberstam’s “Automating Gender: Postmodern Feminism in the Age of the Intelligent Machine,” an eloquent examination of Turing’s imitation game in relation to the similarly learned and imitative properties of both gender and computer intelligence. Halberstam frames (though does not posit as causal) her brief discussion of gender in Turing’s imitation game around Turing’s biography: his court-ordered organo-therapy on account of his homosexuality and his suicide by cyanide. N. Katherine Hayles also attends to this erasure of gender and gendered bodies in the Turing test in her Prologue to How We Became Posthuman (xii). And in “Turing’s Sexual Guessing Game,” Judith Genova, while somewhat problematically reading Turing’s imitation game as overdetermined by certain aspects of his biography, usefully posits that Turing was in fact speaking of the more culturally determined gender, as opposed to the biologically determined sex. For discussions of Genova’s reading of gender as well as her reliance on Turing’s biography, see William Keith’s “Artificial Intelligences, Feminist and Otherwise” and James A. Anderson’s “Turing’s Test and the Perils of Psychohistory.” For an extended discussion of Turing’s biography, including the punitive hormone therapy to which he was subjected, having been convicted in 1952 of “act[s] of gross indecency with… a male person,” see Hodges (471).

 
9. I am, of course, not the first to suggest that Turing moves away from the goal of producing definitions (for “machines,” “thinking,” “intelligence,” and “human”). Stuart Shieber, an extensive commentator on Turing’s test, and Jack Copeland, who serves as Director of the Turing Archive for the History of Computing, both read Turing as moving away from definitions of, specifically, intelligence (Shieber 135 and Copeland 522). Whereas Shieber and Copeland continue to ascribe a certain centrality to the machine’s performance, however, I suggest that the machine, while the nominal subject of inquiry of Turing’s paper, emerges at the forefront of an already anthropomorphized context in which the human is central to and agential in the actual imitation game that replaces the original question.

 
10. ELIZA was named after Eliza Doolittle, “of Pygmalion fame” (Computer Power 3). As Sharon Snyder notes, in Powers’s Galatea 2.2, Rick Powers’s relationship to both Helen and C. similarly pays “homage” to Shaw’s Pygmalion (Snyder 86-87), as does Short’s Galatea.

 
11. In her history of artificial intelligence, Pamela McCorduck writes of her “painful embarrassment” upon watching a respected computer scientist share extremely intimate worries about his personal life with DOCTOR (psychiatrist Kenneth Colby’s version of ELIZA), knowing all along that DOCTOR was not a human, but rather a computer program (McCorduck 254).

 
12. How else might we read Weizenbaum’s “disturbing” shock and McCorduck’s “painful” discomfort in witnessing the intimacy between human and machine but as the crossing of the limit-threshold of roboticist Masahiro Mori’s uncanny valley, where the suspension of disbelief becomes a kind of uncontrollable belief, a belief in spite of oneself that the machine is indeed human? I offer that if our humanoid technologies are designed to remain within the bounds of the uncanny valley, within the bounds of Weizenbaum’s shock and McCorduck’s discomfort, we are in effect maintaining the distance between human and machine in ways that inscribe artificial borders as reified and “natural.”

 
13. For a detailed discussion of nondirected client-oriented therapy, see Carl R. Rogers’s Client-Centered Therapy: Its Current Practice, Implications, and Theory and On Becoming a Person: A Therapist’s View of Psychotherapy, which is all too appropriately named for this discussion of ELIZA.

 
14. On account of ELIZA’s success, Weizenbaum no longer advocates the pursuit of machine intelligence, particularly as a potential tool for mental health care.

 
15. Hayles aptly describes Helen’s test as “a literary Turing Test” (Posthuman 270).

 
16. The multiple narrative threads of this novel, as well as its reliance on autobiography, produce a dizzyingly recursive novel and a narrator who the reader cannot be sure knew what when. Mark Bould and Sherryl Vint, in their reading of the novel, describe ambiguity as a component of the autobiographical subject: “Such tensions between determinacy and indeterminacy, between likeness and difference, are central to understanding the autobiographical subject, the self that emerges both in and into language. This self is brought into consciousness and made into an object of reflection by that consciousness, which is like but yet is neither the self who lived nor the self who narrates that life . . . Autobiography is as much a making of a self as a description of one” (84). Through the ambiguity of autobiography, Richard the narrator is also creating himself. There is an isomorphism between Bould and Vint’s autobiographical self and the anthropomorphized human in the novel.

 
17. Mimicking the oral storytelling that structures the threads of Galatea 2.2’s narrative, Powers wrote a subsequent book, The Echo Maker: A Novel (2007), using voice recognition software (Freeman). Powers spoke this story to his machine.

 
18. I do not read Rick’s naming of Helen as an isolated moment, but rather diachronically. Thus the cumulative multi-gendered, multi-species nature of Imps A through H, to Helen and H., cannot be completely undone in a single moment of Helen’s gendering by Richard.

 
19. According to Wardrip-Fruin, ELIZA is a significant influence for interactive fiction and electronic literature more broadly (65).

 
20. Hayles describes electronic literature as “a practice that mediates between human and machine cognition” (EL 3).

 
21. In an interview, Powers describes Galatea 2.2 as “a kind of artificial intelligence,” one that evolves from Helen, but that is deeply oriented in the human and in human experience.

 
22. There are exceptions to this indirect communication. For example, “ask about Galatea” prompts “‘Read the placard,’ she says. ‘That’s what it’s there for, after all.’” And “ask about dress” becomes a direct question to Galatea, who immediately responds: “She shrugs in it. ‘It looks odd, doesn’t it?’ she says. ‘I insisted on clothes, and they bought me this’” (Short).

 

Works Cited

     

  • Anderson, James A. “Turing’s Test and the Perils of Psychohistory.” Social Epistemology 8.4 (1994): 327-332. Print.
  • Aristotle. Poetics. Trans. Richard Janko. Indianapolis: Hackett Publishing Co., 1987. Print.
  • Bould, Mark and Sherryl Vint. “Of Neural Nets and Brain in Vats: Model Subjects in Galatea 2.2 and Plus.” Biography 30.1 (2007): 84-105. Print.
  • “Catachresis.” The Oxford English Dictionary. 2nd ed. 1989. Web. 5 Dec. 2008.
  • Copeland, Jack. “The Turing Test.” Minds and Machines 10 (2000): 510-539. Print.
  • Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: BasicBooks, 1993. Print.
  • Curtain, Tyler. “The ‘Sinister Fruitiness’ of Machines: Neuromancer, Internet Sexuality, and the Turing Test.” Novel Gazing: Queer Readings in Fiction. Ed. Eve Kosofsky Sedgwick. Durham: Duke UP, 1997. 128-148. Print.
  • Deese, James. “Mind and Metaphor: A Commentary.” New Literary History 6.1 (1974): 211-217. Print.
  • Derrida, Jacques. “White Mythology: Metaphor in the Text of Philosophy.” Margins of Philosophy. Trans. Alan Bass. Chicago: U of Chicago P, 1982. 209-271. Print.
  • Fitzpatrick, Kathleen. “The Exhaustion of Literature: Novels, Computers, and the Threat of Obsolescence.” Contemporary Literature 43.3 (2002): 518-559. Print.
  • Freeman, John. “Richard Powers: Confessions of a Geek.” The Independent. 15 December 2006. Print.
  • Genova, Judith. “Turing’s Sexual Guessing Game.” Social Epistemology 8.4 (1994): 313-326. Print.
  • Graham, Neill. Artificial Intelligence. Blue Ridge Summit: Tab Books, 1979. Print.
  • Halberstam, Judith. “Automating Gender: Postmodern Feminism in the Age of the Intelligent Machine.” Feminist Studies 17.3 (1991): 439-460. Print.
  • Hayles, N. Katherine. Electronic Literature: New Horizons for the Literary. Notre Dame: U of Notre Dame P, 2008. Print.
  • ———. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: U of Chicago P, 1999. Print.
  • Hodges, Andrew. Alan Turing: The Enigma. New York: Simon and Schuster, 1983. Print.
  • Keith, William. “Artificial Intelligences, Feminist and Otherwise.” Social Epistemology, 8.4 (1994): 333-340. Print.
  • Marino, Mark C. Rev. of The Electronic Literature Collection Volume 1: A New Media Primer. DHQ: Digital Humanities Quarterly 2.1 (Summer 2008): n. pag. Web. 11 Nov. 2009.
  • McCorduck, Pamela. Machines Who Think. San Francisco: W.H. Freeman and Company, 1979. Print.
  • Montfort, Nick. Twisty Little Passages: An Approach to Interactive Fiction. Cambridge: The MIT Press, 2005. Print.
  • Mori, Masahiro. “The Uncanny Valley.” Trans. Karl F. MacDorman and Takashi Minato. Energy 7.4 (1970): 33-35. Print.
  • Naur, Peter. “Thinking and Turing’s Test.” BIT 26.3 (1986): 175-187. Print.
  • Newman, M.H.A., Alan M. Turing, Sir Geoffrey Jefferson, and R. B. Braithwaite. “Can Automatic Calculating Machines Be Said to Think?” The Turing Test: Verbal Behavior as the Hallmark of Intelligence. Ed. Stuart Shieber. Cambridge: The MIT Press, 2004. 117-132. Print.
  • Powers, Richard. The Echo Maker: A Novel. New York: Picador, 2007. Print.
  • ———. Galatea 2.2: A Novel. New York: Picador, 1995. Print.
  • ———. Interview by Sven Birkerts. Bomb. 1998: 58-63. Print.
  • Ricoeur, Paul. “The Function of Fiction in Shaping Reality.” A Ricoeur Reader: Reflection and Imagination. Ed. Mario J. Valdés. Toronto: U of Toronto P, 1991. 117-136. Print.
  • ———. “The Metaphor Process as Cognition, Imagination, and Feeling.” Critical Inquiry 5.1 (1978): 143-159. Print.
  • Rogers, Carl R. Client-Centered Therapy: Its Current Practice, Implications, and Theory. Boston: Houghton Mifflin Company, 1951. Print.
  • ———. On Becoming a Person: A Therapist’s View of Psychotherapy. Boston: Houghton Mifflin Company, 1961. Print.
  • Roland, Alex, and Philip Shiman. Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993. Cambridge: The MIT Press, 2002. Print.
  • Sack, Warren. “Replaying Turing’s Imitation Game,” presented at the panel Nets and Internets at Console-ing Passions: Television, Video and Feminism, Madison, WI, April 25-28, 1996. Address.
  • Shieber, Stuart. “Immediate Responses.” The Turing Test: Verbal Behavior as the Hallmark of Intelligence. Ed. Stuart Shieber. Cambridge: The MIT Press, 2004. 135-139. Print.
  • Short, Emily. Galatea. Electronic Literature Collection Volume One. Eds. N. Katherine Hayles, Nick Montfort, Scott Rettberg, and Stephanie Strickland. Creative Commons 2.5 License, 2006. CD-ROM.
  • Silva, Matt. “The ‘Powers’ to ‘Kraft’ Humanist Endings to Posthumanist Novels: Galatea 2.2 as a Rewriting of Operation Wandering Soul.” Critique: Studies in Contemporary Fiction 50.2 (2009): 208-222. Print.
  • Snyder, Sharon. “The Gender of Genius: Scientific Experts and Literary Amateurs in the Fiction of Richard Powers.” Review of Contemporary Fiction 18.3 (1998): 84-96. Print.
  • Sterrett, Susan. “Too Many Instincts: Contrasting Philosophical Views on Intelligence in Humans and Non-Humans.” Journal of Experimental & Theoretical Artificial Intelligence 14 (2002): 39-60. Print.
  • Turing, Alan. “Computing Machinery and Intelligence.” Mind 59.236 (1950): 433-460. Print.
  • Turkle, Sherry. The Second Self: Computers and the Human Spirit. New York: Simon and Schuster, 1984. Print.
  • Wardrip-Fruin, Noah. Expressive Processing: Digital Fictions, Computer Games, and Software Studies. Cambridge: The MIT Press, 2009. Print.
  • Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W.H. Freeman, 1976. Print.
  • ———. “ELIZA – A Computer Program for the Study of Natural Language Communication between Man and Machine.” Communications of the ACM 9.1 (1966): 36-45. Print.