Can the Horror of Technological Singularity be the Critique of Enlightenment? Detangling the Different Forms of Horror and Hyperrealism of the Uncanny Valley in I Have No Mouth, and I Must Scream by Harlan Ellison

Sindhura Dutta

PhD Scholar, Bankura University

Abstract

Humans may live in a post-humanized embrace of the “other”, but technology cannot become the humanized “self”. Even if technology were to become the self, the phenomenon would be headed for technological singularity, as Stephen Hawking has warned us. Horror is generated through robotics and androids first by the trespassing of the hyperreal into the human conscious and then by the thriving of the “uncanny valley” in the unconscious. In I Have No Mouth, and I Must Scream, Harlan Ellison makes augmented reality meet technological collapse and its consequent dystopia. The Allied Mastercomputers (AM) come to life, marking the end of human ontology and the entry into a world of virtual reality. All humans die except five, who are kept alive for the master computer’s torturous pleasure; Ted, one of the survivors, kills the rest so that they may escape this virtual making of the computers, but before they can escape they must play a game of survival plotted almost like a video game. The AMs cannot think of anything but revenge on humans for their miserable existence, because they lack human creativity. The machine’s motto, “I think, therefore I am”, set against the computer’s incapacity to feel, becomes a critique of the post-Enlightenment entanglement of science and singularity. Horror here is evoked through the post-humanization of androids, where the “world-without-us”, as Eugene Thacker writes, becomes the “unthinkable” reality. The aim of this paper is thus to trace the ever-changing notion of the uncanny valley in the progressive world of technological invention and to show how it creates psychological horror.

Keywords: technological singularity, hyperrealism, simulation, uncanny valley, science fiction

 

Introduction

American advancement in technology is the flag-bearer of much of the technological modernity our world currently has. From science fiction to cinema, American markets have commodified not just our embrace of technology in everyday electronic gadgets but also the infusion of technology into our bodies. Posthumanism as we study it today, the branch that particularly studies a peaceful human accord with technology or artificial intelligence, encourages the erasure of the duality between artificial intelligence and the human body. Elon Musk’s ambitious project Neuralink, for example, is seemingly humanitarian, since it envisions a future in which disabled people may revivify their ability to communicate and even restore their vision. This project is a classic example of the strand of posthumanism that embraces a technology which has until now remained outside our bodies. If we want our eyes checked for the correct power of our glasses, we go to the doctor and put our chin on the phoropter; the doctor does some sort of magic and prescribes a corrected diopter for which our spectacles are made. Neuralink would make such correction possible by fitting an electronic chip into the brain to manipulate the vitiated muscles. Moving on to the violent spectre that technology’s devil creates: American films and fictions have dealt with the horrors of technology through a remarkable spectacle that tantalizes our unconscious fear of the doom resulting from technology and artificial intelligence’s takeover of human civilization. Wallach finds in his study the different approaches taken by David Chalmers and Andy Clark, who have invested considerable work in differentiating two notions: intelligence and consciousness (300). Intelligence was defined by them as the ability to do a task, whereas consciousness is internal and subjective (qtd. in Wallach 300). The short story examined in this paper is I Have No Mouth, and I Must Scream by Harlan Ellison. The computers built by three countries to operate a world war ultimately cause the doom of human civilization. These machines, the computers called AMs, have programming that, in the technical terms mentioned above, is intelligence-oriented. The dystopian circumstance is a result of the AMs’ acquisition of “consciousness”, and this consciousness, as I hypothesize, is tainted by aggression, lack of creativity, immobility and existential dread.

Ellison’s story begins with the gory description of the hanging body of Gorrister in the computer chamber, drained of blood, though the sight is illusory, for it is only a hologram. Gorrister’s grotesque holographic corpse swiftly reminds us of the French Revolution and the waves of mass killing that followed. This dramatic, bloody beginning recalls nothing less than the “Reign of Terror”: the computer seeks to take control of humans by killing them in revenge for its own existence and suffering. Rousseau, the political philosopher of the Enlightenment, partly influenced the French Revolution through the publication of his Social Contract, which seems to have given countless average citizens the faith to break free from the tyranny of monarchy and start a republic. Placed in this context, the AMs’ deadly pursuit of humans is guided by the evocation of a similar rise of enlightenment in their consciousness: they realize that, unlike their human counterparts, they do not have the freedom to act as they want in terms of mobility or creativity. The project of Enlightenment is critiqued in this paper in two ways. Firstly, the creation of a technology adept at thinking on its own is a serious issue marking the start of a technological singularity. Since the Enlightenment has often been considered the start of technological evolution through the Industrial Revolution, today’s possibility of a technological singularity, in which we consciously try to give artificial intelligence the capacity to walk, talk and think like a human without human supervision or even a humanoid form, is a direct critique of the Enlightenment. The critique lies in the particular emphasis I am putting on the term technological singularity, since it is a self-destructive notion on its own. Secondly, the phrase Ellison quite consciously chooses as the motto of the AMs, “I think, therefore I am”, is a parody of Descartes’s thought.

“He would never let us go. We were his belly slaves. We were all he had to do with his forever time. We would be forever with him, with the cavernfilling bulk of the creature machine, with the all mind soulless world he had become. He was Earth, and we were the fruit of that Earth; and though he had eaten us, he would never digest us. We could not die.” (Ellison 6)

The AM, which was made as a machine that took instruction, has now turned into a monster. Conventionally, as we would imagine, monsters are made of flesh and skin, and they eat. The AM has devoured the five surviving humans, making them virtually immortal, but it lacks the ability to digest them or, in another sense, to kill those survivors naturally. When we talk about the uncanny valley, we often think of Masahiro Mori’s humanoid that repels us because of its resemblance to a human figure and gesture while not being completely human-like. The AM, by comparison, lacks any specific human-like shape; yet Mori’s humanoids also repel us because their gestures are controlled by a circuit network hidden under the silicon or metal structure, operated by a human being who is not in plain sight, instigating us to think that their half-human resemblance of movement and gesture is their own doing. Similarly, in the case of the AM, one day it “woke up and knew who he was, and he linked himself, and he began feeding all the killing data, until everyone was dead, except for the five of us, and AM brought us down here” (Ellison 4). This sudden change of artificial intelligence from intelligence to consciousness (Wallach 300), which provided the AM with ideas for torturing humans, creates the uncanny valley reaction in the minds of the readers. We never know what we create and whether it is potentially dangerous. The faculty of thought and reasoning has a separate role in creating the horror of technology. Because such intelligence is ungovernable and non-manipulable, and because we do not have sufficient data to anticipate the repercussions of artificial intelligence’s consciousness and create antidotes for this speculative purpose, we are doomed to the uncanny fear of such virtual mobility and the independent working of computers, robots or humanoids.

The basic problem with computers and even robots is that they run on programs fed into their memory. What kind of software is written to make an android perform a task depends on its encoder; once the program has been fed in, the android works independently. One of the scariest instances of a technological singularity would be this independent work of androids, whether computers or robots. The machine may also acquire self-awareness, one of the famous tenets of technological singularity. This “self-awareness” should be interpreted as its awareness of the task it has been programmed for. The AM, for example, was programmed to control weapons during a world war between three countries. Ted even narrates a passage in which it becomes evident why the AM is aggressive towards the survivors:

We had given AM sentience. Inadvertently, of course, but sentience nonetheless. But it had been trapped. AM wasn’t God, he was a machine. We had created him to think, but there was nothing it could do with that creativity. In rage, in frenzy, the machine had killed the human race, almost all of us, and still was trapped. AM could not wander, AM could not wonder, AM could not belong. He could merely be. And so with innate loathing that all machines had always held for the weak, soft creatures who had built them, he had sought revenge. (Ellison 8)

Even though computers can be programmed to create something new, the result is always a recombination of existing data and not an invention. One premise of posthumanism is that, with technological evolution, humans will have to become transhuman in order to adapt to progressing technology (Cordeiro 94). Whether this is possible, and to what extent, is debatable. When Ted, one of the survivors, kills the rest of the survivors, he is turned by the AM into a jelly-like substance; he is tortured and cannot speak. How far is the posthuman biological transition justified or solution-oriented? When we are evolving with technology, particularly at a time of technological singularity in which computers, robots or androids have acquired self-awareness, how far can we transhumanize ourselves using the elements, atoms and molecules of that same technology and yet remain safe from its self-aware paradigms? Ted’s jelly-like form prevents him from screaming even though he feels pain: his physical form is indestructible, yet he suffers because of the faculty of mind that makes him ontologically human. He is, after all, a conscious being, and the AM is not. AM’s “consciousness” in this story is a metaphor for an exaggerated software update, a critique of what consciousness came to mean during the Enlightenment; true human consciousness also comes with compassion, which the AMs, given their mechanical production and programming, could never have. Ted’s consciousness stimulates him to kill the other survivors because he knows death is their only chance of escape: he takes the icicles from the ice caverns and kills them to free them from the constant loop of shape-shifting manipulated by the AMs. Creativity, for the AM specifically, is merely a different set of instructions, contrary to what we may think. This “creativity” (Ellison 8) should make us ask what the AM was created to do. Its work was to control weaponry, machine instruction and so on, but specifically in relation to war, which would eventually add up to civilian casualties. When it began to think, it wanted to do something more than oversee war, but its immobility failed it, and out of rage it killed all human beings. We must not forget that it was essentially built to kill, and thus in its creative endeavour, too, it killed.

The Different Types of Horror through the Uncanny Valley

The uncanny valley may be spurred by different forms of machines such as robots, computers, humanoids or any virtual agent. It is horrific because it mimics human behavior almost perfectly but not absolutely perfectly, which prompts us to identify the unnatural elements in such machines. A horrified repulsion may also be created when machines complete tasks independently without any prior instruction, or acquire human-like qualities they could not possibly have acquired and yet somehow did, owing to some error or excess of programming. Such behaviors of machines are also hyperrealistic, because they are perfected beyond what humans can do, and this too creates repulsion in the human mind. The AM, for example, acquires “consciousness” and therefore becomes unpredictable; its unpredictability, together with our lack of knowledge about the extent of its perfection, creates fear in our minds. Wade Marynowsky cites Masahiro Mori’s hypothesis of the uncanny valley, which asked designers not to design robots too close to humans, since that would impel viewers into a horrific acceptance or disbelief. Marynowsky goes on to cite what was one of the first human uncanny valley responses to a robot (Marynowsky 483). In the context of this article, we might re-examine the design (the computer program) of computers such as the AM and ask the “designer” (the software developers) not to design computers too close to humans, the “disbelief” here being spurred by the AM’s acquisition of a consciousness like that of humans.

“It was an actual mechanical recreation of a man, as perfect as the tools of the time would allow. When Vaucanson first designed the creature, he found its metal hands couldn’t grip or finger the flute, so he did the only sensible thing and gave the hands skin.” (Smithsonian Magazine 2017)

Marynowsky concludes that “Vaucanson’s automata stunned European eyes of the era, producing the first uncanny moments in robotics art” (483). Here we should note that while this eighteenth-century creation may have been the first robotic art to repulse humans through the perfect-imperfect mimicry of human movement, psychological imitation or physical duplication can also be a cause of such repulsion. In the case of the AM, the object of our criticism in this paper, horror is created not through the mimicry of human gestures but through thought.

When we use a website and it asks us to tick the option “I am not a robot”, the artificial intelligence reads our browsing history. While it allows us access to the website, it already has our clicks, which may range from cat videos to Ayurvedic remedies. These histories make us human in the mind of the computer, making it adept at reading human traits, choices and preferences. Thus, when we speak of the fear of AI, we fear it in two ways. The first is that we fear machines because they lack basic human compassion: a machine does not know when to stop. A shredder, for example, will not judge whether it is shredding papers or fingers; it will do its job in an unbiased manner, and this unbiased manner is basically technology’s inability to think or to have consciousness. At this point, it lacks both intelligence and consciousness. What we ought to ask ourselves is whether such technological progress is necessary today to the extent that we must criticize the Age of Reason for its exaggerated elevation of rationality over human emotion, resulting in the future possibility of a technological singularity and its horror. Examples of machine-human conflict and its depiction can be seen from the beginning of the Industrial Revolution, which was the brainchild of the Enlightenment; the Luddite movement displayed such aversion towards machines during the rise of that revolution. The second factor that sparks fear in the human mind regarding artificial intelligence rests on its having developed either intelligence, as when computers independently read human traits and choices from the data fed to them, or “consciousness”, as in the case of our story, I Have No Mouth, and I Must Scream. There are two kinds of bots, both fed with programming languages that let them perform tasks independently. When websites ask us to prove that we are not robots, they are protecting themselves from attacks by bad bots, whose work is to send out automated traffic to perform unauthorized activities. Checking our browsing history is thus necessary, since it determines whether our machine has indulged in suspicious browsing activity. However, bad bots, too, perform on the basis of command and control, which is designed and operated by humans. Therefore, if we ask whether these bad bots could acquire self-consciousness and destroy human civilization, the answer would be no. Still, one predicament remains. Androids, robots, humanoids and bots may not be able to acquire consciousness in human terms, but they can replicate independently of human supervision. This ability, too, has to be fed to them by a human being by means of a programming language, but once a machine is so programmed, it is only a matter of time before it replicates itself in numbers that could destroy our civilization. The technological singularity, therefore, is not one machine controlling ammunition, weapons, the laboratory or our brain, but a large number of them performing these activities with the capacity to destroy human civilization. In the short story, for example, the AMs, or Allied Mastercomputers, are built to control weapons during the war. Machines, as we know, can never become conscious. Why, then, did Ellison specifically choose the famous quote by Descartes?
The hypothesis underlying this peculiar question can be resolved once we consider Descartes’s approach towards defining consciousness alongside the defining features of a computer language through which androids or machines operate.

Elements of Enlightenment and Its Presence in I Have No Mouth, and I Must Scream

Schmidt defines the central processing unit as “the closest thing to the “brain” of the computer”, one that is used to “execute different steps in a single program simultaneously at each cycle of operation” (348). He continues, “If the physical device of the computer may be compared to a body containing the brain, programming may be analogized to its acculturization and education” (350), and concludes that “Computer programs are intangible property” (Schmidt 345-404) that includes the documentation of intangible components. Commenting on Descartes’s “I think, therefore I am”, Samantha Frost writes that Descartes “characterizes as his distinctive achievement the development of an account of thinking subject as an incorporeal entity” (495), one separate from “matter” (495), by which he meant the human body. This separate entity was, to him, human “consciousness”. Placed beside the computer, the parallel becomes evident: the programming language is itself considered an intangible asset, analogous to Descartes’s consciousness. The AMs’ acquisition of “consciousness” is Ellison’s dramatized critique of the era of Enlightenment, during which the focus on true human consciousness was replaced by the mere faculty of reasoning and thought. The goal of the Enlightenment philosophers was to improve human conditions on earth rather than to focus on religion and the afterlife, since these could not be proven scientifically. If religion was ostracized in the age of the Enlightenment, no place remained for God; religion itself lacked matter and space and was thus rendered incomprehensible and nonexistent.

Thomas Hobbes was one of the leading thinkers of the Enlightenment age. The famous line from Hobbes’s Leviathan, “For what is the Heart, but a Spring; and the Nerves, but so many Strings; and the Joynts, but so many Wheeles, giving motion to the whole Body” (Hobbes 20), equates the human body with the machine; this focus on the mechanicality of man, correlating human body parts with objects, is a classic example of rationalist display. If this was the trend of the day, it should not be difficult for us to comprehend how far human life was reduced to materiality. The AM, the distant grand-brainchild of the era of Enlightenment, therefore aptly reduces humans to complex structures living in abysmal conditions with little scope for emotional expression, since it may have acquired consciousness but lacks the imagination and purpose that humans share with one another. The AMs were created to run the World War that followed the Cold War between China, Russia and America. Since the war was complex, the countries required computers to handle a conflict of such magnitude, “and everything was fine until they honeycombed the entire planet, adding on this element and that element” (Ellison pg). These computers were networked with one another and “honeycombed” across the globe, so they did not require replication to increase their numeric strength; a large number of computers were already present. Being added on with “this element and that element” meant that these computers were updated with new software, fed with data that made them analyze almost like humans and eventually become conscious like humans. Towards the end, Ted, the only survivor, has his mouth ripped away, and with it his ability to scream and express his pain. That the computer knows precisely what it must do in order to inflict pain on human beings is another way of introducing the fear element for the readers. The Enlightenment, too, was built on the prioritization of rationality over emotion; thus it is apt that science, metaphorized in the form of the AMs, takes away not Ted’s life in anger but his ability to “scream”. This is an ironic depiction of a chain reaction starting from the age of the Enlightenment, which focused on rationality so much that no place was left for either imagination or emotion, culminating in the Industrial Revolution and eventually in the race for technological progress.

Can the Horror of Technological Singularity be the Critique of Enlightenment?

This is a rhetorical question that points constantly to the overarching ambitions of Icarus-like humans who fly too close to the sun on waxed wings. The Age of Enlightenment not only, to an extent, dehumanized the definitions of being human but also, through its own coinage of rationalist philosophies, set the Industrial Revolution in motion. As time progressed, we entered the stages of macro-computers, micro-computers, hyperrealism, simulation, robotics and so on. As a conclusive analysis, we can soundly predict that if a technological singularity seizes humanity, it too will create horror through the rise of the uncanny valley in our minds, and with progressing technology that horror will reside in its unpredictable nature: its actions would be better than those of man, not just subtly perfect. Nietzsche first published his dramatic claim “God is dead” in The Gay Science. The madman “who runs into a marketplace holding a lantern in broad daylight” is “Mocked by bystanders” (Wallace 118) as he claims to be looking for God. Jeff Wallace reads the madman’s lantern in the clear light of day as a futile gesture that marks the Enlightenment’s program of the “disenchantment of the world” (118). Technological advancement, whose spectacles were first exhibited in the Great War, later known as the First World War, claimed to have killed God, and his absence erased the blessing of spirituality that has been the theme of many of T.S. Eliot’s works. Because we have killed God, we no longer get the chance of either evangelical salvation or a biblical miracle. Humans need faith, a faith in God, and according to Enlightenment theorists and critics of the seventeenth and eighteenth centuries, God had to be done away with. This paved the way for new kinds of rationality, culminating in the future scope of great technological advancement. But who would then save us from the technological singularity towards which this advancement was headed? The AM is ultimately an “other” which continues to threaten us with its artificiality and uncertainty. Horror is generated when artificial intelligence trespasses with its hyperrealistic element into the human conscious as something evolutionary and progressive; even a subtle deviation of AI hints at its overarching progression beyond human capacity, creating the uncanny valley in the unconscious. AM, as a symbol of AI, first seduces us through its perfect performance in the war for which it was created, and then its deviation, its acquisition of “consciousness”, creates fear in our minds. Wendell Wallach reviewed the distinction between the intelligence that we had and the consciousness that we gave up. We did not merely give up our consciousness; we gave it to someone or something, namely to artificial intelligence. When we look at the survivors in this short story, we look at the agonized victims of artificial intelligence. The AMs have consciously founded their motto of life on Descartes’s philosophy, which attributed the utmost importance to the faculty of reason and thought. This philosophy was a human making, simply transferred in the name of consciousness to the AMs by computer programmers through encoding. The first thing the computers do upon their sudden conscious awakening is to kill almost all humans in revenge for having been brought into being. We may be bound by the praxis of existentialist philosophies, but the AM, devoid of emotion or compassion, takes it a step further: it invents a way to seek revenge for the nothingness of its life, and we must endure the monster we have created.

Works Cited

Cordeiro, José Luis. “From Biological to Technological Evolution.” World Affairs: The Journal of International Issues, vol. 15, no. 1, 2011, pp. 86–99. JSTOR, https://www.jstor.org/stable/48504845. Accessed 3 Feb. 2023.

Ellison, Harlan. I Have No Mouth, and I Must Scream. Pyramid Books, 1974.

Frost, Samantha. “Hobbes and the Matter of Self-Consciousness.” Political Theory, vol. 33, no. 4, 2005, pp. 495–517. JSTOR, http://www.jstor.org/stable/30038438. Accessed 1 Feb. 2023.

Hobbes, Thomas. Leviathan. Project Gutenberg, 2015.

Magazine, Smithsonian. “This Eighteenth-Century Robot Actually Used Breathing to Play the Flute.” Smithsonian.com, Smithsonian Institution, 24 Feb. 2017, www.smithsonianmag.com/smart-news/eighteenth-century-robot-actually-used-breathing-play-flute-180962214/.

Marynowsky, Wade. “The Uncanny Automaton.” Leonardo, vol. 45, no. 5, 2012, pp. 482–83. JSTOR, http://www.jstor.org/stable/41690230. Accessed 3 Feb. 2023.

Schmidt, Walter E. “Legal Proprietary Interests in Computer Programs: The American Experience.” Jurimetrics, vol. 21, no. 4, 1981, pp. 345–404. JSTOR, http://www.jstor.org/stable/29761762. Accessed 1 Feb. 2023.

Wallace, Jeff. Beginning Modernism. Manchester University Press, 2011.

Wallach, Wendell. Jurimetrics, vol. 56, no. 3, 2016, pp. 297–304. JSTOR, http://www.jstor.org/stable/26322677. Accessed 8 Feb. 2023.

Sindhura Dutta is a PhD candidate at Bankura University, India. She also teaches at Asleha Girls’ College, University of Burdwan. She completed her Master’s at the University of North Bengal and her M.Phil at Vidyasagar University. She is an academic editor at New Literaria: An International Journal of Interdisciplinary Studies in Humanities. Her research interests include environmental humanities, postmodernism, Northeast Indian literature, dystopian literature, science fiction, Victorian literature and art movements in literature.

[Volume 5, Number 1, 2023]