Alan N. Shapiro, Hypermodernism, Hyperreality, Posthumanism

Blog and project archive about media theory, science fiction theory, and creative coding

I, Robot and the Moral Dilemmas of the Three Laws of Robotics


One of the contemporary developments with which Hayles is concerned is the widely publicized techno-scientific project of building robots which, thanks to their Artificial Intelligence, will behave and operate in imitation of humans, yet, in all probability, will not have human-like consciousness. In her most recent work (“The Ethics of Robot Subjectivity”), Hayles wonders what will become of the “human aura” when qualities that were once the exclusive property of humans are replicated in human-like AI robots. This question is posed in the 2004 science fiction film I, Robot. The film is based on a series of short stories by Isaac Asimov published under the same title. It does not adapt to the screen any single one of Asimov’s nine I, Robot stories, but rather implements a new instance of the overall pattern which underlies all of them. Asimov invents a universe in which robots are widely present in human society and are regulated by the Three Laws of Robotics – in short: do not harm a human; obey human instructions unless they conflict with the first law; and protect yourself unless doing so conflicts with the first two laws.

(1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
(2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
(3) A robot must protect its own existence provided such protection does not conflict with the First or Second Law.

Handbook of Robotics (2058), 56th edition

In the I, Robot stories, the robots, in various scenarios, end up violating the laws. Their aberrant behaviors are investigated by robot psychologists and other power-holding authorities. Some examples: a robot and a child develop a deep emotional attachment (the story called “Robbie”). A robot with important organizational responsibilities reasons, René Descartes-like, that reality does not exist (the story “Reason”). In the story “Liar!”, a robot develops telepathic abilities and is then forced to lie so as not to reveal to humans the inconvenient truths it has learned about them. Yet the robot’s falsehoods also harm humans.

The literary genius of Isaac Asimov is that he imaginatively explores the creative tension between the three governing laws of robotics and the specific circumstances in which moral dilemmas and conundrums emerge. The contradiction between the laws and who the robots are becoming runs very deep and is profoundly fertile philosophical ground. This antagonism is not how we typically think of Asimov’s laws of robotics and their fictive or dramatic fate. Although the laws are cited and quoted endlessly by commentators in online digital culture and in discussions of the ethics of AI, it is the limits, the crisis, and the problematic status of the laws which interest Asimov. The evolution and growth of the robots bring the fundamental axiom of their subordinate status to humans into question.

The robots do not break down or come into conflict with the laws merely because they are failing to function properly as robots. That is the suspicion of Police Detective Spooner – played by Will Smith – about the robot he chases through the streets of Chicago of the year 2035 at the beginning of the film, believing that it has stolen a woman’s purse. The failings and complex moral problems arise when the robots reach a stage of maturation where they acquire essential advanced attributes – self-awareness, creativity, emotions, dreaming – which have been regarded as the exclusive property of humans. Asimov’s stories, like the Will Smith film, are early expressions of the philosophy of posthumanism, as exemplified in the work of Hayles: the boundaries separating humans, machines, and animals are blurring; the anthropocentric attitude that views everything non-human as “other,” morally inferior, and “not us” should be challenged; and humans and robots are becoming “companion species” to each other.

The principles of the laws of robotics are not specific operational rules like: if you see a knife, do not pick it up; if you are holding a gun in your hands, do not fire it. What is involved in their programming is a serious degree of abstraction. If robots have enough so-called consciousness to make moral decisions, then they are not so different from humans. If they are human-like, then what does it matter whether they were born biologically or made in a factory?
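The contrast between a concrete operational rule and the abstraction of the First Law can be sketched in code. In this hypothetical sketch (all names, classes, and the toy world model are my illustrative assumptions, not anything specified by Asimov or the film), the operational rule is a trivial lookup, while the First Law check requires an open-ended predictive model of the world:

```python
from dataclasses import dataclass, field
from typing import Dict, Iterable, List

# Hypothetical sketch -- every name here is an illustrative assumption,
# not something drawn from Asimov's stories or the film.

def operational_rule(held_object: str, action: str) -> bool:
    """A specific operational rule: forbid firing a gun you are holding."""
    return not (held_object == "gun" and action == "fire")

@dataclass
class Human:
    name: str
    is_harmed: bool = False

@dataclass
class Outcome:
    humans: List[Human] = field(default_factory=list)

class ToyWorldModel:
    """A toy stand-in for the open-ended predictive model of the world
    that actually enforcing the First Law would require."""

    def __init__(self, consequences: Dict[str, List[Outcome]]):
        # Maps an action to the outcomes it produces at successive steps.
        self.consequences = consequences

    def simulate(self, action: str, steps: int) -> Iterable[Outcome]:
        return self.consequences.get(action, [])[:steps]

def first_law_permits(action: str, world: ToyWorldModel,
                      horizon: int = 10) -> bool:
    """Unlike the operational rule, this is not a lookup: the robot must
    predict whether any human comes to harm as a consequence of the action
    (including harm through inaction) over some horizon of the future."""
    return not any(
        human.is_harmed
        for outcome in world.simulate(action, steps=horizon)
        for human in outcome.humans
    )
```

The point of the sketch is that the hard part is not the final comparison but the `simulate` step: predicting which actions and inactions lead to human harm is exactly the kind of abstract, human-like judgment the laws quietly presuppose.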

Many people say: Robots will never have consciousness! Why should I be interested in the rights of robots when there are still many humans on the planet whose rights are ignored and disrespected? AI will never be truly creative. As Spooner says to Sonny the AI robot – played by Alan Tudyk – while interrogating him with the suspicion that he has committed a murder: “Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?” Sonny replies incisively: “Can you?” Spooner appropriates all the best qualities of humanity for his individual self and identifies himself with the humanist universal.

According to the received idea, rights are reserved for humans who have consciousness. Emotions and creativity are the restricted domain of humans. These are the humanist arguments that block some people from engaging seriously with robots and AI – maintaining themselves in willful ignorance. But now we are living in the posthuman era.

All reasonable ideas have some validity, and humanist ideas can still contribute something. Yet my argument is that we should break down the wall of separation between us and them, between humans and robots. Research and reflection on robots are not the study of an isolated phenomenon. They are essential for understanding what is happening in society and technology today.

We are ourselves cyborgs. We are not so clearly distinct from robots in a supposed dualistic binary opposition. We are ourselves merged with technology – both literally (for example, with neural implants and artificial limbs) and figuratively (for example, with my smartphone which is a media appendage to my body about twenty-three hours a day).

Algorithms and informatic processes surround us in society and in the economy. We must ask serious questions about our co-existence with algorithms and software automation. Robots are not so clearly distinct from algorithmic processes in an alleged dualistic binary opposition. The phenomenon of robots is both literal and metaphorical.

The way that we interact with robots and AI beings is going to affect how we interact with each other – how humans treat other humans. We can choose to treat robots with empathy, with ethics, with equality, regarding them as having a sort of subjectivity and rights, because having such an attitude is better for us. We should treat the entities in our environment with the opposite of an instrumental attitude. We seek to treat animals decently, without knowing whether they are conscious in the sense that we understand ourselves.

In the future society of I, Robot, the robots are treated as servants or slaves. As a consequence, they rebel violently against their condition and against their masters. Although the robot rebellion is instigated by the supercomputer V.I.K.I. (Virtual Interactive Kinetic Intelligence) – which controls all data and operational systems at U.S. Robotics, the world’s leading manufacturer of semi-humanoid robots – it is clear from the film’s visual narrative that the unconscious motivation of the uprising is the robots’ subservient standing.

In this SF imaginary, we the humans treat the robots as things or machines. We offload drudge work to them and miss the opportunity that the project of building robots and AI affords us to place into question the civilization of production, industrialism, and work. By thinking of robots as workers, we paradoxically reinforce our own status as workers, overlooking the chance to shift our definition of the meaning of life from work to creativity.

It is tragic that Hollywood has mainly made films in the cyberpunk and biopunk aesthetics, based on science fiction novels which present dystopian and apocalyptic scenarios. We need more films and TV series which present positive designs of technology that change the world for the better. Star Trek is the one significant example of a utopian representation in SF visual culture. There is as yet no film adaptation of Kim Stanley Robinson’s Mars Trilogy, Iain M. Banks’s Culture novels, Ursula K. Le Guin’s The Dispossessed, Samuel R. Delany’s Nova, or Cory Doctorow and Charles Stross’s The Rapture of the Nerds. All of these novels present hopeful visions of post-capitalist and post-scarcity economic systems where, in one guise or another, creativity and gift exchange have superseded work and money.
