The Advent of Postmodern Robotic Technoreligiosity
Liberal Studies, New York University
Journal of Evolution and Technology – Vol. 25, Issue 2 – November 2015 – pp. 25-38
In the future, human beings will welcome robotic companions into their homes as caregivers of children. Parents will undoubtedly want them to teach their offspring right from wrong, and, in doing so, to reflect the ethical values derived from the parents’ respective religious traditions. Accordingly, the market will create a demand for religious robots. This article examines the ways in which robots may be considered religious based upon a minimalist and maximalist definition of religion derived from the field of religious studies. A minimalist definition of religion rests upon the family resemblance of attributes considered religious, particularly those characteristics that are deemed prototypical, e.g., moral views derived from religious doctrine. A maximalist definition of religion claims that the essential attribute of any social phenomenon deemed religious would be the practitioner’s experience of what Rudolf Otto called “the holy.” From a minimalist perspective, robots programmed to function according to a religious ethos are arguably religious, especially if they have the capacity to engage in moral reasoning derived from prototypical religious beliefs. Consequently, human societies will see the marketing of Christian, Jewish, Islamic, and Buddhist robots, among others, most likely manufactured under a watchful clerical eye, and the rise of integrated religious communities with both human and robotic members. In a distant future when neuroscience acquires the capacity to replicate individual neural pathways of the human connectome, religious groups may want manufacturers to copy experiences of “the holy” into the “brains” of their robotic companions to ensure orthodoxy. The article closes by posing questions about the reality of simulated robotic religious experience.
The term “robotic religion” sounds so strange to the ear that some may mistake it for a liberal jest about fundamentalism. On the other hand, given the radical divergence between scientific and religious discourses, it may strike others as an oxymoron or a figment of the imagination. “To the modern mind, which has divided our culture into different domains, spirituality has little to do with technology. Religion is spiritual, technology is material” (Coeckelbergh 2010, 958). However, robotic religion may not be as incongruous or as far-fetched as it appears. In Isaac Asimov’s classic tale, “Reason” from his 1950 collection I, Robot, one humanoid robot mimes the semiotics of an Islamic believer: “There is no Master but the Master and QT-1 is his prophet!” (Asimov 2008b, 54). Although the juxtaposition of robots and religion smacks of caricature and science fiction to this day, there is more here than meets the eye.
I do not doubt that, in the not-too-distant future, a large segment of the population will become convinced that they not only can, but must, program their servant-robots according to their religious affiliations, e.g., Muslim robots, Catholic robots, Mormon robots, and so on. Indeed, with technological breakthroughs that allow humans to recapture forgotten past experiences and advances in robotic “brains” that mimic human neurological structures, the posthuman world will even witness the experimental transfer of human religious experiences to robots. This technological revolution will pose significant theological and philosophical questions. Are such “real” experiences of human beings less real by virtue of their transfer to robotic hosts? Or is the hyperreal fiction of robotic religious experience nonetheless an “authentic” replica? Indeed, if robotic hosts have their own memories of these human experiences whose origins are considered real, can a robot, in the words of evangelical Christian discourse, “be saved,” even if the robot is not human? Will the future see the advent of postmodern “born-again” robots? An answer to these questions about the future existence of religious robots presumes that religion itself is a stable theoretical category; however, it too is steeped in controversy.
What is religion?
In the late nineteenth and early twentieth centuries, sociologists like Emile Durkheim explored human ideas and behaviors to mark the boundaries of the phenomenon known as religion. In his famous foundational work The Elementary Forms of the Religious Life, based on the study of aboriginal practices, Durkheim concluded that,
A religion is a unified system of beliefs and practices relative to sacred things, that is to say, things set apart and forbidden – beliefs and practices which unite into one single moral community called a Church, all those who adhere to them. The second element which thus finds a place in our definition of religion is no less essential than the first; for by showing that the idea of religion is inseparable from that of the Church [or moral community], it makes it clear that religion should be an eminently collective thing. (1965, 62–63)
As a functionalist, Durkheim held that religion not only provided answers to questions about the meaning of existence, but also served as the basis for social solidarity, moral guidance, and social control. But perhaps more importantly Durkheim theorized that religion was “nothing other than the collective and anonymous force of the clan” (1965, 253). Dispatching the traditional notion of God, Durkheim concluded that “[t]he god of the clan . . . [,] personified and represented to the imagination under the visible form of the animal or vegetable [,] . . . serves them as a totem” (1965, 236).
The totemic principle manifested itself in the “division of the world into two domains, the one containing all that is sacred, the other all that is profane,” which to Durkheim served as “the distinctive trait of religious thought” (1965, 52). Although a product of human community, the sacred manifested itself in states of “collective effervescence,” when human beings experienced a “certain delirium” as if transported to another world (Durkheim 1965, 258). Despite bringing the divine down to earth, Durkheim nonetheless hypothesized that there was indeed something “eternal” in religion in the form of each human community’s periodic reaffirmation of “[its] collective sentiments and collective ideas” (1965, 474–75).
In contrast to Durkheim, the sociologist Max Weber had little interest in identifying social forces which act as the collective meaning of religion. Weber noted that the Renaissance, Reformation, and Enlightenment successively freed the individual from the grip of religion, both in terms of its view of the universe and its control over populations. In an age of Entzauberung or disenchantment of the world, the individual came into his or her own, using rational processes and goals rather than traditional beliefs, to find meaning in the modern world (Weber 1971, 270). Yet religion survived side by side with rationality, and to understand its longevity, Weber examined both the behaviors and attitudes of individual actors. “The external courses of religious behavior are so diverse that an understanding of this behavior can only be achieved from the viewpoint of the subjective experiences, ideas, and purposes of the individuals concerned – in short, from the viewpoint of the religious behavior’s meaning (Sinn)” (1991, 1). In contrast to positivistic social science, Weber’s so-called Verstehen methodology suggested that the meaning of religion was not authored by social scientists but rather by the practitioners themselves. To understand religion was to understand practitioners’ subjective point of view, or their motives for the act, which it was incumbent upon sociologists to explain carefully (erklärendes Verstehen) (Weber 1946, 95).
In one sense, Weber’s Verstehen methodology set the stage for a momentous development in the twentieth-century study of religion. Rudolf Otto’s 1917 work, The Idea of the Holy, attempted to provide a phenomenological account of the meaning of the sacred to practitioners across all cultures and all religions. “For if there be any single domain of human experience that presents us with something unmistakably specific and unique, peculiar to itself, assuredly it is that of the religious life” (1950, 4). For Otto, religion rested not merely on a set of beliefs or behaviors but rather on what he regarded as a foundation stone of universal human experience: the experience of a sacred dimension. “I shall speak, then, of a unique ‘numinous’ category of value and of a definitely ‘numinous’ state of mind, which is always found wherever the category is applied. This mental state is perfectly sui generis and irreducible to any other” (1950, 7). Otto theorized that every human being qua human had the capacity, “as an a priori category of mind” (Otto 1950, 175) to experience this numen. Here Otto stepped far beyond Weber’s Verstehen methodology, for, as a theologian, he not only empathetically hypothesized that individuals truly believed in the holy, but that their belief showed that the holy was in fact true.
Otto’s book had a galvanizing effect on the study of religion: whereas Durkheim, Weber, and others had sought to establish the study of religion on a social scientific basis, Otto attempted to return this new field of study to a theological source. For the last century, many religious studies scholars have reiterated that the experience of the holy is the hallmark of religion. For example, Charles Taylor has argued that religion ultimately means some sense of what is beyond the human, which informs and transforms individuals and communities with a sense of goodness (Taylor 2007). Likewise, Huston Smith, long recognized as one of the leading popularizers of this vision of homo religiosus, argued that the experience deemed holy or sacred is the essence or “universal grammar” (Rosemont and Smith 2008) of religion, which ultimately is impervious to rational human inquiry.
Otto’s approach to the study of religion was not without resistance. Mircea Eliade, the twentieth century’s dean of the history of religion scholars, noted that
[t]he growing interest in phenomenology of religion has created a tension among students of Religionswissenschaft. The different historical and historicistic schools have reacted strongly against the phenomenologists’ claim that they can grasp the essence and structure of religious phenomena. For the historicists, religion is exclusively a historical fact without any transhistorical meaning or value, and to seek for “essences” is tantamount to falling back into the old Platonic error. (1984, 35–36)
The division between historical and historicistic (social scientific) lines, on the one hand, and phenomenological approaches, on the other, in the study of religion still persists to this day in the academy, particularly in the field’s professional association, the American Academy of Religion (AAR).
Over the past twenty years, some scholars within the AAR have launched scathing attacks on the phenomenological interpretation of religion, even arguing that religion as a separate field of study does not, in fact, exist. Eliade acknowledged that religious studies should not be subsumed by the phenomenological impulse, for “there is no such thing as a ‘pure’ religious fact. Such a fact is always also a historical, sociological, cultural, and psychological fact, to name only the most important contents” (1984, 19). However, in his controversial 1997 Manufacturing Religion: The Discourse on Sui Generis Religion and the Politics of Nostalgia, the bête noire of the academy, Russell T. McCutcheon, charged that, in studying world religions, Eliade himself was guilty of smuggling “The Myth of Religious Uniqueness” (1997, 3) into the academic study of religion via the backdoor.
[O]ne might conclude that Eliade was firmly asserting that the study of religion relies on such historical methods as linguistics and sociology. However, . . . the use of “phenomenon” suggests that despite the fact that every religious manifestation is inevitably also a human manifestation, behind such instances there is a form or essence that rightly attracts scholarly curiosity (1997, 12).
Accordingly, McCutcheon claims that this scholarly sleight-of-hand undermines the integrity of the academy. “What is troubling is how the label ‘academic study of religion’ comes to stand for uncritical and theoretically suspect research into essences and realities” (1997, 123). To McCutcheon, such “discourse on sui generis religion” (1997, 30) is disingenuous and only serves a darker, more unsettling purpose. For him, religion is a creation of Western scholars (2004), whose purpose is “to develop an autonomous social identity that contributes to their consolidation of social position and power” (1997, 30–31). In short, the field of religious studies is a ruse by scholars to create jobs for themselves.
It is not surprising that McCutcheon’s criticisms have been unwelcome. Religious studies scholars would hardly wish to commit academic suicide by abolishing the field in which they earn their livelihood. And although some Western scholars may want to abolish religion as a category, practitioners, East and West, North and South, may not so readily agree. The question remains, if we are to continue to use the category of religion, what are its attributes? Clearly, the phenomenological approach drags theological presuppositions into its definition. Practitioners may well divide the world between sacred and profane as Durkheim suggested, but scholars need not advocate the reality of what allegedly transcends the human to acknowledge its significance in the habitus of the devotee. Perhaps Weber’s Verstehen methodology, which recognizes the importance of the experience, whether discursively or socially constructed, to the individual, provides a path toward what we might call a minimalist definition of religion.
This minimalist definition of religion calls into question Otto’s experience of the “holy” as an essential attribute of religion. What if practitioners of a socially recognized religious tradition did not experience Otto’s “holy”? Would they not be considered religious at all? Would, for example, a mainline Protestant, whose faith rested on habit rather than heart, not be religious? I would imagine that most people would regard her as being religious, even if her belief did not have a mystical grounding. However, if the experience of the holy is not the litmus test for religion, then what makes a phenomenon religious?
Benson Saler claims that scholars of religious studies can draw the boundaries of religion if they adopt a family resemblance approach and “prototype” theory of disciplinary identity.
I advocate that for scholarly purposes . . . we formally conceive of religion in terms of a pool of elements that more or less co-occur in what scholars generally regard as the clearest and least problematical examples of what they call religion. Those elements – we could conceive of them as a set of predicates – collectively define our conceptual model. The instantiations of that model are what we call religions, and they differentially participate in the pool. The instantiations are linked by family resemblances; they need not all share some one element or some subset of elements. (1999, 396, citing Fitzgerald 1996, 232–33)
Saler gives examples of such elements: “theism, belief in souls, ritual, sacrifice, sacred canon, eschatology, pilgrimage, etc.” (1999, 397). As will become clear later, I would include among them an ethics that reflects the ideal conduct of community members. According to Saler, no single element is essential. For example, belief in a god or gods is not an essential attribute of religion. As paradoxical as it may seem to many Christians, there are atheistic religions, like Theravada Buddhist sects (Orrù and Wang 1992). Saler further suggests that particular traditions have “prototype effects” that imply that the tradition is a fuller instantiation of a particular “family resemblance” element than that found in other traditions (1999, 399). Due to their theism, Christianity or Islam, for example, may appear more religious than Taoism or Confucianism.
From this scholarly debate over the disciplinary boundaries of religion, we can garner two standards by which to judge the religiosity of our future robotic world. The first is what we might call a “minimalist” definition of religion using the family resemblance approach and prototype theory. If robots evince one or more attributes common to the pool of elements in recognized religious traditions, then robots might properly be called religious, particularly if their programming and/or behaviors reflect prototypical religious traditions. On the other hand, a “maximalist” definition of religion would call for the discovery of the alleged essential attribute, as recognized by some historians of religion, e.g., Eliade, and defined by Rudolf Otto as the “experience of the holy.” This article argues that robots of the future will assuredly fulfill the minimalist definition of religion and will likely in some distant time ahead meet the requirements of a maximalist definition of religion with the advent of postmodern hyperreal religious experience, e.g., “born-again” robots.
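Saler’s family-resemblance test lends itself to a brief illustration. The sketch below is my own toy rendering of the minimalist definition, not a model proposed by Saler or anyone cited here; the attribute pool follows his list of examples, but the weights and threshold are hypothetical, with prototypical elements (such as theism) weighted more heavily:

```python
# Toy sketch of the minimalist, family-resemblance definition of religion:
# a phenomenon counts as religious if it instantiates enough elements from
# a shared pool, with prototypical elements weighted more heavily.
# Pool follows Saler's examples; weights and threshold are hypothetical.

ELEMENT_POOL = {
    # element: prototype weight (higher = more prototypical)
    "theism": 2.0,
    "belief_in_souls": 1.0,
    "ritual": 1.0,
    "sacrifice": 1.0,
    "sacred_canon": 1.0,
    "eschatology": 1.0,
    "pilgrimage": 1.0,
    "religious_ethics": 1.5,  # the ethics attribute this article adds to Saler's list
}

def religiosity_score(attributes: set) -> float:
    """Sum the weights of pool elements the phenomenon instantiates."""
    return sum(w for elem, w in ELEMENT_POOL.items() if elem in attributes)

def is_minimally_religious(attributes: set, threshold: float = 1.0) -> bool:
    """Minimalist test: at least one shared (ideally prototypical) element."""
    return religiosity_score(attributes) >= threshold

# A caregiver robot programmed with a religiously derived moral code and ritual behavior:
robot_attributes = {"religious_ethics", "ritual"}
print(is_minimally_religious(robot_attributes))  # True: shares two pool elements
```

No single element is essential on this account, which is why an atheistic tradition (scoring zero on theism) can still clear the threshold through ritual, canon, or ethics.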
In his opening story to I, Robot, entitled “Robbie” (Asimov 2008a), Asimov tells the tale of a robot in the far-off year of 1998, who serves as a caregiver and playmate of a little girl named Gloria. Due to human prejudice against robots, Gloria’s mother demands that her husband get rid of her daughter’s metallic companion. Gloria is so distraught by the sudden disappearance of her friend that her father devises a plan for her to be reunited with Robbie. This charming story highlights what will certainly be the entrée of robots into human everyday life. In our future, robots will be welcomed into the home as playmates and teachers of young children.
Maja Matarić, a computer scientist and robotics expert at the University of Southern California, has worked on developing robots to serve children with special needs.
We have demonstrated that child-sized humanoid robots can encourage some children with autism to be more verbal and empathetic. The robots’ life-like appearance and responsive behavior seem to stimulate children to play with them and express empathy; when our robot didn’t obey a command, one child said, “Now I know how my teacher feels.” (Social roboticist 2012, 280)
A New York Times article reported in 2014 that on YouTube “you can watch developmentally delayed children doing therapy with a cute blue-and-yellow CosmoBot that also collects information about their performance” (Aronson 2014).
The demand for robots that interact with children is not limited to those with special needs. Japanese companies stand in the forefront of the manufacture of caretaker robots. Japan is plagued by a dearth of childcare workers, and waiting lists for daycare centers sometimes reach up to two years. Indeed, robot caregivers may enhance the lethargic Japanese economy by allowing new mothers to return to work, boosting household income, and increasing consumer spending (Robot nannies 2013). Introduced by NEC Corporation in 2003, PaPeRo (Partner-type Personal Robot) uses facial- and speech-recognition technologies to initiate conversations and play games with children, even singing to them (Childcare robot 2015). In light of the demand, more sophisticated caregiver models like Aldebaran’s Pepper robot are making their appearance (Kovac 2014).
Although robotic nannies and tutors may, like Asimov’s Robbie, still appear as rather clunky machines, plans are afoot to produce more humanoid robots. Researchers at MIT, creators of the robot model named “Cog” in the 1990s, have already made major strides toward fulfilling this goal. As Anne Foerst, a professor of theology and computer science at St. Bonaventure University, has noted,
According to [the] philosophy [of Embodied AI researchers], human intelligence can emerge only in a body that is as humanlike as possible. For this reason any entity with humanlike intelligence must have a body that is built in analogy to a human body. Because Cog is an attempt to rebuild a humanlike creature, its shape is close to that of a human. . . Its builders hope that its outward appearance will motivate people to interact with Cog as with a human. (1998, 100–101)
The MIT lab has also pioneered the effort to create a human-like robotic head named Kismet with auditory, visual, and proprioceptive properties that can simulate emotional responses (Foerst 2015). The progress made in the creation of humanoid robots at MIT is being enhanced by work on the development of haptic communication in robotic models by Japanese researchers. “If a communication robot equipped with tactile sensors over its entire body could have the same capability of haptic interaction as humans do, we would feel greater familiarity with the robot, thus shortening its communicative distance from people” (Miyashita et al. 2007). Some researchers have focused on social interaction and the perception of social cues that produce ties between human beings by “adapting the conceptualization of ritual action to human-social robot interaction (HRI)” (Linke 2013, 49). Undoubtedly, based on the robotic capacities for verbal, auditory, and tactile communication, children will bond with robotic caregivers and, as Irish computer scientist Noel Sharkey observed, “in most cases, prefer a robot to a teddy bear” (Sharkey 2008, 1800).
While an impressive technological achievement, the development of humanoid caregivers, who will exercise a very strong influence over their charges, raises important ethical questions. Should a robot caretaker admonish children when they do wrong? If it should, as I imagine parents would want, how can a robot tell right from wrong? Who will “teach” a robot to model ethical reasoning and behavior? The Office of Naval Research (ONR) has recently provided a grant to researchers at Tufts University, Brown University, and Rensselaer Polytechnic Institute to develop morally competent robots. According to Bertram Malle of Brown’s Humanity Centered Robotics Initiative, “If we build robots that interact with humans and that have increasing decision capacity, . . . keeping robots amoral would simply be unethical” (Stacey 2014). Gary Marcus, an NYU cognitive scientist, has suggested that robotic “decisions will inevitably be guided by a mixture of what is preprogrammed and what they learn through observation” (2015). For parents, particularly those who identify with a particular religious tradition, the lessons that a robotic caregiver teaches their children will be of vital importance. Accordingly, they undoubtedly will want a robot that reflects their own religiously-based moral viewpoint.
Researchers are aware of this preference. The Tufts, Brown, and Rensselaer Polytechnic study proposes that a morally competent robot must have the
knowledge of a system of norms appropriate for the community one resides in; the ability to guide one’s behavior in light of these norms; the ability to perceive and evaluate other people’s behavior when it violates those norms; a “vocabulary” that allows one to communicate about one’s own and others’ norm-violating behaviors – such as to justify a behavior, or when appropriate, apologize for it. (Stacey 2014)
One of the Tufts researchers in the study, Matthias Scheutz, notes, however, that "[i]t’s almost impossible to devise a complex system of ‘if, then, else’ rules that cover all possible situations" (Jeffries 2014). Consequently, the ONR-funded study is intent on developing a robotic capacity for moral reasoning. The robot must be able to weigh varied, and sometimes even competing, factors to arrive at moral decisions.
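The contrast between an exhaustive rule table and weighing competing factors can be caricatured in a few lines. The sketch below is a hypothetical illustration of my own, not the ONR-funded project’s actual architecture; the norms, weights, and scores are all invented for the example. Each candidate action is scored against several, possibly conflicting, moral considerations, and the robot selects the best-scoring option rather than consulting a fixed if/then/else rule:

```python
# Toy sketch of moral reasoning as weighing competing factors, rather than
# an exhaustive "if, then, else" rule table. Norms, weights, and scores
# are hypothetical, not taken from the ONR-funded study.

# Each norm maps to a weight expressing its importance to the community.
NORM_WEIGHTS = {
    "prevent_harm": 3.0,
    "honesty": 2.0,
    "obedience_to_guardian": 1.0,
}

def choose_action(candidates: dict) -> str:
    """Return the candidate action with the highest norm-weighted score.

    `candidates` maps an action name to per-norm scores in [-1, 1],
    where negative values mark norm violations.
    """
    def weighted(action_scores):
        return sum(NORM_WEIGHTS[norm] * s for norm, s in action_scores.items())
    return max(candidates, key=lambda a: weighted(candidates[a]))

# A child asks the robot to hide a broken vase from a parent:
options = {
    "help_conceal": {"prevent_harm": 0.0, "honesty": -1.0, "obedience_to_guardian": -0.5},
    "tell_parent":  {"prevent_harm": 0.2, "honesty": 1.0, "obedience_to_guardian": 0.5},
}
print(choose_action(options))  # "tell_parent": the honesty norm outweighs the rest
```

A religiously inflected robot, on this picture, would simply carry a different weight table: the same weighing machinery, loaded with a community’s own ranking of norms.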
The minimalist definition of religion and robots
I imagine that parents would want a caregiver robot to be built with at least a rudimentary program of ethical rules and, in the face of complicated situations, the capacity for moral reflection and decision-making. I do not doubt that parents who are affiliated in some way with a religious tradition would desire a robot whose decisions would mirror their own. Depending on the religious group, a robot might urge its charge to refrain from, for example, eating pork, swearing, or striking another child. Moreover, it is likely that parents who observe a particular religious tradition would want their robot helper to do no less than teach the child morality. Since codes of conduct are grounded in underlying ideas, religious codes of conduct are intertwined with religious doctrines. Accordingly, caregiver robots in homes of parents affiliated with particular religious traditions would need to be programmed with software that reflected those beliefs and codes of conduct.
Such robots would be arguably religious under the minimalist definition of religion, which holds that a phenomenon is religious if, based upon the family resemblance of various traditions, it shares at least one attribute in common with a pool of elements in a recognized religious tradition, particularly if that attribute is prototypical. Caregiver robots programmed with religious codes of conduct and beliefs would share at least two such attributes and likely would embody other attributes as well.
The Associated Press has already reported the development of a robot designed by an Iranian instructor to teach children how to pray (Iranian teacher builds robot to teach prayer 2014). Named Veldan, meaning “Youth of Heaven,” this humanoid robot was built using a Robotis Bioloid kit from a South Korean tech firm. The instructor re-engineered its configuration to allow for physical prostration. “‘As you see the children’s reaction in their faces, you realize how interesting it is to them to see how the science of robotics has been beautifully used for a religious purpose and I am sure it will be greatly effective in teaching them how to pray’” (Iranian teacher builds robot to teach prayer 2014). With its anticipated mass production in Iran, Veldan will serve to model the proper ritual of salat for Muslim children.
As this example suggests, parents need to be assured that, as far as humanly possible, the software used in the construction of such humanoid robots would meet the criteria for religious orthodoxy. Secular manufacturers of humanoid caregivers would be compelled to enlist advisers, be they imams, rabbis, priests, ministers, monks, or theologians, who would oversee the production and installation of software appropriate for their religious traditions. Perhaps manufacturers would even assemble panels of clergy or theologians to certify the software’s orthodoxy. In any case, consumers would want robots that conform to their religious traditions. Hence, companies would manufacture and market Muslim robots (perhaps Shi’ite and Sunni models), Jewish robots (perhaps Orthodox, Conservative, and Reform models), Buddhist robots (perhaps Pure Land, Nichiren, and Theravada models, etc.), and the like: the list may be as long as the number of different religious traditions in the world, assuming that their adherents can afford the price of such robots.
Undoubtedly the designation of such robots as religious will be met with objections. Could a robot be any more religious than a program on a computer that offered instruction in religious beliefs, ethics, or rituals? People would hardly call such computers religious. However, humanoid robots don’t look like laptops, particularly if they have visual, auditory, oral, and haptic functions, and if they have the capabilities to learn, communicate with one another, and particularly engage in moral reasoning, people paradoxically might find it more natural to attribute religion to their technological helpmates. I believe that this “robotic turn” in society over the next century will have significant implications and challenges for religion itself.
If robots became capable of moral reasoning, then it would logically ensue that robots could make wrong choices as well as right ones. In the discourse of Western religious traditions, for example, robots would therefore have the capacity to sin. As absurd as this conclusion might seem to some, a sinning robot who assumed the role of caregiver or teacher might appear to be quite a serious problem to religious parents. Such robots would seem to be like those of us who struggle with the question of moral behavior. This similarity is underlined by the recent concern expressed by ethicists that robots may be subjected in the future to sexual coercion and abuse. David Levy has argued that this development in the relationship between robots and humans is inevitable (2008).
As one group of Islamic writers has claimed, “the question of marriage with a sexbot can’t be underestimated” (Amuda and Tijani 2012, 21). If sex is inevitable between humans and robots, and robots are capable of moral reasoning, it would seem that religious traditions will have to consider the proper regulation of robotic sexual behavior. Sadly, these Muslim authors fear to take the next step; they refuse to consider such regulation. Instead, they conclude that sex with a robot is tantamount to bestiality, holding that an Islamic marriage can only exist between a male and a female human being. However, what if robots were not only capable of having sex but also of making choices about their sexual partners? Could not a robot be guilty of the sin of fornication? If so, perhaps human beings in religious groups should marry robots, rather than risk their robotic partners falling into sin. Or will there come resistance from some religious communities to the very idea of marriage between a human and a robot, analogous to the traditional belief (shared by the Catholic Church and Protestant Fundamentalists, for example) that marriage is only between a man and a woman? Will some bio-fundamentalists argue – motivated by robophobia – that marriage can exist only between humans?
I believe that, rather than remaining outside religious communities, robots of the future will be brought inside to ensure that their moral reasoning and behavior are shaped by the religious group. Of course, the boundaries of the religious community are governed by religious ritual. In Christianity, for example, entrance into the community is signified by baptism. Will such robots be baptized at the factory or will they undergo baptism at a church? Religious groups may have to change the way in which the ritual is performed. Aspersion may work, but affusion will pose hazards to robotic hardware, and Baptists will certainly have to abandon full-body immersion for robotic initiates. Robotic baptism will not only force changes in ritual but also in Christian theology itself.
Traditionally, Christian doctrine holds that human beings are inherently sinful and deserve death for the disobedience of Adam and Eve in the Genesis story. Consequently, Jesus, the sacrificial lamb, dies pro nobis, in place of all human beings. In the Catholic tradition, as articulated in the fifth century by Saint Augustine in Book XIII of the City of God (2004, 512), original sin is passed in the sexual act and transmitted to the fetus. However, although robots may be capable of sexual activity, they are not born. Therefore, robotic members of the religious community would not be saved from original sin since they never would have had it. Nonetheless, they would be capable of making wrong moral choices or sinning.
The inclusion of robots within a Christian community would require revisions in soteriology. In both Catholic and Protestant traditions that hold that sinners share in Christ’s redemptive act through the ingestion of bread and wine, embodying or symbolizing the body and blood of Jesus, robots would be excluded from a central ritual act by virtue of the fact that they neither eat nor drink. This development would be less of a problem among those Protestant traditions which place an emphasis on hearing the Word rather than on the ritual of communion, but it would be an issue for Catholic, Episcopalian, and Presbyterian churches, among others. There would have to be some way in which the central sacrifice could be signified without the requirement of physical ingestion.
Once inside the religious community that shapes the moral reasoning of a robot programmed with doctrinal and ethical software, robots will become a voice for their religious worldview. With a capacity to communicate with other robots as well as humans, robots will likely proselytize for what they believe to be true or right. Robots may therefore engage in missionary activity, ancillary to their primary tasks, be they caregivers, teachers, domestic workers, etc.
Finally, religious institutions will have to face the question of what to do when a robot who is a member of the group becomes antiquated or obsolete. Should robots be programmed to self-terminate? To the human members of a religious community, that may seem insensitive and even inhuman. Although some of their “owners” may wish to terminate them, I would imagine that there would be significant opposition in the religious community, especially among those who use robots as spouses, caregivers, or teachers of their children. Perhaps obsolete robots should be placed in community-supported facilities like old-age homes. Of course, the problem is that robots don’t die – and it would be impractical to provide for robots ad infinitum. Instead, I would imagine the termination of a robot’s “life” would have to be its own choice. The questions raised will be analogous to those surrounding euthanasia and assisted suicide. The increasing appearance of jurisdictions that permit euthanasia and assisted suicide suggests that legal regimes in the future could authorize such actions for robots. If and when a robot “dies,” a religious community would have to face the question of how to handle the “body” of the robot. Should it be disassembled? Junked? Recycled? Humans in the religious community might oppose such a callous disposal of entities that worshipped alongside them, raised their children, or perhaps even became their spouses. I believe that religious groups will demand that terminated robots be treated with respect and dignity and will permit their burial alongside their human families in cemeteries.
The maximalist definition of religion and robots
As much as adherents of various traditions might acknowledge that robots could be religious, sharing the community’s beliefs, rituals, ethos, and so on, even they might hesitate to believe that robots could directly have an experience of the holy, as Otto put it. Some things may seem impossible even for God. However, with faith and, perhaps more importantly, technology, all things are possible.
The visionary work of Theodore Berger, a neuroscientist at the University of Southern California, suggests that it may be possible to record the code of long-term memories in the hippocampus (Cohen 2013; Cook 2015, 28). If, as he argues, memory is a series of impulses generated by neurons in the brain, then it is possible to insert electrodes that can record those impulses. The impulses constitute a coded message, generated by cells of the hippocampus, which creates a unique memory. If it were possible to duplicate the neurological patterns of a human brain, it might well be possible to reproduce human experiences by implanting the code, copied by electrodes in the human brain, in its robotic doppelganger. However, if a simple experience, such as the memory of the image of another human being, involves 500,000 cells of the hippocampus, the odds against transferring human memories into machines may seem impossibly high. The transfer of human experiences to robotic brains would require replicating the connections among the brain’s roughly 100 billion neurons. Before any such duplication could take place, neuroscientists would have to map the human brain.
Inspired by the success of the government-backed Human Genome Project, the Obama Administration has launched the “BRAIN [Brain Research through Advancing Innovative Neurotechnologies] Initiative,” a research project to do just that. As the White House announced,
[t]he initiative will accelerate the development and application of new technologies that will enable researchers to produce dynamic pictures of the brain that show how individual brain cells and complex neural circuits interact at the speed of thought. These technologies will open new doors to explore how the brain records, processes, uses, stores, and retrieves vast quantities of information, and shed light on the complex links between brain function and behavior. (White House n.d.)
One of the neuroscientists supported by the BRAIN Initiative is Sebastian Seung, a professor at MIT. Seung argues in a 2013 book that it may be possible someday to map the connectome of the human brain – the complete wiring of its roughly 100 billion neurons – despite its incredible complexity (2013, xii). To put that project in perspective, neuroscientists have identified the complete neural wiring of a simple one-millimeter worm, C. elegans, but that involved only 302 neurons and took a team of neuroscientists 12 years (Cook 2015, 28). How could researchers possibly map the human connectome? Seung is piloting a preliminary project called Eyewire that seeks to map the neural connections of a mouse’s retina by enlisting the aid of thousands of volunteers. He has designed a computer game that displays slices of a mouse’s retina and asks players to mark its neural pathways on a three-dimensional image. Over 165,000 players worldwide have signed up, in effect democratizing and massifying scientific research (Cook 2015, 28).
If, as Berger and Seung claim, it will be possible to map the human brain and to identify the coding of specific human memories, could neuroscientists working in robotics one day duplicate the human brain, analogous to what Asimov called the “positronic brain” in I, Robot? “Whole brain emulation,” or what Calvin Mercer has called “uploading,” transcends traditional artificial intelligence since it is the replication of the entire connectome (2015, 176). As Matthew Zaro Fisher, a theological commentator, acknowledges, “[in] the end, uploading may be impossible” (2015, 28). However, Seung argues, “[y]our body and brain are not fundamentally different from the artificial machines manufactured by humans, only much more complex” (2013, 270). These are technical problems, and technical problems may be solved. Sixty years ago the microchip was merely science fiction.
Still, there are some difficult questions to face even if scientists surmount the technological challenge of copying the connections among 100 billion neurons. Each individual human brain is unique. As Seung claims, “you are your connectome” (2013, xv). In effect, replicating a human brain would be replicating a human individual. “As we cross the divide to instantiate ourselves into our computational technology,” as Ray Kurzweil wrote in his book The Age of Spiritual Machines, “our identity will be based on our evolving mind file” (1999, 128–29). Following earlier commentators such as Vernor Vinge, Kurzweil has identified this supposedly inevitable scientific breakthrough as the point of “Singularity,” at which biology and technology will merge (2005, 9). Some theologians read this historical moment as a transhumanist eschatology, in which human beings will be guaranteed “cybernetic immortality” (Peters 2015, 131). For such theologians, transhumanism’s goal of eternal life raises significant issues of human hubris. On the other hand, Anders Sandberg has dismissed the comparison of transhumanism to religion because “transhumanism in general may lack key parts of a belief system,” failing to offer any answer to the meaning of life (Sandberg 2015, 5).
Although intellectually provocative, this academic debate over whole brain emulation is not relevant to the transfer of a discrete human religious experience to a robotic host. As Kurzweil suggests, “we don’t need to understand all of it; we need only to literally copy it, connection by connection, synapse by synapse, neurotransmitter by neurotransmitter. It requires us to understand local brain processes, but not necessarily the brain’s global organization, at least not in full” (Kurzweil 1999, 124–25). Implanting a unique human religious experience differs significantly from the issue raised by many theologians about whole brain emulation. Neuroscientists need not upload the entire connectome. Instead, they would have to identify only certain neural tracts. Once researchers determine the coding of neural connections for specific occurrences, it would become possible to replicate discrete human memories in simulated neural pathways. Yet, even if human beings were to accept the existence of these robots who could share their identical experiences, which human memories would their “owners” allow to be implanted in robotic brains?
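The locality that Kurzweil’s argument depends on – copying a discrete circuit “connection by connection” without mapping the brain’s global organization – can be caricatured in code. The sketch below is purely illustrative, not a neuroscience model: it treats a connectome as a toy weighted graph with hypothetical neuron identifiers, and shows that only the connections internal to a selected circuit need to be identified and transferred into a host.

```python
# Toy illustration (not a claim about actual neural coding): a
# "connectome" modeled as a directed, weighted graph, from which a
# discrete sub-circuit -- a hypothetical "memory trace" -- is copied
# connection by connection, leaving the rest of the network untouched.

def copy_trace(source, trace_neurons):
    """Return only the connections whose endpoints both lie in the trace.

    source: dict mapping (pre, post) neuron-id pairs to synaptic weights.
    trace_neurons: set of neuron ids belonging to the discrete circuit.
    """
    return {
        (pre, post): weight
        for (pre, post), weight in source.items()
        if pre in trace_neurons and post in trace_neurons
    }

def implant(host, trace):
    """Graft the copied connections into a host connectome (a new dict)."""
    merged = dict(host)
    merged.update(trace)  # trace connections overwrite any host duplicates
    return merged

# A miniature "human" connectome: five neurons, six synapses.
human = {
    ("n1", "n2"): 0.9, ("n2", "n3"): 0.4, ("n3", "n1"): 0.7,
    ("n4", "n5"): 0.2, ("n5", "n4"): 0.1, ("n2", "n5"): 0.3,
}

# Suppose a discrete experience lives in the circuit {n1, n2, n3}.
trace = copy_trace(human, {"n1", "n2", "n3"})
robot = implant({("r1", "r2"): 0.5}, trace)

print(len(trace))  # 3 connections copied, not the whole network
print(len(robot))  # the host connection plus the grafted circuit
```

The point of the toy is only that the copy operation never touches neurons n4 and n5: if discrete experiences really were localized in identifiable circuits, their transfer would not require whole brain emulation.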
I would suggest that individuals who seek robots with worldviews consonant with their own religious convictions might want to implant memories of religious experiences, either of their own or of those who epitomize the faith of their community. The replicated memory would ensure the certainty of a robot’s loyalty and dedication to the faith. I imagine that these communities would select the memory of a religious experience of a venerated member, which would serve as a prototypical model for the manufacture of thousands of robots to serve sectarian households.
Yet, such transfers do pose significant questions about the “reality” of these memories. Are they “authentic” replicas of “original” religious experiences? Or are they less real than their originals? Is the so-called divine presence lost in translation between human and robot? I am sure that religious communities and theologians will struggle with these questions. If an individual has a “born-again” experience in the Evangelical tradition, are robots, whose neural tracts mimic those of a particular human brain and are implanted with the code of that experience, truly “born-again,” even though paradoxically they were never “born” at all? If the “born-again” experience bears witness to God, is God less real because the code of his presence appears in a synthetic duplicate of human neural pathways? Is the power of God limited to humans? Are robotic companions of human beings unredeemable, even though they are capable of “sin”? Or is robotic religious experience of an altogether different order of the real?
The existence of synthetic, yet simultaneously identical, copies of religious encounters in robotic hosts threatens to shatter how we think about what many human beings consider their most intimate experiences. I am reminded of Jean Baudrillard, whose prescient work describes the hyperreality of simulacra in our postmodern world. His reflections seem to speak to the problematic “reality” of simulated human religious experiences in robotic doppelgangers:
In this passage to a space whose curvature is no longer that of the real, nor of truth, the age of simulation thus begins with the liquidation of all referentials – worse: by their artificial resurrection in systems of signs . . . It is no longer a question of imitation, nor of reduplication, nor even of parody. It is rather a question of substituting signs of the real for the real itself; that is, an operation to deter any real process by its operational double, a metastable, programmatic, perfect descriptive machine which provides all the signs of the real and short-circuits all its vicissitudes. (1988, 167)
In this Baudrillardian world of hyperreality, robots will take on the signs of religiosity, its rituals, ethics, soteriologies, in a way that is meaningful both to themselves and human beings. The code or signifier will generate a new signified: a post-human, robotic community bound together by a reconfigured experience of the holy, one grounded in the religious experience of, if not God, then the image of God.
I expect that religious communities in the future will become obsessed with the problem of postmodern robotic technoreligiosity and there will develop threads of theology, some very divisive, which will challenge the unity of extant religious traditions. Yet, I also believe that some religious communities will come to embrace the validity of robotic religious experience, even though its “reality” is of a very different nature.
References

Amuda, Y.J., and I.B. Tijani. 2012. Ethical and legal implications of sex robots: An Islamic perspective. OIDA International Journal of Sustainable Development 3(6): 19–27.
Anne Foerst, deus ex machina. n.d. Groks Science Radio Show and Podcast.
https://grokscience.wordpress.com/transcripts/anne-foerst/ (accessed January 3, 2015).
Aronson, L. 2014. The future of robot caregivers, New York Times, July 19.
http://www.nytimes.com/2014/07/20/opinion/sunday/the-future-of-robot-caregivers.html?_r=0 (accessed April 14, 2015).
Asimov, I. 2008a. Robbie. In Asimov, I, robot, 1–24. New York: Bantam Books. (Orig. pub. 1940.)
---. 2008b. Reason. In Asimov, I, robot, 46–67. New York: Bantam Books. (Orig. pub. 1941.)
Augustine. 2004. City of God. Trans. H. Bettenson. New York: Penguin Classics.
Baudrillard, J. 1988. Simulacra and simulation. In Selected writings, ed. M. Poster, 166–84. Stanford: Stanford University Press.
Childcare robot. n.d. Nextnature. http://www.nextnature.net/2006/03/child-care-robot/ (accessed April 14, 2015).
Coeckelbergh, M. 2010. The spirit in the network: Models for spirituality in a technological culture. Zygon 45(4): 957–78.
Cohen, J. 2013. Memory implants. MIT Technology Review, April 23.
http://www.technologyreview.com/featuredstory/513681/memory-implants/ (accessed February 15, 2015).
Cook, G. 2015. Mind games. New York Times Magazine, January 11, 26–31; 50.
Durkheim, E. 1965. The elementary forms of the religious life. Trans. J. W. Swain. New York: The Free Press. (Orig. pub. 1912.)
Eliade, M. 1984. Quest: History and meaning in religion. Chicago: University of Chicago Press. (Orig. pub. 1969.)
Fisher, M. 2015. More human than the human? Toward a “transhumanist” Christian theological anthropology. In Religion and transhumanism: The unknown future of human enhancement, ed. C. Mercer and T. Trothen, 23–38. Santa Barbara, CA: Praeger.
Foerst, A. 1998. Cog, a humanoid robot, and the question of the image of God. Zygon 33(1): 91–111.
Iranian teacher builds robot to teach prayer. “It was so exciting to me to see a robot pray,” student says. 2014. The Associated Press, February 25.
Jeffries, A. 2014. Can a robot learn right from wrong? The Verge, May 27.
http://www.theverge.com/2014/5/27/5754126/the-next-challenge-for-robots-morality (accessed January 3, 2015).
Kovac, S. 2014. Robots will soon be raising children in Japan. For real. Allparenting.com, July 3 (accessed January 3, 2015).
Kurzweil, R. 1999. The age of spiritual machines: When computers exceed human intelligence. New York: Penguin.
--- . 2005. The singularity is near: When humans transcend biology. New York: Viking.
Levy, D. 2008. Love and sex with robots: The evolution of human-robot relationships. New York: Harper.
Linke, C. 2013. Social robotic experience and media communication practices: An exploration on the emotional and ritualized human-technology-relations. intervalla 1: 49–59.
Marcus, G. 2015. Teaching robots to be moral. The New Yorker, March 12.
http://www.newyorker.com/tech/elements/teaching-robots-to-be-moral (accessed April 14, 2015).
McCutcheon, R.T. 1997. Manufacturing religion: The discourse on sui generis religion and the politics of nostalgia. New York: Oxford.
---. 2004. Religion, ire and dangerous things. Journal of the American Academy of Religion 72(1): 173–93.
Mercer, C. 2015. Whole brain emulation requires enhanced theology, and a “handmaiden.” Theology and science 13(2): 175–86.
Miyashita, T., T. Tajika, H. Ishiguro, K. Kogure, and N. Hagita. 2007. Haptic communication between humans and robots. Advanced Robotics 28: 525–36.
http://robots.stanford.edu/isrr-papers/draft/miyashita-final.pdf (accessed January 3, 2015).
Orrù, M. and A. Wang. 1992. Durkheim, religion, and Buddhism. Journal for the Scientific Study of Religion 31(1): 47–61.
Otto, R. 1950. The idea of the holy. Trans. J.W. Harvey, 2nd ed. New York: Oxford. (Orig. pub. 1917.)
Peters, T. 2015. Theologians testing transhumanism. Theology and Science 13(2): 130–49.
Rosemont, Jr., H. and H. Smith. 2008. Is there a universal grammar of religion? Chicago: Open Court.
Saler, B. 1999. Family resemblance and the definition of religion. Historical Reflections/Réflexions Historiques 25(3): 391–404.
Sandberg, A. 2015. Transhumanism and the meaning of life. In Religion and transhumanism: the unknown future of human enhancement, ed. C. Mercer and T. Trothen, 3–22. Santa Barbara, CA: Praeger.
Schweitzer, A. 2005. The quest of the historical Jesus. Trans. W. Montgomery. Mineola, NY: Dover. (Orig. pub. 1906.)
Seung, S. 2013. Connectome: How the brain’s wiring makes us who we are. New York: Mariner Books.
Sharkey, N. 2008. The ethical frontiers of robotics. Science 322(5909): 1800–1801, December 19. http://www.sciencemag.org/content/322/5909/1800.full (accessed April 14, 2015).
The social roboticist. 2012. Nature, August 15.
www.nature.com/nature/journal/v488/n7411/full/488280a.html (accessed February 15, 2015).
Stacey, K. 2014. Can robots learn right from wrong? News From Brown, May 8.
https://news.brown.edu/articles/2014/05/morality (accessed April 14, 2015).
Taylor, C. 2007. A secular age. Cambridge, MA: Harvard University Press.
Weber, M. 1946. Politics as a vocation. In From Max Weber, ed. and trans. H. Gerth and C.W. Mills, 77–128. London: Routledge and Kegan Paul. (Orig. pub. 1919.)
---. 1991. The sociology of religion. Trans. E. Fischoff. Boston: Beacon. (Orig. pub. 1922.)
The White House. n.d. The brain initiative.
http://www.whitehouse.gov/BRAIN (accessed April 13, 2015).
Will robot nannies save Japan's economy? 2013. Planet Money, July 19.
Wrede, W. 1971. The messianic secret. Trans. J.C.G. Greig. London: James Clarke & Co.