The Three Minds Argument

Jamie Cullen
Artificial Intelligence Laboratory, The University of New South Wales
jsc@cse.unsw.edu.au

Journal of Evolution and Technology - Vol. 20 Issue 1 - June 2009 - pgs 51-60
http://jetpress.org/v20/cullen.htm

Abstract

Searle (1980, 2002, 2006) has long
maintained a position that non-biologically based machines (including robots
and digital computers), no matter how intelligently they may appear to behave,
cannot achieve “intentionality” or “consciousness,” have a “mind,” and so
forth. Standard replies to Searle’s argument, as commonly cited by researchers
in Artificial Intelligence and related communities, are sometimes considered
unsatisfactory by readers outside of such fields. One possible reason for this
is that the Chinese Room Argument makes a strong appeal to some people’s
intuitions regarding “understanding” and necessary conditions for
consciousness. Rather than contradict any such intuitions or conditions, I
present what in my view is an independent and largely compatible intuition: If
Searle’s argument is sound, then surely a human placed under similar testing
conditions as a non-biological machine should succeed where a machine would
allegedly fail. The outcome is a new rebuttal to the Chinese Room that is
ideologically independent of one’s views on the necessary and sufficient
conditions for having a “mind.”

1 Introduction

Searle’s Chinese Room Argument (Searle 1980) is a well-known thought experiment directed against the claim that:

[The] appropriately programmed computer
really is a mind, in the sense that computers given the right programs can
literally be said to understand and have other cognitive states. (Searle
1980.)

The Chinese Room Argument has generated a substantial and continuing debate in the decades since its publication. In my opinion, a significant amount of this continued debate is caused by two inter-related factors: (a) some opponents of the argument respond with replies based on non-biological artifacts, which do not engage with Searle’s actual position; and (b) Searle’s position itself rests on an underlying belief, rarely made explicit, that presently undiscovered biological mechanisms are required for “consciousness” and related phenomena. The dual purpose of this
paper is to (a) draw explicit attention to the earlier mentioned underlying
belief, and (b) provide an alternative appeal to a commonsense intuition that
makes the sleight of hand underlying the Chinese Room more readily apparent,
whilst neither affirming nor contradicting the underlying belief. I hope that
the argument presented here is found to be intuitive by people both inside and
outside of the Artificial Intelligence research community. I will conclude the paper by discussing the implications
of the presented argument, and by re-examining the roles of the various
possible participants in the Chinese Room.

2 The Chinese Room

2.1 Overview

The essence of the Chinese Room Argument is as follows. Searle asks us to imagine that he is locked in a room and handed pieces of paper bearing Chinese symbols. Although he does not speak Chinese, he has access to a rule-book (in effect, a computer program) that tells him how to manipulate the symbols and which symbols to pass back out of the room. To the native Chinese speaker outside, the written replies appear to come from someone who understands Chinese. Searle argues that since he, inside the room, does not understand the conversation, a digital computer executing the same program would not understand it either, and hence that running a program is not sufficient for “understanding,” “intentionality,” or a “mind.”

While it is beyond the scope of this paper
to attempt a full summary of the debate surrounding the Chinese Room (such a
summary would be extremely difficult to accomplish in a normal journal-sized
article), I will provide a brief description of two of the better-known replies: the Systems Reply and the Robot Reply (Cole 2004).2

In the Systems Reply it is argued that, while the
person in the room (Searle) might not understand Chinese, the system as a whole
understands. The fact that Searle himself does not understand Chinese is
irrelevant in the same sense that an individual section of a human brain (let’s
assume a part of the brain not typically associated with linguistic ability)
might not understand Chinese when considered in isolation. However, the brain, when considered as a whole system, does “understand.” Searle’s reply to this
line of reasoning is to ask us to imagine that he (in the room) now memorizes
the rule-book and keeps track of everything in his own head (Searle 1980). He
argues that, since the only key component of the system is effectively himself,
and since he still would not “understand” Chinese in any conventional sense
(Searle is apparently considering his own English-speaking personal viewpoint
in this case), the fact that the system as a whole appears to “understand”
Chinese proves nothing.

In the Robot Reply we are asked to imagine
that the “rule-book” (computer program) is placed into a robot that can
interact with its environment in a sensorimotor sense. This might allow the symbols to be connected with real sensorimotor experience. Searle’s reply is
to ask us to still consider the person in the room, but to ask us to imagine
that some of the inputs to the room are received directly from a camera mounted
on a robot, and that some of the outputs are used to actuate the robot’s limbs.
He then argues that Searle (in the room) still does not know what those symbols
“mean,” regardless of their source. He is simply following the rules of the
rule-book (Searle 1980).

2.2 A Two-Participant Turing Test in Chinese

We might consider the Chinese Room Argument to be a
kind of “Two-Participant Turing Test.” In the original “Turing Test” (also
known as the Imitation Game), we are asked to imagine a game of three
participants: an Interrogator, a Human player, and a Machine player. The three
participants are all physically separated, cannot see one another, and may
communicate only through a textual communication facility (e.g. by “typing”
textual messages to one another). The Interrogator’s task in the game is to
attempt to correctly distinguish the Human from the Machine using only the communication
in the game (Turing 1950).

To form the Chinese Room variant of the Turing Test,
we remove the Human player, and change the language of testing to Chinese. The
Chinese Speaker outside takes on the role of Turing’s original “Interrogator,”
and the “Something in the Room” takes on the role of the Machine. The Chinese
Room Argument then looks inside the Machine player to attempt to show that the
Machine does not really “understand,” even if it can fool the Interrogator into believing that it is human over a channel of linguistic communication. Searle then takes on
various roles within the Machine in an attempt to show that regardless of an
allegedly successful conversation between Machine and Interrogator, he still
does not “understand” the conversation. His underlying (perhaps implicit)
assumption is that if he (in the room) does not “understand,” or cannot
identify exactly “where” or “how” “understanding” is taking place, then
“understanding” must not be taking place. In Searle’s words:

... formal symbol manipulations by
themselves don’t have any intentionality; they are quite meaningless; they
aren’t even symbol manipulations; since the symbols don’t symbolize anything
... The aim of the Chinese room example was to try to show this by showing that
as soon as we put something into the system that really does have
intentionality (a man), and we program him with the formal program, you can see
that the formal program carries no additional intentionality. It adds
nothing, for example, to a man’s ability to understand Chinese. (Searle
1980; emphasis mine.)

3 Presenting the underlying ideology: Élan vitale in the Chinese Room

Searle’s argument is sometimes over-broadened (perhaps
misconstrued) as a rebuttal of the concept of man-made intelligence in general.
That is, the argument is seen as an attack on Artificial Intelligence in the
broader sense of something artificially constructed by human beings, rather
than more familiar artifacts such as robots, digital computers, analog
computers, Universal Turing Machines, etc. However, Searle’s argument does not
seem so broad as to include all kinds of man-made artifacts. Indeed, he states:

Assuming it is possible to produce
artificially a machine with a nervous system, neurons with axons and dendrites,
and all the rest of it, sufficiently like ours, again the answer to the
question (“Could an artifact, a man-made machine, think?”) seems to be
obviously, yes. (Searle 1980.)

More recently he has stated: “There is no question that machines can think, because human and animal brains are precisely such machines.” Where Searle’s views apparently diverge from those of many researchers in AI and related fields is on the question of whether “consciousness” could ever be produced in a non-biological medium. In Searle’s words:

Could we produce consciousness
artificially in some medium other than carbon-based organic compounds? The
short answer to that question is that we just do not know at present. Since we
do not know how the brain does it, we do not know what sorts of chemical
devices are necessary for its production. Perhaps it will turn out that
consciousness can only be produced using electro-chemical phenomena, but does
not require organic compounds. It might be that certain types of organic
compounds in certain conditions are causally sufficient without it being the
case that they are causally necessary. At present we just do not know.

In my opinion, Searle’s quest for a biological causal agent of consciousness, or for “Neural Correlates of Consciousness,” may be characterized as a modern reformulation of a significantly older idea: Élan vitale. Élan vitale is a term attributed to the philosopher
Henri Bergson from his book Creative Evolution, first published in the
early 20th century (Bergson 1911). Élan vitale (normally translated from
French into English as “vital force” or “vital impetus”) is a hypothetical
explanation for the development of organisms and evolution, which Bergson
linked closely to consciousness.3 It is essentially an elusive
mystical force that acts on inanimate matter and gives rise to life and
consciousness. The overall notion is very similar to the more modern notion of an as-yet-undiscovered biological mechanism, or “Neural Correlate of Consciousness,” that gives rise to conscious phenomena.

4 An ideological spectrum

I find it useful to visualize Élan vitale as
lying on an ideological spectrum of ideas regarding necessary and sufficient
conditions for possessing a “mind.” On the left side we might place more
liberal and inclusive notions regarding “minds,” particularly those related to
structure or strictly information-based ideas. Walking from left to right we
progressively add stricter requirements, moving towards more anthropocentric
notions of mental phenomena. As the spectrum is deliberately ideological in
nature there will naturally be strong disagreements between proponents of any
one ideology about which requirements are necessary, sufficient, or even real.
At the extreme left, we might imagine that suitably arranged matter shaped into
an appropriate structure could produce “consciousness” or a “mind.” Near the
left end of the spectrum, we might place ideas such as the Physical Symbol
System hypothesis (Newell and Simon 1963). In these and related ideas,
“consciousness” is probably considered as an illusory notion or perhaps an
emergent property of appropriately organized matter.

Walking towards the right we might add
embodiment (both physical and virtual) to our list: i.e., a computer might not
be capable of having a “mind,” but a computer hooked up to sensorimotor
apparatus, such as an epigenetic robot, or virtually embodied in a simulator,
might fit the bill. Slightly further to the right, we might remove the virtual
requirement, and allow only physically embodied machines. At the extreme right
of the spectrum we might place the dualistic notion that only a human being
with a “soul” may possess a “mind.” A position shortly before Cartesian Dualism seems appropriate for the notion of Élan vitale, or for the search for presently undiscovered biological causes of consciousness.

5 Embracing the underlying ideology: Applying the Chinese Room to a human being

Part of the difficulty in artifact-related replies (which include many of the popular replies to the Chinese Room, such as the Systems Reply and the Robot Reply) is that they do not engage with this underlying belief: for someone who holds it, no demonstration involving a non-biological artifact will ever be persuasive. While non-biological artifact-based arguments may
never convince Searle to abandon his apparent belief in Élan vitale, one
may choose to frame the debate in a different way in order to see more clearly
if his argument (rather than any specific ideology) really does hold up. As
Searle’s argument appears to hinge on whether or not Searle (in the room)
“understands” the Chinese conversation taking place, the question I ask is
relatively simple: What would happen if we applied the Chinese Room test
criteria to a human being? Would Searle in the room be able to understand the
conversation? Such a framing of the debate
removes the ambiguity and confusion that may result from talking about
non-biological artifacts, or getting caught up in discussions about what
may/may not be capable of “intentionality,” “minds,” “consciousness,” etc.
Rather than going around and around on an ideological Ferris Wheel debating
what can/cannot have “consciousness,” we instead choose to test something
(actually “someone” in this case) who, by definition, must have it, and see
what the test says about them and their ability to have a mind. Let us imagine what might happen if we placed a human
being under the same test conditions under which Searle chooses to test an artifact.
From there we can examine whether Searle in the room might be able to
understand Chinese, and consequently whether or not the Chinese Room is structurally
valid from this perspective. If the Chinese Room test cannot be passed by a
human being, then what hope might an artifact have?

6 The Three Minds Argument

Imagine a modified form of the Chinese Room in which the “Something in the Room” is not a rule-book or other man-made artifact, but a human being: a native Chinese speaker, whom we will call Nao, whose living brain is kept in a jar inside the room. Outside the room is another native Chinese speaker, Li, who passes written Chinese messages into the room and receives written Chinese replies.

To expand on the story above, we might further imagine
Searle in the room as the full-time caregiver for Nao (a tough job, but
presumably far better than being stuck in a jar). Searle’s job is both to input
the symbols into Nao’s jar, and to interpret the outputs from the jar as
written symbols and put them outside the room. As this is a thought experiment,
let us imagine Nao has sensors on his jar such that he can receive symbols in
Chinese via Searle through the pathways that used to correspond to sensory
input (perhaps the visual pathways) and understand them. Nao also has a similar
ability for outputting written symbols, say using some neural connections
formerly used for motor action. These simple sensors and actuators take on a
similar role to the pencil and paper in the rule-book metaphor, in that they
facilitate the updating of state and output of conversational symbols from the
brain in the jar. Searle’s job is still to pass the symbols back and forward
from the outside of the room to Nao, and back again.

For the sake of argument, we might even imagine that
Searle also has the job of maintaining the life support system of Nao’s jar. He
repeatedly pumps a foot pedal to power the mechanism that is keeping Nao alive,
along with other life-support duties (maintaining the flow of oxygenated blood to the brain, etc.). That is, without Searle around, Nao would die, and he
wouldn’t be able to continue a conversation with Li. It should be apparent that
Nao is entirely dependent on Searle in order to continue in the conversation
with Li. One might imagine the room as a kind of hospital room instead of a
nondescript “Chinese Room.” Note that Searle’s job in the original Chinese Room is closely analogous: there, too, he mechanically receives symbols, performs the supporting work dictated by the rule-book, and passes symbols back out, without understanding what any of them mean. The intent behind the Three Minds Argument is to ensure that every participant in the experiment is something Searle must concede possesses a “mind”: Li, Nao, and Searle himself.

Now that we have three “guaranteed” minds in the
experiment, a reasonable question to ask is this: Would Searle in the room understand the conversation that takes place
between Nao and Li? The obvious answer appears to
be “No.” Li and Nao would converse (albeit awkwardly) in Chinese, while Searle
would in a sense facilitate, but still not understand the conversation
occurring in a language foreign to him.

7 So who is communicating with whom?

The intuition that Searle still would not understand
Chinese, even if a human was used in the room in place of an artifact, is a key
component of the Three Minds Argument. It seems intuitive that Searle in the
room would not understand the conversation as he normally does not speak
Chinese, and is actually not part of the direct conversation. However, Searle’s
argument for an artifact not being able to have a “mind,” etc. hinges upon
precisely the same point. It seems that in both the original Chinese Room and the Three Minds variant, Searle’s failure to understand has the same mundane explanation: he does not speak Chinese, and he is not a genuine participant in the conversation.

As Pinker (1997) points out, we often view human
beings in different ways in different contexts. For example, in the context of a
serious moral dilemma, it makes sense to view a human being as a living,
thinking entity with emotions and a mind. In the context of an airplane engineer
designing a plane, the passengers are viewed as weight, little more. In the
context of using a wrench, the human is little more than a torque-applying tool. I posit that in the context of the Chinese Room, a similar shift of viewpoint is appropriate. Searle in the room is still a
human being; however, his role in the conversation is more that of a conduit of
information, rather than a conversational participant. In my view, such a
reframing of the situation draws attention to the underlying sleight of hand in
the Chinese Room: assuming any communication is actually possible, it would be
occurring between Li and Nao, not Li and Searle. As Searle observes, it is
perhaps intuitive that in his role in the Chinese Room he would not
“understand” the conversation. However, this is because he does not speak
Chinese, and is not actively participating in the conversation. In my view, the
“commonsense” intuition that Searle in the room would not “understand” is
actually a valid and correct intuition. However, Searle’s inference as to the
reason behind this lack of “understanding” is not. When we start with something that Searle
presumably must concede has “intentionality” (a person), Searle in the room still
cannot understand the conversation. The properties that the artifact from the
original Chinese Room might possess become largely irrelevant when viewed in
this context: the question to ask in the Three Minds Argument is not what the entity under test is made of, but whether Searle in the room understands the conversation.

8 Possible Objections

8.1 Missing Embodiment Reply

One concern that might be raised is that Nao appears
to have lost conventional human sensorimotor embodiment himself. It might then
be argued that Nao has now consequently also “lost” some abilities such as
“intentionality.” In response, we could imagine a variant of the Three Minds Argument in which Nao retains a conventional human body and sensorimotor apparatus, with Searle still acting only as the conduit for the written Chinese symbols passing into and out of the room. Note that this particular reformulation of the experiment does not alter the outcome: Searle would still not understand the conversation between Nao and Li.

8.2 Intuition Pump Reply

In the pejorative sense of
the term, “intuition pumps” are philosophical arguments that essentially
mislead us into accepting an idea by making an appeal to our “commonsense”
intuitions, while helping us to conveniently overlook important details.
Indeed, the concept of the intuition pump has been used before to criticize the
Chinese Room (Dennett 1991). It might also be argued that the Three Minds
Argument is an intuition pump as it too makes a strong appeal to intuition.
While I have no objection to the claim that the Three Minds Argument makes an
appeal to intuition, not all appeals to intuition are necessarily incorrect or
misleading: just because something may be intuitive doesn’t mean it is wrong.
A claim that an argument is an intuition pump clearly requires a strong
supporting argument pointing out where and how the argument might be
misleading, if such a criticism is to hold up to scrutiny.

8.3 Isn’t this just the Systems Reply?

In the Systems Reply, Searle in the room is essentially considered as a component in a larger machine:
while Searle in the room does not “understand,” the system as a whole does. The
key distinction between the Systems Reply and the Three Minds Argument is that
the Systems Reply is described with the assumption that a non-biological
artifact is being tested. For someone ideologically orientated towards the
right hand side of the described “ideological spectrum,” these arguments may
still appear as “obviously” flawed, due to the source material simply not
having the “right” properties for consciousness. For a person more inclined
towards the left side of the spectrum, such a concern seems more likely to be a
non-issue. The Three Minds Argument attempts to sidestep such concerns by
simply not relying on a non-biological artifact. If the test is to be a fair
one, it should be structurally valid for people as well as for machines.

8.4 Didn’t you put the “intentionality” back in?

Part of Searle’s original argument is that he believes
that human beings have “intentionality.” As a computer can’t have
“intentionality” (by Searle’s reasoning), the test cannot be passed. Now that
we’ve effectively “put the intentionality back in” by replacing the artifact
with a human being, Searle might simply agree that the test can now be passed,
as the system now has “intentionality” (or some other quality that we might
believe that a human being possesses that an artifact does not). This response
misses the point of my argument: In my view, “intentionality” and related
issues are “red herrings” in the Chinese Room. The key is not what special properties
the “mind” being tested might possess; it is about Searle in the room and his
“understanding” of the conversation. The Three Minds Argument attempts to show
that Searle still doesn’t understand the conversation irrespective of what
properties the entity under examination might possess. When we can see that
even a person cannot pass Searle’s test, the structural flaw in the Chinese
Room becomes clear.

9 Conclusion

Searle (1980, 2002, 2006) has long maintained a
position that non-biologically based machines (including robots and digital
computers), no matter how intelligently they may appear to behave, cannot
achieve “intentionality” or “consciousness,” or have a “mind.” His position is
apparently rooted in an underlying belief that presently undiscovered
biological mechanisms give rise to such phenomena. I suggest that Searle’s
search for “Neural Correlates of Consciousness” may be characterized as a
modern reformulation of a significantly older idea, normally going by the name
of Élan vitale. The Physical Symbol System Hypothesis and Élan vitale
can be imagined as occupying roughly opposed positions on a spectrum of
ideologies related to necessary and sufficient conditions for “minds.” In his original argument, Searle tests a non-biological artifact against his own, largely implicit, requirements for a “mind,” and unsurprisingly finds that it falls short.

In the Three Minds Argument, I attempt to sidestep
the problem of arguing for or against specific ideologies regarding the “mind”
and take a different approach. The man-made artifact described in the original Chinese Room is replaced with a human being who must, by Searle’s own standards, possess “intentionality” and a “mind”; yet Searle in the room still does not understand the conversation.

The overall intent of this paper is neither to
critique nor to question any specific beliefs about consciousness, minds, and
so forth, but instead to demonstrate that the Chinese Room does not prove
anything about such things. In the meantime, the question of whether or not
Artificial General Intelligence is possible remains both open and provocative.

Acknowledgments

Thanks to Alan Blair and Peter Slezak for their
feedback on earlier drafts of this paper.

Notes

1. I have deliberately avoided using Searle’s term “Strong
Artificial Intelligence” here, as I believe that confusion around what this
term signifies has muddied the Chinese Room debate. I have instead opted for a
more modern term, which I believe better reflects the type of research at which
Searle’s argument is normally directed today, irrespective of Searle’s original
intent.

2. Readers may wish to consult Cole (2004) for a more comprehensive overview of the Chinese Room debate and the many replies to it.

3. Élan vitale has also been described as a
possible inspiration for the fantasy concept of “The Force,” as represented in
the 1977 movie Star Wars (incidentally, released a few years before
publication of the Chinese Room Argument).

4. I suspect that most reformulations of the Chinese Room along these lines would lead to a similar outcome.

References

Bergson, H. 1911. Creative evolution.
Translated by A. Mitchell. Henry Holt and Company.

Cole, D. 2004. The Chinese Room Argument. The Stanford Encyclopedia of Philosophy (Summer 2004 Edition), ed. E. Zalta. http://plato.stanford.edu/archives/sum2004/entries/chinese-room/

Dennett, D. 1991. Consciousness explained. Penguin Books.

Newell, A., and H. Simon. 1963.

Pinker, S. 1997. How the mind works. Penguin Books.

Searle, J. 1980. Minds, brains, and programs. The Behavioral and Brain Sciences 3 (3): 417-57.

Searle, J. 2006. Talk given at Towards a Science of Consciousness (conference). Philosopher’s Zone, ABC National Podcast. http://www.abc.net.au/rn/philosopherszone/index/date2006.htm (Retrieved: 2nd May 2009).

Turing, A. 1950. Computing machinery and intelligence. Mind 59: 433–60.