The Chinese Room – introduction

August 16, 2011

What’s the Chinese Room Argument?

I’ve blogged about the Turing Test several times before; Alan Turing held that if a program were indistinguishable from a human mind in every manner of interaction, then it could be considered “conscious,” whatever that means. This position is formally known as “Strong AI” – in other words, hardware is inessential to the working of a mind; cognitive states can be implemented just as well on computers as on human brains.

For many, this is a troubling stance. It is difficult to argue that computers will never come to pass the Turing Test, but there is a position which holds that computers can only simulate thought, never demonstrate actual understanding – “weak AI.” Weak AI draws a line between simulating thought and actually thinking: thought simulation is the manipulation of abstract symbols to produce output indistinguishable from that of a human, whereas actual thought consists of mental states, connects syntax with semantics – a word such as “tree” is tied to sensory experience and memory – and involves genuine understanding.

John R. Searle formulated an impressive argument for weak AI known as the “Chinese Room argument.” Here, we are asked to imagine that the Turing Test exchange is conducted in Chinese. Instead of a computer running the program, we have an English speaker who does not understand Chinese. His job is to receive input in the form of Chinese characters, then follow rules in a book (the program) for manipulating symbols in other books (the databases) in order to produce Chinese characters as output. The man doing the manipulations does not understand Chinese, and the program does not enable him to; hence the program does not “understand Chinese.” Strong AI is thus false, because running a program cannot by itself create understanding.
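To make the setup concrete, here is a toy sketch – purely illustrative, not anything Searle actually describes – of what symbol manipulation without understanding amounts to. The rule table and example phrases are invented; the program does nothing but match the shapes of the input characters against stored entries.

```python
# A toy sketch of the room's "program": a purely syntactic lookup of
# input symbols against hand-written rules. The rule table and example
# phrases below are invented for illustration; nothing here understands
# Chinese -- the code only matches and emits character sequences.

RULE_BOOK = {
    "你好吗": "我很好，谢谢",        # "How are you?" -> "I'm fine, thanks"
    "你会说中文吗": "当然会",        # "Do you speak Chinese?" -> "Of course"
}

def chinese_room(input_symbols: str) -> str:
    """Produce output symbols by rule lookup alone, attaching no meaning to them."""
    # Like the man in the room, we only compare the shapes of the symbols
    # to entries in the book and copy out whatever the book dictates.
    return RULE_BOOK.get(input_symbols, "请再说一遍")  # fallback: "Please say that again"

print(chinese_room("你好吗"))  # a fluent-looking reply, produced with zero understanding
```

However elaborate such a rule book becomes, Searle’s point is that it never attaches meaning to the symbols it shuffles.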

In “Minds, Brains, and Programs,” the essay in which Searle introduces this thought experiment, he presents and responds to a number of counterarguments. A few of them are listed here –

  1. The Systems argument – although the man himself does not understand Chinese, perhaps the system as a whole (man, rule book, and databases together) does in fact possess understanding. Searle deftly strikes this down; if the man were to memorize the program and the databases, he would have internalized the entire system, yet he would still not understand Chinese.
  2. The Robot argument – If we created a robot with the program running as its brain, able to take in input via cameras and sensors and act accordingly, then surely it would show understanding of the world. Searle refutes this as well, arguing that there is no qualitative difference between the original program and the robot: to the man inside, the sensor readings are just more uninterpreted symbols.
  3. The “Many Mansions” argument – “Your whole argument presupposes that AI is only about analog and digital computers. But that happens to be the present state of technology.” Searle pokes fun at his opponents here, noting correctly that the “Strong AI” stance is supposed to be hardware-independent, such that any computational device should support intelligence if programmed correctly.

And so by introducing straw men and striking them down easily, Searle apparently solidifies his position. Or does he? Daniel Dennett provides a far more convincing counter-argument, which I’ll introduce in my next post.


Computer Science and Philosophy: The Most Human Human

July 8, 2011

I recently finished a fascinating book by Brian Christian, The Most Human Human. Christian took part in the 2009 “Loebner Prize,” an annual “official” Turing Test in which prizes are given to the program that fools the most judges into thinking it is human (or that elicits, on average, the lowest confidence from the judges that it is a machine). Christian did in fact win the “Most Human Human” award – the counterpart prize given to the human confederate whom the judges rate as most convincingly human – and to sum up some of his insights on how far artificial intelligence has come in terms of holding a conversation, I crafted a short Citizen’s Guide to Judging whether it is a Computer or Not:
