Friday, June 10, 2011

Can Stupid be Smart?

I'm going to talk about something that tries to pass itself off as philosophy. It's called "Philosophy of Mind" but don't let the highfalutin name fool you. In the wrong hands this "Philosophy of Mind" should be called "Philosophy of Never Mind."

The topic today is John Searle's Chinese Room thought "experiment." It's supposed to tell us why computers can never truly understand things the way people do. Searle may be correct about that. Computers don't yet understand anything in the same way you or I do. I'm fairly confident of that. But Searle thinks he can reach into the future and practically guarantee that computers never will understand. That seems overly bold to me.

The following is a slightly simplified version of what Searle wants us to believe about "mind." [1]


You are a prisoner locked in a room. You have been given a bunch of cards with strange writings. Unknown to you, these are Chinese words and symbols. You have also been given a list of instructions. These instructions tell you that another series of cards will be shoved through a slot in the door. These will have symbols printed on them too. Your job is to follow further instructions, given to you in plain English, which tell you how to match the new cards with the old cards and create a third stack of cards, which you will shove back through the slot in the door.

You do your job well and one day they give you a pardon and a diploma because you have proven you are a master of Chinese Philosophy with special merit in the thought of Confucius. Unknown to you this has not been a waste of your time. A committee of eminent Chinese philosophers has been shoving questions through the slot. Your responses were brilliant if somewhat formal and, at times, insulting.

So do you deserve this honor? Do you know anything about Confucius or even Chinese? Of course not. Searle thinks this is significant. He thinks we have proven that computers can fool us. They may appear to understand even though they do not understand. He claims this is the case because you didn't understand anything about what you were doing in that room or why you were doing it. You were simply following a list of instructions -- that is, you were executing a program. And even though you executed it well, you still understood nothing.

Searle, pretending to be that prisoner, thinks his thought "experiment" does the following: "I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program."

Right away Searle makes a fatal mistake, and with this mistake nothing else he concludes will necessarily be true. His first statement is mostly correct. He does produce the answers by manipulating symbols, symbols that he is clueless about. He collects symbols. He compares the symbols. He moves them around. He sends symbols somewhere else and he's done. These are discrete steps he executes according to the instructions he's been given. In this case he is the equivalent of a computer's Central Processing Unit (CPU). The CPU in any computer is a very simple device. Its basic functions can be reduced to: comparing, jumping, adding or subtracting, performing logical "and" or "or" operations, and moving values around. Actually it can be reduced to something even simpler than that. So let's grant that the man locked in the room is functionally the CPU.

This is the sentence where Searle goes wrong: "I am simply an instantiation of the computer program."

No, he is not. He is the CPU. He is not the program. There is no requirement that the CPU understands the program, and in fact it never does. It doesn't know what the next instruction is. It doesn't even remember the previous instruction. It's dumb as dumb can be. Any computer scientist knows the difference between a CPU and a program. It's the difference between hardware and software. Let's say it's the difference between a car and a driver. So Searle fails to understand the thing that he claims to model. He says, "I can have any formal program you like, but I still understand nothing." Of course. He's the hardware in his "experiment," not the program. He's the car, not the driver. He's not expected to understand. He says "in the Chinese case the computer is me." But he's confusing the man in the room with the room itself and the instructions (program) in the room that he faithfully executes. He is one small part of the system. He is not the system itself.
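To make the division of labor concrete, here is a tiny sketch in Python. It is my own illustration, not anything from Searle's paper: the rule book stands in for the program, and the lookup function stands in for the man (or the CPU). The names and symbols are made up for the example.

    # A toy sketch of the division of labor (my own illustration; the names
    # and rules below are invented, not anything from Searle's paper).
    # The "program" is a rule book pairing incoming symbols with outgoing
    # symbols. Any "knowledge of Chinese" the room has lives here.
    RULE_BOOK = {
        ("□", "◇"): ("△", "○"),
        ("○", "□"): ("◇", "△"),
    }

    def rule_follower(incoming_cards):
        """The man in the room (or a CPU): compare, look up, move symbols.

        Nothing in this function requires knowing what any symbol means;
        it only matches shapes against the rule book and copies the result.
        """
        return list(RULE_BOOK.get(tuple(incoming_cards), ()))

    # The loop that executes the rules understands nothing about Confucius.
    # Whatever "understanding" the room shows belongs to the whole system
    # (rule book plus follower), not to the part doing the matching.
    print(rule_follower(["□", "◇"]))   # prints ['△', '○']

Nothing hangs on the details. The point is only that the rules and the thing that blindly executes them are different components, and the executor can be as dumb as you like.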

Searle commits the fallacy of division. A computer executing a program is a system. Not every part of a system contains every attribute of the system. Consider the system named Angelina Jolie. The system is sexy by some standards. Yet is every component of that system sexy? Is that nose hair sexy? Is that liver sexy? Consider a mechanical watch. It keeps good time. Does a single gear in that watch keep good time? Does the spring, by itself, keep good time?

The question in my mind is how this "experiment" could have generated such interest in the first place. It's so deeply flawed that it should have been shrugged off as the musings of an untrained mind.

Nevertheless, Searle continues his errors: "we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding." But we can see no such thing, because Searle has committed us to "seeing" from the limited perspective of the man-as-CPU. He is trying to avoid seeing from the system level. If his "experiment" does what he claims, if it really does fool Chinese philosophers into thinking the room understands Chinese, then it could very well be that the system does understand even though the prisoner does not. The program and all of its hardware understand; the man is simply one part that enables that understanding.

Perhaps Searle is confused -- or hopes to confuse -- because he knows men can think. Men can understand. So when he puts a man in the role of a dumb CPU, we naturally look at this "experiment" from the man's point of view. But in order to properly evaluate the "experiment" we need to look at the whole system. It's much more difficult to see things from the program's point of view. It may be impossible to construct a thought "experiment" from the program's perspective.

We can easily dismiss this next assertion: "One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same -- or perhaps more of the same -- as what I was doing in manipulating the Chinese symbols." This is definitely not the claim made by supporters of strong AI. Artificial Intelligence supporters are not as confused as Searle. They know the difference between a CPU and the system as a whole.

Searle goes further, claiming "not the slightest reason has so far been given to believe" that "a program may be part of the story." Why? Because his example suggests "that the computer program is simply irrelevant to my understanding." Yes, it's true that the program is irrelevant to the prisoner's understanding. But since the man-as-CPU is not the whole system, there is simply no need for him to understand a thing. Searle keeps making the same mistake: "I have everything that artificial intelligence can put into me by way of a program, and I understand nothing." Of course. A CPU is not designed to understand.

Let's assume Newton understood calculus since he invented it. Should we expect to yank a neuron out of his brain and demand it understand calculus? That's the standard Searle expects of AI.

Searle notes that if the man-as-CPU were passed symbols in English instead of Chinese, he would understand what was going on. This is significant to Searle, but it has no significance whatsoever. Man-as-CPU is allowed to understand. He is a man, after all. But it simply does not follow that he must understand. As most employees learn, one can follow orders whether or not the orders are understood.

Apparently this systems objection was brought to Searle's attention prior to publishing. A reasonable person would have admitted, "Well maybe I haven't thought this thing through." But not Searle. He pulls the old bait and switch. He recasts his "experiment" to this: "let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori ["it's even more likely"] neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him."

Got that? That, my friends, is intellectual dishonesty. He's asking us to imagine a man who learns Chinese. But this is no ordinary Chinese speaker. This is a man who knows Chinese but he really doesn't know Chinese. Searle is asking us to start with a contradiction.

Why didn't Searle simply begin with this contradictory scenario? Why mention the Chinese Room at all? That's easy to see. Because nobody with a brain would have read past the first paragraph of such a silly proposal. This is what Searle now expects you to believe:

The "But I Don't Understand" Defense

Suppose you are a traffic cop. You pull over a Toyota for speeding. Inside is a nice-looking Chinese couple. The passenger, a woman, appears to be pregnant. You address the driver:

"Sir, I clocked you at 95 mph."

"My wife is in labor, officer." The officer makes eye contact with her. She screams!

"Yeah, right. I've heard that one before."

"Contractions are two minutes apart!"

"Not buying. You were going 95 in a 45 mph zone. Hand over your driver's license. And turn off your engine."

"Please escort us to the hospital, officer!"

"Hands off the wheel! Turn off that engine!"

The woman screams again and the man takes off.

You, as an officer of the law, follow them to the hospital. The woman runs inside but you wrestle the man to the ground.

"My wife really is having a baby, officer!"

"Tell the judge."

As you're writing your report you learn the woman had a "false alarm." Maybe the baby will arrive next week.

A month later you're in court. The man pleads his case.

"I'm innocent, Your Honor. I admit I was going a little fast but I'm innocent of all those other charges dealing with refusal to obey an officer of the law and the flight and stuff. I simply didn't understand one word the officer was saying."

"Why didn't you understand?" asks the judge.

"Truth is I don't understand a word of English. Not one word."

"You seem to understand now."

"No, I don't."

"I don't understand."

"Me neither."

"What don't you understand?"

"None of this. None of this conversation and none of the commands given to me by the officer."

"Do you want me to cite you for contempt of court too?"

"No, sir. My contempt is not for the court. My contempt is for those who naively believe that just because English words come out of my mouth that I understand anything I say. I'm just a mouthpiece."

"A mouthpiece? Whose mouthpiece?"

"I don't know that either."

"Are you insane?"

"Would that make me innocent?"

"Is that your defense?"

"No. I simply don't understand English."

"How do you expect me to believe that?"

"Because someone programed me. They forced me to memorize the rules of the language. They forced me to memorize all the words. They forced me to perform billions of calculations in my head in order to spit out the proper responses to any English question. But I know nothing about what it all means."

"If that were the case I could ask the right question and you wouldn't have a response."

"I have an infinite number of responses so that wouldn't work."

"If you have an infinite number of responses then it's not reasonable of you to expect me to believe you have not learned English. So shut up before you get yourself into deeper trouble."


That is what Searle expects of us. In order to accept his thought "experiment" we must believe the irrational. His original Chinese Room now fits neatly inside one brain. And we are left with the bizarre assertion that a man who speaks a language well enough to fool experts might not really understand a word of it.

This is, at bottom, a lame response to the Turing test, and that is the real issue. Searle merely begs the question when he claims he knows a guy who understands nothing yet can pass the Turing test. He should be honest and simply say he doesn't believe the Turing test is a valid test. He should dispense with his idiotic "experiment" because it reduces to that simple assertion anyway. Yet Searle has the audacity to claim that systems objections such as mine beg the question. That's nonsense. Searle has not shown why a person or machine that converses quite normally in English should ever be suspected of not understanding.

Then Searle gets really irrational. "If we are to conclude that there must be cognition in me on the grounds that I have a certain sort of input and output and a program in between, then it looks like all sorts of noncognitive subsystems are going to turn out to be cognitive. For example, there is a level of description at which my stomach does information processing, and it instantiates any number of computer programs, but I take it we do not want to say that it has any understanding."

I know of nobody who claims a stomach understands Chinese or acts like it understands Chinese. I doubt anybody seriously claims that *any* system that has inputs and outputs and a program in between is a cognitive system. If such a person exists, he is as silly as Searle. This cognitive stomach is a straw man. Because Searle misleads himself into thinking a person could converse fluently in Chinese yet not understand a word, he can reach the absurd conclusion that food and food waste can be information. As they say, garbage in, garbage out.

There are billions of computers in the world. Virtually all of them input data, process that data, and output data. Yet very few AI enthusiasts would claim any of those real systems is cognitive. So Searle is dreaming up true believers who, if they exist at all, are insignificant. No computer system as of today "understands" what it is doing. The question is whether a computer system can someday be designed that does understand what it is doing.

Searle has proven nothing other than the fact that his sort of "philosophy" will get us nowhere.

Notes:

[1] A copy of the original and a more easily read reprint.