Friday, June 10, 2011

Can Stupid be Smart?

I'm going to talk about something that tries to pass itself off as philosophy. It's called "Philosophy of Mind" but don't let the highfalutin name fool you. In the wrong hands this "Philosophy of Mind" should be called "Philosophy of Never Mind."

The topic today is John Searle's Chinese room thought "experiment." It's supposed to tell us why computers can never truly understand things the way people understand things. Searle may be correct about that. Computers don't yet understand anything in the same way you or I do. I'm fairly confident of that. But Searle reaches into the future and practically guarantees that computers never will understand. That seems overly bold to me.

The following is a slightly simplified version of what Searle wants us to believe about "mind." [1]


You are a prisoner locked in a room. You have been given a bunch of cards with strange writings. Unknown to you these are Chinese words and symbols. You have also been given a list of instructions. These instructions tell you that another series of cards is going to be shoved through a slot in the door. These will have symbols printed on them too. Your job is to follow further instructions (in plain English) which tell you how to match the new cards with the old cards and create a third stack of cards which you will shove back through the slot in the door.

You do your job well and one day they give you a pardon and a diploma because you have proven you are a master of Chinese Philosophy with special merit in the thought of Confucius. Unknown to you this has not been a waste of your time. A committee of eminent Chinese philosophers has been shoving questions through the slot. Your responses were brilliant if somewhat formal and, at times, insulting.

So do you deserve this honor? Do you know anything about Confucius or even Chinese? Of course not. Searle thinks this is significant. He thinks we have proven that computers can fool us. They may appear to understand even though they do not understand. He claims this is the case because you didn't understand anything about what you were doing in that room or why you were doing it. You were simply following a list of instructions -- that is, you were executing a program. And even though you executed it well, you still understood nothing.
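For the programmers out there, the prisoner's job can be sketched in a few lines. This is only a toy illustration of blind rule-following -- the rule table and symbols below are invented -- but the point is that nothing in it interprets what any symbol means:

# Toy sketch of the prisoner's rule-following (invented rules and symbols).
# The matcher compares incoming tokens against its rule book and hands back
# whatever the rules dictate. No step interprets what any token means.
RULE_BOOK = {
    ("squiggle", "squoggle"): "splotch",
    ("squoggle", "blot"): "squiggle",
}

def answer(card_from_slot, card_from_pile):
    # compare, look up, and pass a card back out; nothing more
    return RULE_BOOK.get((card_from_slot, card_from_pile), "blank card")

print(answer("squiggle", "squoggle"))   # emits "splotch" with zero comprehension

A real Chinese-passing rule book would have to be astronomically larger, but the prisoner's role would be no different.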

Searle, pretending to be that prisoner, thinks his thought "experiment" does the following: "I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program."

Right away Searle makes a fatal mistake. And because of this mistake, nothing else he concludes is necessarily true. His first statement is mostly correct. He does produce the answers by manipulating symbols, symbols that he is clueless about. He collects symbols. He compares the symbols. He moves them around. He sends symbols somewhere else and he's done. These are discrete steps he executes according to the instructions he's been given. In this case he is the equivalent of a computer's Central Processing Unit (CPU). The CPU in all computers is a very simple device. Its basic functions can be reduced to: comparing, jumping, adding or subtracting, logical "and" or "or" operations, and moving values around. Actually it can be reduced to something even simpler than that. So let's grant that the man locked in the room is functionally the CPU.
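To see just how dumb that is, here is a rough sketch of such a fetch-execute loop, restricted to roughly the operations listed above. The instruction names are invented for illustration, not any real instruction set:

# Rough sketch of a CPU as a dumb fetch-execute loop (invented instruction names).
# It sees one instruction at a time; it has no idea what the program is "for."
def run(program, regs):
    pc = 0                                    # program counter
    while pc < len(program):
        op, a, b, dest = program[pc]
        if op == "MOV":   regs[dest] = regs[a]
        elif op == "ADD": regs[dest] = regs[a] + regs[b]
        elif op == "SUB": regs[dest] = regs[a] - regs[b]
        elif op == "AND": regs[dest] = regs[a] & regs[b]
        elif op == "OR":  regs[dest] = regs[a] | regs[b]
        elif op == "JNZ":                     # jump to instruction 'dest' if register a is non-zero
            if regs[a] != 0:
                pc = dest
                continue
        pc += 1
    return regs

print(run([("ADD", "r0", "r1", "r2")], {"r0": 2, "r1": 3, "r2": 0}))
# {'r0': 2, 'r1': 3, 'r2': 5} -- the loop neither knows nor cares what it just computed

Whether that ADD was part of a payroll program or a Chinese-answering program is invisible at this level, which is exactly the level Searle's prisoner occupies.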

This is the sentence where Searle goes wrong: "I am simply an instantiation of the computer program."

No, he is not. He is the CPU. He is not the program. There is no requirement that the CPU understands the program, and in fact it never does. It doesn't know what the next instruction is. It doesn't even remember the previous instruction. It's dumb as dumb can be. Any computer scientist knows the difference between a CPU and a program. It's the difference between hardware and software. Let's say it's the difference between a car and a driver. So Searle fails to understand the thing that he claims to model. He says, "I can have any formal program you like, but I still understand nothing." Of course. He's the hardware in his "experiment," not the program. He's the car, not the driver. He's not expected to understand. He says "in the Chinese case the computer is me." But he's confusing the man in the room with the room itself and the instructions (program) in the room that he faithfully executes. He is one small part of the system. He is not the system itself.

Searle commits the fallacy of division. A computer executing a program is a system. Not every part of a system contains every attribute of the system. Consider the system named Angelina Jolie. The system is sexy by some standards. Yet is every component of that system sexy? Is that nose hair sexy? Is that liver sexy? Consider a mechanical watch. It keeps good time. Does a gear in that watch keep good time? Does the spring inherently have anything to do with keeping good time?

The question in my mind is how this "experiment" could have generated such interest in the first place. It's so deeply flawed that it should have been shrugged off as the musings of an untrained mind.

Nevertheless, Searle continues his errors: "we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding." But we can see no such thing because Searle has committed us to "seeing" from the limited perspective of the man-as-CPU. He is trying to avoid seeing from the system level. If his "experiment" does what he claims, if it does fool Chinese philosophers into thinking the room understands Chinese, then it could very well be that the system does understand even though the prisoner does not. The program and all of its hardware understand; the man is simply one part that enables that understanding.

Perhaps Searle is confused -- or hopes to confuse -- because he knows men can think. Men can understand. So if he puts man in the role of a dumb CPU we look at this "experiment" from the man's point of view. But in order to properly evaluate the "experiment" we need to look at the whole system. It's much more difficult to see that perspective from the program's point of view. It may be impossible to construct a thought "experiment" from the program's perspective.

We can easily dismiss this next assertion: "One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same -- or perhaps more of the same -- as what I was doing in manipulating the Chinese symbols." This is definitely not the claim made by supporters of strong AI. Artificial Intelligence supporters are not as confused as Searle. They know the difference between a CPU and the system as a whole.

Searle goes further, claiming "not the slightest reason has so far been given to believe" that "a program may be part of the story." Why? Because his example suggests "that the computer program is simply irrelevant to my understanding." Yes, it's true the program is irrelevant to the prisoner's understanding. But since man-as-CPU is not the whole system, there is simply no need for man-as-CPU to understand a thing. Searle keeps making the same mistake: "I have everything that artificial intelligence can put into me by way of a program, and I understand nothing." Of course. A CPU is not designed to understand.

Let's assume Newton understood calculus since he invented it. Should we expect to yank a neuron out of his brain and demand it understand calculus? That's the standard Searle expects of AI.

Searle notes that if the man-as-CPU was passed symbols in English instead of Chinese that he would understand what was going on. This is significant to Searle but it has no significance whatsoever. Man-as-CPU is allowed to understand. He is a man after all. But it simply does not follow that he must understand. As most employees learn, one can follow orders whether the orders are understood or not.

Apparently this systems objection was brought to Searle's attention prior to publishing. A reasonable person would have admitted, "Well maybe I haven't thought this thing through." But not Searle. He pulls the old bait and switch. He recasts his "experiment" to this: "let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori ["it's even more likely"] neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him."

Got that? That, my friends, is intellectual dishonesty. He's asking us to imagine a man who learns Chinese. But this is no ordinary Chinese speaker. This is a man who knows Chinese but he really doesn't know Chinese. Searle is asking us to start with a contradiction.

Why didn't Searle simply begin with this contradictory scenario? Why mention the Chinese Room at all? That's easy to see. Because nobody with a brain would have read past the first paragraph of such a silly proposal. This is what Searle now expects you to believe:

The "But I Don't Understand" Defense

Suppose you are a traffic cop. You pull over a Toyota for speeding. Inside there is a nice looking oriental couple. The passenger, a woman, appears to be pregnant. You address the driver:

"Sir, I clocked you at 95 mph."

"My wife is in labor, officer." The officer makes eye contact with her. She screams!

"Yeah, right. I've heard that one before."

"Contractions are two minutes apart!"

"Not buying. You were going 95 in a 45 mph zone. Hand over your driver's license. And turn off your engine."

"Please escort us to the hospital, officer!"

"Hands off the wheel! Turn off that engine!"

The woman screams again and the man takes off.

You, as an officer of the law, follow them to the hospital. The woman runs inside but you wrestle the man to the ground.

"My wife really is having a baby, officer!"

"Tell the judge."

As you're writing your report you learn the woman had a "false alarm." Maybe the baby will arrive next week.

A month later you're in court. The man pleads his case.

"I'm innocent, Your Honor. I admit I was going a little fast but I'm innocent of all those other charges dealing with refusal to obey an officer of the law and the flight and stuff. I simply didn't understand one word the officer was saying."

"Why didn't you understand?" asks the judge.

"Truth is I don't understand a word of English. Not one word."

"You seem to understand now."

"No, I don't."

"I don't understand."

"Me neither."

"What don't you understand?"

"None of this. None of this conversation and none of the commands given to me by the officer."

"Do you want me to cite you for contempt of court too?"

"No, sir. My contempt is not for the court. My contempt is for those who naively believe that just because English words come out of my mouth that I understand anything I say. I'm just a mouthpiece."

"A mouthpiece? Whose mouthpiece?"

"I don't know that either."

"Are you insane?"

"Would that make me innocent?"

"Is that your defense?"

"No. I simply don't understand English."

"How do you expect me to believe that?"

"Because someone programed me. They forced me to memorize the rules of the language. They forced me to memorize all the words. They forced me to perform billions of calculations in my head in order to spit out the proper responses to any English question. But I know nothing about what it all means."

"If that were the case I could ask the right question and you wouldn't have a response."

"I have an infinite number of responses so that wouldn't work."

"If you have an infinite number of responses then it's not reasonable of you to expect me to believe you have not learned English. So shut up before you get yourself into deeper trouble."


That is what Searle expects of us. In order to accept his thought "experiment" we must believe the irrational. His original Chinese Room now fits neatly into a brain. And now it becomes a bizarre assertion that a man who speaks a language well enough to fool experts really might not understand a word.

This is a lame response to the Turing Test. That is the issue. Searle merely begs the question when he claims he knows a guy who understands nothing yet can pass the Turing test. He should be honest and simply say he doesn't believe the Turing test is a valid test. He should dispense with his idiotic "experiment" because it reduces to that simple assertion anyway. Yet Searle has the audacity to claim systems objections such as mine beg the question. That's nonsense. Searle has not shown why a person or machine that converses quite normally in English should ever be suspected of not understanding.

Then Searle gets really irrational. "If we are to conclude that there must be cognition in me on the grounds that I have a certain sort of input and output and a program in between, then it looks like all sorts of noncognitive subsystems are going to turn out to be cognitive. For example, there is a level of description at which my stomach does information processing, and it instantiates any number of computer programs, but I take it we do not want to say that it has any understanding."

I know of nobody who claims a stomach understands Chinese or acts like it understands Chinese. I doubt anybody seriously claims that *any* system that has inputs and outputs and a program in between is a cognitive system. If such a person exists, he is as silly as Searle. This cognitive stomach is a straw man. Because Searle misleads himself into thinking a person could converse fluently in Chinese yet not understand a word, he can reach the absurd conclusion that food and food waste can be information. As they say, garbage in, garbage out.

There are billions of computers in the world. Virtually all of them input data, process that data, and output data. Yet very few AI enthusiasts would claim any of those real systems is cognitive. So Searle is dreaming up true-believers who, if they exist at all, are insignificant. No computer system as of today "understands" what it is doing. The question is: can a computer system someday be designed that does understand what it is doing?

Searle has proven nothing other than the fact that his sort of "philosophy" will get us nowhere.

Notes:

[1] A copy of the original and a more easily read reprint.

41 comments:

Anonymous said...

You presume to speak for all involved in the research and application of AI. "Strong AI" is exactly as Searle defined it (he defined it after all), and there are people that think it quite possible.

Please visit the Singularity Institute (singinst.org) on the web for a better understanding of the issues involved before you pour out your heart. Imputing intellectual dishonesty or stupidity to the likes of Searle (and Aristotle, and Einstein, and Heisenberg, et al. as you have at other sites when decrying philosophy and metaphysics as useless) is funny - if you know so much, you should really save the world from its greatest thinkers rather than hiding in this corner of the internet wondering why their dumb ideas have gotten so much currency.

Also: go to www.simulation-argument.com ... if you think no one takes Strong AI seriously, and you think philosophy is useless, I'd love to read your 'science'-only argument for why you're real.

Anonymous said...

And no, I don't believe Strong AI is possible. Searle's arguments (including the thought experiment you try to ridicule) persuade me that the brain is not (like the hardware of) a digital computer, and the Mind or human intelligence cannot just be like a software programme running on it.

Anonymous said...

Reading your post again it's apparent that you've missed the point of Searle's thought experiment.

You're on some jihad against philosophy so anything sailing under its flag just must be bad - how rational is that?

Searle shows, as you admit, that a system such as his Chinese Room exhibits no understanding. Further, it's not just the algorithm or particular programme employed that is responsible for this lack of understanding but the nature of the system itself.

You don't dispute this(?) but must recast the argument as something it's not so you can have a dig at a philosopher. You're not as smart as you think you are, and your one man war against philosophy will die with you. If you're going to be wrong about something there are much more pleasurable things to waste your life on.

Anonymous said...

If you included a link to Searle's paper then your reader(s) would be able to see for themselves how badly you misrepresent Searle when you go on about the Chinese understanding stomach.

When looking at the definition of computation Searle sticks to Turing's definition but points out merely that, unlike the original Turing machine, today's computers don't use a tape and nor must they.

If you think it's oh so laughable that biological substrates could be used as memory or processors, or that no law mandates that computers be silicon based, then you should read more.

I think that neglecting to link to Searle's paper made it much easier for you to be dishonest, and that's all that matters for you.

The link: http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html

Hardy Har Har said...

I have to agree with Anny that you are mistaken in thinking that no one believes in the cognitive abilities of computers (aka Strong AI). Minsky ring a bell? That was the model back in the 70s-80s--ie, the brain was a computer, with various modules, interfaces, etc. Now, I agree--to some extent--that Searle was a bit imprecise with his tech-talk (ie, CPUs are definitely not programs), but that's not sufficient to overturn his claim, really, which concerns semantics, and meaning really. A CPU will process orders, commands, programs--syntax, in short--without knowing about those programs. Mere processing of info is not thinking, in short--sort of like Deep Blue calculates all the moves for a chess move, but really it's just a sophisticated calculator, not Kasparov--ie a chess bot's simulation, not conscious. Yet the strong AI people wanted to claim--or at least suggested--that the computer (including CPU) was capable of understanding meaning, not just processing and simulation. So Searle deserves some credit for the distinctions, however obvious.

That said, he was a bit didactic about it. Perhaps in a few centuries strong AI will happen. The singularity, and all that. But I do think most AI types continually conflate simulation and constructs--mere processing, however powerful (Deep Blue, or the Watson machine)--with consciousness. When we can plug in via cranial modem, or wireless, and download, like, memories of Daisy Mae, I'll consider Searle refuted.

Hardy Har Har said...

The "Enscheidungproblem" which Church and Turing dealt with (and really Goedel) is quite different. I don't recall how Searle relates it to the AI issue. But it's a problem for first order logic. Not AI or consciousness per se. Maybe there is some eggheady relation, but ...it eludes me, though I think formal logic itself does have shall we say cognitive--and a priori-- aspects which are not merely reducible to operations (ie, Wittgenstein's...great mirror if you will)


Searle's no guru of mine but I don't think you quite get what Searle was doing in the Chinese room regarding meaning, and opposing AI. In a way it was somewhat...humanist (and related to his writing on intention as well).

Hardy Har Har said...

Ah. Scratch the above, por favor (tho it actually has some relevance; I'll explain later). That Turing test, the ELIZA jazz--as in, could one distinguish an intelligent machine from a human. Searle says no, yet the test's not valid, right. That still relates to simulation. At Disneyland, the old electric Abe Lincoln fooled many people for years. Looks and acts and speaks like Abe (or so we think). But we know it's not, and not intelligent--merely an early bot, aka simulation. Even so now--the bot-phone operators sound nearly human, but they aren't. Or recall the Watson machine winning Jeopardy. Sounds legit, but it isn't. Watson was merely a fancy search engine--so Searle's point from the CR applies (ie, it's processing info very efficiently, but not thinking).

Don Jindra said...

Anonymous,

There's no need for me to visit the Singularity Institute. I believe strong AI is not only possible but probably inevitable. If you find Searle convincing that's fine -- though I doubt you'd be an ideal candidate for solving problems of cognition.

But why would you think I'm on a "jihad" against philosophy? First, I'm far from the only person in the world to find Searle's arguments to be ridiculous. Second, if I'm on any "jihad" it's a jihad against bad philosophy, which is what the Chinese Room is. Your attempt to lump Searle in with Aristotle, Einstein, and Heisenberg is an insult to those men.

Anyone familiar with philosophy knows that philosophers are at war with each other. Are they also on a "jihad" against philosophy? Of course not. I happen to like philosophy. I just know better than to take it too seriously. No philosopher has all the answers. Some are completely wrong about almost everything. It takes more than hanging a shingle on your door to make your thoughts true.

Searle does not show that a system such as his Chinese Room exhibits no understanding. I completely deny this. It very well might understand, and probably does. If you think I accept Searle on this you've misread me.

You say "your one man war against philosophy will die with you." My eternal soul is not the issue. And philosophers will continue to war against each other after we're gone.

Then you suggest there are much more pleasurable things to waste my life on. Tell that to Searle. But if you are indeed on a pleasure hunt maybe you should stop wasting your time reading blogs that bug you.

It's a nitpick to complain I provided no link to Searle's paper. I provided one to the Wikipedia article. That has various links including one to Searle's paper. Nevertheless I have now added some links including ones to Searle. I'm sure I'll add more in the future.

I would like to hear how I misrepresented Searle and his stomach analogy. Maybe some reader will point out my supposed error since you didn't.

I do not deny biological substrates can be used as memory or processors. Of course they can. That's what brains are and that's probably the future of computers.

Don.

Anonymous said...

Thanks for your comments Don.

"One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same -- or perhaps more of the same -- as what I was doing in manipulating the Chinese symbols." This is definitely not the claim made by supporters of strong AI.

You either don't understand computation, or you don't understand the Chinese Room. Strong AI, a view you say is possible and claim to support, is the view that all there is to having a mind is a program. The Chinese Room is a program, and there is certainly a Turing machine that can implement its algo - why is this controversial to you?

But why would you think I'm on a "jihad" against philosophy?

You accuse Searle of intellectual dishonesty in your post; I found your post by following your name-link at a philosopher's site where you engage in no end of ad hominem and repeatedly state that philosophy is useless, not to be taken seriously, and solves no problems. To say that this philosophy or that philosophy disputes that philosophy or this philosophy is quite different from claiming that ALL philosophy is a mere diversion from the solely scientific work of truth finding. To reduce all philosophy to the importance of sudoku isn't a strict definition of jihad - but you'd forgive the hyperbole if you weren't so dishonest.

...

Anonymous said...

Searle does not show that a system such as his Chinese Room exhibits no understanding. I completely deny this. It very well might understand, and probably does. If you think I accept Searle on this you've misread me.

The only way the system exhibits understanding is if you include the programmer of the system as part of the system. The room, furniture, and symbol cards aren't understanding anything, are they? And the man in the room is only following the instructions given to him, no? So which part of the system understands Chinese? Your response of 'it might very well understand, and probably does' is an assertion that you do nothing to defend. Snark is not an argument.

More to the point you seem to be arguing against yourself. In your post you say:
He is the CPU. He is not the program. There is no requirement that the CPU understands the program, and in fact it never does. It doesn't know what the next instruction is. It doesn't even remember the previous instruction. It's dumb as dumb can be.

So if the system of the Chinese Room "probably does" understand, what part of it exhibits this understanding? It's not the CPU (the man), so perhaps you (as a presumably more ideal candidate for solving problems of cognition) can let the rest of us in on the secret? Or do we just have to accept that Searle's wrong without any sort of argument from you?


I would like to hear how I misrepresented Searle and his stomach analogy. Maybe some reader will point out my supposed error since you didn't.

I can explain this to you, but I can't comprehend it for you. You say:
I know of nobody who claims a stomach understands Chinese or acts like it understands Chinese. I doubt anybody seriously claims that *any* system that has inputs and outputs and a program in between is a cognitive system. If such a person exists, he is as silly as Searle.

Well you are such a person because, like all proponents of Strong AI, you say the brain is just a CPU. If understanding comes from some system level combination of program and processing, then the processing can be accomplished by electrical gates where 1s and 0s are assigned to voltage levels of 4 volts and 7 volts (like a conventional CPU); but using voltage levels is merely a convention, and the same results could be achieved by a man in a room (as you allow); indeed, with a bit of imagination, you could construct a processor out of transistors, or (perhaps with even more imagination) out of steam engines, or the solar system (can you get the internet on Stonehenge?) or gastric flux.

That doesn't say that the stomach understands Chinese (there's your misrepresentation) but to show "the irrelevance of hardware realization to computational description. These gates work in different ways but they are nonetheless computationally equivalent". So it follows from Strong AI that a system with a stomach as CPU would understand Chinese. This is what you believe, no? It's all in the program, not the hardware? The thing is, when you make the brain fit the definition of a mere computer it turns out that almost anything can fit that definition. If this seems silly to you, you should look again at what you're signing up to with Strong AI.

Finally: Your attempt to lump Searle in with Aristotle, Einstein, and Heisenberg is an insult to those men.

I'm no Searle cheerleader; I'm sure few would put him at the same level of genius as those others. My point, however, is that those great men inter alia would have laughed at your oft repeated belief that philosophy is pointless, useless, and not to be taken seriously. They all understood the importance of metaphysics (and by extension philosophy), and the metaphysical assumptions that must be in place to enable science.

jack bodie said...

DonJindra: Searle does not show that a system such as his Chinese Room exhibits no understanding. I completely deny this. It very well might understand, and probably does.

Let's examine this system level view of understanding that you advocate. The man in the Chinese Room could internalize all the relevant parts of the room (ie, memorize the instructions, the shapes of the Formal Symbols, etc.) so that he now is not only the CPU (as you put it), but also the program. Let's put to him the Formal Symbols for a story, the Formal Symbols for a question about that story, and get from him the Formal Symbols answering the question about the story in Chinese.

You say this system-in-a-man understands Chinese. We know also that he understands English. But if we ask him in English to tell us some pertinent fact about the story we gave him via the Formal Symbols he won't be able to. He has no idea what the Formal Symbols mean and therefore wouldn't be able to name the story's hero (for example) or tell us the colour of the villain's hat. How do you explain this inability while also maintaining that the man understands Chinese and therefore the story?

If students were to organize a cheating scam where the answers to a multiple choice exam were distributed in advance of the test, no student would have to learn the subject matter in order to fool the examiners into believing they understood the material 100%. One could imagine a scenario where they simply memorize "Option B is the correct answer to the question about John Searle's Chinese Room; Option F is the answer to the question about DonJindra's dogma" without knowing what Option B or Option F are. And yet you would impute (some level of) understanding to the cheat?

I fear you wouldn't pass a Turing Test if the conversation turned to philosophy. We'd get more intelligence from a Speak'n'Spell.

Don Jindra said...

Hardy Har Har,

I understand the difference between semantics and meaning. But Searle must show what meaning is if he hopes to prove meaning cannot be in his Chinese Room. I know mere processing of info is not thinking. But what is thinking? Searle must tell us what it is if he hopes to prove a computer system is incapable of it. So I don't give Searle credit for distinctions. He's far too vague.

I think Turing had in mind something a lot more sophisticated than a Disney puppet or ELIZA or even Deep Blue. Yes, I think the Turing Test is valid if applied rigorously.

Which of your posts did you want me to delete? I'll be glad to do it; just tell me which.

Don Jindra said...

Anonymous,

"That doesn't say that the stomach understands Chinese (there's your misrepresentation) but to show "the irrelevance of hardware realization to computational description."

Searle is very clear here. He says, "If we are to conclude that there must be cognition in me on the grounds that I have a certain sort of input and output and a program in between, then it looks like all sorts of noncognitive subsystems are going to turn out to be cognitive." He's clearly not thinking of some newfangled invention that uses gastric juices to replace electric current. His claim is that the "system" reply means he can call anything he likes a cognitive device because it's all in our minds anyhow. It's all in how we want to interpret a thing. So Searle then claims this means a cognitive thing doesn't really have to do much of anything except input, process, and output. A stomach does this. So to Searle, since what we call a computer is arbitrary anyhow, a stomach, as a food processor, is as likely to cognate as anything else -- if we accept the system reply. He states this emphatically: "But if we accept the systems reply, then it is hard to see how we avoid saying that stomach, heart, liver, and so on, are all understanding subsystems, since there is no principled way to distinguish the motivation for saying the Chinese subsystem understands from saying that the stomach understands."

That badly misrepresents the system reply. It's silly and will likely fool only the lazy. I characterized Searle right. You're trying to put words in his mouth.


"The thing is, when you make the brain fit the definition of mere computer it turns out that almost anything can fit that definition."

See, you're making Searle's mistake. Strong AI does not imply anything with inputs and outputs is capable of cognition. That's a straw man. Searle is merely using this straw man to assert that the Turing test is arbitrary. And since he thinks it's arbitrary he is permitted to randomly pick anything as a test for cognition. But since the validity of the Turing Test is the thing under consideration, Searle begs the question. He must assume its unworthiness and arbitrariness to reach his silly conclusion about stomachs. He is definitely not arguing that strong AI implies stomachs could be used as a substrate for CPUs. That, though silly, is not the issue. If it were the issue the Chinese Room would be rejected on silliness alone. But few reject it on that ground.

Don.

Don Jindra said...

Anonymous,

Your response of 'it might very well understand, and probably does' is an assertion that you do nothing to defend.

I have plenty with which to defend it. The system has fooled experts. What reason do you have to discount that and not discount the man on the street who also converses well enough to convince you he understands you?

So if the system of the Chinese Room 'probably does' understand, what part of it exhibits this understanding?

As I made plain above, you are committing a logical fallacy. No part of a whole has to have all the properties of the whole. The whole may have properties that are not in any of the parts that compose it.

Well you are such a person because, like all proponents of Strong AI, you say the brain is just a CPU.

I did not say the brain is a CPU. The brain is the major part of the system and it has within it probably many "CPUs", much storage, and much "software."

How you got from my -- "I doubt anybody seriously claims that 'any' system that has inputs and outputs and a program in between is a cognitive system" -- to accusing me of the same is bizarre.

...the same results could be achieved by a man in a room (as you allow)

It's a preposterous thought experiment so I don't actually allow it except for argument's sake.

My point, however, is that those great men inter alia would have laughed at your oft repeated belief that philosophy is pointless, useless, and not to be taken seriously.

Your point is misplaced. I don't say all philosophy is pointless or useless or not to be taken seriously. I do say it is taken too seriously by many. I do say it has not solved problems. Nobody yet has told me a problem philosophy has solved.

They all understood the importance of metaphysics (and by extension philosophy), and the metaphysical assumptions that must be in place to enable science.

Science is the abandoned child of philosophy. It's no good appealing to science when so many philosophers reject it.

Don.

Don Jindra said...

jack bodie,

But if we ask him in English to tell us some pertinent fact about the story we gave him via the Formal Symbols he won't be able to. He has no idea what the Formal Symbols mean and therefore wouldn't be able to name the story's hero (for example) or tell us the colour of the villian's hat. How do you explain this inability while also maintaining that the man understands Chinese and therefore the story?

You're begging the question. You simply assume the man-as-system doesn't understand the Chinese symbols. You assume the formal symbols can be manipulated in a way that I seriously doubt they can be. That is, I seriously doubt symbols which stay meaningless symbols can be manipulated in a way that fools people day after day into thinking the manipulator has understanding when he does not. Now this may be a big assumption too but I can see no good reason to cast aside this assumption since it's the same one we make when talking to people. IOW, I see no reason to use a double standard simply because we want to think human intelligence is extra-special.

One could imagine a scenario where they simply memorize "Option B is the correct answer to the question about John Searle's Chinese Room; Option F is the answer to the question about DonJindra's dogma" without knowing what Option B or Option F are. And yet you would impute (some level) of understanding to the cheat?

But that trickery is easily defeated by anyone who tries. It does not compare to a case where a machine continues to fool no matter how hard we try to confuse it.

Don.

jack bodie said...

Don Jindra: You're begging the question. You simply assume the man-as-system doesn't understand the Chinese symbols.

How am I begging the question? Rather it is you that begs the question and you give the game away by telling us that, "I seriously doubt symbols which stay meaningless symbols can be manipulated in a way that fools people day after day into thinking the manipulator has understanding when he does not."

Haven't you ever used a calculator? Do you really think the numbers on the LED display or keys on its keypad have meaning for the calculator? You're truly firm in your conviction that the machine understands multiplication (for example)? Perhaps you thought the Toy Story trilogy was a documentary?

Again you misrepresent the Chinese Room - perhaps because you think it strengthens your case to do so: ie, why do you think the system would have to fool an observer day after day, and why ignore that the inputs are structured as stories and questions about those stories, and outputs as answers to those questions?

These considerations would materially reduce the complexity of the program design and, while I doubt it would be trivial, I'm sure a program could be implemented to carry out even your misrepresentation of the challenge. But more importantly, when you accept something for the sake of the argument (DonJindra: So let's grant that the man locked in the room is functionally the CPU.) you can't later claim to demolish the argument by denying the thing you accepted.

You first object that the CPU isn't required to understand, the system as a whole does, but when it is pointed out that the pertinent parts of the room (ie, the cards with formal symbols, and instructions) can be memorized by the man in the room so that now the system as a whole resides in the man, you object "I seriously doubt symbols which stay meaningless symbols can be manipulated in a way that fools people day after day into thinking the manipulator has understanding when he does not."

Again, you injected the requirement for day after day. This could be a one-shot affair. And you accepted that symbols can be manipulated just as described when you thought that the man's only function was to 'act as CPU'.

You're in the grip of some ideology that lets you throw around phrases like 'intellectual dishonesty' without acknowledging that the only person contradicting himself is you.

The system understands (according to you) Chinese. The system also understands English. If we asked the system in English to tell us the names of the female characters in the story we gave it via the uninterpreted symbols how would you expect it to respond? And why?

Forget ideology for a moment and think about the problem.

Don Jindra said...

jack bodie,

A calculator is not the thought "experiment." If it was, Searle wouldn't have been saying anything controversial because virtually nobody thinks a calculator understands. A calculator is an extremely simple device. The question is not about simple systems.

"...why do you think the system would have to fool an observer day after day,...?"

Because that's the nature of the test. Searle admits that "precisely one of the points at issue is the adequacy of the Turing test." That test is not meant to be a simple test. The interrogator should be given ample opportunity to ask tough questions for a long period of time. And it would take multiple interrogators to avoid the issue of incompetence. It certainly could not be a one-shot affair. That would be unfair to both sides and totally unreliable. I have no desire to make the testing easy or open to chance results. I want it to be as difficult and reproducible as possible and, I believe, so does Searle.

"...why ignore that the inputs are structured as stories and questions about those stories, and outputs as answers to those questions?"

I don't ignore it. Those are perfectly valid questions and the system would have to be able to answer them.

"These considerations would materially reduce the complexity of the program design..."

What considerations?

"...when you accept something for the sake of the argument (DonJindra: So let's grant that the man locked in the room is functionally the CPU.) you can't later claim to demolish the argument by denying the thing you accepted."

And I didn't.

"You first object that the CPU isn't required to understand, the system as a whole does, but when it is pointed out that the pertinent parts of the room (ie, the cards with formal symbols, and instructions) can be memorized by the man in the room so that now the system as a whole resides in the man, you object 'I seriously doubt symbols which stay meaningless symbols can be manipulated in a way that fools people day after day into thinking the manipulator has understanding when he does not.'"

There is no contradiction there. Both of my statements are totally consistent. I doubt the man-as-system lacks understanding. The man-as-system would not be able to organize responses unless he understood what they meant. This is exactly the point of contention. Maybe you're confused about the man-as-cpu versus the man-as-system. I am not.

"And you accepted that symbols can be manipulated just as described when you thought that the man's only function was to 'act as CPU'."

Of course. That's the nature of a CPU. It's not necessarily the nature of the system. Maybe this is difficult to grasp if you're not very familiar with computing -- I don't know. It's second nature to me since it's my profession.

"If we asked the system in English to tell us the names of the female characters in the story we gave it via the uninterpreted symbols how would you expect it to respond?"

That's a fairly easy question for man or machine. Both could answer correctly. Why would they answer correctly? That's precisely the issue.

Btw, this is not about ideology. It's about technology.

Don.

jack bodie said...

There is no contradiction there. Both of my statements are totally consistent. I doubt the man-as-system lacks understanding. The man-as-system would not be able to organize responses unless he understood what they meant. This is exactly the point of contention. Maybe you're confused about the man-as-cpu versus the man-as-system. I am not.

And this would be you begging the question. The mechanism by which the system (whether room or man) would be able to organize responses without understanding what they meant is precisely and adequately described. (ie, following a precise set of instructions the way a CPU mechanically completes a program)

You, on the other hand, provide no mechanism for the understanding you insist must be a part of the system. It just appears, without explanation, through repetition.

You've separately been asked where the understanding resides in the room (no insight from you, fallacy of division supposedly), in the man (no insight from you, the system just does understand, you say, or the system wouldn't be able to do what you accept a dumb CPU can do - ie, follow some set of instructions to manipulate symbols without understanding). I guess it must be magic.

The calculator is relevant because you seem unable to understand that the meaning from such systems as the Chinese Room is entirely observer relative. Just as in the calculator.

As to your comment: Because that's the nature of the test. Searle admits that "precisely one of the points at issue is the adequacy of the Turing test." That test is not meant to be a simple test. The interrogator should be given ample opportunity to ask tough questions for a long period of time. And it would take multiple interrogators to avoid the issue of incompetence. It certainly could not be a one-shot affair


Again, mere assertion by you. Define 'tough', define 'long period', because no variants of the Turing test require day after day or a particular difficulty of question. It's the judge's discretion. But as I say, it could still be a one shot and take place day after day with many different interrogators simply by cycling a different person/CPU into the room to answer each question or each interrogator. The rules don't change and, as long as every symbol manipulator understands the language of the instruction set, there's no need for a particular length of service.

I'll help you out - your appeal to system level understanding must invoke a homunculus to allow for mind/understanding to be a genuinely computational problem. The problem is that you believe Strong AI before understanding whether the brain is a computer and whether mind/understanding is even a computational outcome. It's obvious to anyone that doesn't have a political/psychological need for Strong AI to be true that syntax and semantics are not intrinsic to the physics of any computational system, but always relative to an observer or operator.

Don Jindra said...

Jack bodie,

The mechanism by which the system (whether room or man) would be able to organize responses without understanding what they meant is precisely and adequately described.

Adequate? Stating he is given "rules" that enable him to correlate elements is about as vague as one can be.

You, on the other hand, provide no mechanism for the understanding you insist must be a part of the system.

I didn't claim the system understands. I claim we have to have a good reason for denying the system understands. Searle claims he has given us reason. He has not. I don't have to provide a mechanism for the system's understanding any more than I have to provide a mechanism for human understanding. That mechanism is unknown in every case. But that does not mean none is there and it does not mean we will not discover it.

It just appears, without explanation, through repetition.

You mean, as with our brains?

I guess it must be magic.

The non-materialist explanations are the ones based on magic. Don't pretend otherwise.

It's not surprising that cognition is mostly a mystery today. It's a tough question. It's made significantly tougher by the fact that it's impossible with today's technology to examine a brain in detail while it's alive and working.

The calculator is relevant because you seem unable to understand that the meaning from such systems as the Chinese Room is entirely observer relative. Just as in the calculator.

That's total nonsense. I assume you got it from Feser's "Philosophy of Mind," Chapter 6 (Thought), section "The mind-dependence of computation." I'll have a lot to say on this in a future post.

"....no variants of Turing test require day after day or a particular difficulty of question. It's judge's discretion."

Turing clearly wants a rigorous test. Results mean nothing without that rigor. But it seems to me that you are not being serious here. I'll bet you, yourself, would demand this rigor just like I would. If someone came to us claiming he had a computer that understood, we both would try to prove it wasn't so. We both would be as clever as we could be in asking questions no computer should be able to answer. It would take a lot of time to convince us that the thing actually understood what we said as well as any human understood.

"...your appeal to system level understanding must invoke a homunculus to allow for mind/understanding to be a genuinely computational problem.... It's obvious to anyone that doesn't have a political/psychological need for Strong AI to be true that syntax and semantics are not intrinsic to the physics of any computational system, but always relative to an observer or operator."

That's more of Feser's nonsense. For now I'll just say I don't swallow your political/psychological need for a dualist perspective. When physical damage occurs to the brain, mental abilities suffer. For those of us who still believe in cause and effect, the ramifications of that are obvious. The "mind" is a function of the brain. No more is required to explain it. And working on that assumption, problems are no more than temporarily mysterious.

Don.

jack bodie said...

Don,

I give up. I certainly can't claim to understand the field better than Searle. Nor am I a better teacher; it's just clear that you're arguing past him and I assume the reason is psychological or ideological. Because for all the things you say you're not claiming you don't appear to have an argument with Searle:

1. On the standard textbook definition, computation is defined syntactically in terms of symbol manipulation.

2. But syntax and symbols are not defined in terms of physics. Though symbol tokens are always physical tokens, "symbol" and "same symbol" are not defined in terms of physical features. Syntax, in short, is not intrinsic to physics.

3. This has the consequence that computation is not discovered in the physics, it is assigned to it. Certain physical phenomena are assigned or used or programmed or interpreted syntactically. Syntax and symbols are observer relative.

4. It follows that you could not discover that the brain or anything else was intrinsically a digital computer, although you could assign a computational interpretation to it as you could to anything else. The point is not that the claim "The brain is a digital computer" is false. Rather it does not get up to the level of falsehood. It does not have a clear sense. You will have misunderstood my account if you think that I am arguing that it is simply false that the brain is a digital computer. The question "Is the brain a digital computer?" is as ill defined as the questions "Is it an abacus?", "Is it a book?", or "Is it a set of symbols?", "Is it a set of mathematical formulae?"

5. Some physical systems facilitate the computational use much better than others. That is why we build, program, and use them. In such cases we are the homunculus in the system interpreting the physics in both syntactical and semantic terms.

6. But the causal explanations we then give do not cite causal properties different from the physics of the implementation and the intentionality of the homunculus.

7. The standard, though tacit, way out of this is to commit the homunculus fallacy. The homunculus fallacy is endemic to computational models of cognition and cannot be removed by the standard recursive decomposition arguments. They are addressed to a different question.

8. We cannot avoid the foregoing results by supposing that the brain is doing "information processing". The brain, as far as its intrinsic operations are concerned, does no information processing. It is a specific biological organ and its specific neurobiological processes cause specific forms of intentionality. In the brain, intrinsically, there are neurobiological processes and sometimes they cause consciousness. But that is the end of the story.

If you don't comprehend this, well good luck. He's no more arguing for dualism than you. But he does understand what Strong AI entails; strong AI you accept because system-level vagueness holds the possibility open. All of the reasons that Searle gives you to doubt meaning is intrinsic to computation you just reject however - and yet you accuse Searle of intellectual dishonesty.

Don Jindra said...

jack bodie,

1. Yes, computation in computers includes symbol manipulation. It also includes decisions, adaptation, memory, input & output, and pattern recognition. It includes a lot.

2. But syntax and symbols are not defined in terms of physics. The symbol for apple is defined in terms of the physical things we call apples. So I'm not sure what you mean. Syntax, in short, is not intrinsic to physics. Syntax is about rules and relationships. The physical world has many rules and many relationships. We, as humans, when we understand things, follow rules that make sense specifically because the physical world works in a particular way, and so do our brains. So syntax seems obviously intrinsic to physics, though I know some disagree.

3. So there is no consequence. Besides, we, as humans, compute. We are part of the physical world so at least something in the physical world computes. Obviously we didn't assign this ability to ourselves. Nor did we assign our internal representation of an apple to ourselves. The symbol, "apple," is a handle. It helps us get to the internal physical memories (representations) of the physical thing. Whether or not we want to call that internal representation "observer relative" is irrelevant. It does not imply that computers cannot have the same sort of "observer relative" conception based solely on the physics of the system. And it does not imply our own subjective, "observer relative," conception of the world is any more philosophically significant than our own unique thumb is philosophically significant.
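As a loose sketch of what I mean by a handle (the fields below are invented, obviously; nobody knows how the brain actually stores this):

# Loose sketch: a bare symbol acting as a handle into richer stored representations.
# The fields are invented for illustration; the symbol by itself carries no meaning.
internal_store = {
    "apple": {
        "shape": "roundish",
        "colors": ["red", "green", "yellow"],
        "taste": "sweet-tart",
        "episodes": ["picking apples as a kid"],
    },
}

def recall(symbol):
    # the handle just retrieves whatever the system has stored under it
    return internal_store.get(symbol)

The word is cheap; the representations it reaches are where the work gets done.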

4. Again, I don't know what you mean. I don't think the brain is a digital computer. But can a digital computer emulate a brain? I think that is the issue and I think it's theoretically possible.

5. There is no homunculus in a NAND gate. Even if I grant that we interpret the physics in both syntactical and semantic terms, it does not follow that computers cannot do the same.

6. Huh?

7. Again a NAND gate requires no homunculus. And a NAND gate remembers nothing. But the right combination of NAND gates results in a flip-flop and that does remember. So where does that "memory" come from if it is not in the NAND gates? -- from the relationships; -- from the "syntax" if you like. The homunculus fallacy is itself a kind of fallacy. It works only as long as a system is not yet understood. It doesn't prove anything. It attempts to denigrate the fact that the search for causes inevitably finds other causes. There is no way around this search for prior causes. That's how problems are solved. We could eventually be forced to explain why a "homunculus" is not required at the subatomic level before some sticklers would admit "mind" has natural causes.
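Here's a quick sketch of that flip-flop point in code (a toy model, not real electronics): a NAND gate is a pure function with no memory, yet two of them cross-coupled hold a bit.

# A NAND gate has no memory: its output is purely a function of its inputs.
def nand(a, b):
    return 0 if (a and b) else 1

# Two cross-coupled NAND gates form an SR latch (active-low set/reset inputs).
# Neither gate remembers anything; the "memory" lives in their relationship.
class SRLatch:
    def __init__(self):
        self.q, self.q_bar = 1, 0            # arbitrary starting state

    def step(self, set_n, reset_n):
        for _ in range(2):                   # let the feedback loop settle
            self.q = nand(set_n, self.q_bar)
            self.q_bar = nand(reset_n, self.q)
        return self.q

latch = SRLatch()
latch.step(0, 1)          # pulse "set" low: Q becomes 1
print(latch.step(1, 1))   # inputs idle high, yet the latch still remembers 1

Pull either gate out and ask it where the stored bit is and you get Searle's answer: nowhere. Put the gates back in relationship and the bit is there.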

8. The brain, as far as its intrinsic operations are concerned, does no information processing. The brain's main function is information processing.

I did not say meaning is intrinsic to computation. I don't know why you think I did -- especially when I told you calculators, though very good at computation, do not understand anything. So you appear to be stuffing a straw man.

Don.

jack bodie said...

Don

1. To avoid you talking past the argument, let's get explicit. Searle thought it best to go back to the original definitions given by Alan Turing for computation:

According to Turing, a Turing machine can carry out certain elementary operations: It can rewrite a 0 on its tape as a 1, it can rewrite a 1 on its tape as a 0, it can shift the tape 1 square to the left, or it can shift the tape 1 square to the right. It is controlled by a program of instructions, and each instruction specifies a condition and an action to be carried out if the condition is satisfied.

The Church-Turing thesis states that Turing machines do indeed provide a precise definition of an algorithm or 'mechanical procedure' and continue to be the models of choice for theorists investigating questions in the theory of computation.
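To be concrete, that definition can be sketched in a few lines (a toy illustration only; the example machine is invented):

# Toy sketch of a Turing machine per the definition above. Each rule maps a
# condition (state, symbol read) to an action (symbol to write, head move, next state).
def run_tm(rules, tape, state="start", head=0, max_steps=10000):
    cells = dict(enumerate(tape))            # sparse tape; unwritten squares are blank
    for _ in range(max_steps):
        key = (state, cells.get(head))
        if key not in rules:                 # no matching instruction: halt
            break
        write, move, state = rules[key]
        cells[head] = write
        head += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

# Invented example machine: rewrite every 0 as 1 and every 1 as 0, moving right, then halt.
flipper = {
    ("start", 0): (1, "R", "start"),
    ("start", 1): (0, "R", "start"),
}
print(run_tm(flipper, [1, 0, 1, 1]))         # [0, 1, 0, 0]

Note that nothing in the rules says what the 0's and 1's are about; that's the point developed below.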

2. Forget your apples for a moment, the computing context should make this clear to you:

Take the above definition of computation and now open up your PC - you are most unlikely to find any 0's and 1's or even a tape. But this does not really matter for the definition. To find out if an object is really a digital computer, it turns out that we do not actually have to look for 0's and 1's, etc.; rather we just have to look for something that we could treat as or count as or could be used to function as 0's and 1's.

In a home computer the physical token is a positive voltage relative to electrical ground (up to 5 volts in transistor-transistor logic circuits) and this symbolizes binary value 1. The rule relating this voltage to that symbol is quite clearly assigned by us humans; it is in no way intrinsic to the physics of the computer, and I'm amazed that you, as a professional, should consider it obviously otherwise.
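A toy sketch makes the convention point (threshold values made up for illustration): the voltages are the physics; the bits are an assignment we lay over them, and a different assignment is just as workable.

# The "physics": a list of measured voltages. The bits come from a convention we choose.
readings_volts = [0.2, 4.8, 4.6, 0.1]

def as_bit_ttl(v):
    return 1 if v > 2.0 else 0               # one convention: high voltage counts as 1

def as_bit_inverted(v):
    return 0 if v > 2.0 else 1               # an equally workable convention: high counts as 0

print([as_bit_ttl(v) for v in readings_volts])        # [0, 1, 1, 0] under one assignment
print([as_bit_inverted(v) for v in readings_volts])   # [1, 0, 0, 1] under the other

Nothing in the voltage itself tells you which reading is "correct".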

3. So, yes, the consequence does follow that computation is not discovered in the physics; it is assigned to it. Yes, we humans can compute ... so what? It's clear some human mental abilities are algorithmic, so from the Church-Turing thesis and Turing's theorem anything humans can do algorithmically can be performed by a universal Turing machine. One is conscious, the other unconscious, but leave that aside for now -- unless you are begging the question that the brain is but a computer (and everything it does is done algorithmically), you still have a problem. Searle even grants that the development of proof theory showed that, within certain well known limits, the semantic relations between propositions can be entirely mirrored by the syntactic relations between the sentences that express those propositions. But the syntax is not in the physics. What you say about this being irrelevant is just weird and akin to arguing that up is down. Why this is relevant might become clearer with (4). Incidentally, the Copenhagenists would dispute your assertion that there's nothing unique about the human observer.

jack bodie said...

4. Searle uses the following to make this clear: "Just as carburettors can be made of brass or steel, so computers can be made of an indefinite range of hardware materials. But there is a difference: The classes of carburettors and thermostats are defined in terms of the production of certain physical effects. That is why, for example, nobody says you can make carburettors out of pigeons. But the class of computers is defined syntactically in terms of the assignment of 0's and 1's."

Hence:

"There is no way you could discover that something is intrinsically a digital computer because the characterization of it as a digital computer is always relative to an observer who assigns a syntactical interpretation to the purely physical features of the system. As applied to the Language of Thought hypothesis, this has the consequence that the thesis is incoherent. There is no way you could discover that there are, intrinsically, unknown sentences in your head because something is a sentence only relative to some agent or user who uses it as a sentence. As applied to the computational model generally, the characterization of a process as computational is a characterization of a physical system from outside; and the identification of the process as computational does not identify an intrinsic feature of the physics, it is essentially an observer relative characterization."

You think a computer can emulate a brain. Let me ask you a few questions: do you think there is anything a computer cannot emulate? Is this emulation the same as producing the mental phenomena in question? It's easy to confuse models with reality. A simulation of a hurricane does no actual physical work. Unless you are begging the question, I wonder what reason you have for assuming the brain is a computer (or that all its processes are computational) to be emulated, and not something that computers might approach through simulation. You say you don't think the brain is a computer, but you argue as though you do believe this.

5. And a NAND gate exhibits no cognition. See above as to why the rest of your comment doesn't follow.

jack bodie said...

8. It is much too high a level of abstraction to say "the brain's main function is information processing". "The mistake is to suppose that in the sense in which computers are used to process information, brains also process information. To see that that is a mistake contrast what goes on in the computer with what goes on in the brain. In the case of the computer, an outside agent encodes some information in a form that can be processed by the circuitry of the computer. That is, he or she provides a syntactical realization of the information that the computer can implement in, for example, different voltage levels. The computer then goes through a series of electrical stages that the outside agent can interpret both syntactically and semantically even though, of course, the hardware has no intrinsic syntax or semantics: It is all in the eye of the beholder. And the physics does not matter provided only that you can get it to implement the algorithm. Finally, an output is produced in the form of physical phenomena which an observer can interpret as symbols with a syntax and a semantics.

But now contrast that with the brain. In the case of the brain, none of the relevant neurobiological processes are observer relative (though of course, like anything they can be described from an observer relative point of view) and the specificity of the neurophysiology matters desperately. To make this difference clear, let us go through an example. Suppose I see a car coming toward me. A standard computational model of vision will take in information about the visual array on my retina and eventually print out the sentence, "There is a car coming toward me". But that is not what happens in the actual biology. In the biology a concrete and specific series of electro-chemical reactions are set up by the assault of the photons on the photo receptor cells of my retina, and this entire process eventually results in a concrete visual experience. The biological reality is not that of a bunch of words or symbols being produced by the visual system, rather it is a matter of a concrete specific conscious visual event; this very visual experience. Now that concrete visual event is as specific and as concrete as a hurricane or the digestion of a meal. We can, with the computer, do an information processing model of that event or of its production, as we can do an information model of the weather, digestion or any other phenomenon, but the phenomena themselves are not thereby information processing systems."

Apologies for the lengthy reply -- especially as earlier I said I'd given up on this challenge. However, I hope to make it clear to others -- even if not to you -- that your misreadings of Searle are not due to any weakness in Searle's argumentation (or to intellectual dishonesty). You are committed to a worldview that reduces all things to what is physical, even the mental. Not only that, but you claim to be a computer scientist, so, like the hammer, it seems all things look like a nail. That's fine for you; no one is proselytizing in your own house. But how can you, in all honesty, expect to take a disinterested look at how mental phenomena are produced when, before any facts are presented (and no matter what facts are presented), the outcome is certain in your mind?

Don Jindra said...

jack bodie,

The rule relating this voltage to that symbol is quite clearly assigned by us humans; it is in no way intrinsic to the physics of the computer, and I'm amazed that you, as a professional, should consider it obviously otherwise.

To call "0" and "1" symbols in reference to a digital computer is misleading. It's not a digital computer at all without something holding a state of 1 or 0. So those digits are not arbitrary symbols. Neither are they mere symbols. We cannot choose to strip those 1s and 0s out of the design. They are an absolute requirement. The digital computer is nothing without those 1s and 0s. Something physical has to implement them. Physical is physical. It's discoverable by physics. Its behavior is explained by physics.

Let's move up to a one-bit adder circuit. There are three input lines: A, B, and Carry in. There are two output lines: Sum and Carry out. This adder behaves in a very predictable manner for a specific purpose -- addition. It was designed for that one purpose. You might claim that the inputs and outputs are arbitrary. You might claim that we can assign any meaning we want to those inputs and outputs. But that idea is relativistic nonsense. If there is a case where the inputs and outputs just happen to match up exactly to another function, so that we could reinterpret them as computing that other function -- that's a freakish coincidence. It could theoretically happen. But it does not rob the circuit of its function of addition. That's still there.

More likely, there could be cases where partial states of this adder just happen to work in a design for a different purpose. We could therefore, in an opportunistic fashion, use the adder for some other purpose it happened to fit -- not having to use all the truth table values because they are not required. But that does not mean the adder is being put to its best, full use. It doesn't mean the adder is now not an adder. It just means we chose to ignore the fact that it's an adder. We just don't care that it can do more. Furthermore, the new use is not arbitrary; not just any circuit would have worked. Even this new, opportunistic use is not arbitrary.

So we cannot just arbitrarily assign symbolic meaning to the adder's inputs and outputs and make it something else. That's nonsense. (And, btw, it also makes so-called "intentionality" nonsense.) An adder is an adder. It depends on its inputs having a specific, fixed "meaning." Its outputs have a specific, fixed "meaning." In the real world, 0+0=0; 1+0=1; 1+1=2; 1+1+1=3. The adder successfully implements this physical fact in physical electronic signals. It is not "observer relative." It's not even truly symbolic. There is simply no philosophic way around this.
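Here is the adder's behavior written out as a rough sketch (my own illustration; the circuit does this with gates, not Python). Call the labels whatever you like; given the 0/1 convention, the relation between inputs and outputs is fixed, and it is addition:

    def full_adder(a, b, carry_in):
        # standard one-bit full adder logic
        sum_bit = a ^ b ^ carry_in
        carry_out = (a & b) | (a & carry_in) | (b & carry_in)
        return sum_bit, carry_out

    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                s, cout = full_adder(a, b, c)
                # e.g. 1+1+1 gives carry_out=1, sum=1, i.e. binary 11 = 3
                assert a + b + c == 2 * cout + s

Every row of the truth table satisfies a + b + carry_in = 2*carry_out + sum. That is not an interpretation I am free to revise; it is what the circuit does.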

In a way, Feser (in selling Searle) admits this, although he misses the fact that he admits it: "We couldn't make a knife out of just anything -- steel and plastic will do, but shaving cream and butter won't -- but that doesn't undermine the point that something counts as a knife only relative to our interests." (p. 163, chap. 6, "The mind dependence of computation.") Well, yes, it does undermine his position. It completely undermines it. We do not have the ability to arbitrarily assign uses to things just because we have an interest in doing so.

Don.

Don Jindra said...

jack bodie,

Searle's supposed profundity: "Just as carburettors can be made of brass or steel, so computers can be made of an indefinite range of hardware materials. But there is a difference: The classes of carburettors and thermostats are defined in terms of the production of certain physical effects. That is why, for example, nobody says you can make carburettors out of pigeons. But the class of computers is defined syntactically in terms of the assignment of 0's and 1's."

Well, not quite so -- as my example of an adder circuit demonstrates. Searle may choose to ignore the fact that the adder really does add. It performs the same sort of physical phenomenon as adding one apple to a basket and discovering -- to Searle's amazement -- that there are now two apples in the basket, or two 5 volt signals on the output. But Searle's amazement doesn't impress me. I can measure the signals with a volt meter and prove they really are there. That's a physical effect. That adder is to the computer what the carburetor is to the car. Maybe I should tell Searle his car is "observer relative" too. A horse is a car by another name and an ant is a horse by another name. Maybe we should drive ants to work. After all, to paraphrase Searle in the proper context: "As applied to the transportation model generally, the characterization of a process as transportation is a characterization of a physical system from outside; and the identification of the process as transportation does not identify an intrinsic feature of the physics, it is essentially an observer relative characterization." If this appears nonsensical to you then maybe you can see how Searle's original strikes me. The difference is one of hands-on experience. Searle drives cars but he doesn't have computer experience at the nuts-and-bolts level. All he can do is think about it in virtual, or symbolic terms. I have an easier time looking at things from the foundation which is extremely physical, sometimes annoyingly so. Those "symbols" are not as elusive and arbitrary as he presumes.

Don.

Don Jindra said...

jack bodie,

Obviously a computer cannot emulate everything. It may be able to simulate anything the brain can simulate. I don't confuse models with reality.

I don't assume the brain is a computer. I assume the brain is physical and therefore all its operations occur in the physical world. I assume there is no theoretical reason why a man-made device (like a computer, if not a computer as now existing) cannot simulate a brain and probably emulate a brain.

I see no reason why the operation of a brain should be more than the operation of brain cells. And since a brain cell's operation should be reproducible by a collection of NAND gates, I see no reason why a working, man-made brain could not be constructed from devices like electronic NAND gates. And then, whatever a brain does, the man-made version should be able to do.

Don.

Don Jindra said...

jack bodie,

So we're also referring to Searle's "Is the Brain a Digital Computer?" Okay, fine.

Searle: "we do not need to know the details of brain functioning in order to explain cognition. Brain processes provide only the hardware implementation of the cognitive programs, but the program level is where the real cognitive explanations are given."

I don't know why Searle believes this and I don't really care. I suppose some might claim that understanding details of brain function is not absolutely required to explain cognition. We may independently solve the problem ourselves. But I doubt anyone would claim such knowledge would be useless. I think it's obvious that if nature has already done something, it's useful to understand how that thing works before trying to do the same. It helps to understand how birds fly before we build airplanes.

Searle: "But the difficulty is that the 0's and 1's as such have no causal powers at all because they do not even exist except in the eyes of the beholder."

Searle is simply wrong. Those signals -- those 1s and 0s -- have definite causal powers. They are definitely "visible" to a logic analyzer and I can single-step through a program via a debugger and "see" what state causes what. I do this every working day. The program really does exist in the hardware.

Searle: "The same principle that implies multiple realizability would seem to imply universal realizability."

Wrong. A computer is certainly not defined in terms of mere assignment of syntax. That is Searle's flawed characterization. A computational device must be able to hold states, make decisions based on multiple states and inputs, save state information, and repeat operations. Very few materials can do this. So it's silly for him to say, "everything would be a digital computer, because any object whatever could have syntactical ascriptions made to it."
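As a rough sketch of that minimum (my own illustration, not Searle's), here is a device that holds state, decides from state plus input, and repeats -- say, one that reports whether the input sequence 1, 0, 1 has appeared:

    def detector(bits):
        state = "empty"                  # held state
        for b in bits:                   # repeated operation
            if state == "empty":
                state = "saw1" if b == 1 else "empty"
            elif state == "saw1":
                state = "saw10" if b == 0 else "saw1"
            elif state == "saw10":
                if b == 1:
                    return True          # decision from state plus input
                state = "empty"
        return False

    print(detector([0, 1, 0, 1]))        # prints: True
    print(detector([1, 1, 0, 0]))        # prints: False

A rock, a wall, or a bucket of water gives you none of that, no matter what syntax you ascribe to it.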

Searle: "Rhetorically speaking, the idea is to bully the reader into thinking that unless he accepts the idea that the brain is some kind of computer, he is committed to some weird antiscientific views."

So having failed to convince, this bully named Searle now plays on our sympathy. Poor boy!

I have admitted that no model describes the reality of "mind" adequately. But Searle's claim is, really, that it can never be explained -- even though he hedges on that bet from time to time.

Searle: "Similarly in the case of the mechanical computer the whole system includes an outside homunculus, and with the homunculus the system is both causal and logical; logical because the homunculus provides an interpretation to the the processes of the machine; and causal because the hardware of the machine causes it to go through the processes."

Again, that's simply wrong. Once the system is in the field, the programmer is doing something else. The user is no more a homunculus than I'm a homunculus when I have a conversation with my wife.

Don.

jack bodie said...

Don:
Obviously a computer cannot emulate everything. It may be able to simulate anything the brain can simulate. I don't confuse models with reality.

I don't assume the brain is a computer.


OK, if you say so. There's a whole lot of assumption that follows, and I'd say all of it needs examining and that no disinterested reader would accept it uncritically.

In particular why should the brain cell's operations be able to be performed by a collection of NAND gates?

Hardy Har Har said...

Searle's point in the CR concerns both syntax and semantics, IMHE. Neither ... are phenomenal, biological -- part of nature. One might call syntax itself ... unique. You don't see logical connectives on trees. So in effect, the computer is a construct. A superior calculator, chessbot, or adding machine, etc. It's doing something humans did, but ... faster and with greater efficiency. Has more memory and processing skills. But it's obviously not a perfect replicant, not thinking, not alive -- merely following routines.

The CPU can do the quantitative ... outperform humans. But not the qualitative, except perhaps via some odd application, and meaning -- apart from just synonyms -- is qualitative, whether "red", Justice, or the meaning of, er, the movie "Seabiscuit". Or, say, pain, even. So it's not that human brains are computers, but that computers are similar to human brains -- though not in all respects. The AI people got the simulation backasswards.

That said, at some point AI might advance to where the gear is creative, and independent--one might say "conscious" in some sense--but ..is that necessarily a positive? It could be something like the Matrix.

kudos for laughing at Herr Doktor Feser and his little gang of wannabe blackshirts, either way.

jack bodie said...

I don't know why Searle believes this and I don't really care.

Searle does not believe this. He's stating the understanding of cognitivists that credit Strong AI (defined as, the only thing required for Mind is a program).

That is, people like you.

jack bodie said...

Don:
Again, that's simply wrong. Once the system is in the field, the programmer is doing something else.

Um, ok. So your great-great-grandfather has nothing to do with your existence, can in no way be said to have caused you, because he's no longer around and you're doing something else?

The causal power comes from the programmer or operator. Insisting otherwise is simply bizarre.

Simply saying another cognitive being doesn't require a homunculus so a computer doesn't is just stupid. That's the issue in question. And your wife obviously isn't a computer though you may be a little man.

jack bodie said...

Comments are disappearing. Either bugs in blogger, or DonJindra deleting posts.

If the latter, poor show. Whatever the case Searle has nothing to worry about from you. But keep at it - change the world.

Don Jindra said...

jack bodie

Searle: "But now contrast that with the brain. In the case of the brain, none of the relevant neurobiological processes are observer relative (though of course, like anything they can be described from an observer relative point of view) and the specificity of the neurophysiology matters desperately."

Not quite.

One day you find you have an infant to raise. But this infant knows nothing. That's unacceptable. So you speak to it. You tell it about eyes, ears, tummy, mom, dad, etc. You insist the infant knows what these symbols mean from your POV. Then one day the infant starts giving responses. Not all responses are pleasing from your observer relative point of view. In fact, not much behavior is pleasing from your observer relative point of view. So you debug the infant. You work hard on this project. You pay attention to details. You adjust to new requirements. You enlist the help of other programmers because this project is big. Eventually things work out and you ship the product nearly on schedule after eighteen years of inputting data.

If computers are defined from an observer relative point of view, so are people. Maybe this is why some people treat their pets like people, and why others treat their cars like people.

Don.

Don Jindra said...

jack bodie,

In particular why should the brain cell's operations be able to be performed by a collection of NAND gates?

My understanding is that neurons have inputs and outputs, are essentially digital in nature, and have relatively few states. If so, simple NAND gates can mimic their behavior. The main issue is how many of them compose the brain and how complex their connectivity is -- much more complex than current computers.
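On that simplified picture -- and I grant it is a simplification; real neurons are messier -- a neuron reduces to a threshold function, and any such function, like any Boolean function, can be composed from NAND gates. A rough sketch:

    def nand(a, b):
        return 0 if (a and b) else 1

    def neuron(inputs, threshold):
        # the digital idealization: fire when enough inputs are active
        return 1 if sum(inputs) >= threshold else 0

    def neuron_from_nands(a, b):
        # a 2-input, threshold-2 "neuron" is just AND, built from NANDs
        n = nand(a, b)
        return nand(n, n)                # NOT(NAND(a, b)) == AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            assert neuron([a, b], 2) == neuron_from_nands(a, b)

Whether that idealization captures everything a real neuron does is, of course, part of what we are arguing about.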

Don.

Don Jindra said...

jack bodie,

I have not deleted any posts. Just post them again. If an exact duplicate happens I'll delete that.

Don.

Don Jindra said...

jack bodie,

Searle does not believe this. He's stating the understanding of cognitivists that credit Strong AI (defined as, the only thing required for Mind is a program).

That is, people like you.


That's what I mean. I don't know why he believes this about people like me because I do not believe it.

Don Jindra said...

jack bodie,

The causal power comes from the programmer or operator. Insisting otherwise is simply bizarre.

We seem to be referring to two different issues.

Simply saying another cognitive being doesn't require a homunculus so a computer doesn't is just stupid. That's the issue in question.

The issue in question doesn't prefer one side over the other. Either both man and machine require a homunculus to explain our current lack of understanding of consciousness, or neither does. The homunculus does not imply machines are unable to understand. It says, simply, that we don't yet know what understanding is, whether in man or machine.

Don.

gcallah said...

"Reading your post again it's apparent that you've missed the point of Searle's thought experiment."

Yeah. It's like reading a thirteen-year-old who is upset at some calculus problem that is over his head and throws a temper tantrum as a result.

Don Jindra said...

Gene Callahan,

Since you don't bother to tell me what I misunderstand, I'll wait to see if you have any idea what you're talking about.