May 24, 2009

Outline against Searle's Chinese Room Argument

I had this argument scribbled on a sheet of paper from when I engaged with "Minds, Brains, and Programs" by John Searle. Though initially just an impression of the paper's overall stance, it eventually developed into a rough counterargument of its own. I've supplied full sentences where my original notes had only keywords and phrases. I haven't written this up as a full essay, and I don't mind if someone else does. My guess is that someone else has already come up with an argument similar to mine, so this independent discovery doesn't amount to much in terms of philosophical contribution.

"On the Abuse of Intelligence"

Problem: Analogous Chinese Room Argument -- Am I an intelligent speaker, or just a "symbol monger"?

"Symbol mongering" will refer to the act of following preprogrammed rules to translate one stimulus into another -- languages into languages, sensations into sentences, etc.
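For concreteness, this kind of rule-following can be sketched in a few lines of code. The rule table and words below are a hypothetical toy of my own, not any real translation system; the point is only that the program maps symbols onto symbols without anything we'd call understanding.

```python
# A minimal sketch of "symbol mongering": following preprogrammed rules to
# map one string of symbols onto another. The rule table is a hypothetical
# toy, not a real translation system.
RULES = {"gato": "cat", "sobre": "on", "estera": "mat"}

def monger(symbols):
    """Translate word by word via rule lookup; unknown symbols pass through."""
    return " ".join(RULES.get(word, word) for word in symbols.split())

print(monger("gato sobre estera"))  # cat on mat
```

The program never associates "gato" with any cat; it only swaps one squiggle for another, which is just what the man in Searle's room does with Chinese characters.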

If I'm an intelligent speaker on the basis of translation alone, then this "intelligence" rules out skills of creativity and innovation, which are generally used as markers for intelligence as the term is commonly employed. (e.g., "He was smart to solve the puzzle in so few moves.")

Well, perhaps I'm just a symbol monger. But what of machines that advance to match our human behaviors, even those of innovation and creativity (among others)? It would not be impossible for machines to acquire human-like precision in detecting their surroundings, to make proper categorical (and implicitly terminological) distinctions on the basis of those "perceptions" (if we're willing to grant robots this capacity; if not, coin another word for it), and even to apply that new data as additional programming for their own actions.

If these machines are comparably successful to humans in their mongering, are they not "intelligent"?

Well, what sort of criteria are there for intelligence?

Anatomical? -- If this were the case, then denying machines intelligence is just anthropic bias. If an eyeball and a sophisticated device work with comparable efficacy in their intended function (i.e., seeing), then it sounds like we're just saying, "That robot doesn't count as smart. He wasn't a product of human procreation!" to which any A.I. scientist would retort, "So what? That's not what we do, anyway. This isn't bioengineering."

Metaphysical? -- Are we still making "mind" a special case for bio-organisms? Well, not anatomically, so what about talk of mind as something the brain can't be, which a synthetic device couldn't mimic? But this leaves that horrible problem of definition at hand. If we're just going to define a mind as some non-physical notion, then what of the notion of non-physicality? Metaphysicians don't strike me as very clear on this; they seem more like they're scratching the surface of a hollow fruit.

Behavioral criteria? -- Then what discriminates symbol mongering from intelligence?

Criticism: Searle's talk misuses the term intelligence. Let's take his intuitions. I don't know Russian, and if given a manual for translation from Russian to English, assuming I instantly forgot the rules once I used them, I still wouldn't know Russian. But if, at any point, I memorize any of the rules of translation, I instantly know the other language, just as I know other languages. But how the heck do I determine that? Well, in practice, mainly. I go to Russian speakers and utter a constative sentence (maybe ostending a bit to compensate for lack of fluency); if they react in a way that indicates understanding, and we continue a conversation assuming the presence of that knowledge, then it seems fair to say that we both know the meaning of the utterance. Performatives work in a very similar way, and the behavioral criteria for establishing understanding may need clarifying in some instances, but such things are determinable from behavior.

Let me share some of my intuitions.

Let me give you four sentences in four different languages:

  • Is a cat on the mat?
  • ¿Está un gato sobre la estera?
  • 一隻貓在墊子上嗎?
  • 111000101010000101001001010111.........?

Are these sentences different from each other? Clearly, yes! Look at all the differences in squiggles, particularly in that third one! Sheesh! Now put these questions to an English speaker, a Spanish speaker, a Chinese speaker, and a preprogrammed, sentient, symbol-mongering robot, each in plain view of a cat on a mat (and yeah, make sure they're honest). What responses will we get? "One is."/"Está."/"在啊."/"01..." and statements of the sort.
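The symbol-mongering robot in this thought experiment can likewise be sketched as a lookup from a detected state of the world to a canned reply. Everything here -- the language tags, the stock responses -- is an illustrative assumption of mine, not an account of how such a robot would actually be built.

```python
# Toy version of the preprogrammed robot: given a language tag and a
# "perception" (whether a cat is on the mat), emit the stock reply.
# All tags and responses are illustrative assumptions.
RESPONSES = {
    ("en", True): "One is.",
    ("es", True): "Está.",
    ("zh", True): "在啊.",
    ("bin", True): "01...",
    ("en", False): "None is.",
}

def answer(language, cat_on_mat):
    """Map a detected state of the world to the canned reply for a language."""
    return RESPONSES[(language, cat_on_mat)]

print(answer("es", True))  # Está.
```

Behaviorally, its answers match the human speakers'; the question is whether anything about the mechanism, rather than the behavior, disqualifies it as intelligent.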

A zinger: How do you (the reader) know that I know the four languages above, and didn't just memorize four sentences in four different languages? It can't be because I might be able to produce more sentences in these languages, because that simply pushes the question back to whether I knew n sentences or just memorized the sounds without knowing what they mean at all. Is it because I was able to show that they carry analogous reference to some event? If that's the issue, then what makes accuracy in translation so different from intelligence with languages? It couldn't be innovation of original sentences, since I could just as well learn syntax markers and proper word orders for these languages, and even generate responses to others' statements simply by applying common traits of parts of speech.

And what is Searle to say -- that I can translate languages, but don't really know them myself? What else is a language besides an ability to translate some bit of data from one domain into another -- a crafty, collective memorization wherein "use of language" means "performing an action that enables something else to associate that action with its respective stimuli"?

Conclusion: Intelligence only makes sense when contrasted with stupidity, and both have to do with behavioral efficacy. Since semantics appears adequately interpretable as just another programmed translation from sensory recording to symbolic representation, I'm left with severe doubts over the presence of "consciousness" beyond the presence of perceptiveness, memory, and self-interested motivation.

Not pristine, I know, but I've found this quite a workable framework for a full and clear objection to Searle's Chinese Room Argument. My questions now are whether someone else has already raised these objections, and whether there are good objections to my objections, to that person's objections, to that person's objections, et cetera.

Take care...
