Author: Christophe Theron
Date: 08:15:40 03/17/01
On March 17, 2001 at 03:55:27, Robin Smith wrote:
>On March 16, 2001 at 23:28:02, Christophe Theron wrote:
>
>>On March 16, 2001 at 22:29:00, Robin Smith wrote:
>>
>>>On March 15, 2001 at 00:20:32, Christophe Theron wrote:
>>>
>>>>On March 14, 2001 at 18:49:25, Djordje Vidanovic wrote:
>>>>
>>>>>On March 14, 2001 at 15:04:28, Peter McKenzie wrote:
>>>>>
>>>>>>On March 14, 2001 at 14:07:36, Christophe Theron wrote:
>>>>>>
>>>>>>>On March 14, 2001 at 13:03:27, José Antônio Fabiano Mendes wrote:
>>>>>>>
>>>>>>>> http://personalidentity.tripod.com/id27.htm
>>>>>>>
>>>>>>>
>>>>>>>Part 2 is real bullshit. The author tries to demonstrate that computers do not
>>>>>>>"know" chess, and he actually demonstrates that he does not "know" computers and
>>>>>>>that he does not "know" the human brain.
>>>>>>>
>>>>>>>I have heard more meaningful comments in a pub, even very late at night.
>>>>>>
>>>>>>Ah yes, Searle's argument is clearly refuted by the well-known reasoning:
>>>>>>'Christophe says it is bullshit, therefore it is bullshit'. I happen to
>>>>>>disagree with Searle's Chinese Room argument, but I don't think it's as simple
>>>>>>as just saying it is a pile of crap. In fact, I think he makes some very good
>>>>>>points.
>>>>>>
>>>>>>Peter
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Christophe
>>>>>
>>>>>
>>>>>I tend to agree with Peter. Searle's argument is based on a simple stipulation
>>>>>that the coded instructions of a program (Chinese character manipulation program
>>>>>in the given case) are insufficient to account for the meanings of the symbols
>>>>>or of the set of sentences generated with their help. This further implies
>>>>>that functional, or computational, explanations are insufficient to account for
>>>>>referential semantics (plain English: reference) and, as such, for
>>>>>intentionality, which is the hallmark of humans.
>>>>>
>>>>>This kind of argument can be criticised, but not dismissed lightly. I've been
>>>>>having problems for quite some time with it :-))
>>>>
>>>>
>>>>I don't.
>>>>
>>>>Watch a neuron or a transistor very closely and tell me why the former is
>>>>carrying meaning and the latter is not.
>>>
>>>Clearly both are carrying meaning.
>>>
>>>>Then tell me the meaning of "meaning". Or you could start with this maybe?
>>>
>>>Meaning is the same thing as information.
>>>
>>>>And what's this "intentionality" stuff? For me it sounds like a word invented by
>>>>marketing people trying to hype for human's superiority.
>>>
>>>I don't think humans are necessarily superior. But neither do I think that we
>>>understand what makes us human. In particular I don't believe that the problem
>>>of consciousness has ever been explained.
>>
>>
>>What is the problem?
>
>The problem is that there has been no satisfactory explanation so far for the
>very existence of consciousness. We KNOW that it DOES exist, but we don't know
>WHY. Nothing that has been observed in the outside world and no scientific
>experiment that has been performed would lead one to believe that such a thing
>as consciousness exists. Our only experience of consciousness is subjective,
>and our only reason to believe in its existence is this subjective
>experience. The only reason for me to believe that you are conscious is because
>I know, from my own subjective experiences, that I am conscious and so I assume
>that you, since you are like me in many ways, are also like me in that you are
>conscious. But that is the ONLY reason I believe you are conscious.
>Understanding how neurons work does nothing to explain the phenomenon of
>consciousness. Why is there any such thing as subjective experience at all?
>
>Think of the example of a totally color blind person. Assume they could only
>see in black and white. They could study the eyes of people who can see colors,
>they would know that the neurons respond differently to different wavelengths of
>light and that this information could be used by those possessing color vision,
>but this person would not EVER know what it is like to EXPERIENCE color vision.
>
>People studying emotions can say that when a person is afraid, such and such
>hormone is released which raises the pulse rate, increases sweat production,
>stimulates certain brain centers, etc, etc. But none of this tells us what it
>is LIKE to BE afraid. We only know about this part because we personally
>experience it.
>
>If "the mind is what brains do", as another poster put it, then why do some brain
>activities not lead to conscious experience? Not that I disagree with the
>statement "the mind is what brains do" ... in fact I believe it is probably more
>or less correct. But it does nothing to explain why large portions of the
>brain, which receive and process signals in a manner similar to those portions
>of the brain that appear to be connected with consciousness, are not also
>conscious. Apparently SOME of what brains do does not lead to conscious
>experience at all. What is the difference?
>
>I could go on at much greater length, but the topic is only remotely connected to
>computer chess. The connection, I guess, is that the question of "is a chess
>computer/program intelligent?" (and by the way I read and agree with all of your
>posts on that topic) is similar to the question "is a chess computer/program
>conscious?". This last question seems much harder to answer, and perhaps more
>interesting because it is harder. Since I don't know why I am conscious, and
>since I don't know why some parts of my brain seem to have little if anything to
>do with consciousness, speculating about computer consciousness is ..... well,
>speculative.
>
>Robin Smith
I'm extremely puzzled because I still do not see where the problem is.
My problem is that I'm not even sure the concept of "consciousness" is of any
use. You can't define it, and it has no purpose. It is not even useful to
explain anything (is it?).
If this concept is different from "intelligence", then it has no purpose at all.
If it is the same thing as "intelligence", there is no need for two distinct
words.
Maybe somebody is going to invoke Descartes and mention something like
"self-awareness". But being aware of itself is not such a wonder. It can either
be the result of a high-level information-processing entity exploring its
universe and noticing itself as a part of this universe, or it can be a
"built-in" feature of this entity, as is the case for most higher animals
on this planet (the instinct of self-preservation being the most basic
version of self-awareness).
Maybe this is going to shock some people, but to me it sounds just like another
useless concept. Something we are pleased to think about, like the concept of
"soul", but a totally useless one from a scientific point of view, at least in
the current state of our knowledge.
Christophe