Computer Chess Club Archives


Subject: Re: The Chess Room Argument [by John R. Searle]

Author: Christophe Theron

Date: 20:33:10 03/20/01


On March 20, 2001 at 19:16:20, Robin Smith wrote:

>On March 20, 2001 at 01:50:10, Christophe Theron wrote:
>
>>That's it.
>>
>>Let me give another example, for which I have fought here for quite some time.
>>
>>If a player A is better than player B at fast time controls, I just begin to
>>assume that player A will be better than player B at longer time controls.
>>
>>From that point, if I notice that it might not always be the case, I might work
>>to provide evidence that it is not the case.
>>
>>But my first idea is to keep things simple and see if it works. So I would first
>>assume that time controls do not matter. I change my mind and add complexity to
>>my model only if I can prove that my "simpler" model does not work.
>>
>>I do that because I noticed a long time ago that general concepts are much more
>>powerful. A general concept (or "idea" or "principle") is one you can apply in a
>>large number of cases. So I try to keep my ideas as general as possible. It
>>hurts me when I have to add special cases to an otherwise "clean" (simple)
>>model.
>
>Christophe, I believe your assumption here is generally correct.  The principle
>even has a name, "Occam's razor", named after the philosopher and theologian
>William of Occam, who lived about 700 years ago.
>
>But I do have good evidence that in at least one case, Fritz5.32, it is not.  I
>am convinced Fritz5 is much weaker than comparable programs when playing at
>super long time controls.  My guess is this is probably due to heavy use of root
>processing.


It might be, but you understand that my point is not to discuss whether it is or
not.



>  But starting first with the simplest assumptions makes excellent
>sense.
>
>>I do not know what to do with the concept of "consciousness". I don't need it
>>to cover a hole in the big picture of "intelligence", and it explains nothing
>>anyway. Worse: those who talk about it find it mysterious and impossible to
>>explain.
>
>Correct.  It is similar to trying to explain vision to a person blind since
>birth: you can explain the mechanics, but not what it is like, on a subjective
>level, to be able to see a beautiful flower.  Unless you share the experience,
>just explaining the mechanics doesn't explain everything.


Let me try something about "consciousness". I'm trying hard to understand what
you mean; I don't want to look too stubborn. :)

Could it be that what we call "consciousness" is the part of the information
processing entity (notice how I avoid the term "intelligent entity") which
monitors the rest of the entity?

If I understand the word "consciousness" correctly, being conscious means being
able to "realize" that I am thinking about something, or that I am saying this
or that, or that I am currently looking at a flower. True?

In this case, "I" is the part of the entity which is not conscious (or less
conscious?), and the "consciousness" is the part that collects only the higher
level information. The consciousness continually monitors the state of "I".

The whole process of converting the signal received by a light sensor (an eye, a
camera...) into the conclusion that a flower is in front of the sensor is out of
the reach of "consciousness". "I" do it, but consciousness is not involved at
that stage.

When the signal is finally converted into the symbol "flower", the "flower"
connection is activated, which in turn activates a number of related concepts,
like a control panel where some lights suddenly turn on. The consciousness only
sees this control panel, and the entity then *thinks* "I see a flower".

BTW this means there would also be two kinds of "memory". What happens can be
memorized in an "unconscious" memory (maybe a kind of memory for low level
events), and also in the consciousness, as a sequence of high level events.
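
In the same toy sketch, the two memories would simply be two separate logs
(again, all names invented):

low_level_log = []    # "unconscious" memory: raw, low level events
conscious_log = []    # conscious memory: what the monitor observed

def perceive(raw_data, panel, monitor):
    low_level_log.append(raw_data)       # recorded in any case
    panel.activate(low_level_perception(raw_data))
    thought = monitor.observe(panel)
    if thought is not None:
        conscious_log.append(thought)    # recorded only when conscious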

Notice that by this "definition", an entity can "lose consciousness" without
being totally unable to respond to external signals. What will happen is that
the entity responds only with its lower level processing abilities, and does not
"record" the sequence of events ("I" responding without being conscious).
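
Continuing the toy sketch, "losing consciousness" is just taking the monitor out
of the loop:

def perceive_unconscious(raw_data, panel):
    low_level_log.append(raw_data)       # low level processing goes on,
    panel.activate(low_level_perception(raw_data))
    # ...and the entity can still react through its lower level
    # abilities, but nothing reaches conscious_log: afterwards "I"
    # cannot remember this sequence of events.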

Wow, that sounds terribly close to what can happen to a human being.

I guess my reasoning sounds very naive to people who have studied psychology. I
feel like I am re-inventing the wheel. :)

Anyway, if this model of "consciousness" makes some sense, then I don't see any
reason why it could not be implemented in transistors rather than in neurons...
It is possible, because this definition of consciousness does not mention the
level of complexity of the symbolic processing unit which constitutes the
consciousness.

Notice BTW that by this definition, a high level processing unit (or symbolic
processing unit) can be called a "consciousness" only when it is connected to
the "real world" through a set of interfaces directly connected to this world.
Here I want to emphasize that - maybe - a consciousness cannot exist without a
world to which it is connected.

OK, now I think I have gone too far. :)




>> So in short it raises questions you cannot answer, or that you do not
>>need to answer! I fail to see how it helps me to understand anything about
>>"intelligence" (a concept supposed to be closely related).
>>
>>So I just drop it for now...
>
>Let me know if you want to take it up again later.


I hope I have not gone on too long. :)



    Christophe


