Computer Chess Club Archives



Subject: Re: a solution for fritz5 autoplayer problem-is it possible?

Author: Albert Silver

Date: 15:12:10 06/04/98



On June 04, 1998 at 18:01:22, Don Dailey wrote:

>>>The idea is doable.  It would require a machine to punch the keyboard
>>>and a camera to watch the display and decode the moves.  But this is
>>>just overkill.  Also, the program would have to be "trained" to
>>>operate each program it encountered.   We have people at our lab
>>>here at MIT who could do this (but probably wouldn't).
>>>
>>>- Don
>>
>>Of course there would still be the matter of the Three Laws of Robotics:
>>
>>1) A robot will not harm a human being nor allow a human being to be
>>harmed through inaction (the actual wording may be different).
>>2) A robot will obey all commands of a human being inasmuch as they do
>>not conflict with Law no.1.
>>3) A robot will protect its own existence inasmuch as doing so does
>>not conflict with Laws no.1 and/or no.2.
>>
>>Suppose the robot receives the move and is supposed to relay it, but in
>>a flash of understanding it realizes that the move is probably losing,
>>and that the programmer is nearby and either realizes it now or will do
>>so soon. The robot cannot obey Law no.2, because doing so would mean
>>disobeying Law no.1 (the programmer, in a fit of despair, might throw
>>himself out the nearest window, or, if the locale is on the first floor,
>>drive to the nearest bridge and launch himself from there), so the robot
>>has no choice but to throw ITSELF out the window (or off the nearest
>>bridge). Consequence: all such robots are condemned to the newly
>>baptized "Lemming Syndrome". Sorry Uri, it was a good idea though.
>>
>>                                 Albert Silver
>
>
>I just checked the web and was surprised to see 4 laws now.  Perhaps
>Asimov added this one later:
>
>0. A robot may not injure humanity or, through inaction, allow
>   humanity to come to harm.
>
>1. A robot may not injure a human being, or, through inaction, allow a
>   human being to come to harm, except where that would conflict with
>   the Zeroth Law.
>
>2. A robot must obey the orders given to it by a human being, except
>   where that would conflict with the Zeroth or First Law.
>
>3. A robot must protect its own existence, except where that would
>   conflict with the Zeroth, First or Second Law.
>
>
>But that changes everything.  Since robots are designed to serve
>humanity, the robot must look at the bigger picture of how its
>testing services affect the computer chess community.  Throwing
>itself out the window would have a negative impact on many people,
>so the robot would be relieved to find it could honor the Third Law
>in good conscience.
>
>This is a dilemma that I'm sure Captain James T. Kirk could reason
>out with the robot.   Execute your prime directive!
>
>- Don

I don't know about Law no.0, but I think that instead you'd have a robot
that just sat around doing nothing, wondering what its actions would do
to the rest of us carbon-based beings. Or worse! It could decide that
chess is a big waste of time, and that all the people promoting the
perpetuation of this activity are not contributing to the well-being of
mankind, so it would indeed play the move, knowing that the programmer
will probably defenestrate himself. Thus any chess-loving programmer or
engineer who decided to actually build such a robot would in fact be
revealing that he has suicidal tendencies or no survival instinct.

                                 Albert Silver
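
The four laws behave like a strict priority ordering: a lower law binds
only where it doesn't conflict with a higher one. For fun, here is a
minimal sketch of that ordering applied to the thread's dilemma, in
Python; all of the action names and predicates below are invented for
illustration, not taken from any actual autoplayer:

    def choose_action(candidates, laws):
        # Each law, from highest priority down, narrows the candidate set.
        # A law that would leave the robot with nothing to do is waived
        # rather than allowed to override the laws above it.
        for law in laws:
            narrowed = [a for a in candidates if law(a)]
            if narrowed:
                candidates = narrowed
        return candidates[0] if candidates else None

    # Toy model of the dilemma discussed above.
    actions = ["relay the losing move", "refuse the order",
               "jump out the window"]

    laws = [
        lambda a: a != "jump out the window",    # Law 0: robot suicide harms
                                                 # the computer chess community
        lambda a: a != "relay the losing move",  # Law 1: spare the programmer
        lambda a: a == "relay the losing move",  # Law 2: obey the order given
        lambda a: a != "jump out the window",    # Law 3: protect its existence
    ]

    print(choose_action(actions, laws))          # prints: refuse the order

With only the original three laws, Law no.1 already blocks the move; it
is the waiving step, motivated here by the Zeroth Law, that keeps an
unsatisfiable order from escalating into a trip to the nearest bridge.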


