Computer Chess Club Archives



Subject: Re: a solution for fritz5 autoplayer problem-is it possible?

Author: Don Dailey

Date: 15:01:22 06/04/98



>>The idea is doable.  It would require a machine to punch the keyboard
>>and a camera to watch the display and decode the moves.  But this is
>>just overkill.  Also the program would have to be "trained" to
>>operate each program it encountered.   We have people at our lab
>>here at MIT who could do this (but probably wouldn't.)
>>
>>- Don
>
>Of course there would still be the matter of the Three Laws of Robotics:
>
>1) A robot will not harm a human being nor allow a human being to be
>harmed through inaction (the actual wording may be different).
>2) A robot will obey all commands of a human being inasmuch as they do
>not conflict with Law no.1.
>3) A robot will protect its own existence inasmuch as doing so does not
>conflict with Laws no.1 and/or no.2.
>
>Suppose the robot receives the move and is supposed to relay it, but in
>a flash of understanding it realizes that the move is probably losing
>and the programmer is nearby and possibly realizes it now or will do so
>soon, so the robot cannot obey Law no.2 because so doing would be
>disobeying Law no.1 (the programmer in a fit of despair might throw
>himself out the nearest window, or if the locale is on the first floor,
>drive to the nearest bridge and launch himself from there), so the robot
>has no choice but to throw ITSELF out the window (or the nearest
>bridge). Consequence: All such robots are condemned to the newly
>baptized "Lemming Syndrome". Sorry Uri, it was a good idea though.
>
>                                 Albert Silver


I just checked the web and was surprised to see 4 laws now.  Perhaps
Asimov added this one later:

0. A robot may not injure humanity or, through inaction, allow
   humanity to come to harm.

1. A robot may not injure a human being, or through inaction, allow a
   human being to come to harm, except where that would conflict with
   the Zeroth Law.

2. A robot must obey the orders given to it by a human being, except
   where that would conflict with the Zeroth or First Law.

3. A robot must protect its own existence, except where that would
   conflict with the Zeroth, First or Second Law.


But that changes everything.  Since robots are designed to serve
humanity, the robot must look at the bigger picture of how its
testing services affect the computer chess community.  Throwing
itself out the window would have a negative impact on many people,
so the robot would be relieved to find it could honor the 3rd law
in good conscience.

This is a dilemma that I'm sure Captain James T. Kirk could reason
out with the robot.   Execute your prime directive!

- Don


Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.