Author: Albert Silver
Date: 10:04:38 05/13/02
On May 12, 2002 at 14:42:58, Janosch Zwerensky wrote:
>
>>I'd suggest reading Asimov's many Robot short stories developing his Laws of
>>Robotics.
>
>In principle, Asimov's laws of robotics are fine, but I am not sure that we
>could implement them if we some day built an actual machine of human-like
>intelligence.
>The reason for this is that I think that there is no practical way of
>hand-coding all the knowledge the robot would need to even *understand* the
>three laws, let alone follow them with any degree of accuracy, which means it
>would have to build the mental structures needed for comprehending such things
>essentially by learning. Once the robot *could* understand the three laws I'd
>predict that *we* would not understand the mind it had developed in sufficient
>detail any more to know exactly what we were coding into it when we tried to
>imprint its mind with the three laws...
>In addition to that, that learning phase would likely not work anyway if the
>robot did not have a different, easier-to-implement system of motivations (than
>the one induced by the three laws) in the first place. I don't think it is
>likely that one could turn these off easily when it's "grown up" without
>damaging the robot's mind.
Asimov dealt with many of these issues in detail over time. Naturally, the interpretation and depth of understanding of the laws depend very much on the depth of general understanding of the AI in question.
Law One: don't harm humans.
Ok, if it is taken at its most basic level, it means don't hit me, electrocute
me, etc. If you wish to take that to the level of not hurting my feelings, etc.,
then the machine must be capable of understanding at that level to do that.
Asimov's famous story "Liar!" deals with exactly this.
Law Two: Obey humans (I'm cutting out the conditions to keep it simple)
Doesn't sound too hard, does it? Of course, if we get to the point where an
order could come from something other than a human, then conditions to identify
a human will have to be put in place. Roger Zelazny once wrote a pretty nice
short story (pretty long, and it won an award, I believe) dealing with
distinguishing a human command from a non-human one.
Law Three: protect itself.
We tend to think of this in very human terms, but numerous programs already
deal with exactly this: they identify glitches or problems that could cause the
system to freeze, and take steps. I just see this being expanded on.
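To make that concrete, here is a minimal sketch of the kind of self-protecting
software I mean: a watchdog that notices when the monitored loop has stopped
reporting in and then "takes steps" (log, restart, fail over). The names
(Watchdog, heartbeat, on_hang) are my own illustration in Python, not from any
real robot or library:

import threading
import time

class Watchdog:
    # Hypothetical sketch: a background thread that calls on_hang()
    # if the protected code stops sending heartbeats in time.
    def __init__(self, timeout_s, on_hang):
        self.timeout_s = timeout_s
        self.on_hang = on_hang          # recovery action: log, restart, etc.
        self.last_beat = time.monotonic()
        self.lock = threading.Lock()

    def heartbeat(self):
        # The monitored loop calls this to prove it is still alive.
        with self.lock:
            self.last_beat = time.monotonic()

    def _monitor(self):
        while True:
            time.sleep(self.timeout_s / 2)
            with self.lock:
                stalled = time.monotonic() - self.last_beat > self.timeout_s
            if stalled:
                self.on_hang()

    def start(self):
        # Daemon thread: dies with the process instead of blocking exit.
        threading.Thread(target=self._monitor, daemon=True).start()

To use it, you would construct Watchdog(5.0, some_restart_routine), call
start(), and sprinkle heartbeat() calls through the loop being protected. A
"Law Three" for a real robot would be this same idea scaled up enormously.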
Albert