Computer Chess Club Archives

Subject: Re: chess and neural networks

Author: Christophe Theron

Date: 00:22:15 07/03/03


On July 02, 2003 at 13:13:43, Landon Rabern wrote:

>On July 02, 2003 at 02:18:48, Dann Corbit wrote:
>>On July 02, 2003 at 02:03:20, Landon Rabern wrote:
>>>I made an attempt to use a NN for determining extensions and reductions.  It
>>>was evolved using a GA and kinda worked, but I ran out of time to work on it
>>>at the end of school and don't have my computer anymore. The problem is that
>>>the NN is SLOW, even using x/(1+|x|) for activation instead of tanh(x).
>>Precompute a hyperbolic tangent table and store it in an array.  Speeds it up
>>a lot.
>Well, x/(1+|x|) is as fast as or faster than a large table lookup.  The
>slowdown came from all the looping needed for the feedforward pass.

Maybe a stupid question, but I'm very interested in this stuff:

Do you really need a lot of accuracy for the "activation function"? Would it be
possible to use a 256-value output, for example?

Would the lack of accuracy hurt?

I'm not sure, but it seems to me that biological neurons do not need a lot of
accuracy in their output, and worse still, they are noisy. So I wonder whether
low accuracy would be enough.
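The 256-value idea above can be sketched as an 8-bit quantized activation. Everything here (the function names, the scaling onto 0..255) is an illustrative assumption, not something from the thread:

```c
#include <math.h>
#include <stdint.h>

/* Hypothetical 8-bit activation: map x/(1+|x|), which lies in (-1, 1),
   onto 256 discrete levels. Worst-case quantization error is about
   1/127.5, i.e. under 1% of the output range. */
static uint8_t activate_u8(double x)
{
    double y = x / (1.0 + fabs(x));      /* in (-1, 1) */
    return (uint8_t)((y + 1.0) * 127.5); /* 0..255 */
}

/* Decode back to a double when feeding the next layer. */
static double decode_u8(uint8_t q)
{
    return q / 127.5 - 1.0;
}
```

With 8-bit outputs the whole layer could in principle be driven by table lookups and integer arithmetic, so the question of whether the lost precision hurts playing strength is the interesting one.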



Last modified: Thu, 07 Jul 11 08:48:38 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.