Computer Chess Club Archives


Messages

Subject: Re: chess and neural networks

Author: Christophe Theron

Date: 08:32:03 07/04/03



On July 03, 2003 at 15:44:44, Landon Rabern wrote:

>On July 03, 2003 at 03:22:15, Christophe Theron wrote:
>
>>On July 02, 2003 at 13:13:43, Landon Rabern wrote:
>>
>>>On July 02, 2003 at 02:18:48, Dann Corbit wrote:
>>>
>>>>On July 02, 2003 at 02:03:20, Landon Rabern wrote:
>>>>[snip]
>>>>>I made an attempt to use a NN for determining extensions and reductions.  It
>>>>>was evolved using a GA and kinda worked, but I ran out of time to work on it
>>>>>at the end of school, and I don't have my computer anymore.  The problem is
>>>>>that the NN is SLOW, even using x/(1+|x|) for activation instead of tanh(x).
>>>>
>>>>Precompute a hyperbolic tangent table and store it in an array.  Speeds it up a
>>>>lot.
>>>
>>>Well, x/(1+|x|) is as fast as or faster than a large table lookup.  The
>>>slowdown was from all the looping necessary for the feedforward.
>>>
>>>Landon
>>
>>
>>
>>A stupid question maybe, but I'm very interested in this stuff:
>>
>>Do you really need a lot of accuracy for the "activation function"? Would it
>>be possible to consider a 256-value output, for example?
>>
>>Would the lack of accuracy hurt?
>>
>>I'm not sure, but it seems to me that biological neurons do not need a lot of
>>accuracy in their output, and what's worse, they are noisy.  So I wonder if
>>low accuracy would be enough.
>>
>
>There are neural net models that work with only binary output.  If the total
>input value exceeds some threshold then you get a 1, otherwise a 0.  The
>problem is with training them by back prop.  But in this case I was using a
>Genetic Alg, so no back prop at all - so no problem.  It might work, but I
>don't see the benefit - were you thinking of speed?  The x/(1+|x|) is pretty
>fast to calculate, but perhaps a binary (or other discrete) activation would
>be faster.  Something to try.
>
>Landon



Yes, what I had in mind was optimization by using integer arithmetic only.

If the outputs are always 8 bits, the sum sigma(W*I) (weight times input) can
be computed in 32 bits, since each product W*I takes at most 16 bits.

Actually sigma(W*I) takes no more than 20 bits if each neuron has at most 16
inputs: 16 products of at most 16 bits each stay under 2^4 * 2^16 = 2^20.  By
the same reasoning, 32 bits allows for up to 65536 inputs per neuron.
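
For example (just a sketch with those sizes, nothing I have benchmarked):

    /* Weighted sum of 8-bit inputs and 8-bit signed weights, all integer.
       Each product fits in 16 bits, so a 32-bit sum is safe for up to
       65536 inputs per neuron. */
    long weighted_sum(const unsigned char *in, const signed char *w, int n)
    {
        long sum = 0;
        int i;

        for (i = 0; i < n; i++)
            sum += (long)w[i] * in[i];

        return sum;
    }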

This -maybe- allows for a fast table lookup of the atan function that I often
see used as an activation in ANNs.  I think it can be a little faster than
x/(1+|x|) computed using floating point arithmetic.  Also, and this is even
more important, the sum sigma(W*I) would use integer arithmetic instead of
floating point.
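
Something like this, maybe (just a sketch; the 256-entry table and the scaling
shift are arbitrary choices on my part, not something I have measured):

    #include <math.h>

    static unsigned char act_table[256];   /* atan squashed into 0..255 */

    void init_act_table(void)
    {
        int i;

        for (i = 0; i < 256; i++) {
            double x = (i - 128) / 16.0;   /* map the index to about -8..+8 */
            /* 255/pi is about 81, mapping atan's -pi/2..pi/2 onto 0..255 */
            act_table[i] = (unsigned char)(127.5 + 81.0 * atan(x));
        }
    }

    /* Integer activation: scale the 32-bit sum down to a table index.
       The right shift depends on the number of inputs and on the weight
       scaling, so >>8 here is only a placeholder. */
    unsigned char activate(long sum)
    {
        long idx = (sum >> 8) + 128;

        if (idx < 0)   idx = 0;
        if (idx > 255) idx = 255;
        return act_table[(int)idx];
    }

The table init can stay in floating point since it runs only once; the work
per node is then one integer sum, one shift and one table lookup.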

Maybe I should just do a Google search for this; I'm sure I'm not the first one
to think about this optimization.



    Christophe


