Computer Chess Club Archives


Subject: Re: revolution in computer chess

Author: Vincent Diepeveen

Date: 18:02:50 01/04/06



On January 04, 2006 at 14:16:58, Stuart Cracraft wrote:

>On January 03, 2006 at 20:26:51, Vincent Diepeveen wrote:
>
>>There are at least 30 tricks I'm using in Diep that I hear no one talk about.
>>And please realize, Diep has a very pathetic search compared to certain
>>other engines.
>>
>>The basic concepts are nullmove R=3, transposition tables, and the world's
>>biggest evaluation function.
>>
>>Vincent
>
>
>Then you are in a good position to evaluate what Rybka is, why Rybka is,
>and most importantly how Rybka is.

Why not read some of my postings and those of GCP from the past few days?

Vincent


>You have an extremely large evaluation function - this is rumored of Rybka.
>
>Everybody seems to have nullmove R=2 or 3 these days and transposition tables
>are old hat. Nothing new there.
>
>The newness seems to be the search-style of Rybka with its large evaluation.

We all stand on the shoulders of giants.
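For readers who haven't seen the "nullmove R=3" idea mentioned above, here is a rough sketch. The `Toy` position class and the `Position` interface (`moves`, `make`, `make_null`, `in_check`, `evaluate`) are invented for illustration only; real engines add verification, avoid null moves in zugzwang-prone endgames, and so on.

```python
R = 3  # null-move depth reduction, as in "nullmove R=3"

def search(pos, depth, alpha, beta, allow_null=True):
    """Negamax alpha-beta with null-move pruning (toy Position interface)."""
    if depth <= 0 or not pos.moves():
        return pos.evaluate()
    # Null move: give the opponent a free move. If we still stand >= beta
    # even after passing, real moves will almost certainly fail high too,
    # so cut off immediately at reduced depth (depth - 1 - R).
    if allow_null and depth > R and not pos.in_check():
        score = -search(pos.make_null(), depth - 1 - R, -beta, -beta + 1,
                        allow_null=False)
        if score >= beta:
            return beta
    best = alpha
    for m in pos.moves():
        score = -search(pos.make(m), depth - 1, -beta, -best)
        if score >= beta:
            return beta
        best = max(best, score)
    return best

class Toy:
    """Stand-in position: a running score; each 'move' adds or subtracts 10."""
    def __init__(self, score, side=1):
        self.score, self.side = score, side
    def evaluate(self): return self.side * self.score
    def in_check(self): return False
    def moves(self): return [+10, -10]
    def make(self, m): return Toy(self.score + self.side * m, -self.side)
    def make_null(self): return Toy(self.score, -self.side)  # just pass
```

With a big static advantage, the null move produces the beta cutoff without the full-width search ever running.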

>I would hazard a guess at the search style being key in Rybka.

>He is searching with probabilities, conspiracy-number search, something like
>that. McAllester didn't come up with a great practical outcome for CNS and
>I don't think anyone else has.

What in his assembly code makes you think that?
He's using PVS nowadays according to Chrilly, so that refutes your CNS theory :)

Do you realize I wrote a CNS (conspiracy number search) version of Diep years
ago?

It played like major shit. It couldn't even solve simplistic tactics,
and its search was very inconsistent. A major problem was getting mainlines
out of it.

CNS has huge overhead for things that, in normal depth-limited alpha-beta,
give a direct cutoff without much overhead.

The theoretical problem of CNS is that it searches along your evaluation
function: what your evaluation function already understands, you search
deeper; what it doesn't understand, it cuts off.

So CNS never lets the search correct the evaluation function.

History pruning of course has a similar disadvantage.
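The bias described above can be shown on a tiny one-player toy tree (all node names and numbers here are invented for illustration): an expansion strategy guided purely by the static eval never enters the branch the eval mis-scores, so no amount of extra expansion corrects the blind spot.

```python
# Toy tree: node -> (static_eval, children). Leaf evals are the "truth".
# The static eval mis-scores branch "b": it looks bad (-1) but hides a win.
TREE = {
    "root": (0, ["a", "b"]),
    "a": (3, ["a1", "a2"]),       # eval likes this branch...
    "a1": (2, []), "a2": (1, []), # ...but it only leads to small gains
    "b": (-1, ["b1"]),            # eval dislikes this branch...
    "b1": (9, []),                # ...which hides the real win
}

def eval_guided_best_line(node, budget):
    """Greedy best-first: always descend into the child the eval likes most."""
    line = [node]
    while budget > 0:
        _, children = TREE[node]
        if not children:
            break
        node = max(children, key=lambda c: TREE[c][0])
        line.append(node)
        budget -= 1
    return line

def full_search_value(node):
    """Exhaustive search to the leaves: finds the win the greedy walk misses."""
    score, children = TREE[node]
    if not children:
        return score
    return max(full_search_value(c) for c in children)
```

No matter how large the budget, the eval-guided walk stays inside branch "a", while exhaustive search finds the 9 hidden behind "b".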

>So what is this.
>
>If I use probabilities for evaluation at terminal nodes, mapping say 5
>pawns to a certain win and grading it down, similar to a sigmoid or tanh,

What do neural-net functions have to do with CNS?

>quite similar to what we do for the learning function in temporal differences
>(which works fine by the way),

TD learning is the biggest nonsense on the planet of course.

Get Fruit 2.1, set all its parameters to 0, tune them with TD, and it will play
hundreds of rating points worse.

Don't cheat by limiting the domains of certain parameters, nor by using their
initial startup values.
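For context, the TD learning being dismissed here is the TD(0) bootstrap rule: V(s) is nudged toward r + V(s'). A minimal sketch on the classic random-walk toy problem (nothing engine- or Fruit-specific; all constants are arbitrary choices):

```python
import random

random.seed(0)

# Classic 5-state random walk: states 0..4, step left/right at random;
# stepping off the right edge pays reward 1, off the left edge pays 0.
# The true state values are 1/6, 2/6, 3/6, 4/6, 5/6.
ALPHA, EPISODES = 0.05, 10000
V = [0.5] * 5  # value estimate per state

for _ in range(EPISODES):
    s = 2  # every episode starts in the middle
    while True:
        s2 = s + random.choice((-1, 1))
        if s2 < 0:                        # left terminal, reward 0
            V[s] += ALPHA * (0.0 - V[s]); break
        if s2 > 4:                        # right terminal, reward 1
            V[s] += ALPHA * (1.0 - V[s]); break
        V[s] += ALPHA * (V[s2] - V[s])    # TD(0): bootstrap from successor
        s = s2
```

Whether this transfers to tuning hundreds of chess evaluation parameters is exactly the point under dispute in this thread.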

>then my evaluation is probabilistic.
>How does this help me in search? Why is it better? I should re-read
>McAllester's paper, but it was not probabilistic as I recall, though it's
>been many, many years.
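The mapping Stuart describes ("5 pawns to a certain win, graded down with a sigmoid") looks roughly like this. The 400 scale is an arbitrary choice echoing the Elo logistic curve, not anything known about Rybka:

```python
def win_prob(cp, scale=400.0):
    """Map a centipawn score to a win probability with a logistic curve.

    scale is a free parameter: pick it so a large material edge
    (say +500cp, "5 pawns") maps to a near-certain win.
    """
    return 1.0 / (1.0 + 10.0 ** (-cp / scale))
```

By construction the curve is symmetric (win_prob(x) + win_prob(-x) == 1) and saturates, so it grades big advantages down toward 1 instead of letting them grow without bound.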

CNS was an original attempt, but it failed completely in all respects.

P.Conners has been buried very, very deep because of that.
Whenever I emailed him a FEN file or something to run on Conners, in order to
compare its output with my own CNS implementation, I never got an answer back.

>Probabilistic evaluation is something to think about as the rumor is that
>the top program (Rybka) is using it, besides its large evaluation.

You know, I am more and more amazed at how many irrelevant suggestions a
single person can post in one posting.

>I understand your point that when enough people do it, it leaks out. Commercial
>is always ahead of the rest, no question about it.

>But the point of this board, one at least, is to help accelerate the process.
>What harm is it to anyone?
>
>Does anyone actually make a living for very long on computer chess coding?
>I think not.

You sure never will.

Vincent

>Greetings,
>
>Stuart




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.