Computer Chess Club Archives


Search

Terms

Messages

Subject: Re: The man behind Rybka and his thoughts as posted here on CCC.

Author: Vasik Rajlich

Date: 01:22:53 01/02/06

Go up one level in this thread


On January 01, 2006 at 05:19:12, Harald Lüßen wrote:

>Sometimes ideas are in the air and many people try to use them.
>In this postings I find some of my own ideas or in other words
>I can imagine what was intended or meant with Vasik's descriptions.
>My problem is, there are people who can implement them and others
>who cannot (me).
>
>On December 31, 2005 at 19:24:30, Mike Byrne wrote:
>
>>Vasik Rajlich , chess programming superstar in the making, has offered many of
>>his ideas on chess programming here on CCC - prior to his release of Rybka.
>>
>>What follws are his words extracted from the CCC archives.  Interesting stuff as
>>it provides clues into how he programmed Rybka.
>>
>>=========================================================================
>>"Interesting. You could also prepare some data structures in the positional
>>search and use them in the tactical search, a sort of root-node pre-processing
>>repeated in mid-search.
>...
>>Vas
>>"
>
>I have heard of pre-processing piece square tables in the root position
>depending on the opening or the position of (center) pawns. My idea is
>to expand this to the middlegame. I don't know whether it works.
>When I hat a few minutes look at a Rybka.exe dump and saw lots of
>tables I wondered, whether Vas had done what I just dreamt about.
>
>My idea is an idea database or midgame database:
>
>Get the information of pawn structure, king position and raw material
>for both sides in very few bits as a key to further information.
>2 x king pos = 2 x 5 bit = 10 bit (a1b1, c1d1, ..., g1h1, ..., g8h8)
>2 x pawn pos = 2 x 24 bit = 48 bit (a2a3, a4a5, a6a7, ..., h6h7)
>2 x material = 2 x 3 bit = 6 bit (0=pawns only, 1=B/N, ..., 7=all)
>

Interesting post. Just a quick comment here - this is way too big to pre-compute
and access during the search. You could compute it on the fly, similar to a pawn
hash table, but that calculation itself would be extremely expensive.

Vas

>These 64 key bits can be used with an additional hash value in a
>big const table. Of course you can improve this coding.
>The entries in the table can be a kind of piece square tables coded
>in very few bits: 3 bit piece type, 5 bit coding pattern (like king
>and pawns above) to be filled with the individual 0-1-pattern or just
>fixed patterns, and 8 bit for the bonus value of pieces placed on
>the 1-fields of the pattern. Or you can store a list of typical
>moves (piece (from) to) with reduce/expand-values for the key.
>Or you can store some evaluation switches for special evaluation terms.
>
>You can give an additional evaluation bonus for pieces on the right
>place or sort the moves to the front in your move ordering. It
>would be good to reduce or expand the moves found in the table
>if they are possible in the real current position or expand the moves
>that hit some stored (pattern-)criteria. Off course you have to
>test your current position at each node in the search tree, at least
>near the root. Just like we do with the normal hash table.
>
>How to get and collect the information of the database? Scan through
>a big pgn database of good players, build the keys and patterns
>of each position and da a statistic of the pieces at positions
>and frequent moves. Reduce this data collection to a usable size
>and store it with or in your engine. I have no own chess knowledge
>and have to rely on others. In this case the whole chess history.
>Just think: every made move was good in that position.
>
>I don't know whether it is possible to build this database. Some
>pictures I have seen in Scid or other visualisation tools are
>very impressing and seem to hold some chess knowledge. They are
>like trained neural nets or just valuable heuristics.
>
>I don't know whether it is possible to use this information for
>an improved evaluation and search, but I would very much like
>someone to test it and tell us his results. I would do it myself
>but I have not enough time and patience to do it.
>
>What do you think?
>
>...
>>======================================================================
>>
>>"True, an evaluation which is both accurate and fast is ideal
>
>I can tune my engine (Elephant) to win against Gnuchess or Tscp, but
>not both. I will be crashed positionally or tactically. And when I
>find a good compromise it will loose badly to gerbil. Now you see
>my engines' level. :-)
>
>>It would be interesting to know the exact profile of the time spent in eval of
>>the top programs. As I rememember Amir said that Junior spends "less than 10% in
>>most positions". Does this mean a lot of lazy evals and some very big evals? Or
>>a truly small evaluation which pinpoints the most important features of the
>>position and ignores the rest.
>>
>>In many cases, you arrive in eval needing only to show that the non-material
>>factors can't add up to more than X. (ie X = material score - alpha, or beta -
>>material score.) As X gets higher and higher, more and more things could be
>>ignored. Maybe some sort of a pinpointed lazy eval is the answer.
>>
>>It seems to me that the areas where an engine can get the most benefit are:
>>
>>A) Guiding search (ie reducing/extending): very very high
>>B) Quality of eval: very high
>>C) Overall speed, including speed of eval: medium/low, unless very big factors
>>are involved, such as building specialized hardware or getting a chance to run
>>on a big mp machine
>>
>>On the other hand, "A" and "B" are quite vague, you never really know if what
>>you did was an improvement, while "C" is very easy to measure.
>>
>>Vas
>>"
>
>I did something similar in the last months. I replaced some lazy evaluation
>with a new one that gives an evaluation and possible estimated margins to
>the real score. I use these raw values and margins in my qsearch.
>The values are just the big material and evaluation terms without
>any fine detection. The margins depend on special threats and unsecurities
>like king safety and passed pawns.
>
>There could even be some search hints besides these two return values
>of this function. And the function could (should) be used in the whole
>search. I have not finished it yet. To be honest, the result was not
>worse than before.
>
>>==============================================================
>>"
>>Hello,
>>
>>It seems to me that there is a way to deal with this.
>>
>>Let's say you're at node A going to node B. You can statically evaluate node A,
>>statically evaluate the move (A->B) in the context of node A, apply the
>>appropriate reduction/extension, and enter node B with the already
>>extended/reduced search depth. In this case, node B will go into the hash table
>>with that new remaining search depth.
>>
>>If I understand the issue correctly, you'd also like to statically evaluate node
>>B before deciding on the reduction/extension for move (A->B). However, I am not
>>sure that that is necessary.
>>
>>Let's consider the typical node which is a candidate for reduction. It should
>>have one of two qualities: A) it is extremely static B) it looks unpromising, ie
>>score somewhat below alpha.
>>
>>In the case of very static nodes, it's pretty clear that you shouldn't need to
>>also look at the next move/node. In the case of an unpromising node, I was
>>thinking that you could do something like identify the few factors which could
>>change the evaluation. For example, if you're down two pawns but there are some
>>vague attacking prospects, then king safety would be that factor, and moves that
>>attack the king would be extended/not reduced, while moves which attack pawns on
>>the queenside are reduced. If you have an advanced passed pawn, then pushing
>>that pawn or supporting its advance would be the one way to bring the score
>>back, and moves which support that would be extended/not reduced. In all of
>>these cases, it seems to me to be enough to only consider the original node
>>(node A) and the moves themselves. The static evaluation of the next node can
>>then be done already knowing the new depth.
>>
>>Unfortunately I haven't started to implement this, so there might be some issues
>>here.
>>
>>At any rate, any reports on your experience in implementing this are welcome.
>>There appears to be almost nothing in the chess literature about this. The best
>>I found was Ed Schroeder's description of the pruning in Rebel, but he
>>concentrates on pruning positions with very bad scores rather than on pruning
>>irrelevant moves.
>>
>>It does also seem to me that this is "the key" to a good program, or at least
>>that it could be "a key".
>>
>>Vas
>>"
>
>Does this follow from my ideas above or is there something else to?
>
>...
>>=====================================================================
>>
>>There's much more there too -- I just selected teh more interesting (for me)
>>tidbits.
>>
>>Best,
>>
>>Michael
>
>I also find these ideas interesting. I just cut some out for a shorter posting.
>I would like a further discussion of these ideas but I can not answer
>for 1 or 2 days.
>
>Harald



This page took 0 seconds to execute

Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.