Computer Chess Club Archives


Subject: Re: Teaching computers to think

Author: Vasik Rajlich

Date: 11:21:19 03/15/04



On March 15, 2004 at 12:19:43, Tord Romstad wrote:

>On March 15, 2004 at 09:17:31, Vasik Rajlich wrote:
>
>>Of course, selective search if done right will work, the commercial programs
>>prove this. At the moment, I am completely re-doing my evaluation. I think
>>selective search must be tied to the evaluation, and in particular the part of
>>the evaluation which measures the high-value aspects of a position. In the
>>middlegame, this is king safety. In the endgame, this is passed pawns.
>
>I am not sure I agree that a good selective search *must* be tied to the
>evaluation, but (as you already know) I believe that this is at least one
>of the approaches which it is possible to make work.
>
>The following is a copy of an e-mail I sent to Vasik an hour or two ago.
>I post it here as well, in case somebody else is interested.  Please take
>everything I write with boatloads of salt.  My engine is not very strong,
>and more experienced programmers will probably laugh when they read
>about my primitive techniques.  But because it might have some interest
>for newbies, and because the experts might enjoy a good laugh, here is
>a simplified description of how my selective search works:
>
>
>Hello, Vasik!
>

Hi Tord,

This is great stuff! Expect more questions soon - and hopefully I'll have some
useful feedback as well.

First, one very general question: to what extent are you confident that what you
outlined here improves your engine? Do you test each piece separately or in
groups? Have you tried longer vs. shorter time controls, etc.? I
suspect that you are getting some benefit, given what is known about your
program, including NPS rates - just curious how you go about validating what you
do.

>As promised, I send you a brief description of how my selective search
>works.  Because my search looks rather unconventional (for instance,
>as you might remember from CCC, I use the concept of the "cost" of a
>move instead of extensions and reductions), I've tried to translate
>what I do to a more normal language.

Actually, I also use this concept, called "weight" in my case. The idea came
from some descriptions of Junior's search in the CCC archives. Anyway, as you
pointed out, it's just a cosmetic change from extensions & reductions.
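
For anyone reading along, the way I picture the "cost" / "weight" idea is roughly
the sketch below (C++). This is not code from either of our engines; all the
helper names and numbers are invented for illustration:

#include <algorithm>

struct Position;                                 // engine-specific
struct Move { int from, to; };                   // placeholder

bool gives_check(const Position& pos, Move m);   // placeholder predicates
bool is_recapture(const Position& pos, Move m);
bool looks_dull(const Position& pos, Move m);    // e.g. late quiet move, bad history

constexpr int ONE_PLY = 4;                       // depth counted in quarter-ply units

// A normal move costs one ply; interesting moves cost less (an extension in
// disguise), dull moves cost more (a reduction in disguise).
int move_cost(const Position& pos, Move m) {
    int cost = ONE_PLY;
    if (gives_check(pos, m))  cost -= 2;
    if (is_recapture(pos, m)) cost -= 1;
    if (looks_dull(pos, m))   cost += 2;
    return std::max(cost, 1);                    // never free
}

// In the search, instead of newDepth = depth - ONE_PLY + extension, one writes
// newDepth = depth - move_cost(pos, m).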

>
>First of all, I should mention that I use MTD(f), which means that all
>my searches are done with a zero width window.  Therefore, the search
>fails high or low at all nodes.  I call the static eval at all
>interior nodes, and use the evaluation to make many different search
>decisions, as explained below.

I also use MTD(f); it's a bit more elegant IMO. PVS should behave the same way
though, as far as selective search goes.
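
For reference, the MTD(f) driver itself is tiny - basically Plaat's loop of
zero-window searches converging on the score. Something like this, where
alphaBeta() stands for whatever fail-soft zero-window search the engine has:

struct Position;                                       // engine-specific

constexpr int INFINITE_SCORE = 32000;

int alphaBeta(Position& pos, int alpha, int beta, int depth);   // placeholder

// Repeated zero-window searches converging on the true minimax score.
int mtdf(Position& pos, int firstGuess, int depth) {
    int g = firstGuess;
    int lower = -INFINITE_SCORE;
    int upper = +INFINITE_SCORE;
    while (lower < upper) {
        int beta = (g == lower) ? g + 1 : g;
        g = alphaBeta(pos, beta - 1, beta, depth);     // zero-width window
        if (g < beta) upper = g;                       // failed low
        else          lower = g;                       // failed high
    }
    return g;
}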

>
>Basically, I have two different selective search mechanisms.  The
>first one is very simple, and is used in many programs (I think).  In
>the last n plies of the search (n is currently 2, but I haven't yet
>decided whether 2 or 3 works best), I do what I call "static null move
>pruning".  When the static eval returns a score >= beta, there are no
>hanging pieces, no mate threat, the king safety for the side to move
>is good, and there were no extensions in the last two moves of the
>path leading to the position, I just return the score without doing a
>search.  I do something similar further from the leaves as well, but
>only when the score is *very* far above beta.
>

Yes, I can believe that you get a small but clear win from this. Null move seems
to me to be more effective further from the leaves. (An untested intuition.)
When I allow transitions from a null-move right into q-search, the program can
overlook some pretty simple stuff at the leaves. It seems to me that after a
null-move is tried, there should ideally be enough depth left to resolve the
immediate tactics.

BTW, Don Dailey wrote a very detailed description of his own algorithm for this;
it's in the CCC archives.
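
Just to make sure I read the "static null move pruning" part correctly, here is
roughly how I picture it - the predicates below are placeholders for your
conditions, not actual code from your engine or mine:

struct Position;
struct SearchPath;

int  evaluate(const Position& pos);
bool has_hanging_pieces(const Position& pos);
bool mate_threat_against_side_to_move(const Position& pos);
bool king_safety_ok_for_side_to_move(const Position& pos);
bool extended_recently(const SearchPath& path, int plies);

constexpr int STATIC_NULL_DEPTH = 2;            // last n plies; 2 or 3

// Returns true (and fills in score) when the node can be cut off statically:
// in the last couple of plies, if the static eval already clears beta and the
// position looks quiet and safe, return the eval without searching at all.
bool static_null_prune(const Position& pos, const SearchPath& path,
                       int depth, int beta, int& score) {
    if (depth > STATIC_NULL_DEPTH)
        return false;
    int staticEval = evaluate(pos);
    if (staticEval >= beta
        && !has_hanging_pieces(pos)
        && !mate_threat_against_side_to_move(pos)
        && king_safety_ok_for_side_to_move(pos)
        && !extended_recently(path, 2)) {
        score = staticEval;                     // return this instead of searching
        return true;
    }
    return false;
}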

>The other mechanism is based partly on how a move changes the various
>components of the evaluation function, and partly on the history of
>the move.

Yes, this is the interesting stuff! I think this is where Junior, Shredder, Fritz,
Hiarcs, Tiger, etc. are getting some big gains.

>
>In addition to all the common extensions, I extend moves which
>dramatically change the value of some evaluation component (usually
>king safety or passed pawns).  If a move dramatically increases the
>pressure on the opponent's king, I extend the move.  The magnitude of
>the extension depends on how much the evaluation component in question
>increases.

I tried several versions of this. None of them appeared to improve the level of
play in blitz. Have you tested this extension individually in self-play?

Also, do you evaluate king safety separately for purposes of search and for
purposes of evaluation?

Anyway, this extension is very logical. I will try it again once my new king
safety evaluation is ready.

Also, it's very logical to extend moves which the static eval indicates are better.
It appears that this is what Junior does - although it's not 100% clear.
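
For concreteness, a sketch of how such a component-based extension could look.
The components, thresholds and units here are invented; the point is only that
the eval exposes its pieces separately and the extension scales with the jump:

#include <algorithm>

struct Position;                                // engine-specific

// The eval is assumed to expose its components separately.
struct EvalComponents {
    int kingAttack;      // pressure against the opponent's king
    int passedPawns;     // passed pawn score for the side that moved
};

EvalComponents eval_components(const Position& pos);   // placeholder

// Extension (in quarter-ply units, capped at one ply) based on how much an
// evaluation component jumps when the move is made.
int component_extension(const Position& before, const Position& after) {
    EvalComponents b = eval_components(before);
    EvalComponents a = eval_components(after);
    int jump = std::max(a.kingAttack  - b.kingAttack,
                        a.passedPawns - b.passedPawns);
    if (jump > 200) return 4;    // a full ply
    if (jump > 100) return 2;    // half a ply
    if (jump >  50) return 1;
    return 0;
}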

>
>At all nodes, the first three moves are always searched to full depth
>(or more, in case they are extended).  When the first three moves fail
>low, I expect the node to be a fail-low node, and reduce the search
>depth for the remaining moves unless they seem particularly promising.
>All the remaining moves are searched with reduced depth unless one of
>the following conditions is satisfied:
>
>1. The move is extended.  In this case, it obviously doesn't make
>   sense to reduce the search depth.
>
>2. One of the last two moves in the path leading to the position was
>   extended.
>
>3. The move played contains a threat (of mate or of capturing a
>   piece), or moves a hanging piece to a safe square.
>
>4. The static eval or one of its components makes a jump in the positive
>   direction when the move is made.
>
>5. The move has often caused a beta cutoff in the past (I compare the
>   history table entry for the move to the total number of nodes
>   searched).
>
>When a reduced move fails high, it is re-searched with normal search
>depth.

Aha, I follow the logic here.

The risk, of course, is that you will reduce some good positional moves, never
giving the search a chance to show that they are good. Well, nothing is free.
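
To check my understanding of the scheme above, here is a rough sketch of the
move loop as I read it: first three moves at full depth, later moves reduced a
ply unless one of your conditions holds, and a re-search when a reduced move
fails high. All the engine-specific names are placeholders:

#include <algorithm>
#include <vector>

struct Position;                                 // engine-specific
struct SearchPath;
struct Move { int from, to; };                   // placeholder

constexpr int INFINITE_SCORE = 32000;

std::vector<Move> generate_moves(Position& pos);             // placeholders
int  extension_for(const Position& pos, Move m);             // condition 1
bool extended_recently(const SearchPath& path, int plies);   // condition 2
bool creates_threat(const Position& pos, Move m);            // condition 3
bool eval_jumps_up(const Position& pos, Move m);             // condition 4
bool good_history(Move m);                                   // condition 5
void make_move(Position& pos, Move m);
void unmake_move(Position& pos, Move m);
int  search(Position& pos, int beta, int depth, const SearchPath& path);

// Fail-soft zero-window move loop: the window is (beta-1, beta).
int search_moves(Position& pos, int beta, int depth, const SearchPath& path) {
    int best = -INFINITE_SCORE;
    int moveCount = 0;
    for (Move m : generate_moves(pos)) {
        ++moveCount;
        int ext = extension_for(pos, m);
        bool reduce = moveCount > 3 && ext == 0          // first three: full depth
                   && !extended_recently(path, 2)
                   && !creates_threat(pos, m)
                   && !eval_jumps_up(pos, m)
                   && !good_history(m);

        int newDepth = depth - 1 + ext;
        make_move(pos, m);
        int score = -search(pos, 1 - beta, reduce ? newDepth - 1 : newDepth, path);
        if (reduce && score >= beta)                     // reduced move fails high:
            score = -search(pos, 1 - beta, newDepth, path);  // re-search, normal depth
        unmake_move(pos, m);

        best = std::max(best, score);
        if (best >= beta)
            break;                                       // fail high
    }
    return best;
}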

>
>Like in the case of extensions, the amount of the reduction depends on
>just how bad the move looks.
>
>At the node directly following a reduction, I immediately return a
>fail low score if the null move search fails low by a big margin.  In
>cases like this it is clear that the reduced move contains a dangerous
>threat, and it is too risky to search it with reduced depth.

Actually, I tried a version of this: when a null-move failed badly (by two pawns
or more), I extended the search by a full ply. Of course with MTD(f) you need
good fail-soft behavior, and I worked through quite a number of issues with
fail-soft in my q-search, etc. Nevertheless, this extension appeared to hurt my
program's level of play slightly in blitz.

This was quite a surprise. Generally, moves which threaten something (i.e. which
cannot be ignored) must be better, and more worthy of search, than moves which
do not threaten anything. My explanation is that this extension caused the
program to spend too much time looking at tactical lines and not enough at good
quiet positional lines.
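
For completeness, the "verify after a reduction" idea as I read it, written as a
small helper. Note that returning a fail-low here makes the reduced move fail
high at the parent, which then triggers the normal-depth re-search. The margin
and the R value are invented:

#include <optional>

struct Position;

void make_null_move(Position& pos);                   // engine-specific placeholders
void unmake_null_move(Position& pos);
int  zw_search(Position& pos, int beta, int depth);   // fail-soft zero-window search

// Returns a fail-low score to hand back immediately, or nothing if the node
// should simply be searched as usual.
std::optional<int> verify_after_reduction(Position& pos, int beta, int depth) {
    constexpr int BIG_MARGIN = 200;             // roughly two pawns, in centipawns
    constexpr int R = 3;
    make_null_move(pos);
    int nullScore = -zw_search(pos, 1 - beta, depth - 1 - R);
    unmake_null_move(pos);
    if (nullScore < beta - BIG_MARGIN)
        return nullScore;                       // the reduced move carried a real threat
    return std::nullopt;
}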

>
>The idea of these reductions is that a move which does not appear
>promising and which very rarely has worked in the past is usually very
>unlikely to fail high.
>
>I also have some other search tricks which you perhaps would not
>classify as "selective search", but which might still be interesting
>to you:
>
>I use a dynamic null move reduction factor.  The null move search is
>usually skipped unless there is a good reason to think that the node
>is a fail-high node (this is decided by comparing the value of the
>static eval to beta, of course).  When I am *very* sure of a fail high
>(huge static eval, no hanging pieces, safe king, and no other kind of
>danger reported after calling the eval), I use R=4 or (in extreme
>cases) R=5.  In most "normal" positions, I use values close to 3.
>When there is some obvious kind of danger in the position (like an
>unsafe king for the side to move, or a high passed pawn eval for the
>opponent), I use values in the range 1 to 2, in an attempt to avoid
>horizon effect problems.  Like many other places in my search, I also
>consider the path leading to the position when deciding which value of
>R to use.  When there has been an extension in the last few plies, I
>avoid using values bigger than R=3.

Offhand it would seem to me that the increased savings above R=3 are too small
to be important. It would also seem to me that attempting to skip the null-move
search based on the static eval wouldn't gain much.
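
Still, the dynamic R itself is easy to picture. A sketch along the lines you
describe, with invented thresholds (in centipawns) and placeholder helpers:

struct Position;
struct SearchPath;

bool danger_for_side_to_move(const Position& pos);   // unsafe king, opponent's passers, ...
bool has_hanging_pieces(const Position& pos);
bool extended_recently(const SearchPath& path, int plies);

// Bigger R when a fail-high looks near-certain, smaller R when the position
// looks dangerous.
int null_move_R(const Position& pos, const SearchPath& path,
                int staticEval, int beta) {
    if (danger_for_side_to_move(pos))
        return staticEval >= beta ? 2 : 1;           // tread carefully
    if (extended_recently(path, 3))
        return 3;                                    // don't go above R=3 after extensions
    if (staticEval >= beta + 500 && !has_hanging_pieces(pos))
        return 5;                                    // extreme case
    if (staticEval >= beta + 300)
        return 4;                                    // very likely fail high
    return 3;                                        // the "normal" value
}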

>
>In the qsearch, I use the eval to decide which moves to search.  The
>decision of whether or not to search checks is based mainly on the
>king safety of the opponent.  If the static eval is slightly below
>beta, but there is a hanging piece which when captured will yield a
>score high above beta, I return a fail-high score immediately unless
>the position is too tactically complicated.  If the position is only
>moderately complicated, I use a simple attack-table-based SEE to cull
>losing captures.  In more complicated positions, I use a slower, but
>more accurate SEE.  If the position is really wild, the engine decides
>not to trust the SEE at all, and searches all captures.
>
>I hope you will be able to find something of interest in the above
>description.  Feel free to ask questions if something was not clear.
>

Yes, thanks for all of the ideas. I'll let you know how things go ...
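
One more note, just to make sure I read the qsearch part correctly - I picture
the capture culling and the "hanging piece" early fail-high roughly like this
(the complexity classes, margins and helper names are all mine):

struct Position;
struct Move { int from, to; };                  // placeholder

enum class Complexity { Quiet, Moderate, Complicated, Wild };

Complexity tactical_complexity(const Position& pos);    // placeholder helpers
int  fast_see(const Position& pos, Move m);             // attack-table-based SEE
int  slow_see(const Position& pos, Move m);             // slower, more exact SEE
int  biggest_hanging_piece_value(const Position& pos);

// Cull a losing capture in the qsearch: cheap SEE in moderately complicated
// positions, careful SEE in complicated ones, no culling at all in wild ones.
bool cull_losing_capture(const Position& pos, Move m) {
    switch (tactical_complexity(pos)) {
    case Complexity::Quiet:        return fast_see(pos, m) < 0;  // (not spelled out above)
    case Complexity::Moderate:     return fast_see(pos, m) < 0;
    case Complexity::Complicated:  return slow_see(pos, m) < 0;
    case Complexity::Wild:         return false;    // trust no SEE, search everything
    }
    return false;
}

// The "hanging piece" early fail-high: eval slightly below beta, but capturing
// a hanging piece would lift it well above beta. The margin is invented.
bool lazy_fail_high(const Position& pos, int staticEval, int beta, int& score) {
    constexpr int MARGIN = 100;
    if (tactical_complexity(pos) == Complexity::Wild)
        return false;                               // too tactical to trust this
    int gain = biggest_hanging_piece_value(pos);
    if (staticEval < beta && staticEval + gain >= beta + MARGIN) {
        score = staticEval + gain;                  // optimistic fail-high score
        return true;
    }
    return false;
}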

Greets,
Vas

>Cheers,
>Tord


