Computer Chess Club Archives



Subject: Re: Evaluation Approach

Author: KarinsDad

Date: 08:59:53 01/15/99



On January 15, 1999 at 00:53:02, Eugene Nalimov wrote:

>In my old program (1989-1990) I did the following:
>
>(1) For searching depth N, I did full evaluate at *every*
>    node at first N-2 (or N-1, I don't remember) plies.
>(2) Based on that score I could do "razoring" of the
>    uninteresting lines (search only captures). Also eval
>    returned information necessary for decision "Shall we
>    search that node deeper" - change in king safety or in
>    number of hanging pieces.
>(3) Eval also modified values of the pieces, so that
>    incremental MakeMove could more precisely evaluate
>    scores after captures.
>(4) If those captures changed pawn structures, I tried to
>    estimate whether the real score would still be outside the
>    alpha/beta window. If not, I'd have to re-evaluate it;
>    otherwise, I could use the estimation. Also, there were
>    various special cases (when you exchange one of the bishops,
>    the second bishop's value drops; when there are a lot of
>    exchanges, the value of the remaining pieces can change
>    drastically; when one of the queens nears the enemy king,
>    king safety can change; if you exchanged several hanging
>    pieces, evaluation can change; etc.)
>
>Please note that the program was written for the 8080, so I
>tried to cut corners where possible. I think that now you can
>live with pure material estimation, and just call full eval
>when necessary - e.g. when
>  estimation >= (alpha - pawn) && estimation <= (beta + pawn)
>(that's simplified version of lazy eval).
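The lazy-eval condition Eugene describes can be sketched as follows. This is a minimal illustration, not his actual code: `material_estimate` and `full_eval` are hypothetical stand-ins, and `PAWN` assumes a centipawn scale.

```python
PAWN = 100  # pawn value margin, assuming a centipawn scale

def lazy_eval(pos, alpha, beta, material_estimate, full_eval):
    """Return a score, calling the expensive full_eval only when the
    cheap material estimate lands near the (alpha, beta) window."""
    est = material_estimate(pos)
    # Estimate is far outside the window: trust it and skip full eval.
    if est < alpha - PAWN or est > beta + PAWN:
        return est
    # Estimate is close enough that positional terms could flip the
    # result relative to the window, so pay for the full evaluation.
    return full_eval(pos)
```

The gamble is exactly the one discussed below: when the estimate is far outside the window but the true score is not, the shortcut returns a misleading value.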

Well, there seem to be 9 possibilities:

1) (alpha - pawn) <= estimation <= (beta + pawn), (alpha - pawn) <= actual <=
(beta + pawn)

2) (alpha - pawn) <= estimation <= (beta + pawn), actual <= (alpha - pawn)

3) (alpha - pawn) <= estimation <= (beta + pawn), actual >= (beta + pawn)

4) estimation >= (beta + pawn), (alpha - pawn) <= actual <= (beta + pawn)

5) estimation >= (beta + pawn), actual <= (alpha - pawn)

6) estimation >= (beta + pawn), actual >= (beta + pawn)

7) estimation <= (alpha - pawn), (alpha - pawn) <= actual <= (beta + pawn)

8) estimation <= (alpha - pawn), actual <= (alpha - pawn)

9) estimation <= (alpha - pawn), actual >= (beta + pawn)

The first 3 would be handled by the simplified algorithm you propose.
Your special rules in (4) above seem to minimize possibilities #6 and #8 while
handling possibilities #4 and #7. I could see, though, how the special rules
would probably fail for #5 and #9 (probably extremely rare cases anyway,
except when there are hanging pieces).

This leaves #2 and #3. The question is whether they are frequent or important
enough to be avoided. It would be nice to come up with a mechanism to handle
these, but I cannot think of a way to do it right now.
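The nine-way case split above can be written down mechanically. A rough sketch, with hypothetical helper names and the same assumed centipawn margin as before: each score is classified against the widened window, and the pair of zones maps back to the case numbering used above.

```python
PAWN = 100  # pawn value margin, assuming a centipawn scale

def zone(score, alpha, beta):
    """Classify a score relative to the widened (alpha, beta) window."""
    if score < alpha - PAWN:
        return "low"
    if score > beta + PAWN:
        return "high"
    return "inside"

def case_number(est, actual, alpha, beta):
    """Map an (estimation, actual) pair to the 1-9 case numbering above:
    cases 1-3 have the estimation inside the widened window, 4-6 above
    it, 7-9 below it; within each group the actual score is inside,
    below, then above."""
    est_order = {"inside": 0, "high": 1, "low": 2}
    act_order = {"inside": 0, "low": 1, "high": 2}
    return (3 * est_order[zone(est, alpha, beta)]
            + act_order[zone(actual, alpha, beta)] + 1)
```

The dangerous combinations are the ones where the two zones disagree while the estimation is outside the widened window, since those are exactly the cases where lazy eval trusts the estimate.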

I'll have to sit down and study this. Thanks for the input. :)

KarinsDad

>
>You'll have *very* unpleasant hash inconsistencies if your
>estimation is wrong. Also, if you have search
>extensions that are based on how far your current score is
>from alpha and/or beta, you can decide not to extend where
>you should or vice versa. That can result in a fail low (or
>high) that will not be confirmed during re-search - you'll
>re-search with smaller alpha (or larger beta), so you'll
>call different evaluation function, and will not extend the
>search. (You can set a flag in the hash table "extend always",
>or have a separate "sticky" hash for that purpose, as was
>done in DeepThought, but I had only 8Kb for hash).
>
>Eugene
>
>On January 14, 1999 at 23:54:28, KarinsDad wrote:
>
>>On January 14, 1999 at 19:19:45, Peter Fendrich wrote:
>>
>>>I'm not sure that I understand your idea fully, but one warning!
>>>If you change your evaluation between iterations, you will get different
>>>evaluation for the same position. This will give you a corrupted hash table and
>>>in the end it might be that your performance drops more than you expect to gain
>>>from the quick evaluation.
>>
>>Yes, I considered this, that is one of the reasons I asked the question. I tend
>>to work in a "white to move and win" approach. If someone tells me there is a
>>win there (i.e. an algorithm worked for someone else), I will almost always find
>>it. However, if I do not know that, then I have to wonder if the approach is
>>even sound.
>>
>>To me, the problem is one of having two scores based on two algorithms. If they
>>are not basically compatible, then it would seem that you could really mess up
>>your search (i.e. the position was -1.5 a while ago and now it is 3.5 and hence
>>your PV changes, but it could have changed 30 seconds ago, so you wasted a lot
>>of time).
>>
>>I think that the hash table is irrelevant to this since every time a score at a
>>node changes, you must back up the tree (i.e. graph) and propagate the score
>>through all possible parents of that node. You have to do this even if you only
>>had one evaluation function, hence, I could test with one eval function to make
>>sure the hash tree works first, and then use the two of them for improved move
>>order and better evaluation where it is important.
>>
>>>Other weird side effects will turn up as well, like hash table cutoffs that should
>>>never happen.
>>
>>This is a major concern. However, I am assuming that if I used a simplistic
>>score and never used a detailed score, then I would basically run into that
>>problem anyway and my program would be at 1800 to 2000 forever. Since I'm
>>planning on using the detailed score for the PV (and for the first 4 ply), I'm
>>hoping that at least this problem will be lower down the graph and the program
>>will be making at least sound moves (if not the best move).
>>
>>>Combine this with a bug of some sort and you will get stuck for weeks!
>>
>>Yikes!
>>
>>Thanks for the input,
>>
>>KarinsDad :)
>>
>>>
>>>//Peter
>>>
>>>
>>>On January 14, 1999 at 17:49:15, KarinsDad wrote:
>>>
>>>>I have been considering the possibility of having two sets of evaluation
>>>>routines in my code. One set is a quick simplistic evaluation and the other is a
>>>>slower, more detailed evaluation.
>>>>
>>>>The simplistic evaluation consists of any set of data which can quickly be
>>>>calculated such as material and safe squares. I was also going to put my pawn
>>>>structure score here as well, since I am using one large hash table for it:
>>>>pawn structures are relatively stable, so once calculated they can be re-used
>>>>for multiple positions across the search tree.
>>>>
>>>>The detailed evaluation was going to consist of the simplistic evaluation, plus
>>>>modified piece values (based on where they are and what they are doing), square
>>>>control, king safety, etc.
>>>>
>>>>My questions are:
>>>>
>>>>1) Has anyone used an approach similar to this, and if so, was it successful?
>>>>2) If I use this approach, where should I use each evaluation? I was thinking of
>>>>using the detailed evaluation only on the first few ply (maybe up to 4), on the
>>>>entire PV, and at the leaf nodes of non-quiescent paths (once they became
>>>>relatively quiescent again). I would then use the simplistic evaluation
>>>>everywhere else for speed. Does this seem reasonable, or am I missing something
>>>>here? Will having scores derived from two different evaluations result in a
>>>>skewing of the search?
>>>>
>>>>Thanks,
>>>>
>>>>KarinsDad
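
The two-tier scheme asked about in the original post could be sketched like this. The names and the dispatch policy are just the post's own proposal restated (detailed eval near the root and on the PV, simplistic eval elsewhere); `simple_eval` and `detailed_eval` are hypothetical.

```python
DETAILED_PLY_LIMIT = 4  # "maybe up to 4" plies, per the proposal

def evaluate(pos, ply, on_pv, simple_eval, detailed_eval):
    """Dispatch between a quick simplistic eval and a slower detailed
    one: detailed on the first few plies and along the PV, simplistic
    everywhere else for speed."""
    if ply <= DETAILED_PLY_LIMIT or on_pv:
        return detailed_eval(pos)
    return simple_eval(pos)
```

As the replies in the thread point out, the catch is that the two functions score the same position differently, which the hash table and any score-relative extensions will both notice.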




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.