Computer Chess Club Archives



Subject: Re: Question about evaluation and branch factor

Author: Uri Blass

Date: 06:19:25 11/21/03



On November 21, 2003 at 08:42:51, Anthony Cozzie wrote:

>On November 20, 2003 at 13:18:55, Uri Blass wrote:
>
>>On November 20, 2003 at 12:47:41, Anthony Cozzie wrote:
>>
>>>On November 20, 2003 at 12:28:57, Marcus Prewarski wrote:
>>>
>>>>I've been completely rewriting the evaluation function of my engine
>>>>DrunkenMaster (not a strong one) because I was tired of seeing it make some
>>>>really ugly moves, and I want to give it better knowledge of king safety and
>>>>pins and better passed pawn evals.  When I watch it play 5 minute games
>>>>against an earlier version, the evaluation seems better overall.  However,
>>>>these evaluation changes seem to have made the branch factor a bit worse in
>>>>several test positions I have, and it performs worse on WAC test suites,
>>>>which seems to agree with my observations.  I would think that improving my
>>>>evaluation function would improve the search branch factor, if anything.  So
>>>>my question is: does this mean that my newer evaluation function is actually
>>>>worse in most cases than my old one, or could it be something else, like my
>>>>move ordering being bad to begin with?
>>>>
>>>> -Marcus
>>>
>>>More eval -> fewer '=' beta cutoffs.  It's just a fact of life :(  IIRC, Tim
>>>Foden posted some numbers where material-only GLC outsearched normal GLC by ~4
>>>ply.  Of course, it lost all its games.
>>>
>>>anthony
>>
>>I do not agree.
>>
>>Maybe you are right if you compare material-only with something more complex,
>>but I see no reason for it to be the case for a piece-square-table evaluation
>>relative to a more complex evaluation.
>
>That is because you didn't look.  Uri, it is annoying to have to explain myself
>in full every post.  But just this once, I'm going to do your thinking for you.
>
>Suppose we are searching a PV node with 10 children to depth 1 with 2 programs,
>A and B.  In program A, 4 of the children have duplicate evals, while 6 are
>different, and in program B all of the children have different evals.  Let us
>assume that the 'best move' is one of the 4 with duplicate evals (40% chance).
>Program A will get 3 more beta cutoffs (move - eval - stand pat) whereas program
>B will be trying more stuff in Q-search (maybe some checks, futility captures,
>etc).
>
>Now, it is also clear that the more eval you have, the less chance of getting
>two child nodes that have the same value.
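To spell out the mechanism being described, here is a rough negamax quiescence
sketch (the helpers are only assumed here; this is not code from any of the
engines mentioned): a child whose static eval equals the bound fails high at
the stand-pat test immediately, while a child whose eval is slightly different
goes on to try captures, checks, and so on.

typedef struct Position Position;
typedef struct { int moves[256]; int count; } MoveList;

int  evaluate(const Position *pos);
void generate_captures(const Position *pos, MoveList *list);
void make_move(Position *pos, int move);
void unmake_move(Position *pos, int move);

int qsearch(Position *pos, int alpha, int beta)
{
    /* The "move - eval - stand pat" cutoff: if a sibling already pushed the
       window up to exactly this static score, a duplicate evaluation fails
       high here without doing any further work. */
    int stand_pat = evaluate(pos);
    if (stand_pat >= beta)
        return beta;
    if (stand_pat > alpha)
        alpha = stand_pat;

    /* A child whose eval is just below the bound keeps working instead:
       captures, and in many engines also checks, futility-pruned captures,
       and so on. */
    MoveList captures;
    generate_captures(pos, &captures);
    for (int i = 0; i < captures.count; i++) {
        make_move(pos, captures.moves[i]);
        int score = -qsearch(pos, -beta, -alpha);
        unmake_move(pos, captures.moves[i]);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}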


I am not sure about that claim.

If you always count in 1/100 of a pawn, then even a piece-square-table
evaluation usually does not give the same value for 2 different nodes.

Adding knowledge does not necessarily reduce the chance of 2 nodes getting the
same evaluation.

For example, if you have knowledge about drawn endgames, it increases the
chance of having two 0.00 scores.

A stupid program without tablebases may evaluate KB vs K as some advantage for
the side with the bishop, depending on the piece-square table, whereas a better
program that knows it is drawn will evaluate it as a draw, so more knowledge
increases the chance of having 2 nodes with the same value.
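As a rough sketch of the kind of knowledge I mean (the data layout below is
made up for illustration, not taken from any real engine), a recognizer like
this returns exactly 0 for KB vs K or KN vs K instead of a small
piece-square-table bonus for the stronger side:

#include <limits.h>

/* Made-up material summary, just for illustration. */
typedef struct {
    int pawns[2], knights[2], bishops[2], rooks[2], queens[2];  /* [white], [black] */
} Material;

#define NO_DRAW_RECOGNIZED INT_MIN   /* sentinel: fall back to the normal eval */

int recognize_drawn_material(const Material *m)
{
    int minors = m->knights[0] + m->bishops[0] + m->knights[1] + m->bishops[1];
    int rest   = m->pawns[0] + m->rooks[0] + m->queens[0]
               + m->pawns[1] + m->rooks[1] + m->queens[1];

    /* Bare kings, or king + one minor piece vs bare king: no mating material,
       so the score is exactly 0.00 no matter where the pieces stand. */
    if (rest == 0 && minors <= 1)
        return 0;

    return NO_DRAW_RECOGNIZED;
}

Every node that hits this rule gets exactly the same score, so here more
knowledge gives more equal-valued nodes, not fewer.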

There are also positions that you cannot evaluate by tablebases, and more
knowledge about endgames should give a lot of 0.00 scores (for example, KRPP vs
KRPP with the pawns all on the same side of the board can be evaluated as a
draw if some relevant conditions hold).

The program with the knowledge should also be able to search deeper, because in
the relevant positions it does not need to search at all: thanks to the
knowledge it can return 0.00 immediately (of course you need to be careful to
score only positions that are clear draws as 0.00).
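Continuing the sketch above (again only an illustration of the structure, with
assumed helpers rather than real code), the draw score is returned before any
moves are generated, so the whole subtree below such a position costs nothing
and the saved nodes can go into searching other lines deeper:

/* Assumes the Material summary and recognize_drawn_material() from the sketch
   above; material_of(), generate_moves(), make_move(), unmake_move() and
   qsearch() are assumed helpers. */
int search(Position *pos, int alpha, int beta, int depth)
{
    /* Knowledge first: a recognized dead draw is scored 0.00 right away,
       without expanding the subtree below this position. */
    int draw = recognize_drawn_material(material_of(pos));
    if (draw != NO_DRAW_RECOGNIZED)
        return draw;

    if (depth <= 0)
        return qsearch(pos, alpha, beta);

    MoveList moves;
    generate_moves(pos, &moves);
    for (int i = 0; i < moves.count; i++) {
        make_move(pos, moves.moves[i]);
        int score = -search(pos, -beta, -alpha, depth - 1);
        unmake_move(pos, moves.moves[i]);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}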

Uri


