Computer Chess Club Archives



Subject: Re: Bitboards and Evaluation

Author: Andrew Dados

Date: 14:38:11 05/31/01



On May 31, 2001 at 17:28:17, Robert Hyatt wrote:

>On May 31, 2001 at 16:55:21, Andrew Dados wrote:
>
>>On May 31, 2001 at 16:36:55, Robert Hyatt wrote:
>>
>>>On May 31, 2001 at 16:19:05, Andrew Dados wrote:
>>>
>>>>On May 31, 2001 at 10:37:25, Robert Hyatt wrote:
>>>>
>>>>>On May 31, 2001 at 08:09:52, Vincent Diepeveen wrote:
>>>>>
>>>>>>Hello, I had in mind comparing the assembly output of 2 things,
>>>>>>but now that I look in Crafty 18.9 I see the pattern is not even
>>>>>>in Crafty 18.9, though it's a basic pattern. So it's hard
>>>>>>to compare things.
>>>>>>
>>>>>>But I'm pretty amazed that everything is
>>>>>>getting referenced as
>>>>>>   tree->pos.board[sq]
>>>>>>
>>>>>>If I were Roman Catholic I would now make the sign of the cross
>>>>>>and say "lucky I'm multiprocessor",
>>>>>>
>>>>>>because what I would be using there is
>>>>>>   board[sq]
>>>>>>
>>>>>>And I'm using that everywhere in my evaluation. Bob,
>>>>>>however, smartly hid the thing behind a define,
>>>>>>called PcOnSq(), which still translates to it.
>>>>>>
>>>>>>But in the assembly code you still see it!
>>>>>>
>>>>>>Also, what I see is the general problem of bitboards:
>>>>>>  if( (something[indexed]&bitmask) == pattern )
>>>>>>
>>>>>>where I can do
>>>>>>  if( something[indexed] == pattern )
>>>>>>
>>>>>>So I save an AND there.
>>>>>
>>>>>Come on.  Show me your one "if" to see whether a pawn is isolated.  Or if it is
>>>>>passed.  Or if it is passed and can't be caught by the opposing king.  Or
>>>>>if your two rooks or rook/queen are connected on the 7th rank...  or if you
>>>>>have the "absolute 7th"...
>>>>>
>>>>>you are comparing apples to oranges...
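[Editorial note: for readers unfamiliar with the kind of one-AND pattern test being argued about, an isolated-pawn check with bitboards can be sketched roughly as below. The mask names and square numbering (bit 0 = a1 ... bit 63 = h8) are assumptions for illustration, not Crafty's or DIEP's actual code.]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical masks, bit 0 = a1 ... bit 63 = h8. */
static uint64_t file_mask[8];       /* all squares on one file           */
static uint64_t adjacent_files[8];  /* the files immediately left/right  */

static void init_masks(void) {
    for (int f = 0; f < 8; f++) {
        file_mask[f] = 0;
        for (int r = 0; r < 8; r++)
            file_mask[f] |= 1ULL << (r * 8 + f);
    }
    for (int f = 0; f < 8; f++)
        adjacent_files[f] = (f > 0 ? file_mask[f - 1] : 0) |
                            (f < 7 ? file_mask[f + 1] : 0);
}

/* A pawn on 'sq' is isolated when no friendly pawn occupies a
   neighboring file: one AND plus one compare answers it for the
   whole board at once. */
static int is_isolated(uint64_t own_pawns, int sq) {
    return (own_pawns & adjacent_files[sq & 7]) == 0;
}
```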
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>Also, I'm pretty amazed by 'signed char bval_w[64]'.
>>>>>>
>>>>>>First of all, in DIEP I am happy to announce that I threw out all
>>>>>>8-bit arrays.
>>>>>>
>>>>>>I didn't know Crafty is still using 8-bit arrays!
>>>>>>I thought it was a mixture of 32-bit with 64-bit!
>>>>>
>>>>>
>>>>>Vincent, I suspect there is a _lot_ you don't know.  :)  I use them because
>>>>>they are faster on the Intel hardware.  The test to prove it is quite simple.
>>>>>64 bytes is 2 cache lines.  64 words is 8 cache lines.  It's just that simple.
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>The second thing I wonder about is why this is getting done *every*
>>>>>>evaluation.  bval_w gives a piece-square table value which is constant
>>>>>>for bishops.
>>>>>>
>>>>>>You can do that incrementally in MakeMove()/UnmakeMove()!
>>>>>
>>>>>
>>>>>Have you read my reasons for doing this in the past?  Apparently not.  So
>>>>>one more time:
>>>>>
>>>>>"I do everything in evaluate.c to make it easy to change the code.  If I do
>>>>>things incrementally, then I have to modify _two_ pieces of code when I change
>>>>>it.  Modifying one thing is easier.  I'm worried less about speed than I am
>>>>>about quality.  I don't know, for example, that the bishop piece/square table
>>>>>will always exist.  In fact, I am pretty sure it won't."
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>This is a pure waste of system time!
>>>>>>
>>>>>>Note in DIEP I would do that in my MakeMove as:
>>>>>>  extern int tablescore;   /* global running score */
>>>>>>  int *p;
>>>>>>
>>>>>>  p = psq[piece];
>>>>>>  tablescore += p[to_square] - p[from_square];
>>>>>>
>>>>>>Crafty does it every evaluation!!!
>>>>>>
>>>>>>Bob, something to improve in your evaluation!
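[Editorial note: the incremental bookkeeping Vincent describes can be sketched as a self-contained pair of make/unmake updates. The piece encoding and the names psq and tablescore are assumptions modeled on the snippet above, not DIEP's real code.]

```c
#include <assert.h>

enum { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING, NPIECES };

static int psq[NPIECES][64];  /* piece-square tables      */
static int tablescore;        /* running positional score */

/* Add the table delta when a move is made on the way down the tree... */
static void make_move(int piece, int from, int to) {
    const int *p = psq[piece];
    tablescore += p[to] - p[from];
}

/* ...and subtract it again when the move is unmade on the way back up. */
static void unmake_move(int piece, int from, int to) {
    const int *p = psq[piece];
    tablescore -= p[to] - p[from];
}
```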
>>>>>
>>>>>Nope.  One day as pieces of the evaluation become "static" and don't change
>>>>>anymore, some of it _might_ get done incrementally.  But in endgames, your
>>>>>"idea" is not so good.  Suppose you move your bishop twice in the search path?
>>>>>You evaluate that piece/square entry _twice_.  I evaluate it _once_.
>>>>>
>>>>>The deeper the depth, the less effective that incremental stuff is.  Just try
>>>>>it on Fine #70 for example.  I'll toast you good there...
>>>>>
>>>>
>>>>[snip]
>>>>
>>>>Bob, I saw that reasoning a few times already and I think you are wrong here.
>>>>Suppose a branching factor of 2, and suppose we use that evaluation code on
>>>>every ply; the time you will spend is:
>>>>
>>>>const * (1 + 1/2 + 1/4 + 1/8 + ...) <= 2 * const
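[Editorial note: the series above can be checked numerically. In a uniform tree of branching factor b and depth d, the total number of MakeMove() calls is b + b^2 + ... + b^d, while evaluations happen only at the b^d leaves; the ratio is bounded by b/(b-1), i.e. 2 when b = 2, matching the 2*const bound. A small sketch, not taken from either program:]

```c
#include <assert.h>

/* Total MakeMove() calls in a uniform tree: one per node below the root. */
static long make_move_calls(int b, int d) {
    long total = 0, level = 1;
    for (int i = 0; i < d; i++) {
        level *= b;      /* nodes at this ply */
        total += level;
    }
    return total;
}

/* Leaf count b^d: where a non-incremental program evaluates. */
static long leaf_count(int b, int d) {
    long n = 1;
    for (int i = 0; i < d; i++)
        n *= b;
    return n;
}
```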
>>>
>>>I don't agree with your math:
>>>
>>>Say I search to 20 plies.  Nothing but kings can move.  I am going to do
>>>the king update 20 times, 10 for each side.  If I do it at the tips, I am
>>>going to do it _twice_... once for each king.
>>>
>>>That is _way_ more efficient.  You are overlooking the problem that for _every_
>>>terminal position evaluated, there is an N-ply path from the root to that
>>>position that is going to be incrementally evaluated D times (D=depth).
>>>
>>>It is pretty obvious that doing something twice is more efficient than doing it
>>>D times, except where D is < 2...
>>>
>>>
>>>
>>>>
>>>>I can write it like that because the time spent in eval 3 plies deep from the
>>>>leaves will be distributed among 8 children.
>>>
>>>But look at how many MakeMove() operations you had to do.  At the last ply I
>>>am going to do one MakeMove() for each Evaluate() call I do.  What about the
>>>_interior_ MakeMove() calls?
>>>
>>>
>>>I.e., to search 3 plies deep, you are going to do 2 + 4 + 8 MakeMove() calls,
>>>and 8 Evaluate() calls.  There it is pretty bad in simple endings.  I agree
>>>that this is an ending issue of course.  In the middlegame you move different
>>>pieces more often.
>>>
>>>But for fine 70, this is a killer.
>>>
>>>
>>>>
>>>>So *in the worst case* you will have 2 times the cost of a normal leaf eval.
>>>>However, most paths would give you const*(0+0+1/4+0+1/16+...) or something
>>>>like that.  So the incremental eval cost is comparable to 'each leaf eval' no
>>>>matter how many plies.  With a BF of 3, the worst case would give you 0.5 more
>>>>time spent in incremental eval (1/3+1/9+1/27+...=0.5).  While it is hard to
>>>>estimate the average case (path), I think incremental eval will still
>>>>outperform 'each leaf eval'.
>>>>
>>>>And the higher (worse) the BF, the better incremental eval gets.
>>>
>>>I don't think it is as much a BF problem as it is a depth problem.  The question
>>>is, in a given path, how often do you move the same piece more than once?
>>>Because when that happens, you lose time compared to just doing it once at the
>>>end of the path.
>>
>>There is a huge difference between moving a piece at ply 5, where the time taken
>>to update it gets distributed among all 32 children, and moving a piece in a
>>leaf node.
>>
>>Even when I move my bishop 4 times on a path, as long as it's not on the last
>>ply, I will still save time with incremental eval.  The exponentially-shaped
>>search tree takes care of that.
>>
>>-Andrew-
>>
>
>
>Sure... but now the other shoe falls:  I don't do that bishop evaluation on
>every terminal node.  I do it on about 10% of them or so, due to the lazy
>eval exit.  But the incremental work gets done _every_ time that piece is
>moved...
>

Agreed...  Then on the other hand, the fact that I know the _EXACT_ eval at each
ply lets me do alpha-based pruning and a few other tricks which would be
impossible or inaccurate in your approach.
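[Editorial note: the trade-off under discussion - a lazy-eval exit that skips the expensive positional terms when the cheap score is hopelessly outside the window, versus an always-exact incremental score - can be sketched roughly as below. LAZY_MARGIN, the stub, and the function names are assumptions, not taken from Crafty or DIEP.]

```c
#include <assert.h>

#define LAZY_MARGIN 300   /* assumed bound on the positional terms */

/* Stand-in for the slow positional part of the evaluation. */
static int positional_terms(void) { return 50; }

/* If the cheap material score plus/minus the margin cannot re-enter the
   (alpha, beta) window, skip the expensive terms entirely. */
static int evaluate(int material, int alpha, int beta) {
    if (material + LAZY_MARGIN < alpha) return material;  /* can't reach alpha */
    if (material - LAZY_MARGIN > beta)  return material;  /* already past beta */
    return material + positional_terms();                 /* full evaluation   */
}
```

With an incremental scheme there is nothing to skip, but the exact score is in hand at every interior node, which is what enables the alpha-based pruning mentioned above.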

As usual - one small design/implementation issue may later greatly affect the
chess program as a whole :)

Andrew

>
>
>>>
>>>My point was that I don't believe that incremental eval saves a lot in many
>>>cases.  Perhaps in the opening and middlegame it saves more than it loses,
>>>since a 12-ply search probably won't move the same piece 6 times.  But in
>>>endgames, it does become more noticeable.  We did a lot of incremental stuff in
>>>Cray Blitz, and we noticed that we paid a bit for it in simple endings.
>>>
>>>
>>>
>>>>
>>>>
>>>>-Andrew-


