Computer Chess Club Archives




Subject: Re: Differences between 0x88 ,10x12 and Bitboards!?

Author: Robert Hyatt

Date: 17:20:14 11/19/02


On November 19, 2002 at 20:09:14, Steffan Westcott wrote:

>As Gerd points out, an advantage of the bitboard representation is the inherent
>parallel computation. It is somewhat ironic that you perceive complex pattern
>recognition as a weakness, as that's where bitboards can be very powerful, in my
>view. Let's take your simple example and see some actual code...
>> We are now at square sq and want to know whether a complex pattern is
>> happening. For example:
>> if( sq >= 8 && board[sq-8] == bishop && board[sq-7] == rook && ...
>> What's your bitboard equivalent here?
>> [snip]
>> So here the bitboard equivalent without caring that in both
>> examples we can save out a CMP:
>> if( sq >= 8 && (Bishops[side]&Mask[sq-8]) && (Rooks[side]&Mask[sq-7]) && ...
>No, that's not the best approach when using bitboards. For this particular
>example, much better is to find all instances of the pattern at once, like this:
>typedef unsigned __int64 BitBoard;
>// Uses this ordering for bits <->squares
>// 56 57 58 59 60 61 62 63
>// 48 49 50 51 52 53 54 55
>// 40 41 42 43 44 45 46 47
>// 32 33 34 35 36 37 38 39
>// 24 25 26 27 28 29 30 31
>// 16 17 18 19 20 21 22 23
>//  8  9 10 11 12 13 14 15
>//  0  1  2  3  4  5  6  7
>BitBoard ShiftRight    (BitBoard b) { return (b<<1) & 0xfefefefefefefefe; }
>BitBoard ShiftLeft     (BitBoard b) { return (b>>1) & 0x7f7f7f7f7f7f7f7f; }
>BitBoard ShiftUp       (BitBoard b) { return b<<8; }
>BitBoard ShiftDown     (BitBoard b) { return b>>8; }
>BitBoard ShiftUpRight  (BitBoard b) { return (b<<9) & 0xfefefefefefefefe; }
>BitBoard ShiftUpLeft   (BitBoard b) { return (b<<7) & 0x7f7f7f7f7f7f7f7f; }
>BitBoard ShiftDownRight(BitBoard b) { return (b>>7) & 0xfefefefefefefefe; }
>BitBoard ShiftDownLeft (BitBoard b) { return (b>>9) & 0x7f7f7f7f7f7f7f7f; }
>BitBoard Vincents_pattern =
>    ShiftUp(ShiftLeft(Rooks[side]) & Bishops[side]);
>Here, the result is a bitboard with a set bit for every pattern match on the
>whole board. For a machine which manipulates 64-bit quantities natively, it
>includes 2 bitwise ANDs (&) and 2 shifts (>> and <<). In your non-bitboard code,
>ignoring the (sq >= 8) term, your _best_ case is 56 compares for the whole
>board, when the subexpression (board[sq-8] == bishop) is always zero ie. no
>bishops on the board.
>We see here where the choice of board representation makes a fundamental
>difference to the design of the surrounding engine. In the case of bitboards,
>position feature/pattern matching is best done on the whole board at once ie. a
>parallel method, which results in a bitwise expression. For other
>representations, position feature/pattern matching is probably done whilst
>iterating (looping) over collections (squares, ranks/files/diagonals, pieces,
>etc) ie. a serial method, which results in selection of code execution path(s).
>In your example, the serial iterator is sq.
>Complex patterns can be decomposed into simpler patterns, many of which are
>common (shared by the parent complex patterns). For bitboards, this manifests as
>common subexpressions, which can be conveniently stored and retrieved during
>evaluation. For other representations, it manifests as common program sequences,
>where convenient opportunities to optimise away the repeated code may be less
>apparent.
>I do not say here which representation is 'better'. My main assertion in this
>post is that for effective use of bitboards, a parallel evaluation of position
>features is usually better than a serial evaluation.

This is why this goes nowhere.  Until you take the time to understand bitmaps,
and get used to thinking in that box, they won't work.  I used to teach a
programming language course that included the languages FORTRAN, SNOBOL, APL
and Algol.  APL is a different kind of animal, but unfortunately it allows you
to take a fortran-like program and index thru arrays using loops, even though
the language has operators to multiply and invert matrices, compute the inner
product of matrices, and so forth.  In short, I had a problem with students
writing APL programs that looked like FORTRAN programs.  That is what Vincent
does for bitmaps.  He only sees thru non-bitmap glasses, and yet he attempts
to show how bitmap code would be done without having enough experience to do
it right.

Bitmaps are all about masks, shifting, ANDing, ORing and the like.  And some of
the things to be done require a lot of thought to do properly.  I don't
understand the approach of "what I do is good and nothing else can work because
I can prove it.  I can prove it because I can't do it myself."

That doesn't particularly impress me.  And while I don't particularly think
bitmaps are better than non-bitmap programs for all things, evaluation is
certainly (IMHO) one strong point.  And since that is the majority of my
execution time, it is important.  And since bitmap programs gain even more when
migrated to native 64-bit machines, that seems like a good thing to keep in
mind for the future...

But you can write a good program with either approach.  You can write a bad
program with either approach...



Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.