Computer Chess Club Archives



Subject: Comparison of board representations

Author: Russell Reagan

Date: 14:02:00 01/22/02


When I first began my chess programming, I chose to go with a 64-byte array. It
seemed logical and simple. After learning about other methods of representing
the board, I began to think that other board representations were superior
because they allowed for shortcuts here and there. To preface my question, here
is a short trip through what I have thought about.

Padding a 64-byte array with a one-square border allows simple edge detection
for sliding pieces, but not for the knight, which can jump two ranks or files at
once.

Padding the array further (to a 120-byte 10x12 or a 144-byte 12x12 array), with
a one- or two-square border on the left and right sides and a two-square border
on the top and bottom, allows simple edge detection that covers the knight as
well.
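The 10x12 idea can be sketched like this (a minimal sketch; the names, the
sentinel value, and the a1-at-index-21 layout are my assumptions):

```c
/* A minimal sketch of the 10x12 "mailbox" idea: border squares hold the
   sentinel -1, real squares hold 0 (empty).  The border is one file wide
   on each side but two ranks deep top and bottom, so every knight offset
   (+/-8, +/-12, +/-19, +/-21 on a 10-wide board) from a real square
   still lands inside the array. */
static int mailbox[120];

static void init_mailbox(void) {
    for (int i = 0; i < 120; i++)
        mailbox[i] = -1;                  /* everything starts off-board */
    for (int r = 0; r < 8; r++)
        for (int f = 0; f < 8; f++)
            mailbox[21 + r * 10 + f] = 0; /* a1 sits at index 21 */
}

/* Edge detection for any piece, knight included: a move stays on the
   board iff the target square is not the sentinel. */
int on_board(int sq120) {
    static int ready = 0;
    if (!ready) { init_mailbox(); ready = 1; }
    return mailbox[sq120] != -1;
}
```

With this layout a knight on a1 (index 21) trying 21-12 = 9 hits the border
rows and is rejected with the same test used for sliders.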

Improving on that, you can use a 256-byte (16x16) array to replace the multiply
in the index calculation with a shift, while keeping the same simple edge
detection as the 120-byte and 144-byte arrays.
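To make the multiply-versus-shift point concrete, here's a sketch with
hypothetical macros (ranks and files both numbered 0-7):

```c
/* On a 10-wide board the index is rank*10+file, a real multiply on the
   CPUs of the day.  On a 16-wide board rank*16 is just a left shift by
   4, and rank and file can be unpacked with a shift and a mask. */
#define SQ10(rank, file)  ((rank) * 10 + (file))
#define SQ16(rank, file)  (((rank) << 4) | (file))
#define RANK16(sq)        ((sq) >> 4)
#define FILE16(sq)        ((sq) & 15)
```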

Now a brief tangent about 0x88. This doesn't seem to work for all squares.
Perhaps someone can explain this to me. Suppose you go to square 112 (A8, I
believe) and you want to generate the rook's legal moves, so you test 112+16 to
see if the rook can move up one square on the board. But 112+16=128, which is
out of bounds of the 8x16 array, so detecting the edge for the top row doesn't
seem to work. I suppose you could simply add rows to the top and bottom and use
an 11x16 array to handle this case, but I've always heard of people using 8x16.
So either I don't understand something or something doesn't work right. I'm
betting on me not understanding something :)
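For reference, the form of the 0x88 test I've seen described is a single AND on
the index, applied before the board array is ever touched (a minimal sketch; the
function name is mine):

```c
/* The 0x88 trick as usually described: on a 16-wide board, valid files
   and ranks are both 0-7, so an index is off-board exactly when bit 3
   of the file nibble or bit 7 of the rank nibble is set -- that is,
   when (sq & 0x88) != 0.  The test runs on the index itself, before
   indexing the board array, so an index like 128 never touches memory. */
int off_board_0x88(int sq) {
    return (sq & 0x88) != 0;
}
```

If that's right, 128 (= 112+16) fails the test because bit 7 is set, with no
extra rows needed -- but maybe I'm still missing something.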

Then we have bitboards.

Originally, I was under the impression that the reason for using a 16x16 array
or 0x88 was that edge detection on a 64-byte array is less efficient than
testing the bordering squares, as is done in the 16x16 implementation.

Now my question...

Let's take the example of generating legal rook moves. Your rook is on g6, and
you start testing whether the rook can move to h6. Everything checks out, so you
increment the index past h6 and you get a7. You can test whether this next
square is OK for the rook to move to by doing something like

if(row(i) != row(i+1)) break;

or use a while loop or whatever, with the row comparison being the test. Since
you can compute the row with a macro that does a binary shift, computing the row
is a one-cycle instruction, so in all you use maybe three cycles for the test
(two shifts and an addition?). I'm not very good with ASM, so correct me if I'm
wrong; this may be where I'm confusing myself.
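Spelled out with a hypothetical ROW macro (squares numbered 0-63, a1 = 0):

```c
/* On a plain 0..63 board the row is just sq >> 3, so a rook sliding
   right falls off the board exactly when the row of the next index
   differs from the current one. */
#define ROW(sq) ((sq) >> 3)

/* g6 = 46 and h6 = 47 share row 5; h6 + 1 = 48 is a7 in row 6, so the
   row comparison stops the slide at the board edge. */
int same_row(int a, int b) {
    return ROW(a) == ROW(b);
}
```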

In the 16x16 array method, you'd have to do something like

if(squares[i+offset] == 0) break;

so you'd use two cycles for this (an addition for i+offset, and another for
indexing into squares), saving one cycle, which would add up after looking at
millions of positions.

0x88 would seem to simplify the problem even further, to a one-cycle AND
operation on the index of the square.

These speedups seem great, but will there be any speed loss from having to
compute an offset (since there are extra squares in the 16x16 and 0x88 methods)?
In a 64-byte array there is no need to compute an offset; you just keep
incrementing in a for loop. So you save a few cycles with 16x16 or 0x88, but do
you lose a few back from computing the offset?

Then there are bitboards, and they seem to save tons of time in some respects,
so it's easy to be sold on them. For example, to compute whether the king is in
check, just do a one-cycle AND operation on two bitboards (more in practice,
since most of us are on 32-bit machines) and you've got your answer. No tracing
along rays searching for attacking pieces. The same goes for computing legal
moves: you can do a lookup and an AND operation and get a bitboard of legal
destination squares. But then you have to give some speed back and go looking
through the resulting bitboard, so you might as well have searched through a
64-byte array in the first place, right?
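To make the bitboard point concrete, here's a minimal sketch (assuming a 64-bit
integer type is available; the names are mine):

```c
#include <stdint.h>

/* One 64-bit word per piece type, bit i set when square i is occupied.
   "Is the king attacked?" is a single AND of the enemy attack map with
   the king's bit -- no ray tracing. */
typedef uint64_t bitboard;

int king_in_check(bitboard king, bitboard enemy_attacks) {
    return (king & enemy_attacks) != 0;
}

/* Walking a move bitboard need not cost 64 iterations: b & (b - 1)
   clears the lowest set bit, so this loops once per destination square,
   not once per board square. */
int count_destinations(bitboard moves) {
    int n = 0;
    while (moves) {
        moves &= moves - 1;   /* clear lowest set bit */
        n++;
    }
    return n;
}
```

The second function is the usual answer to the "search all 64 bits" worry:
peeling off set bits one at a time visits only the squares that are actually
destinations.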

Basically, for each of these methods there seems to be a speedup in one area,
only for you to take a speed hit in another. I'm trying to sort out the pros and
cons and figure out what I'm missing with regard to the supposed "advantages" of
a 16x16 board, 0x88, or bitboards over a simple 64-byte array. It seems like the
extra cycles spent computing offsets for 16x16 or 0x88 would outweigh the
edge-detection cycles saved, since you compute offsets more often than you do
edge detection. And with bitboards, if you have to search not one but several
bitboards, taking a hit of 64 cycles each, the gains there seem to diminish
rapidly.

So what am I missing? Why am I not seeing how these other methods are better
than a simple 64-byte array?

Thanks for all of your comments and help on this matter.

Russell


