Computer Chess Club Archives


Subject: Re: Why is Crafty so fast?

Author: Don Dailey

Date: 07:49:06 12/02/97



>Maybe, but the key is: can you do intensive knowledge processing faster?
>Because if you can't, and if it's more laborious to code/execute, then
>your paradigm is a design compromise that just about forces the designer
>to shy away from intensive knowledge and towards bits, bytes and all the
>other non-chess-specific stuff.

>No, no. You can have incrementally updated tables, just as long as you
>include the computation time to get there. And I'd like to know if
>your tables are going to be 64 bitmaps - because if you want to generate
>offset tables to perform the exchange calculations, then ...... :)

>We know you *can* do anything, Bob. What's at issue is what you have
>right now. What bitmaps and so on. If you want more data, that comes at
>a cost. And if you want more data, you'll need to decide whether it's
>bitmap or offset. I think you'd be crazy to choose bitmap for piece
>safety on square evaluations.

I would like to point out that at times I've felt bitmaps were a
little awkward for one thing and offsets were very awkward for some
other thing.   I don't think a single example of something could
possibly settle this.   I once did mobility fast by essentially
converting, via incremental updating, into a bit board representation.
Bit boards were the better choice here, but I had to go to a lot of
trouble to simulate them.  There was also a small overhead for doing
so, but it seemed to pay off.
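To make the mobility point concrete, here is a minimal sketch of how a
bitboard makes mobility a one-liner. This is not my actual code; it
assumes a 64-bit board with one bit per square, and the function names
are mine for illustration only.

```c
#include <stdint.h>

typedef uint64_t bitboard;

/* Count the set bits in a 64-bit board (Kernighan's trick:
   clearing the lowest set bit once per iteration). */
int popcount(bitboard b) {
    int n = 0;
    while (b) {
        b &= b - 1;   /* clear the lowest set bit */
        n++;
    }
    return n;
}

/* Once the attack set is a bitboard, mobility is just a mask and a
   popcount: attacks = squares the piece attacks, own = squares
   occupied by friendly pieces. */
int mobility(bitboard attacks, bitboard own) {
    return popcount(attacks & ~own);
}
```

With an offset representation you would walk the attack rays square by
square instead; here the whole count falls out of two instructions plus
the bit count.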

>Some important chess knowledge - like a full analysis of piece safety on
>moving into a square - just like I suggested.

This seems to be your main example.  This table is at the heart of this
kind of chess program and is tightly integrated into every aspect of it.
It is used for incremental move generation and attack testing and is
probably the most commonly used data structure in today's top chess
programs, and rightly so.  It comes in nice and handy for evaluation too.
If we were to focus on the many wonderful uses of this table and confine
our discussion only to this one thing,  it would be difficult to make a
strong case for bitmaps.

Your mail is littered with the phrase "intensive knowledge processing",
and you always cite some example like, "what if you wanted to know
which pieces were attacking this square?"  This is an unfair example
to cite, because your whole program was built to answer this (and
mainly this) question only.

I don't wish to attack this kind of approach.  I think it is a
wonderfully intelligent approach, and some incredibly fast and strong
programs have been written which prove its value.

But now that 64 bit processors are becoming commonly available, let's
see what we can do with them!   Even though you strongly believe that
you cannot have a knowledge intensive program using bitmaps (whatever
that means), I see some real possibilities and want to see for myself.

The techniques for building powerful offset based chess programs have
been refined over a long period of time, and the results have been
impressive.  Although bitmap techniques have also been around for a
long time, I don't believe the same effort has been spent on them.
Few people have had access to them, and there have been no commercial
efforts that I am aware of.

There is no doubt that bitboard programs should not try to emulate
offset programs.  Perhaps there will be some improvements in some areas
and some compromises in others.   Each approach will try to maximize
the things it does well and minimize the things it does not.  I happen
to believe that it's probably the case (how is that for non-committal?)
that "intensive knowledge engineering" will be the more natural choice
for bitmaps, and that the fastest possible program should use offsets
and heavy pre-processing.

But let's just see what happens when a number of our best programmers
start looking at bitboards with 64 bit machines and some time has
passed.

>to compare these two approaches, we need something that is not horribly
>complex to compute,

>Some important chess knowledge - like a full analysis of piece safety on
>moving into a square - just like I suggested.

>My contention is that you wouldn't want to do this with bitmaps.  And, if
>this is so, that the bitmap approach forces a lack of knowledge in chess
>programs.

I certainly agree that you should pick this one as your example.  Your
program has this exact data structure computed and ready to go!

Now, as I promised, here are some reasons why I favor using bitboard
technology and believe it is the way to go:

Bitboards are great when you want to compute something you forgot to
anticipate when you designed the program.  It might be useful to find
examples of things that are NOT in either program.  How quickly can
either approach get to this knowledge?   In a lot of cases I can think
of fast implementations for either approach and in some it might be
awkward in one or the other (or both.)   But with bitboards you can
often build up your data (in the evaluation) and use it to death.  For
instance, I have bitboards of:

  holes : any square that can never be attacked by an enemy pawn
  whitesquares : all white squares
  blacksquares : all black squares
  distance x : table of bitmaps of squares at distance x from some target
  open : bitboard of open files
  half : bitboard of half-open files
  clear : bitboard of squares with no obstacles in front

And many, many others.   Most of these are computed (virtually)
instantly, and some complement others or can be built in a logical
progression.  Most of the pawn structure (even though we have pawn
structure hash tables, that should be considered a separate issue) is
computed in parallel and is trivial.  I can't imagine being able to
flexibly add so much knowledge with an offset program.  I agree you
"pay a price" for this.  But we have made the decision to abandon
pre-processing (I still don't know if this is wise) in favor of biting
the bullet, doing the stuff dynamically (at leaf nodes), and seeing
what happens.  I don't pretend that my program is very smart yet, but
I'll bet it would be hard to get the same knowledge with an offset
program without slowing way down.  I would like to note that there is
nothing that prevents us from using a piece/square pre-processed table
either.  There would be almost zero overhead for this, and it has
nothing to do with bitmaps or offset generators.
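A couple of the tables above fall out of a few shifts and masks.  Here
is one possible sketch of the "open" and "half" bitboards, assuming bit
0 is a1 and squares run rank by rank; the constants and function names
are just for illustration, not my actual implementation.

```c
#include <stdint.h>

typedef uint64_t bitboard;

/* One bit per rank on the a-file; shift left by f for the other files. */
#define FILE_A 0x0101010101010101ULL

/* Files containing no pawns of either color: fully open. */
bitboard open_files(bitboard wpawns, bitboard bpawns) {
    bitboard open = 0;
    int f;
    for (f = 0; f < 8; f++) {
        bitboard file = FILE_A << f;
        if (((wpawns | bpawns) & file) == 0)
            open |= file;
    }
    return open;
}

/* Files with none of our pawns but at least one enemy pawn:
   half open from our point of view. */
bitboard half_open_files(bitboard ours, bitboard theirs) {
    bitboard half = 0;
    int f;
    for (f = 0; f < 8; f++) {
        bitboard file = FILE_A << f;
        if ((ours & file) == 0 && (theirs & file) != 0)
            half |= file;
    }
    return half;
}
```

These are the kinds of tables that complement each other: once the pawn
bitboards exist, each derived board is a handful of operations over them.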

I'll admit there are a few things we do for which it would be nice to
have our attack board tables back.  It turns out that we compute the
same information from scratch anyway, as we need it!   But it's no
"big deal" with bit boards.
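As an example of recomputing attack information from scratch, here is
a sketch of answering "which knights attack this square?" with a table
precomputed once at startup.  The square mapping (bit 0 = a1, rank by
rank) and the names are my own assumptions for illustration.

```c
#include <stdint.h>

typedef uint64_t bitboard;

/* Squares a knight on sq attacks, filled in once at startup. */
static bitboard knight_attacks[64];

void init_knight_attacks(void) {
    static const int df[8] = { 1, 2,  2,  1, -1, -2, -2, -1 };
    static const int dr[8] = { 2, 1, -1, -2, -2, -1,  1,  2 };
    int sq, i;
    for (sq = 0; sq < 64; sq++) {
        int f = sq & 7, r = sq >> 3;
        knight_attacks[sq] = 0;
        for (i = 0; i < 8; i++) {
            int nf = f + df[i], nr = r + dr[i];
            if (nf >= 0 && nf < 8 && nr >= 0 && nr < 8)
                knight_attacks[sq] |= 1ULL << (nr * 8 + nf);
        }
    }
}

/* Because knight moves are symmetric, the knights attacking sq are
   the attack set *from* sq intersected with the knight bitboard. */
bitboard knights_attacking(int sq, bitboard knights) {
    return knight_attacks[sq] & knights;
}
```

No incrementally maintained attack table is needed for this query; one
table lookup and one AND recover the same answer on demand.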

Now it's easy to build a table (or compute) whether a square is white
or black with either type of program.  But how quickly can you tell me
how many white pawns are on black squares?    OK, I could add this to
the list of things I incrementally update if I wanted to.  But then,
what if I wanted to know which ones were on the opponent's half of the
board?   Or within 3 squares of the king?   Most of these things I might
not consider with an offset program because it would slow the program
down too much.  These are the type of things that a bitboard style of
program finds very natural to compute so we will try to capture chess
principles in a way that fits within this scheme.   The offset chess
programmer will likewise try to "capture" chess principles that fit
naturally within his design framework.
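To show how natural these queries are, here is a sketch of the "white
pawns on black squares" questions above, again assuming bit 0 is a1
(a dark square) with rank-by-rank square numbering; the constants and
function names are hypothetical.

```c
#include <stdint.h>

typedef uint64_t bitboard;

static const bitboard DARK_SQUARES = 0xAA55AA55AA55AA55ULL; /* a1 is dark */
static const bitboard BLACK_HALF   = 0xFFFFFFFF00000000ULL; /* ranks 5-8 */

/* Count set bits by clearing the lowest one each iteration. */
int popcount(bitboard b) {
    int n = 0;
    while (b) { b &= b - 1; n++; }
    return n;
}

/* "How many white pawns are on black squares?" -- one AND, one popcount. */
int wpawns_on_dark(bitboard wpawns) {
    return popcount(wpawns & DARK_SQUARES);
}

/* Restricting to the opponent's half of the board (or a king
   neighborhood) is just one more mask intersected in. */
int wpawns_on_dark_in_enemy_half(bitboard wpawns) {
    return popcount(wpawns & DARK_SQUARES & BLACK_HALF);
}
```

Each refinement of the question costs one additional AND, which is why
these queries feel free in a bitboard program and expensive in a
square-by-square offset loop.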

It will take a lot to convince me that offset based programs have a
natural advantage over bitmaps for capturing chess knowledge, but I
might be convinced the opposite is true.

The one thing I am convinced of, is that a lot of chess knowledge is
not easily captured by either approach!

Don




