Computer Chess Club Archives


Subject: impossibility of b.f. <= 4 when searching fullwidth.

Author: Vincent Diepeveen

Date: 07:18:36 10/14/02


On October 14, 2002 at 10:01:47, Vincent Diepeveen wrote:

>On October 14, 2002 at 08:59:37, James Swafford wrote:

You asked why 3.8 is not possible.

That is because the minimal tree which you have to search,
as proven on paper by Knuth, is roughly:

2 * squareroot(number of possibilities in the full-width tree)

That 2 is a constant factor, so it is not interesting and I always
leave it out, just like the 1 node you subtract in the exact formula.

So, roughly, searching a tree with on average 40 possibilities
in each position to 18 plies, it is 2 * squareroot(40^18) =
2 * 40^9 = 2 * 262144000000000

A big number :)
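For what it is worth, a quick Python sketch (just my own check of
the arithmetic above; the exact Knuth/Moore count for branching
factor b and depth d is b^ceil(d/2) + b^floor(d/2) - 1) reproduces
that number:

  b = 40                   # average number of possibilities per position
  d = 18                   # search depth in plies
  minimal = b ** (d // 2)  # squareroot(40^18) = 40^9
  print(minimal)           # 262144000000000
  print(2 * minimal)       # 524288000000000, keeping the factor 2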

The literature says the average number of possibilities in chess is 35.

I always assumed that, until a bunch of years ago I started measuring
it myself, because I was unhappy about my branching factor. I never
managed to get it under 4.2 WITH hashtables.

Because that's the important thing here: Knuth's lemma holds if you
do not use hashtables. And Knuth doesn't consider the overhead of
not having an optimal search, nor the quiescence search overhead.

Crucial to know is how many possibilities a chess position has, in
order to know what the theoretically smallest tree is without
hashtables and without nullmove.

So I measured it in Diep using a huge search from the opening
position (18 ply); it wasn't just dumb luck.

I measured two different things:
  a) the number of possibilities when in check
  b) the number of possibilities when not in check.

As we all know, when in check, all chess programs extend the search
by 1 ply, so those positions do not count toward the tree. Agreed?

The average number of possibilities I measured for b) was 40. I was
very surprised, but it answered a question of mine.

If each ply deeper there are 40 times more possibilities, that
automatically means your minimal tree without hashtables grows
per ply by a factor of:
  squareroot(40) = 6.32
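
To make that concrete (again just my own Python illustration, not
Diep code): the minimal tree at even depth d is squareroot(40^d) =
40^(d/2), so every extra ply multiplies it by squareroot(40).

  import math

  b = 40  # measured average number of moves when not in check

  def minimal_tree(d):
      # minimal tree size with the constant factor left out
      return b ** (d / 2)

  for d in (16, 17, 18):
      print(minimal_tree(d + 1) / minimal_tree(d))  # ~6.32 per ply
  print(math.sqrt(b))                               # 6.3245...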

So you will not be amazed that I was called an idiot for saying
that it was possible to get a branching factor of less than 4.5,
even 3.5 when using nullmove,
and this at a time when everyone had a branching factor
closer to 10 than to 3.5.

In reality the branching factor in parallel search is not so clear,
because when the big machines with > 32 processors do a search, it
takes a while before all processors have a job.

In fact it takes too long right now here; I'm still busy improving
the time needed to give all processors a job.

Zugzwang needed about 30 seconds to 1 minute to get them all to work.

I played against the thing several times and saw it in action very
well at the world championship 1999 (July) and IPCCC 1999 (February).

It is very logical that Deep Blue with 480 processors faced the same
problems. So the 4.xx branching factor most people claim is not
really true.

What you see is that it starts the search with a few processors and
after a while many processors join in. That makes the time needed
to get another ply a very BAD measure to conclude a branching factor
of 4.5 from, like Bob did.

The real branching factor of DB in 1997 was way, way bigger.

What you see is simply that it ramps up from 1 processor to
50 processors. So it gets 50 times more power, which makes it show a
4.xx branching factor. Get the picture now why the log files do not
show the b.f. very accurately?
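
As a toy model in Python (made-up numbers, not from any DB log) of
how the ramp-up hides the real branching factor when you read it off
the iteration times:

  # suppose the REAL branching factor is 8: every iteration searches
  # 8x the nodes of the previous one, but processors join gradually
  real_bf = 8.0
  nodes, procs = 1.0, 1
  times = []
  for _ in range(6):
      times.append(nodes / procs)  # time = work / active processors
      nodes *= real_bf             # real growth of the tree per ply
      procs = min(procs * 2, 50)   # more processors get a job
  for prev, nxt in zip(times, times[1:]):
      print(nxt / prev)            # apparent b.f.: 4.0, not 8.0

Every iteration the tree is 8 times bigger, but because twice as many
processors are busy, the clock only shows a factor 4.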

Best regards,
Vincent




