Computer Chess Club Archives



Subject: Re: CSTAL2??

Author: Dann Corbit

Date: 18:18:59 05/11/99


On May 11, 1999 at 20:38:31, Peter McKenzie wrote:
[snip]
>I think the chances of being able to do this are approximately zero.
>And even if you could, it's not that hard to put zugzwang detection into a
>nullmove implementation.  It slows it down a little, but not hugely.  The fact
>that few programs bother with this (although some do) indicates that it just
>isn't worth it.
Why do you think the probability is so low?
What would be the chances (for instance) of calculating 375,000 important chess
positions at twelve minutes of PII 300 MHz CPU time each (twice -- just to be
sure)?  That works out to about nine million CPU-minutes, or roughly seventeen
CPU-years -- yet we have already done this.
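
A quick sanity check on that figure, in Python, using only the numbers
already stated above:

positions = 375_000
minutes_each = 12
runs = 2                                        # computed twice, just to be sure
cpu_minutes = positions * minutes_each * runs   # 9,000,000 CPU-minutes
cpu_years = cpu_minutes / (60 * 24 * 365.25)
print(round(cpu_years, 1), "CPU-years")         # ~17.1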

I think that to engineer a zugzwang position and then search backwards in
retrograde fashion is not nearly as hard as it seems.  In fact, I think I could
write a program to do this automatically.

Extreme rarity is not important as long as you can still generate trillions of
them.

One way or another you have to either do a brute force search or trim the tree
(somehow).  If you do trim the tree, that means you are simply ignoring some
positions [for whatever reason] and assuming that they are not important.  If I
can either find or generate these supposedly unimportant positions which turn
out to actually be important, then your program can be put into a trouble spot.

So, you either have the unenviable task of multiplying your work by about 30
+/- {some small number} for each additional ply {on average} or generating a
giant blind spot for yourself.  In order to make the blind spot unimportant, you
have to expend more energy.  And to render the blind spot completely unimportant
you have to somehow search the entire blind area or come up with a way to
extract the identical information.
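
To put a number on that per-ply factor, a small Python illustration, assuming a
uniform branching factor of 30 (a round number chosen to match the figure
above; real positions vary).  The best case for alpha-beta is the classical
Knuth-Moore bound of b**ceil(d/2) + b**floor(d/2) - 1 leaves:

b = 30                                   # assumed uniform branching factor
for d in (4, 6, 8):
    brute_force = b ** d                 # full-width tree leaves
    alpha_beta = b ** ((d + 1) // 2) + b ** (d // 2) - 1
    print(f"depth {d}: brute force {brute_force:,} "
          f"vs alpha-beta best case {alpha_beta:,}")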

Slow searchers are coming up with smarter ways to ignore the blind area while
examining fewer nodes.  If we can find a way to do this that is very fast (e.g.
*not* directly proportional to searching) then we can increase the performance
of a chess program's fundamental algorithm.

Some fundamental techniques are:
1.  Hash tables {don't look under a rock we have already looked under}
2.  Move ordering {look at the most likely moves first}
3.  Alpha-Beta {don't waste time looking at moves that stink or that stink for
the opponent} [MTD(f) is sort of like Alpha-Beta with hashing of what we have
done]
4.  Null move {assume we skip a turn -- if this fails high, then your position
is probably so good that even if the other side could have two moves in a row it
won't help, and hence this position is not worth further search}
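
To make the list concrete, here is a minimal negamax sketch in Python that
wires the four techniques together, including the zugzwang guard on the null
move mentioned above.  Everything about it is illustrative, not anyone's actual
engine: the third-party python-chess library is assumed for move generation,
the evaluation is bare material, the depth-3 null-move reduction is just one
conventional choice, and the transposition table is cruder than a real one.

import chess

# 1. hash table: (position, depth) -> score.  A real table would also
# record whether the stored score is exact or only a bound.
tt = {}

VALUES = {chess.PAWN: 100, chess.KNIGHT: 300, chess.BISHOP: 300,
          chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

def evaluate(board):
    # Bare material count from the side to move's point of view.
    score = 0
    for piece in board.piece_map().values():
        value = VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def has_pieces(board):
    # Zugzwang guard: only try the null move when the side to move
    # still owns something besides king and pawns.
    return any(p.color == board.turn and
               p.piece_type not in (chess.PAWN, chess.KING)
               for p in board.piece_map().values())

def search(board, depth, alpha, beta):
    key = (board.fen(), depth)       # crude but correct position key
    if key in tt:                    # 1. rock already looked under
        return tt[key]
    if board.is_checkmate():
        return -100_000              # side to move is mated
    if board.is_stalemate():
        return 0                     # draw
    if depth == 0 or board.is_game_over():
        return evaluate(board)

    # 4. null move: give the opponent a free extra move; if we still
    # fail high, cut off.  Skipped in check and in king-and-pawn
    # positions, where zugzwang makes the assumption unsound.
    if depth >= 3 and not board.is_check() and has_pieces(board):
        board.push(chess.Move.null())
        score = -search(board, depth - 3, -beta, -beta + 1)
        board.pop()
        if score >= beta:
            return beta

    # 2. move ordering: look at captures before quiet moves.
    moves = sorted(board.legal_moves, key=board.is_capture, reverse=True)
    best = -1_000_000
    for move in moves:
        board.push(move)
        score = -search(board, depth - 1, -beta, -alpha)
        board.pop()
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:            # 3. alpha-beta cutoff
            break
    tt[key] = best
    return best

print(search(chess.Board(), 4, -1_000_000, 1_000_000))

Note where the blind spots live: the null-move cutoff and the narrow window
both assume the skipped subtrees don't matter, which is exactly the incomplete
knowledge discussed next.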

None of these is really completely deterministic.  Hash entries can be
over-written.  A bad-looking move might really be best, we might be in
zugzwang, etc.  In a very real sense, all of these are chess knowledge (however
incomplete).  By incorporating better knowledge we may possibly improve the
big-O performance of the algorithm.  A breakthrough like that is what will
revolutionize chess programming (if it should occur).

Consider subdivision.
Imagine that we have only tiny chess boards to consider -- perhaps 4x4.  We can
solve each of these much more easily than an 8x8 board.  Now, each of these 4x4
boards joins its neighbors along only two edges.  Suppose that we can solve the
4x4 set, then enumerate the combinations of these 4x4 boards.  Still possibly
too difficult -- but what about 2x2?

I have no idea if that idea has any merit, but what I am saying is that a
fundamental change in algorithm or incorporation of knowledge is what will lead
to revolution instead of evolution.






