Computer Chess Club Archives



Subject: Re: DB will never play with REBEL, they simply are afraid not to do well

Author: Ratko V Tomic

Date: 10:41:17 10/16/99



>the issue is 'null-move'  that selectively reduces the depth on non-critical
>branches, which reduces the effective branching factor.

The Sqrt(Width) lower boundary for exact minimaxing still remains. Anything
with a smaller B is an approximate minimaxing, i.e. not guaranteed to return the
same value as plain alpha-beta would (which always returns the rigorous
minimax value). Hopefully the approximate minimaxing has an error of roughly
the same magnitude as the error due to the uncertainty of the evaluation
function; otherwise the algorithm is performing sub-optimally given the
computing resources and time limits. I suppose one could tune the two sources of
error against each other by using a deeper search as a substitute for the exact
node value (e.g. to estimate the error distributions on samples of regular
searches).
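
(For reference, the Sqrt(Width) figure comes from the size of the minimal
alpha-beta tree, which under perfect move ordering has
W^ceil(d/2) + W^floor(d/2) - 1 leaves (Knuth/Moore), i.e. grows roughly as
W^(d/2). A small numerical sketch in Python, with the width set to 38 as in
the position mentioned further down; the depths are just picked for
illustration:)

    # Minimal alpha-beta tree (Knuth/Moore): leaves = W^ceil(d/2) + W^floor(d/2) - 1,
    # so the effective branching factor of an exact search cannot drop below
    # roughly sqrt(W) without giving up the rigorous minimax value.
    import math

    W = 38                                   # branching width of the test position
    for d in range(2, 13, 2):
        minimax_leaves   = W ** d
        alphabeta_leaves = W ** math.ceil(d / 2) + W ** (d // 2) - 1
        eff_b = alphabeta_leaves ** (1.0 / d)    # effective branching factor
        print(d, minimax_leaves, alphabeta_leaves, round(eff_b, 2))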

> if you try several different positions, you ought to see numbers way
> below 4.5 on average...  Note that tactical tests are the wrong kind
> of positions...  because they, by definition, are not searched optimally...

It may be that what the Fritz UI displays as "Nodes=" for the Crafty 16.6 engine
means something different from what the full Crafty program displays. In the
message to Jeremiah I left the position I used, which had Width=38 (and fairly
stable), so we can see whether he gets B=4.5..4.7 at fixed depths 10-12, or B
closer to 3.
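
(For anyone repeating the test, what I mean by B is simply the per-ply growth
of the reported node counts. A sketch of the two usual ways to extract it; the
node counts here are made up purely for illustration, they are not Crafty's
actual output:)

    # Two common ways to estimate the effective branching factor B from the
    # total node counts a program reports at successive fixed depths.
    # These counts are invented for illustration, not Crafty's real output.
    nodes = {10: 25_000_000, 11: 110_000_000, 12: 500_000_000}

    b_ratio = nodes[12] / nodes[11]      # (a) ratio of consecutive totals
    b_root  = nodes[12] ** (1.0 / 12)    # (b) d-th root of the total at depth d

    print(round(b_ratio, 2), round(b_root, 2))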


>there are still massive problems.  IE lazy evaluation.  And when searching
>moves from position "X", the first might produce a score of -.1 which is enough
>to cause a cutoff.  But if you searched further, the second might produce a
>score of -2, which is even more than enough to cause a cutoff.  But alpha/beta
>doesn't search the second...
>

I think your objection is from the viewpoint of trying to find out the
objective/actual amount of "junk" in a tree. In that case, yes, even plain
alpha-beta won't work. But I was talking about the _evaluated_ amount of "junk"
(vs. the evaluated amount of non-junk), as it occurs in the state-of-the-art
alpha-beta searchers. The point of that was to establish whether the current
state-of-the-art programs have the _evaluated_ proportion of non-junk going
exponentially to zero with increasing depth. I think they do, meaning that
their efficiency goes to zero with depth as well.


>I wouldn't disagree.  minimax has a huge number of 'junk' positions by any
>definition.  However, the search can't be executed without them.

Well, if you look at a human player using a computer for in-depth analysis of a
position, such as for a postal game, you will see that the best result is not
obtained by letting the program think on its own for 24 hours. Instead, the
human player will guide the moves over plausible alternatives and have the
program do much shorter mini-searches along the way, going much deeper than the
program would by itself. (I think there were some results by Althöfer showing
that a "hybrid" player (a human guiding/advising a computer) is significantly
stronger than either that human or the program alone.) That merely shows that
there is, at least in principle, a more effective way to search.

A multilayered control, with alpha-beta search employed as the lower layer and
guided into mini-searches (a la the "hybrid player") by a more intelligent and
flexible type of algorithm, under constraints imposed by the higher level
(i.e. looking for some objective, thus gaining an extra source of cutoffs, in
addition to the mini-tree vs. full-tree gains), will eventually be created and
will drive the current alpha-beta searchers (with their ad hoc shortcuts)
extinct.
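
(Purely as a toy sketch of the layering I have in mind, on an artificial random
game tree; every name and number here is made up for illustration and this is
not any existing engine's design. A cheap pass by the "higher layer" picks a
few plausible candidates, and the node budget is then spent on deeper
mini-searches of just those lines, instead of one monolithic full-width
search:)

    # Toy illustration of two-layer control on a random game tree:
    # the lower layer is ordinary alpha-beta; the "guiding layer" is faked
    # here with a shallow pre-search that selects a few candidate lines,
    # which then get deeper mini searches.
    import random

    WIDTH = 8

    def evaluate(state):
        # stand-in for a noisy static evaluation, deterministic per state
        return random.Random(state).uniform(-1.0, 1.0)

    def children(state):
        return [state * WIDTH + i + 1 for i in range(WIDTH)]

    def alpha_beta(state, depth, alpha=-2.0, beta=2.0):
        if depth == 0:
            return evaluate(state)
        best = -2.0
        for c in children(state):
            best = max(best, -alpha_beta(c, depth - 1, -beta, -alpha))
            alpha = max(alpha, best)
            if alpha >= beta:
                break
        return best

    def full_width(root, depth):
        # one monolithic search to a uniform depth
        return max(children(root), key=lambda c: -alpha_beta(c, depth - 1))

    def guided(root, shallow_depth, deep_depth, n_candidates):
        # "higher layer": pick a few plausible candidates cheaply ...
        cands = sorted(children(root),
                       key=lambda c: -alpha_beta(c, shallow_depth - 1),
                       reverse=True)[:n_candidates]
        # ... then spend the budget on deeper mini searches of just those lines
        return max(cands, key=lambda c: -alpha_beta(c, deep_depth - 1))

    print(full_width(0, 4), guided(0, 2, 6, 3))

The hard, open part is of course the guiding layer itself; here it is faked
with a shallow search just to make the structure concrete.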

The main underlying reason why these kinds of programs will be superior is that
they will be able to harness coherently the tiny evaluation fluctuations which
in the current programs appear as evaluation noise (be it from ply to ply
between iterations, or among the top alternatives within the same iteration),
and to pick them in such a way (by comparing their suitability for some
longer-term objectives, i.e. comparing their fit to plans) that over a longer
span of moves they add up coherently to some longer-term advantage. With the
current alpha-beta searchers these fluctuations add up randomly/incoherently
from move to move, i.e. one move may make it look as if the program has one
longer-term objective (well beyond the search horizon), while the next move may
undo that small step and appear to go for a different long-term objective. Like
a Brownian particle, these steps, by incoherently undoing each other, advance
it by only sqrt(NumSteps) in the fluctuation space, while a coherent strategy
advances linearly with NumSteps.
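
(The sqrt(NumSteps) vs. NumSteps contrast is just the usual random-walk
arithmetic; a quick numerical check, with a step size and move count that are
purely illustrative:)

    # Incoherent (random-sign) small gains drift like step*sqrt(N);
    # the same gains applied coherently in one direction add up to step*N.
    import math, random

    random.seed(0)
    step = 0.01          # size of one tiny evaluation-scale gain
    N = 10_000           # number of moves / steps
    trials = 200

    incoherent = sum(abs(sum(random.choice((-step, step)) for _ in range(N)))
                     for _ in range(trials)) / trials
    coherent = N * step

    print(round(incoherent, 3), round(step * math.sqrt(N), 3), round(coherent, 3))

With these numbers the incoherent drift comes out on the order of
step*sqrt(N) = 1, while the coherent one is step*N = 100.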

Even though you ridicule Botvinnik's work, his scheme is actually one highly
refined form (such as only a person of his level of chess understanding could
create) of that next higher layer, which should guide the brute-force search in
a much more coherent and efficient way (instead of the efficiency going
exponentially to 0). Whether he and his co-worker(s), with their severe
(Soviet-era) resource limitations, actually had a program do all the
simulations, or did some of them by hand (without always saying straight out
which was which), is ultimately irrelevant, whatever Hans Berliner says (he has
been known to be given to grand exaggerations himself, so it's a case of the
pot calling the kettle black). Even Newton (and other great scientists, e.g.
Dalton) tweaked or invented experimental data to make it appear to fit better
the grand theory which they believed (rightly) to be a major advance.



