Computer Chess Club Archives



Subject: Idea for "Meta-Engine"? For deep, time consuming analysis...

Author: Armand Tuzel

Date: 00:01:29 01/30/06


As far as I can tell, the search that all serious chess engines currently use
is fundamentally based on the idea of alpha-beta search with static evaluations
at leaf nodes and lots of clever pruning techniques.  So each additional ply
such an engine searches requires an exponential increase in the running
time... (the time needed to search n + 1 ply is a factor of about
[avg branching factor] greater than the time needed to search n ply.)
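
Just to make that growth concrete, here is a tiny back-of-the-envelope sketch
in Python (the effective branching factor of 6 is an assumed value for a
well-pruned engine, not a measured figure):

    # Rough cost of each extra ply, assuming an effective branching
    # factor of 6 after alpha-beta pruning (an assumption, not a
    # measurement).  Each extra ply multiplies the running time.
    effective_branching = 6
    seconds = 1.0
    for ply in range(1, 9):
        print(f"depth {ply}: ~{seconds:,.0f} s")
        seconds *= effective_branching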

Here is an idea for a "meta-engine" that operates a little differently, which
might be useful for deep analysis of a position.  It would have a very low
branching factor, so that the time required to search 1 ply deeper into a
position would be closer to linear than exponential.

It is a bit like the "triple brain" feature included in Shredder, but not
exactly the same...

This so-called "meta-engine" would be given a position to analyze.  It would
then ask several strong UCI engines (let's say Rybka, Fruit, and Deep Shredder)
to each evaluate the position for a certain period of time (just as an example,
5 minutes), after which it would take the best move found by each engine and
assign the position a score based on the average evaluation of the UCI engines.

Then, for each of the positions arising from the "best moves" found by the UCI
engines, it would repeat the process, thus building a search tree.
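
A minimal sketch of that loop, written here with the python-chess library; the
engine paths, the per-node time budget, and the depth cutoff are placeholder
assumptions, not part of the original idea:

    import chess
    import chess.engine

    ENGINE_PATHS = ["./rybka", "./fruit", "./shredder"]   # hypothetical paths
    SECONDS_PER_NODE = 300                                # e.g. 5 minutes

    def evaluate_meta_node(board, engines):
        # Ask every engine for its best move and score in this position.
        # Scores are converted to centipawns from White's point of view.
        scores, best_moves = [], set()
        for engine in engines:
            info = engine.analyse(board,
                                  chess.engine.Limit(time=SECONDS_PER_NODE))
            scores.append(info["score"].white().score(mate_score=100000))
            best_moves.add(info["pv"][0])
        return sum(scores) / len(scores), best_moves

    def build_meta_tree(board, engines, depth):
        # Expand only the engines' suggested best moves; because best_moves
        # is a set, the branching factor is the number of *distinct* moves.
        score, best_moves = evaluate_meta_node(board, engines)
        children = {}
        if depth > 0:
            for move in best_moves:
                board.push(move)
                children[move] = build_meta_tree(board, engines, depth - 1)
                board.pop()
        return {"score": score, "children": children}

    engines = [chess.engine.SimpleEngine.popen_uci(p) for p in ENGINE_PATHS]
    tree = build_meta_tree(chess.Board(), engines, depth=3)
    for engine in engines:
        engine.quit()

The set of best moves is what keeps the tree narrow: duplicates collapse, so a
node only forks where the engines actually disagree.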

There could be several benefits to an approach like this.

First, each individual engine has its own weaknesses and strengths - where one
engine, even the strongest of engines, will fail to find the strongest move in
a position, another engine may find it very quickly.  But one thing I notice is
that an engine which could not find the correct move in a position, even after
thinking for a long period of time, will very quickly recognize that move as
good once the move has been made and the engine is evaluating the resulting
position.

(Presumably this phenomenon is due to "blind spots" in the search tree caused
by the different pruning techniques used by each engine.)

So, by forcing each of the several UCI engines to consider (execute) moves
suggested by the other engines - moves they may have overlooked - all the
engines may recognize the best continuation rapidly (to put it roughly, in
"linear time" rather than exponential time): each engine is simply put in the
new position which results from the move, instead of searching millions of
futile positions stemming from the position one move earlier.
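
In code, this cross-checking amounts to pushing the overlooked move onto the
board before asking the other engines anything (this reuses the hypothetical
names from the sketch above):

    # Force a move suggested by one engine and let the others evaluate
    # the *resulting* position directly, so a line they pruned away
    # never has to be rediscovered by their own searches.
    def cross_check(board, move, engines, seconds=60):
        board.push(move)               # the move is made, not searched for
        scores = []
        for engine in engines:
            info = engine.analyse(board, chess.engine.Limit(time=seconds))
            scores.append(info["score"].white().score(mate_score=100000))
        board.pop()
        return sum(scores) / len(scores)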

Second, each engine has its own idiosyncrasies in evaluating a position.  Some
positions will be evaluated very differently by different engines, even though
each engine is strong.  There are many cases of positions which one strong
engine will evaluate as +100 centipawns (a large edge for white), while another
strong engine will score as -100 centipawns (a large edge for black).  These
two divergent evaluations cannot both be true at the same time...  So probably,
averaging the evaluations of all of the strong engines will cancel out some of
each engine's idiosyncratic deviations from "the chess truth", and my instinct
is that the average, composite evaluation will be overall a more accurate one.
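
As a toy worked example of the averaging (the numbers are invented purely for
illustration):

    # Two strong engines disagreeing by two full pawns, a third in
    # between; the composite score lands near the middle.
    evals_cp = {"engine_a": +100, "engine_b": -100, "engine_c": +30}
    composite = sum(evals_cp.values()) / len(evals_cp)   # +10 cp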

Third, the branching factor for building the search tree will be low - the
time required to search increasing ply depths will be closer to linear than
exponential.  In many positions, most engines (if given sufficient time) will
agree on the best move.  Only in very ambiguous positions will strong engines
disagree widely about the best move.  Thus the branching factor of the tree
which is built, if each node is evaluated for long enough, will be pretty close
to 1... (even in the worst case, supposing three different UCI engines are used
by the "meta-engine", three different best moves will be found for a given
position, but in most cases fewer than three distinct best moves will be
found.)
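
This is exactly why evaluate_meta_node() in the sketch above collects the best
moves into a set - duplicates collapse, so a node only forks where the engines
genuinely disagree:

    # Three engines, two distinct opinions: the meta-node branches 2 ways.
    suggestions = ["Nf3", "Nf3", "d4"]   # one per engine (made-up moves)
    branching = len(set(suggestions))    # 2 here; often 1, at most 3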

So although the time to evaluate each "meta-node" will be measured in minutes
rather than microseconds, the low branching factor would mean that, for
analysis over very long time periods, this "meta-engine" would be able to
search more deeply into the critical lines which arise from a position.  Also,
each "meta-leaf node" of the meta-engine's search tree would be evaluated not
by a single quick and inaccurate static evaluation function, but by a search
over millions of "sub-nodes" carried out by multiple engines.

Effectively this would mean that every "meta-node" gets evaluated to a fixed
ply-depth, from the root of the "meta-search-tree" all the way down to the
leaves of that tree.  This is in sharp contrast to a normal search tree, where
the root is evaluated to a high ply, but the leaves are evaluated at "no ply"
(just the static evaluation function).
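
One natural way to pull a single score and line out of such a tree is an
ordinary minimax backup over the meta-nodes; here is a sketch over the tree
dictionaries built earlier (scores from White's point of view, as in that
sketch):

    # Back up the deep multi-engine scores minimax-style.  Unlike a
    # normal search, the leaves here carry full engine evaluations,
    # not static ones.
    def backup(node, white_to_move):
        if not node["children"]:
            return node["score"]
        values = [backup(child, not white_to_move)
                  for child in node["children"].values()]
        return max(values) if white_to_move else min(values)

    best = backup(tree, white_to_move=True)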

Under the "meta-engine", the decision of which positions to evaluate is
completely separate from the low-level static evaluation / min-max with pruning
and move sorting carried out by the UCI engines.  The functions of
move-selection and position-evaluation become distinct.  The meta-engine selects
which moves (positions) to evaluate at each fork in the "meta-tree" (totally
eliminating all but a few branches, based on the outputs of the UCI engines),
and then the UCI engines are the ones which evaluate each position in the
"meta-tree".

I think something vaguely analogous must be what Rybka is doing; it must be
part of the "secret" which makes it different from other engines... its
evaluation function for "real nodes", at the leaf nodes, is not a static
evaluation... it is actually based on some type of additional search on a
subtree with lots of "sub-nodes" on which static evaluation takes place.

What a lengthy bunch of speculations!


