Computer Chess Club Archives


Subject: Re: Theory: Deeper Search creating worse performance due to PE

Author: Stuart Cracraft

Date: 15:31:58 01/04/06



On January 04, 2006 at 17:41:54, Charles Roberson wrote:

>
>   I've never seen this theory stated before; if anyone in any of the scientific
>communities has, I want the reference. If anyone has similar experience or
>sees a flaw in my logic, let's hear it.
>
>  Is it possible for an improvement in search depth to result in a performance
>degradation in match play?
>
>   I am thinking yes! The implication is interesting. You improve the search of
>your engine. That is the only change. It now searches two ply deeper. But in
>match play it scores worse. Your natural thought would be that, since all else
>was the same, you must have a bug in your search improvement.
>
>   I think it is possible to improve the search and get worse results. Here is
>how.
>
>    Let's say that your position evaluator (PE) is out of tune on some
>strategic/positional values. Deeper search works with the PE to create an edge
>for your program. Your old search kept pace (in depth) with opponents, but the
>new search sees two ply deeper on average. This gives your engine increased
>opportunity to create that edge. Once the "edge" is realized, the engine is
>actually in a bad position and the match is lost.
>
>    Before, it couldn't create the edge because it couldn't tactically outsee
>its opponents. It seems to me this scenario only happens when the PE is not
>extremely out of tune, but somewhat close to in tune.
>
>   So, can increases in search depth in match play cause an out-of-tune PE to
>reveal its issues?
>
>   This seems to be happening in some of my tests today. Other data shows that my
>program (prior to the changes) has a propensity for getting into good opening and
>middlegame positions and then blowing it. Thus, increases in search depth may
>allow it to see an advantageous position (in its own thoughts, i.e. its PE) and
>go for it at earlier moves in the game, also increasing its chances of realizing
>those positions, and thus producing worse play.
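
To make the scenario above concrete, here is a toy sketch in C: an implicit
two-move game tree with entirely made-up evaluation numbers, where the engine's
PE over-values one kind of position by half a pawn. At depth 2 the engine cannot
force that position and plays the quiet move; at depth 4 it can force it and goes
for it, even though (by assumption) that position is actually lost. None of this
comes from a real engine; it only illustrates the mechanism.

/* Toy illustration of the scenario above (hypothetical numbers, not from any
 * real engine): a mis-tuned PE gives +0.50 to a "feature F" position that is
 * actually lost.  At depth 2 the engine cannot see a way to force that
 * position, so it plays the quiet move A.  At depth 4 it can force it, so it
 * plays B and walks into the bad position. */
#include <stdio.h>

#define BRANCH 2

/* Static evaluation from the root side's point of view, indexed by the move
 * path from the root ('0'/'1' per ply).  All numbers are made up. */
static double eval(const char *path, int len)
{
    if (path[0] == '0')               /* move A: quiet line, dead equal */
        return 0.0;
    if (len == 2)                     /* 2-ply horizon: opponent's reply looks strong */
        return path[1] == '0' ? -0.40 : -0.30;
    /* 4-ply horizon under move B */
    if (path[1] == '0' && path[2] == '0')
        return 0.50;                  /* the "feature F" position the mis-tuned PE loves
                                         (assume its true value is more like -1.00) */
    if (path[1] == '0' && path[2] == '1')
        return -0.45;
    if (path[1] == '1' && path[2] == '0')
        return 0.20;
    return -0.35;
}

/* Plain fixed-depth minimax over the implicit binary tree. */
static double minimax(char *path, int len, int depth, int maximizing)
{
    if (depth == 0)
        return eval(path, len);
    double best = maximizing ? -1e9 : 1e9;
    for (int m = 0; m < BRANCH; m++) {
        path[len] = (char)('0' + m);
        double v = minimax(path, len + 1, depth - 1, !maximizing);
        if (maximizing ? v > best : v < best)
            best = v;
    }
    return best;
}

int main(void)
{
    char path[16];
    for (int depth = 2; depth <= 4; depth += 2) {
        double best = -1e9;
        int bestmove = 0;
        for (int m = 0; m < BRANCH; m++) {
            path[0] = (char)('0' + m);
            double v = minimax(path, 1, depth - 1, 0);
            printf("depth %d: root move %c scores %+.2f\n", depth, m ? 'B' : 'A', v);
            if (v > best) { best = v; bestmove = m; }
        }
        printf("depth %d plays %c\n\n", depth, bestmove ? 'B' : 'A');
    }
    return 0;   /* depth 2 picks the safe move A, depth 4 goes for B */
}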

I just finished implementing a first attempt at Tord's late move reduction,
and the program got a 0.6 smaller branching factor and 1.5 more ply for the
same amount of time, across 300 positions in a suite. It even solved one
more position on the suite.
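
Roughly, the idea looks like the sketch below, run here over an artificial
hash-generated game tree rather than real chess so that it is self-contained.
The particular rule (reduce moves after the first three by one ply when the
remaining depth allows, and re-search at full depth when the reduced search
beats alpha) is just one common formulation, not necessarily Tord's exact one
and not necessarily what I coded; a real engine would also skip the reduction
for captures, checks, and other tactical moves.

/* Sketch of late move reduction inside a fail-hard negamax, on an artificial
 * game tree (hash-generated positions and evals) instead of real chess, just
 * to show the control flow and the node-count effect. */
#include <stdio.h>
#include <stdint.h>

#define BRANCH 8
#define DEPTH  7

static long nodes;

/* Pseudo-random but deterministic "position" keys and static evals. */
static uint64_t child_key(uint64_t key, int move)
{
    uint64_t x = key ^ (0x9E3779B97F4A7C15ULL * (uint64_t)(move + 1));
    x ^= x >> 30; x *= 0xBF58476D1CE4E5B9ULL;
    x ^= x >> 27; x *= 0x94D049BB133111EBULL;
    return x ^ (x >> 31);
}

static int eval(uint64_t key)          /* score from the side to move's view */
{
    return (int)(key % 201) - 100;
}

/* Fail-hard negamax; if use_lmr is set, moves after the first three are
 * searched one ply shallower first and only re-searched at full depth when
 * they beat alpha. */
static int search(uint64_t key, int depth, int alpha, int beta, int use_lmr)
{
    nodes++;
    if (depth == 0)
        return eval(key);

    for (int i = 0; i < BRANCH; i++) {
        uint64_t next = child_key(key, i);
        int score;

        if (use_lmr && i >= 3 && depth >= 3) {
            score = -search(next, depth - 2, -beta, -alpha, use_lmr);
            if (score > alpha)         /* looked good at reduced depth: verify at full depth */
                score = -search(next, depth - 1, -beta, -alpha, use_lmr);
        } else {
            score = -search(next, depth - 1, -beta, -alpha, use_lmr);
        }

        if (score >= beta)
            return beta;               /* fail-hard cutoff */
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}

int main(void)
{
    for (int use_lmr = 0; use_lmr <= 1; use_lmr++) {
        nodes = 0;
        int score = search(1ULL, DEPTH, -1000, 1000, use_lmr);
        printf("%s: score %d, %ld nodes\n",
               use_lmr ? "with LMR   " : "without LMR", score, nodes);
    }
    return 0;
}

The node counters make the effect on the effective branching factor easy to
see: at the same nominal depth the reduced version should visit noticeably
fewer nodes, with the exact saving depending on how often the re-search
triggers.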

Now the question is: is it stronger simply because it is being more
selective, or, as in your case, weaker?

Those on this board who spoke to me some time ago would say to test it against
other people and programs, not on a test suite. But it is too soon for that.

For your question, I think deeper search with the same or better evaluation
will always be better than shallower search.

More is better than less when you hold everything else the same. That latter
part is the hard part.

I would say for your question that deeper search, with all else the
same, will be better.

Take program A and put it on processor B. Now put it on processor C,
which is 2x the speed.

You will get improvement.
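
How much improvement depends on the effective branching factor. A quick
back-of-the-envelope calculation (the node model nodes ~ b^depth and all the
sample branching factors and depths below are assumptions, not measurements
from my program or anyone else's):

/* Back-of-the-envelope depth arithmetic.  Model: nodes searched ~= b^depth,
 * with b the effective branching factor.  All numbers are illustrative
 * assumptions, not measurements. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Extra plies from doubling the speed at a fixed branching factor:
       solve b^(d + x) = 2 * b^d  =>  x = log(2) / log(b). */
    for (double b = 2.0; b <= 6.0; b += 2.0)
        printf("branching factor %.1f: 2x speed buys about %.2f extra plies\n",
               b, log(2.0) / log(b));

    /* Extra plies from a smaller branching factor at fixed time:
       if b drops from 3.5 to 2.9 and the old depth was 8 plies, the new
       depth d' satisfies 2.9^d' = 3.5^8  =>  d' = 8 * log(3.5) / log(2.9). */
    double d_old = 8.0, b_old = 3.5, b_new = 2.9;
    printf("b %.1f -> %.1f at fixed time: depth %.0f -> about %.1f plies\n",
           b_old, b_new, d_old, d_old * log(b_old) / log(b_new));
    return 0;
}

With a branching factor around 2, doubling the speed is worth roughly one
extra ply; the smaller the branching factor, the more each doubling buys. The
second calculation shows that a 0.6 drop in branching factor at fixed time can
plausibly land in the region of the 1.5 extra ply I mentioned above, though
the exact figure depends on where you start.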

From my experience outlined above, I was surprised that the selectivity
increased without any impact on the test suite results. That tells me that
something is good about Tord's idea and that my initial implementation was
not negative. Almost all my improvements are negative at first. This is
one of the few that at least breaks even.

However, a poorly implemented selectivity can certainly hurt just as
much.

Sorry to waffle on this.

Stuart


