Computer Chess Club Archives

Messages

Subject: Re: Couple of chess programming questions

Author: Robert Hyatt

Date: 14:52:55 09/10/02

```
On September 10, 2002 at 17:30:18, Vincent Diepeveen wrote:

>On September 10, 2002 at 14:42:19, Robert Hyatt wrote:
>
>>On September 10, 2002 at 14:07:01, Dann Corbit wrote:
>>
>>>On September 10, 2002 at 13:44:16, Robert Hyatt wrote:
>>>>On September 10, 2002 at 13:11:02, Dann Corbit wrote:
>>>>>On September 10, 2002 at 12:07:37, Vincent Diepeveen wrote:
>>>>>[SNIP]
>>>>>>>(3) Reading Aske Plaat's search & re-search paper, it really seems like mtd(f)
>>>>>>>is something of a magic bullet.  But I note it seems that more programs don't
>>>>>>>use it than do (for example Crafty).  What is wrong with mtd(f) which Plaat
>>>>>>>doesn't say?
>>>>>>
>>>>>>There is nothing wrong with it.  However, a program must meet certain
>>>>>>conditions for it to work well (in hardware, other issues play a role):
>>>>>>  a) the static evaluation must have a small granularity;
>>>>>>     if pawn=32 it will work great; if pawn=1000, as in DIEP, it won't
>>>>>>     work at all, and the other conditions no longer matter.
>>>>>
>>>>>I have found this to be true also with experiments, and the reason is obvious.
>>>>>
>>>>>The MTD(f) algorithm basically does a binary search between two bounds (on
>>>>>average -- it might be faster or slower in individual cases).  If the interval
>>>>>runs from 3276700 to -3276700, you need between 22 and 23 searches to resolve
>>>>>it.  If the interval runs from 3276 to -3276, you need between 12 and 13
>>>>>searches to resolve it.
>>>>>
>>>>>Therefore, you have a paradox.  The more accurately you evaluate, the slower
>>>>>MTD(f) becomes.  I can't see any solution to it, unless you use progressive
>>>>>accuracy to quickly narrow the search.  If you went to long double accuracy in
>>>>>your evaluation function, it would take a very long time to resolve the search
>>>>>with MTD(f), unless you doubled the accuracy with each iteration.  So for the
>>>>>first search, only (ugh) one significant digit is used, then two digits of
>>>>>accuracy, then four, etc.
>>>>
>>>>
>>>>One issue with progressive bounds is "bouncing" _over_ the true value.
>>>>Searching from one side is faster than searching from the other, due to simple
>>>>math.  I.e., I tried several approaches that "died" until I started thinking
>>>>about what bouncing over the true score multiple times was doing to the shape
>>>>of the tree...
>>>
>>>One partial solution is to use a ceil() type idea for the upper bound and a
>>>floor() type idea for the lower bound {perhaps even with a small fuzz window}.
>>>
>>>But I still think PVS wins, in general.  At least I have not figured out how to
>>>make MTD(f) work as well.
>>>
>>>I do have a notion that if you have enough data you can improve the search a
>>>great deal.  MTD(f) is very sensitive to the initial guess and works best if you
>>>guess the true eval accurately.  However, even a perfect guess only halves the
>>>interval.  If you have a lot of data (perhaps from precomputation that has been
>>>stored in a database) you could search a narrow window around a central guess,
>>>and then switch to PVS when you run out of data.
>>>
>>>Suppose (for instance) that you have a 9 ply search for the position in question
>>>and you want to search to 14 plies.  You use your initial estimate (let's say it
>>>is +40 centipawns) and then your window is one full piece of 300 centipawns.  So
>>>you set the initial bounds at +190, -110 and search that interval.  Most of the
>>>time, the true value will be within a window of that size, and you will have
>>>saved a lot of time.  Only 8 or 9 searches will be needed if you have a
>>>resolution of 1/100 pawn.  The deeper your initial search data is, the narrower
>>>your window can be.
>>
>>
>>Also it is sensitive to searching on the right side (upper side) of the
>>true score.  When you drop over the true value into the lower side of things,
>>failing high is much harder to prove than failing low, and efficiency goes
>>down the can.
>>
>>I never took the time to try an approach that lowered the window in a binary
>>fashion, but raised it even more quickly to get back on the right side of the
>>true score.  That might have a chance of helping significantly.
>
>it's probably good that you never tried it in a binary way, because you
>would be doing more re-searches, each of which costs as many nodes as a
>single re-search with PVS.

I _did_ try a binary search, but found it bad.  However, there is a variant
that might be ok...  as I said, the main point is to always stay on the
"high side" of the true score until the last search...

>
>The basic win of MTD is that if you go from 0.001 to 0.002, you need
>very few nodes to prove it's >= 0.002, whereas a re-search with a window
>from 0.001 to some big value (or, in true PVS, infinity) is
>just as expensive as a random try of 0.250.
>
>In fact, if the score turns out to be 0.120 instead of 0.250, you search
>a hell of a lot more nodes, as you now get fail lows where otherwise
>a fail high would occur, which later has to be corrected back to a fail
>high; and when failing low you get back no PV move, of course.
>
>Best regards,
>Vincent

Note that I want to avoid fail highs at the root whenever possible, as fail
lows are faster most of the time.

```
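For readers following along, the MTD(f) driver the thread is debating (from Plaat's search & re-search paper) can be sketched roughly as follows.  This is an illustrative sketch, not code from Crafty or DIEP; `alphabeta` here is a stub that simply returns a fixed "true" score so the convergence of the bounds can be traced, where a real engine would run a null-window alpha-beta search backed by a transposition table.

```c
#define INF 1000000

/* Stub standing in for a real null-window alpha-beta search.  A real
   alphabeta(depth, alpha, beta) would search the tree and return a bound
   on the minimax value; here it returns a fixed score so the driver's
   behavior can be followed.  All names are illustrative. */
static int alphabeta(int depth, int alpha, int beta) {
    (void)depth; (void)alpha; (void)beta;
    return 37;                      /* pretend the true score is 37 */
}

/* MTD(f): repeated null-window searches that squeeze an upper and a
   lower bound together around the minimax value. */
int mtdf(int first_guess, int depth) {
    int g = first_guess;
    int lower = -INF, upper = +INF;
    while (lower < upper) {
        int beta = (g == lower) ? g + 1 : g;
        g = alphabeta(depth, beta - 1, beta);  /* null window at beta */
        if (g < beta)
            upper = g;              /* fail low: true value <= g  */
        else
            lower = g;              /* fail high: true value >= g */
    }
    return g;
}
```

Each pass either lowers `upper` or raises `lower`, which is why the granularity of the evaluation matters: the finer the score grid, the more passes can be needed before the two bounds meet, and why Hyatt wants the probes to stay on the high side of the true score, where fail lows are cheap to prove.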
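Dann Corbit's re-search counts are just a base-2 logarithm of the score interval, treating MTD(f)'s worst case as a pure binary search.  A quick sanity check of the arithmetic (my helper, not code from the thread):

```c
#include <math.h>

/* Worst-case number of null-window probes needed to pin down a score
   inside a symmetric interval [-half, +half] at a resolution of one
   evaluation unit, viewing MTD(f)'s re-searches as a binary search. */
int worst_case_probes(long half_interval) {
    return (int)ceil(log2(2.0 * (double)half_interval));
}

/* worst_case_probes(3276700) == 23  -- the "22 to 23" figure quoted above */
/* worst_case_probes(3276)    == 13  -- the "12 to 13" figure quoted above */
/* worst_case_probes(150)     ==  9  -- the 300-centipawn window example   */
```

The same formula shows the granularity point from the thread: an evaluation in units of 1/32 pawn spans a far smaller integer interval than one in units of 1/1000 pawn, so it costs several fewer worst-case probes for the same chess knowledge.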