Computer Chess Club Archives



Subject: Re: I can't believe this bashing is being allowed on here: "Bad Math Topic"

Author: Robert Hyatt

Date: 11:42:02 09/06/02



On September 06, 2002 at 13:33:29, Vincent Diepeveen wrote:

>On September 05, 2002 at 14:06:08, Eugene Nalimov wrote:
>
>>Actually, often you don't want to search the objectively best move first. You
>>want to search the move that will cause a beta cutoff and will result in a
>>smallest subtree being searched.
>
>Not really, the best move is usually best, because the problem with *a move*
>that merely cuts off usually shows up as major overhead at the next iteration.
>So at iteration i a move could cut off in very few nodes, but if it fails low
>at the next iteration, you obviously have a whole subtree to re-search.


Would you _please_ think a bit before jumping in?  Eugene's statement is a
direct premise of any tree searching program based on alpha/beta.  Do the
least amount of work possible.  Given a set of N moves that will produce a
cutoff (fail high) and another set M that will not... If you search any moves
in M first, you waste time and effort and slow down.  If you search any move in
N first, you get a cutoff and are done.  How can it _not_ be best to pick the
one that requires the least effort to fail high?  Because once you fail high at
a node, you are _finished_ there.


The only exception is when it costs more to detect which move will require the
least effort than choosing that move saves, which is why I don't use ETC
at the moment.  But I will work on it again...
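
To make the point concrete, here is a minimal sketch of the move loop in a
fail-hard negamax/alpha-beta search.  The POSITION/MOVE types and the helper
functions are made-up placeholders, not Crafty's actual code:

    /* Sketch only: these types and helpers are hypothetical placeholders. */
    typedef struct position POSITION;
    typedef int MOVE;

    int  generate_moves(POSITION *p, MOVE *list);  /* fills list, returns count */
    void make_move(POSITION *p, MOVE m);
    void unmake_move(POSITION *p, MOVE m);
    int  evaluate(POSITION *p);

    int search(POSITION *p, int alpha, int beta, int depth) {
      MOVE list[256];
      int i, n, score;

      if (depth == 0)
        return evaluate(p);
      n = generate_moves(p, list);
      for (i = 0; i < n; i++) {
        make_move(p, list[i]);
        score = -search(p, -beta, -alpha, depth - 1);
        unmake_move(p, list[i]);
        if (score >= beta)
          return beta;      /* fail high: the node is finished and the
                               remaining n-i-1 moves are never searched */
        if (score > alpha)
          alpha = score;
      }
      return alpha;
    }

Which member of the "good enough" set sits at list[0] doesn't change the bound
that gets returned; it only changes how much work the single recursive call at
i == 0 costs.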






>
>>For example, if you are currently ahead in material (compared to beta), then
>>you probably don't want to start a deep sequence of mutual checks. All you need
>>is some quiet move that will preserve your advantage.
>>
>>[I first read about that in a 1974 paper published by the KAISSA team, but the
>>idea is simple enough and certainly was formulated earlier].
>>
>>Thanks,
>>Eugene
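
If you wanted to act on that idea in the move ordering itself, a sketch (not
the KAISSA scheme and not Crafty's; the helper names are invented, and the
placeholder types are the same as in the sketch above) could bias the ordering
toward a quiet try whenever the static evaluation already beats beta:

    /* Sketch: prefer a cheap quiet cutoff attempt when we are already
       ahead of beta.  is_capture() and gives_check() are hypothetical. */
    int is_capture(POSITION *p, MOVE m);
    int gives_check(POSITION *p, MOVE m);

    int order_score(POSITION *p, MOVE m, int static_eval, int beta) {
      int quiet = !is_capture(p, m) && !gives_check(p, m);

      if (static_eval >= beta && quiet)
        return 1000;    /* a quiet move that keeps the edge will probably
                           fail high with a small subtree behind it      */
      if (is_capture(p, m))
        return 500;     /* otherwise the usual captures-first ordering   */
      return 0;
    }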
>>
>>On September 05, 2002 at 13:57:34, Robert Hyatt wrote:
>>
>>>On September 05, 2002 at 13:45:30, Dieter Buerssner wrote:
>>>
>>>>On September 05, 2002 at 10:47:32, Robert Hyatt wrote:
>>>>
>>>>>I don't think our serial searches are very bad.  IE I get the best move
>>>>>first 92% of the time.  I'm not sure how much farther I can go with that
>>>>>as there will _always_ be flaws that only a deep search exposes, when you
>>>>>sort moves in some arbitrary way.
>>>>
>>>>I guess you meant the fraction of beta cutoffs in the first move you try, by the
>>>>92%.
>>>
>>>Yes.  That is the number I measure in Crafty and display.
>>>
>>>> Then, this number may also be misleading. Is it really the best move, or
>>>>just any move, that cuts off? Many more moves may actually cut off, but usually
>>>>we don't know this (unless writing some experimental, inefficient minimax code
>>>>for collecting the statistics). Other moves may cut off much faster (with a
>>>>smaller tree following).
>>>
>>>OK... good point.  I will revise that to "Crafty searches a move 'good enough
>>>to cause a cutoff' first 92% of the time."  I don't think it matters, based on
>>>Knuth/Moore's paper.  The important thing is to search a move good enough to
>>>cause a cutoff first.  If you do, then there is no need to search the "best"
>>>move first if several are good enough.  Their math supported this pretty well,
>>>as did mine in the Journal of Parallel Computing back in the late 80's...
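
For anyone who wants to reproduce that kind of number, the statistic is cheap
to collect.  Something along these lines (the counter names are invented for
illustration, they are not Crafty's) gives the "fail high on the first move
tried" percentage:

    /* Sketch: count how often the first move tried is the one that fails
       high.  Counter names are made up for illustration.                */
    static long fh_nodes = 0;   /* nodes that failed high                 */
    static long fh_first = 0;   /* ...where move 0 produced the cutoff    */

    /* in the move loop, at the point where score >= beta:
         fh_nodes++;
         if (i == 0)
           fh_first++;
       and report 100.0 * fh_first / fh_nodes when the search finishes.  */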
>>>
>>>
>>>
>>>> In the extreme, an alternative move may cut off immediately from
>>>>the hash tables. Enhanced transposition cutoff checks for this, but in general,
>>>>I think there are no well-known algorithms to find the fastest cutoff move.
>>>
>>>ETC has a chance, of course.  Although for me it was a "break-even" deal.  The
>>>tree shrank a bit, but the speed was lower due to the extra hash-signature
>>>update and extra hash probe.  I chose to stick to the KISS approach and dumped
>>>it.
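
For readers who haven't run into it, enhanced transposition cutoff amounts to
an extra pass like the one below before the normal move loop.  This is only a
sketch with invented helper names (hash_key_after(), hash_probe()), not my old
implementation; the per-move key update and the extra probe are exactly the
overhead mentioned above:

    /* Sketch of enhanced transposition cutoff (ETC).  Entry scores are
       from the point of view of the side to move in the child position. */
    enum { EXACT, UPPER_BOUND, LOWER_BOUND };
    typedef struct { int depth; int type; int score; } HASH_ENTRY;

    unsigned long long hash_key_after(POSITION *p, MOVE m);  /* extra update */
    HASH_ENTRY *hash_probe(unsigned long long key);          /* extra probe  */

    int etc_cutoff(POSITION *p, MOVE *list, int n, int depth, int beta) {
      int i;
      for (i = 0; i < n; i++) {
        HASH_ENTRY *h = hash_probe(hash_key_after(p, list[i]));
        if (h && h->depth >= depth - 1 &&
            (h->type == EXACT || h->type == UPPER_BOUND) &&
            -h->score >= beta)
          return 1;   /* some successor is already known to be <= -beta for
                         the opponent, so this node fails high immediately */
      }
      return 0;
    }

    /* called right after generate_moves():
         if (etc_cutoff(p, list, n, depth, beta)) return beta;             */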
>>>
>>>
>>>>
>>>>I did some experiments to collect some statistics a while back. IIRC, with
>>>>random move ordering I often got close to 50% cutoffs on the first move tried,
>>>>among the nodes that got a beta cutoff. Still, the search efficiency became
>>>>(not surprisingly) extremely bad.
>>>
>>>
>>>It's exponential, so 50% is horrible, due to the large exponent you have to
>>>apply.
>>>
>>>92% is not great and certainly leaves a lot of room between the real and
>>>minimal tree sizes.
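
To put a rough number on "exponential" (an illustration only, not a
measurement): Knuth and Moore's minimal tree has on the order of
b^ceil(d/2) + b^floor(d/2) - 1 leaves, while no ordering at all gives about
b^d.  With b = 35 and d = 10 that is roughly 10^8 leaves against roughly
2.8 * 10^15, so ordering quality is the difference between a tree you can
search and one you cannot.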
>>>
>>>
>>>
>>>>
>>>>Regards,
>>>>Dieter


