Computer Chess Club Archives



Subject: Re: Bug-free fail soft must pass return_type

Author: Vasik Rajlich

Date: 01:37:50 06/21/05



On June 20, 2005 at 16:19:19, rasjid chan wrote:

>On June 20, 2005 at 14:58:14, Dieter Buerssner wrote:
>
>>On June 19, 2005 at 12:38:09, Vasik Rajlich wrote:
>>
>>>On June 19, 2005 at 11:25:24, rasjid chan wrote:
>>>
>>>>   I apologize that I can never understand how to analyse the way search()
>>>>   interacts with set board positions. But I understand now, also from
>>>>   Dieter's reply, that a fail-soft value need not be a strictly correct
>>>>   UB / LB; ie some risk may be taken.
>>>>
>>>
>>>I don't think Dieter is saying this. A UB should always be a UB - if you have
>>>to make a mistake, you set it too high. This happens for example with lazy
>>>eval, or when using a margin for pruning in q-search.
>>
>>I fear I don't fully understand Rasjid's statement, nor Vasik's reply.
>>I think a correctly implemented fail-soft search always gives correct bounds.
>>They are of course not the strictest bounds one would like to have.
>>
>
>Actually, I too did not completely understand Vasik's reply; it seems
>contradictory.
>
>I mentioned you because of your reply about skipping futile moves:
>
>  if (move-likely-futile...) {
>    best = alpha; // added by me
>    continue;
>  }
>
>Your reply seemed to indicate (optimist_score..margin) that some don't always
>set best = alpha. My simple understanding of this is that when best is set to
>anything less than alpha, and if we finally fail low and soft with this best,
>then there is a "risk", as it is a guess of a fail-low-and-soft UB. So I
>presume others do allow some margin of risk. Vasik probably misunderstood too.
>

Actually I see now that I interpreted the original statement correctly. :)

You never want to set a fail soft score which is "too aggressive" and therefore
has a chance to be incorrect. You are willing to have "not the strictest" bounds
as Dieter says.

Of course, you might in some really rare cases set an incorrect fail soft score,
such as if you underestimate the margin. I guess the formal rule should be: you
should never set a fail soft score which is riskier than the risk you already
have with failing low/high in the first place. For example, if you use "margin"
to prune in quiescence search, then you should remove "margin" from the softness
of your bounds, etc.

It's a mouthful - but it's quite simple actually. Rasjid, I have seen a number
of cases where you "give up" on fail soft and decide to fail hard anyway by
using a "best = alpha" or "return alpha", because you think that doing otherwise
is too risky. It's not incorrect of course, but (IMO) this is not optimal.
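The distinction being discussed can be seen in a minimal fail-soft negamax sketch. This is a toy illustration, not code from any engine in this thread; the `Node` type, the tree, and all names are mine:

```c
/* Toy game tree: a leaf holds a score, an internal node holds children. */
typedef struct Node {
    int n_children;
    struct Node *children;
    int score;                  /* used only when n_children == 0 */
} Node;

/* Fail-soft negamax: 'best' starts at -infinity, NOT at alpha, so the
   returned value may land below alpha (a tighter upper bound on a fail
   low) or above beta (a tighter lower bound on a fail high), instead of
   being clamped into [alpha, beta] as fail hard would do. */
int search(const Node *n, int alpha, int beta) {
    if (n->n_children == 0)
        return n->score;
    int best = -100000;                       /* -infinity, not alpha */
    for (int i = 0; i < n->n_children; i++) {
        int score = -search(&n->children[i], -beta, -alpha);
        if (score > best) {
            best = score;
            if (best > alpha)
                alpha = best;
            if (alpha >= beta)
                break;                        /* fail high, soft */
        }
    }
    return best;   /* may be outside the original window: a genuine bound */
}
```

Replacing `best = -100000` with `best = alpha` (or returning `alpha` on a fail low) gives plain fail hard: correct, but it throws away the tighter bound, which is the "not optimal" choice described above.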

>
>>Sure, lazy eval could give a slightly wrong bound when you misjudge the safety
>>margin (a dynamically set safety margin should catch most cases). But you have
>>the same problem already with fail hard.
>>

Yes, exactly.

>>Also, with fail soft you will search different trees. You can also visit the
>>same node with different bounds at the same root depth. Perhaps with one bound,
>>you get a null move fail high, while with the other (higher bound) will not be
>>enough for the null move to fail high. It may turn out, that this null move fail
>>high was actually "wrong" (search without null move would have shown, that the
>>score of this node is worse than you have expected). But again, this situation
>>can also arise in fail hard search (for example when you use a different HT
>>size). I'd guess, that the chance of something like this happening is higher in
>>fail soft - but it should be low enough to not matter (other than by some random
>>noise).
>>

This is a great point, which I became aware of some time ago. The problem
becomes serious when you start using really small aspiration windows. Another
way to think of it is that you are effectively putting null moves into your
principal variations. (It takes a bit of thinking to understand this, but that's
the effect.)

It's a problem which MTD(f) search must solve in some way.
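For context, a small aspiration window means repeatedly re-searching with a narrow `[alpha, beta]` around a guess. The sketch below is purely illustrative: `search_root` is a stub standing in for a real fail-soft search, and all names and the widening policy are mine, not anything from this thread:

```c
static const int true_score = 137;   /* stand-in for the real search result */

/* Stub for a fail-soft root search. Here it simply returns the exact
   score, which is the "softest" legal bound in every case; a real search
   would return some bound on the correct side of the window. */
static int search_root(int alpha, int beta) {
    (void)alpha; (void)beta;
    return true_score;
}

/* Re-search loop: a fail-soft result tells us not just THAT we failed
   outside the window but roughly WHERE the score lies, so the next
   window can be re-centered around the returned bound. */
int aspiration(int guess, int window) {
    int alpha = guess - window, beta = guess + window;
    for (;;) {
        int score = search_root(alpha, beta);
        if (score <= alpha) {                 /* fail low: widen downward */
            beta = score + 1;
            alpha = score - window;
        } else if (score >= beta) {           /* fail high: widen upward */
            alpha = score - 1;
            beta = score + window;
        } else {
            return score;                     /* exact score inside window */
        }
    }
}
```

The smaller the window, the more often the search runs with artificial bounds, which is exactly the regime where the null-move-in-PV effect described above starts to matter.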

>>One might also note that for mate scores, the bounds are always totally
>>correct, in fail hard or fail soft (with or without HTs). Ok - one exception:
>>the 50-move rule could hit you - but this really could probably be detected,
>>and really won't matter in normal positions.
>>
>>So, the mechanisms that can give you wrong bounds are the same for fail hard and
>>fail soft - without any extensions/pruning/hash you'd never get a wrong bound in
>>either case.
>>
>
>
>>About the REP3 - I had no time yet, to read this carefully (this became a really
>>huge thread ...). You might want to google for "Dennis Breuker thesis". He shows
>>a concrete example in detail, and explains why things go wrong there because of
>>hash and repetition of position. You could try to probe your suggestions against
>>that scenario.
>
>The specific case of rep3 which Vasik ignored is simple.
>He hashed FL and soft when best == 0, but this 0 comes from a rep3 FH and soft
>from the ply above. So Vasik's 0 as the FL-and-soft score (a correct UB) is
>correct for that particular search.
>
>Hashing the search would give HT path-dependency corruption. I suggested
>passing down a _rep3_flag, as this path-dependency may just propagate
>downwards. All hashing may be corrupted, as we never know when such a
>dependency may be inherited from above and passed down unless we keep a
>_rep3_flag. I think it may happen also for FH/exact.
>

I need to do some reading/thinking/work on this. The discussion about
no-progress pruning is also the same theme. We always put all sorts of junk in
the hash tables; there are dozens of reasons for it. Some types of junk, though,
really shouldn't go in there. As I see it now, the repetition issue can lead to
scores which should _never_ go in the hash table.
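The "keep it out of the table" idea can be sketched with a toy transposition table. This is my own illustration of the flag-passing suggestion above, not anyone's engine code; the table layout and names are hypothetical:

```c
#define TT_SIZE 16

/* One slot per key modulo TT_SIZE; no replacement scheme, for brevity. */
typedef struct {
    unsigned long long key;
    int score;
    int valid;
} TTEntry;

static TTEntry tt[TT_SIZE];

/* rep_flag marks a score whose value was inherited from a repetition
   draw somewhere on the current path. Such a score is path-dependent -
   the same position reached via another move order may not be a draw -
   so it is never written into the table. */
void tt_store(unsigned long long key, int score, int rep_flag) {
    if (rep_flag)
        return;                         /* path-dependent: do not hash */
    TTEntry *e = &tt[key % TT_SIZE];
    e->key = key;
    e->score = score;
    e->valid = 1;
}

/* Returns 1 and fills *score on a hit, 0 on a miss. */
int tt_probe(unsigned long long key, int *score) {
    TTEntry *e = &tt[key % TT_SIZE];
    if (!e->valid || e->key != key)
        return 0;
    *score = e->score;
    return 1;
}
```

In a real engine the flag would have to be propagated up through the returned scores (as the _rep3_flag suggestion describes), since a bound derived from a rep-draw child is itself path-dependent.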

Vas

>Best Regards
>Rasjid
>
>
>>
>>Regards,
>>Dieter


