Author: rasjid chan
Date: 13:19:19 06/20/05
On June 20, 2005 at 14:58:14, Dieter Buerssner wrote:
>On June 19, 2005 at 12:38:09, Vasik Rajlich wrote:
>
>>On June 19, 2005 at 11:25:24, rasjid chan wrote:
>>
>>> I apologize; I can never understand how to analyse the way search() interacts
>>> with set board positions. But I understand now, also from Dieter's reply,
>>> that a fail-soft value need not be a strictly correct UB / LB; i.e. some
>>> risk may be taken.
>>>
>>
>>I don't think Dieter is saying this. A UB should always be a UB - if you have
>>to make a mistake, you set it too high. This happens for example with lazy eval,
>>or when using a margin for pruning in q-search.
>
>I fear I don't fully understand either Rasjid's statement or Vasik's reply.
>I think a correctly implemented fail-soft search always gives correct bounds.
>They are of course not the strictest bounds one would like to have.
>
Actually, I too did not completely understand Vasik's reply; it seems
contradictory.
I mentioned you because of your reply about skipping futile moves:

    if (move_likely_futile(...)) {
        best = alpha;    /* added by me */
        continue;
    }
Your reply (optimist_score .. margin) seemed to indicate that some don't always
set best = alpha. My simple understanding of this is that when best is set to
anything less than alpha, and we finally fail low and soft with this best, then
there is a "risk", as it is only a guess at a fail-low-and-soft UB. So I presume
others do allow some margin of risk. Vasik probably misunderstood too.
>Sure, lazy eval could give a slightly wrong bound when you misjudge the safety
>margin (a dynamically set safety margin should catch most cases). But you have
>the same problem already with fail hard.
>
>Also, with fail soft you will search different trees. You can also visit the
>same node with different bounds at the same root depth. Perhaps with one bound
>you get a null-move fail high, while the other (higher) bound will not be
>enough for the null move to fail high. It may turn out that this null-move fail
>high was actually "wrong" (a search without null move would have shown that the
>score of this node is worse than you expected). But again, this situation
>can also arise in fail-hard search (for example when you use a different HT
>size). I'd guess that the chance of something like this happening is higher in
>fail soft - but it should be low enough not to matter (other than as some
>random noise).
>
>One might also note that for mate scores, the bounds are always totally
>correct, in fail hard or fail soft (with or without HTs). OK, one exception:
>the 50-move rule could hit you - but this probably could be detected, and
>really won't matter in normal positions.
>
>So, the mechanisms that can give you wrong bounds are the same for fail hard and
>fail soft - without any extensions/pruning/hash you'd never get a wrong bound in
>either case.
>
>About the REP3 - I have had no time yet to read this carefully (this became a
>really huge thread ...). You might want to google for "Dennis Breuker thesis".
>He shows a concrete example in detail, and explains why things go wrong there
>because of hash and repetition of position. You could try to probe your
>suggestions against that scenario.
The specific case of rep3 which Vasik ignored is simple.
He hashed a fail-low-and-soft score when best == 0, but this 0 comes from a
rep3 fail-high-and-soft at the ply above. So Vasik's 0 as the fail-low-and-soft
score (a correct UB) is correct for that particular search.
Hashing it, however, would give HT path-dependency corruption. I suggested
passing down a _rep3_flag, as this path dependency may simply propagate
downwards. All hashing may be corrupted, as we never know when such a
dependency has been inherited from above and passed down unless we keep a
_rep3_flag. I think it may also happen for FH/exact.
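As a sketch of what I mean by a _rep3_flag (all names and structures here are
hypothetical, assuming a negamax framework): tag every result whose score
derives from a repetition draw, propagate the tag when backing scores up, and
refuse to hash tagged entries. The sketch shows the upward propagation; the
flag passed down the tree would simply be an extra search parameter.

```c
#define TT_SIZE 16

/* A search result carries a flag marking whether its score depends on
 * a repetition draw somewhere on the current path. */
typedef struct {
    int score;
    int rep3;   /* 1 if the score is path-dependent via a rep3 draw */
} Result;

static int tt_score[TT_SIZE];
static int tt_valid[TT_SIZE];

/* Store only path-independent scores: a rep3-tainted bound holds for
 * this search path only and would corrupt the table for other paths. */
void tt_store(int key, Result r)
{
    if (r.rep3)
        return;
    tt_valid[key % TT_SIZE] = 1;
    tt_score[key % TT_SIZE] = r.score;
}

/* Negamax back-up: the parent negates the child's score and inherits
 * its path-dependency flag, so the taint propagates upwards. */
Result back_up(Result child)
{
    Result parent = { -child.score, child.rep3 };
    return parent;
}
```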
Best Regards
Rasjid
>
>Regards,
>Dieter