Computer Chess Club Archives


Subject: Re: Bayesian Forward Pruning

Author: Christophe Theron

Date: 06:15:37 04/10/04



On April 10, 2004 at 05:35:31, Uri Blass wrote:

>On April 09, 2004 at 23:24:03, Christophe Theron wrote:
>
>>On April 09, 2004 at 18:51:14, Uri Blass wrote:
>>
>>>On April 09, 2004 at 17:53:04, Dann Corbit wrote:
>>>
>>>>On April 09, 2004 at 16:49:43, Uri Blass wrote:
>>>>
>>>>>On April 09, 2004 at 16:24:42, Dann Corbit wrote:
>>>>>
>>>>>>I have been thinking about forward pruning.  (Did you smell wood burning?)
>>>>>>
>>>>>>The ten cent description of Bayesian logic for those who have not examined
>>>>>>it: as more information comes in, we revise our probability estimates.  The
>>>>>>Monty Hall problem is an excellent example of this.
>>>>>>
>>>>>>Anyway, when you look at the techniques used to decide whether or not to
>>>>>>exercise some sort of forward pruning that are not complete no-brainers like
>>>>>>Alpha-Beta cutoffs, it seems logical to me to employ Bayesian logic.  The reason
>>>>>>is that advancing search depths give increased information.
>>>>>>
>>>>>>It seems a perfect fit for the theory.
>>>>>>
>>>>>>It seems to me it could even be used with a notion like:
>>>>>>Given the large number of available moves and the huge negative score, do we
>>>>>>even need to verify this null move?
>>>>>>
>>>>>>And things of that nature.
>>>>>>
>>>>>>Has anyone tried it?
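
A minimal sketch of what this kind of Bayesian updating could look like in an
engine, assuming nothing more than per-move-class fail-high/fail-low counters
(every name and threshold below is illustrative, not taken from any real
program). The running estimate of P(fail low) is the posterior mean of a
Beta-Binomial model, so each new search result revises it:

/* Hedged sketch: a Beta-Binomial ("rule of succession") estimate of the
 * probability that a move class fails low, updated as search results come in.
 * All names and thresholds (fail_low, PRUNE after 32 samples, 0.95, ...) are
 * illustrative, not from any real engine. */
#include <stdio.h>

#define CLASSES 64              /* e.g. indexed by piece type x target-square bucket */

static unsigned fail_low[CLASSES];   /* observed fail lows per class  */
static unsigned fail_high[CLASSES];  /* observed fail highs per class */

/* Posterior mean of P(fail low) under a uniform Beta(1,1) prior.
 * With no data this is 0.5; every new observation shifts it. */
static double p_fail_low(int c)
{
    return (fail_low[c] + 1.0) / (fail_low[c] + fail_high[c] + 2.0);
}

/* Record one more search result for this class (the "new information"). */
static void update(int c, int failed_low)
{
    if (failed_low) fail_low[c]++; else fail_high[c]++;
}

/* Prune only when the posterior is confident enough AND enough observations
 * back it up; both thresholds are made-up numbers. */
static int prune_ok(int c)
{
    unsigned n = fail_low[c] + fail_high[c];
    return n >= 32 && p_fail_low(c) > 0.95;
}

int main(void)
{
    /* Toy demonstration: class 0 keeps failing low, so its estimate rises. */
    for (int i = 0; i < 40; i++) update(0, 1);
    update(0, 0);   /* one fail high keeps the estimate below 1.0 */
    printf("P(fail low) = %.3f, prune = %d\n", p_fail_low(0), prune_ok(0));
    return 0;
}

With a uniform prior the estimate starts at 0.5 and only drifts toward 1.0 as
fail lows actually accumulate, which is the "more information, revised
probabilities" behaviour described above.
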
>>>>>
>>>>>I do not understand what you suggest.
>>>>
>>>>This is what made me think about it:
>>>>http://www.enm.bris.ac.uk/teaching/enjfb/emat31600/Fuzzy%20Decision%20Trees.ppt
>>>>
>>>>>If you are talking about using probabilities for pruning decisions, and about
>>>>>pruning moves with a high probability of failing low, then I agree that it may
>>>>>be a good idea, but the problem is how to estimate the probabilities.
>>>>>
>>>>>I use some fail-high/fail-low history statistics in Movei's forward pruning,
>>>>>and I have no doubt that it can be improved.
>>>>
>>>>Suppose (for instance) that we have some branch that loses a whole piece.
>>>>Perhaps there is a way to formulate a rule about how deeply to explore that
>>>>branch.
>>>
>>>
>>>I basically use evaluation for pruning only in the last plies.
>>>My logic is that, thanks to recursive null move pruning, I do not spend a
>>>relatively large amount of time on lines where one side loses a lot of material,
>>>as long as I am not close to the leaves.
>>>
>>>I can prune a line when the side to move has a piece advantage, if the remaining
>>>depth is 1 or 2 and some further conditions hold, such as the king not being in
>>>check.
>>>
>>>I think that null move pruning is already effective at quickly pruning lines
>>>that lose a lot of material when the remaining depth is bigger, so it may be a
>>>mistake to search to a reduced depth, because the question is not the depth that
>>>you search but the number of nodes that you search.
>>>
>>>
>>>You want to search fewer nodes for lines that you are almost sure are bad.
>>>Null move pruning already helps you to search fewer nodes for such lines when
>>>the remaining depth is big, because even if a move threatens something there is
>>>a good chance that the following moves are going to be pruned by the null move.
>>>
>>>Let's look at the following diagram, which you can get in a search from the
>>>opening position.
>>>
>>>[D]rn1qkb1r/ppp1pppp/3p1n2/8/2B1P1Q1/8/PPPP1PPP/RNB1K1NR w KQkq - 0 4
>>>
>>>White is a piece up, but Black threatens the queen.
>>>
>>>If the remaining depth is small (1 or 2) you can safely return beta and not
>>>search.
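
A bare-bones sketch of the last-ply test being described, assuming the caller
passes in the static evaluation, beta, the remaining depth and an in-check flag
(the piece value and the exact conditions are illustrative only). As the reply
below points out, a test this simple still misses forks, pins and mating
threats, so it is only the skeleton of such a rule:

/* Hedged sketch of "last plies" pruning: with 1 or 2 plies left, a static
 * eval a whole piece above beta, and no check, give up on the line and fail
 * high without searching.  PIECE_VALUE and the conditions are illustrative. */
#include <stdio.h>

#define PIECE_VALUE 300         /* centipawns; the "piece up" threshold */

/* Returns 1 if the node may be cut to beta without searching. */
static int shallow_prune(int static_eval, int beta, int depth, int in_check)
{
    if (depth > 2 || in_check)
        return 0;                               /* only near the leaves, never in check */
    return static_eval - PIECE_VALUE >= beta;   /* a piece above beta */
}

int main(void)
{
    /* eval +450 vs beta +100 with 2 plies left and no check: prune. */
    printf("%d\n", shallow_prune(450, 100, 2, 0));  /* 1 */
    /* same position but in check: do not prune. */
    printf("%d\n", shallow_prune(450, 100, 2, 1));  /* 0 */
    return 0;
}
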
>>
>>
>>
>>Safely? This is guaranteed to generate disasters if all you do is test that
>>eval>=beta and you are not in check! You are going to miss a lot of simple
>>tactical shots. What if there is a fork? A dangerous pin? A mate at the next
>>move? There are many cases where this will hurt badly.
>>
>>
>>
>>    Christophe
>
>I do not claim that it is safe to prune every time eval>=beta (of course it is
>not safe to do that).
>
>some comments:
>
>1) It is eval>beta+margin, where the margin depends on the remaining
>depth.
>
>2) I have more conditions to decide whether to use margin, margin/2, or not to
>prune at all.
>
>When I posted I did not remember the values of my table.  I looked at the table
>again and I see that it says margin=8 pawns for depth 2, which means that when
>the remaining depth is 2 I do not prune based on being 3 pawns above beta, but
>only based on having an advantage of at least 4 pawns above beta.
>
>I will prune in more cases based on being more than 8 pawns above beta, but
>again not in every case (it is possible that it is better to reduce the margin,
>and my numbers are not tuned).
>
>My table today, where 100 is a pawn, is
>{0,200,800,1800,3200,5000};
>
>When the remaining depth is 0 I call qsearch, so if the evaluation is above beta
>and the side to move is not in danger from a check or a pin of a big piece, I
>prune.
>
>I am almost sure that there are better values, because I only tested that these
>values are better than no pruning, but it is also possible to build a better
>function to decide about the danger of failing low based on the remaining depth
>and the position.
>
>3) I am not claiming that I can be 100% sure of making no mistakes, but the
>result with that pruning is better than without it.
>
>I believe that margin and margin/2 are not the best approach, and I think that
>it is better to have some danger function and prune only if the evaluation is
>bigger than the result of that function, where danger considers the remaining
>depth and the position.
>
>Uri
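
A minimal sketch of the margin-table pruning described in the quoted post,
using the table above (100 = one pawn). The danger() stub only stands in for
the position-dependent function Uri suggests, and all of the omitted extra
conditions (checks, hanging pieces, the margin/2 cases, and so on) would matter
in a real engine:

/* Hedged sketch of depth-indexed margin pruning, using the table from the
 * post (100 = one pawn).  danger() is a placeholder for the proposed
 * position-aware function; here it simply returns the table value. */
#include <stdio.h>

/* margin[d] = required centipawn surplus over beta at remaining depth d */
static const int margin[6] = { 0, 200, 800, 1800, 3200, 5000 };

/* Placeholder for "danger(depth, position)": for now just the table entry,
 * but the idea is to raise or lower it from features of the position. */
static int danger(int depth /*, const Position *pos */)
{
    return margin[depth];
}

/* Fail high without searching if the eval clears beta by the depth-dependent
 * margin; the extra safety conditions from the post are omitted here. */
static int margin_prune(int static_eval, int beta, int depth)
{
    if (depth < 0 || depth > 5)
        return 0;
    return static_eval > beta + danger(depth);
}

int main(void)
{
    /* 9 pawns above beta with 2 plies left clears the 800 margin. */
    printf("%d\n", margin_prune(900, 0, 2));   /* 1 */
    /* 7 pawns above beta does not. */
    printf("%d\n", margin_prune(700, 0, 2));   /* 0 */
    return 0;
}

Indexing the margin by remaining depth keeps the test cheap; replacing
margin[depth] with a real danger(depth, position) function is the refinement
discussed in the quoted post.
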



That's something you'd better work on as soon as possible. Just using margins is
not efficient at all.



    Christophe


