Computer Chess Club Archives


Subject: Re: Bayesian Forward Pruning

Author: Uri Blass

Date: 15:51:14 04/09/04



On April 09, 2004 at 17:53:04, Dann Corbit wrote:

>On April 09, 2004 at 16:49:43, Uri Blass wrote:
>
>>On April 09, 2004 at 16:24:42, Dann Corbit wrote:
>>
>>>I have been thinking about forward pruning.  (Did you smell wood burning?)
>>>
>>>The ten-cent description of Bayesian logic, for those who have not examined
>>>it: as more information comes in, we revise our probability estimates. The
>>>Monty Hall problem is an excellent example of it.
>>>
>>>Anyway, when you look at the techniques used to decide whether or not to
>>>exercise some sort of forward pruning (the ones that are not complete
>>>no-brainers like alpha-beta cutoffs), it seems logical to me to employ
>>>Bayesian logic. The reason is that advancing search depths give increased
>>>information.
>>>
>>>It seems a perfect fit for the theory.
>>>
>>>It seems to me it could even be used with a notion like:
>>>Given the large number of available moves and the huge negative score, do we
>>>even need to verify this null move?
>>>
>>>And things of that nature.
>>>
>>>Has anyone tried it?
>>
>>I do not understand what you suggest.
>
>This is what made me think about it:
>http://www.enm.bris.ac.uk/teaching/enjfb/emat31600/Fuzzy%20Decision%20Trees.ppt
>
>>If you are talking about using probabilities for pruning decisions, and about
>>pruning moves that have a high probability of failing low, then I agree that
>>it may be a good idea, but the problem is how to estimate the probabilities.
>>
>>I use some statistics about history fail-highs and fail-lows in Movei's
>>forward pruning, and I have no doubt that it can be improved.
>
>Suppose (for instance) that we have some branch that loses a whole piece.
>Perhaps there is a way to formulate a rule about how deeply to explore that
>branch.


I basically use evaluation for pruning only in the last plies.
My logic is that, thanks to recursive null move pruning, I do not spend much
time, relatively speaking, on lines in which one side loses a lot of material,
as long as I am not close to the leaves.

I can prune a line when the side to move has a piece advantage, if the
remaining depth is 1 or 2 and some additional conditions hold, such as the king
not being in check.
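
In code terms, the idea is roughly the following. This is only a minimal
sketch, not Movei's actual code; the names Position, evaluate, in_check and
PIECE_MARGIN are all illustrative:

typedef struct Position Position;   /* engine's board representation */
int evaluate(const Position *pos);  /* static eval, from side to move's view */
int in_check(const Position *pos);  /* nonzero if side to move is in check */

#define PIECE_MARGIN 300            /* roughly a minor piece, in centipawns */

/* Prune only in the last plies, only when not in check, and only when
   the static score is already at least a piece above beta. */
int prune_at_horizon(const Position *pos, int depth, int beta)
{
    return depth <= 2
        && !in_check(pos)
        && evaluate(pos) >= beta + PIECE_MARGIN;
}

When this returns nonzero, the search simply returns beta instead of searching
the subtree.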

I think that null move pruning is already effective at quickly pruning lines
that lose a lot of material when the remaining depth is bigger, so it may be a
mistake to search them to reduced depth, because the question is not the depth
that you search but the number of nodes that you search.


You want to search fewer nodes in lines that you are almost sure are bad.
Null move pruning already helps you to search fewer nodes in lines that you are
almost sure are bad when the remaining depth is big, because even if the move
threatens something, there is a good chance that the following moves are going
to be pruned by null move.
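
For reference, recursive null move pruning looks roughly like this (a sketch of
the standard technique, continuing the declarations above; not any particular
engine's code):

void make_null_move(Position *pos);  /* pass: let the opponent move again */
void undo_null_move(Position *pos);

#define R 2   /* null move depth reduction */

int search(Position *pos, int depth, int alpha, int beta)
{
    if (depth > 0 && !in_check(pos)) {
        /* give the opponent a free move; if we still fail high after
           a reduced-depth search, the line is almost surely bad */
        make_null_move(pos);
        int score = -search(pos, depth - 1 - R, -beta, -beta + 1);
        undo_null_move(pos);
        if (score >= beta)
            return beta;   /* pruned without a full search */
    }
    /* ... normal move generation and search follow; at depth <= 0 this
       would fall into the quiescence search ... */
    return evaluate(pos);  /* placeholder for the rest of the search */
}

Because the check is made at every node, the pruning applies recursively down
the tree, which is what keeps the subtrees of bad lines small.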

Let's look at the following diagram, which you can get in a search from the
opening position.

[D]rn1qkb1r/ppp1pppp/3p1n2/8/2B1P1Q1/8/PPPP1PPP/RNB1K1NR w KQkq - 0 4

White is a piece up, but Black threatens the queen (the knight on f6 attacks
the queen on g4).

If the remaining depth is small (1 or 2), you can safely return beta and not
search at all.

If the remaining depth is big, then searching to full (not reduced) depth is
not expensive, because you will probably get something like Qf3 a6, pruned by
null move; Qf3 a5, pruned by null move; and so on.

This means that you will get a small tree after bad moves even if you do not do
special forward pruning.

If the remaining depth is 2, this is not the case, because with normal null
move pruning you need to search Qf3 and every black move after it in order to
understand that White is a piece up.
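
For a rough sense of scale (assuming a typical middlegame branching factor of
about 30 moves): at remaining depth 2 you search Qf3 and then every black
reply, each of which has to be resolved at the leaves, so on the order of 30
nodes just to confirm that White is a piece up. A static evaluation cutoff
confirms the same thing in a single node.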

If the remaining depth is small, pruning based on evaluation can save a lot of
nodes and is less risky, because even in the case of a correct sacrifice the
probability of discovering that it is correct is smaller with a small remaining
depth.
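
To connect this with the probability idea above: the fail-high/fail-low
statistics I mentioned can be read as a crude probability estimate that is
updated as the search produces more results. A minimal sketch of that idea
(purely illustrative; the thresholds are arbitrary and this is not Movei's
actual code):

typedef struct {
    unsigned tried;      /* times moves of this class were searched */
    unsigned fail_low;   /* times they failed low */
} History;

/* Prune only near the horizon, only after enough observations, and only
   when the observed fail-low rate is at least 90%. */
int history_prune(const History *h, int depth)
{
    if (h->tried < 32)   /* too little evidence to trust the estimate */
        return 0;
    return depth <= 2 && 100u * h->fail_low >= 90u * h->tried;
}

Tuning numbers like the 32 observations and the 90% rate is exactly the "how
to estimate the probabilities" problem mentioned above.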

Uri


