Author: Uri Blass
Date: 13:49:43 04/09/04
On April 09, 2004 at 16:24:42, Dann Corbit wrote:

>I have been thinking about forward pruning. (Did you smell wood burning?)
>
>The ten cent description of Bayesian logic to those who have not examined it:
>As more information comes in, we revise our probability estimates. The Monty
>Hall problem is an excellent example of it.
>
>Anyway, when you look at the techniques used to decide whether or not to
>exercise some sort of forward pruning that are not complete no-brainers like
>Alpha-Beta cutoffs, it seems logical to me to employ Bayesian logic. The reason
>is that advancing search depths give increased information.
>
>It seems a perfect fit for the theory.
>
>It seems to me it could even be used with a notion like:
>Given the large number of available moves and the huge negative score, do we
>even need to verify this null move?
>
>And things of that nature.
>
>Has anyone tried it?

I do not understand exactly what you suggest.

If you mean using probabilities for pruning decisions and pruning moves that have a high probability of failing low, then I agree it may be a good idea, but the problem is how to estimate those probabilities.

I use some statistics about history fail highs and fail lows in Movei's forward pruning, and I have no doubt that it can be improved.

Uri
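[Editor's note: below is a minimal sketch, not from either post, of the kind of thing being discussed: fail-high/fail-low history counters turned into a smoothed probability that a move will fail low, then used as a forward-pruning condition. The struct, counter names, thresholds, and the Beta(1,1) prior are illustrative assumptions, not Movei's actual code.]

/* Hypothetical history record for a (from,to) move, updated during search. */
typedef struct {
    unsigned fail_high;   /* times this move produced a beta cutoff        */
    unsigned fail_low;    /* times this move was searched and failed low   */
} HistoryStats;

/* P(fail low) with a uniform Beta(1,1) prior, i.e. Laplace smoothing:
   (fl + 1) / (fl + fh + 2).  With no data this is 0.5; as counts grow it
   converges to the observed fail-low frequency, which is the
   "revise estimates as information comes in" idea from the quoted post. */
static double fail_low_probability(const HistoryStats *h)
{
    return (h->fail_low + 1.0) /
           (h->fail_low + h->fail_high + 2.0);
}

/* Example pruning decision: skip a late move near the leaves when the
   estimated chance of failing low is high.  The depth limit, move-number
   limit, and 0.9 threshold are placeholders for illustration only. */
static int should_prune(const HistoryStats *h, int depth, int move_number)
{
    return depth <= 3
        && move_number > 12
        && fail_low_probability(h) > 0.9;
}

The smoothed ratio is the simplest possible Bayesian update here; a real engine would likely condition on more than raw counts (depth, margin to alpha, move type), but the shape of the decision would be the same.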