Author: Bas Hamstra
Date: 14:16:53 10/06/99
On October 06, 1999 at 09:22:17, Alessandro Damiani wrote:

>On October 06, 1999 at 04:59:47, Inmann Werner wrote:
>
>>On October 05, 1999 at 16:10:49, Robert Hyatt wrote:
>>
>>>On October 05, 1999 at 15:32:02, Inmann Werner wrote:
>>>
>>>>On October 05, 1999 at 10:40:49, Robert Hyatt wrote:
>>>>
>>>>>On October 05, 1999 at 05:25:21, Bas Hamstra wrote:
>>>>>
>>>>>>What are good ways to cut down the number of evals? I saw Bob Hyatt post that he
>>>>>>could easily double NPS when using "Lazy Eval".
>>>>>>
>>>>>>What is a correct way to do that? Is there more to it than the qsearch "delta"
>>>>>>type of pruning?
>>>>>>
>>>>>>Regards,
>>>>>>Bas Hamstra.
>>>>>
>>>>>The idea is that in general, your eval _must_ return scores > alpha and
>>>>>< beta, or they are not useful, correct? (Please ignore this if you use
>>>>>mtd(f), of course, as it is more complicated then.) Suppose alpha=-.30 and
>>>>>beta=+.30. When you get into your eval, if you can figure out that you
>>>>>can't possibly bring the score within that window, you can return the
>>>>>appropriate bound quickly. I.e. if you come in and material is at -9.00 (you
>>>>>have lost a queen somewhere in this path), then do you have an eval term that
>>>>>can add +9.00 to the score to bring it inside the window? If not, you can
>>>>>either return -9.00, or the safer -.30, since the score is at least
>>>>>that bad.
>>>>>
>>>>>You can use this at several points to bail out once you are sure you can't
>>>>>get "in the box" with the score...
>>>>
>>>>Why is giving back -0.30 a safer way than returning the material_balance of
>>>>-9.00, which is what I do now?
>>>>
>>>>Werner
>>>
>>>Question is, "which is closer to the right value?"
>>>
>>>For some positions, your -9 is closer. For others, the -.30 might be
>>>closer (i.e. if the -9 can be offset by an unstoppable pawn, for
>>>example...).
>>>
>>>I prefer to be conservative here, as I will remember that -9 and it might be
>>>overstated...
>>>
>>>I'd rather guess "score is < -.30" than "score is < -9.00"...
>>
>>But it is anyway a cutoff (not the right word, I think).
>>Problems can occur with hash tables if the position is searched again with a
>>wide-open window, where -9 (or -0.30) is the right value to accept.
>>But anyway: if I use -9, I forget all about positional evaluation; if I use -.30,
>>the move may be chosen without any evaluation reason, forgetting the "lost
>>queen".
>>For me it is something like an unsolvable problem one should not think too much
>>about, because solving it costs too much speed and produces more problems.
>>If I write "not accurate value" into the hash tables, the entry becomes useless
>>and coming to the position again causes a full re-search. If I write in the
>>wrong value, I have to live with it.
>>
>>Werner
>
>I think lazy evaluation is worst for algorithms with null-window searches. Like
>every change to the search tree (forward pruning, extensions, ...), lazy
>evaluation may introduce search anomalies: in PVS/NegaScout the test with the
>null window tells you that score>x and the re-search tells you score<=x.
>Contradiction!
>
>Now I am doing without lazy. Not forever.
>
>Alessandro

I like Jon Dart's idea of determining the maximum of the positional component. He
has a (switchable) check that prints a warning if any positional component goes
beyond that bound. If it never does, I think lazy eval should be 100% safe. And
you could return <= -9.00 + MaxPositional without problem, thus keeping the
information about the lost queen while still being 100% accurate.

Conservative approach, maybe, but accurate. I am going to try it.
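A rough sketch of what that could look like (in C, on a centipawn scale; the names
MAX_POSITIONAL, MaterialBalance and PositionalScore are made up for illustration,
not taken from any particular engine):

    #include <stdio.h>

    typedef struct Position Position;        /* whatever your engine uses */

    /* Hypothetical helpers -- placeholders only. */
    extern int MaterialBalance(const Position *pos);  /* side to move's view */
    extern int PositionalScore(const Position *pos);  /* the expensive part  */

    /* Largest swing (centipawns) the positional terms can ever add or
       subtract.  The lazy cutoffs below are only safe if this really holds. */
    #define MAX_POSITIONAL 150

    int Evaluate(const Position *pos, int alpha, int beta)
    {
        int material = MaterialBalance(pos);

        /* Even a maximal positional score cannot reach alpha: fail low,
           but keep the information (e.g. the lost queen) in the bound. */
        if (material + MAX_POSITIONAL <= alpha)
            return material + MAX_POSITIONAL;

        /* Even a minimal positional score stays above beta: fail high. */
        if (material - MAX_POSITIONAL >= beta)
            return material - MAX_POSITIONAL;

        int positional = PositionalScore(pos);

    #ifdef DEBUG_EVAL
        /* The switchable check: warn if the positional component ever
           exceeds the bound the lazy exits rely on. */
        if (positional > MAX_POSITIONAL || positional < -MAX_POSITIONAL)
            printf("warning: positional score %d exceeds MAX_POSITIONAL\n",
                   positional);
    #endif

        return material + positional;
    }

With DEBUG_EVAL compiled in during testing you find out whether MAX_POSITIONAL
actually holds for your eval; as long as it is never exceeded, the two early
returns can never change the search result, they only save evals.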
Regards,
Bas Hamstra.