Computer Chess Club Archives



Subject: Re: Side effects of lazy eval?

Author: Vincent Diepeveen

Date: 15:01:33 10/01/00



On September 30, 2000 at 11:30:39, Dieter Buerssner wrote:

>On September 29, 2000 at 22:48:35, Vincent Diepeveen wrote:
>
>>On September 28, 2000 at 12:29:35, Dieter Buerssner wrote:
>>
>>>On September 28, 2000 at 12:02:42, Vincent Diepeveen wrote:
>>>
>>>>On September 28, 2000 at 11:41:50, Dieter Buerssner wrote:
>>>>
>>>>>On September 28, 2000 at 05:00:59, Bas Hamstra wrote:
>>>>>
>>>>>>On September 27, 2000 at 16:16:21, Peter McKenzie wrote:
>>>>>>
>>>>>>>On September 27, 2000 at 07:47:18, Bas Hamstra wrote:
>>>>>>>
>>>>>>>>Supposing no "lazy-errors" at all were made, does anyone know if there are
>>>>>>>>serious side-effects to lazy eval?
>>>>>>>
>>>>>>>You can't get the full benefits of fail-soft using lazy eval.
>>>>>>
>>>>>>I agree. This is the only factor I can think of too; you lose some bound info.
>>>>>>
>>>>>>Yet, I ran a couple of WAC tests at very short time controls, with and without
>>>>>>LE. And kept track of the average depth that was reached. In that quick test
>>>>>>NPS went up, but the average depth stayed the same!
>>>>>>
>>>>>>So it seems what you win in speed, you lose in bound info, net result zero? At
>>>>>>least in this case. I will rerun it more accurately, at longer tc.
>>>>>
>>>>>You might want to give the following idea a try. I think this could be called a
>>>>>fail soft version of lazy eval:
>>>>
>>>>I heard someone mention this trick a couple of years ago,
>>>>but when I measured the largest eval score I had so far during the
>>>>search, the trick looked a bit silly.
>>>>
>>>>>    es = s + largest_evalscore[side];
>>>>
>>>>So that's roughly (can be a bit more or less):
>>>
>>>Sorry, I was too sloppy. s is the material score and largest_evalscore
>>>is the largest positional score found so far.
>>>
>>>With this, would you still think this gives worse bounds?
>>
>>I have had lazy eval turned off for years.
>>
>>For a quick test I turned it on for a few days, shortly before the
>>WMCC, after many questions from some kids on the Dutch computer
>>chess list.
>>
>>I let it solve Win at Chess at 20 seconds a move. I forgot how many
>>it solved, but I remember the difference was about 10 positions
>>or so, basically because of a smaller search.
>>
>>I didn't lazy eval at the largest positional score, which is tens of
>>pawns in Diep all together, but lazy evalled only at 5 pawns.

>How do you define largest positional score? I think I have used another bad

I have allocated an array which I clear in advance. To cut'n'paste the
code I use to fill it after a full eval has been done:

  /* i is the full evaluation score; afwijking ("deviation") is a
     histogram of how far the lazy bound misses the full eval, and
     LazyTeller ("counter") counts the lazy cutoffs. */
  if( lazyeval-lazymarge-LazyKing(side) >= beta
   && lazyeval-lazymarge-LazyKing(side)-i > 0 ) {
    /* lazy bound gives a beta cutoff and exceeds the full eval */
    if( (lazyeval-lazymarge-LazyKing(side)-i)/4 < 2000 )
      afwijking[(lazyeval-lazymarge-LazyKing(side)-i)/4]++;
    LazyTeller++;
  }
  else if( lazyeval+lazymarge+LazyKing(side^1) <= alfa
   && i-lazyeval-lazymarge-LazyKing(side^1) > 0 ) {
    /* lazy bound gives an alfa cutoff and undershoots the full eval */
    if( (i-lazyeval-lazymarge-LazyKing(side^1))/4 < 2000 )
      afwijking[(i-lazyeval-lazymarge-LazyKing(side^1))/4]++;
    LazyTeller++;
  }

The /4 is used because the array gets way too big otherwise, as a pawn
is 1000 points in my eval.

lazyeval is the lazy eval score. lazymarge is the margin, which depends
on conditions in the position; for example, it is bigger if there are
passed pawns. LazyKing is a crude king safety term.

So basically I'm NOT interested in a lazy score s which is smaller than
it should be when the score is >= beta, as the lazy score gives a cutoff
anyway in that case.
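For readers following along, the fail-hard and fail-soft lazy exits being compared in this thread can be sketched roughly as below. This is a minimal sketch, not Diep's or Dieter's actual code: the names (LAZY_MARGIN, full_eval, material, positional) are made up for illustration, and a real engine's margin would depend on the position, as described above.

```c
#include <assert.h>

/* Hypothetical fixed margin: 5 pawns, with a pawn = 1000 as in Diep. */
#define LAZY_MARGIN 5000

/* Stand-ins for a real engine's evaluation terms. */
static int material;     /* material balance for the side to move */
static int positional;   /* positional terms, normally expensive   */

static int full_eval(void) { return material + positional; }

/* Fail-hard lazy eval: outside the window, return the bound itself. */
int eval_lazy_hard(int alfa, int beta)
{
    if (material + LAZY_MARGIN <= alfa) return alfa;
    if (material - LAZY_MARGIN >= beta) return beta;
    return full_eval();
}

/* Fail-soft lazy eval: outside the window, return the tightest score
   the margin justifies, giving the search better bounds to store. */
int eval_lazy_soft(int alfa, int beta)
{
    if (material + LAZY_MARGIN <= alfa) return material + LAZY_MARGIN;
    if (material - LAZY_MARGIN >= beta) return material - LAZY_MARGIN;
    return full_eval();
}
```

The difference only shows up outside the alfa-beta window: fail-hard clamps to the window edge, while fail-soft returns material +/- margin, which is what ends up as a tighter bound in the hash table.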

>term. I think of it in this context as the maximum of
>positional_score[side]-positional_score[xside] seen so far. With this, I usually
>get something around 2 pawns for either side. When there are very large factors
>involved (say for a trapped bishop on a7, or for bad trades), those can be
>evaluated before the lazy exit. Or as an alternative, two lazy exits can be
>used: one at the start of eval and one after the large factors are evaluated.
>This of course would need keeping track of two maximum positional scores for
>each side.
>
>>so margin = 5 pawns;
>>  if material value + 5 pawns <= alfa then return alfa
>>  if material value - 5 pawns >= beta then return beta
>>
>>Basically I search less deeply for a start. Apart from that, it appears
>>that many positions get solved faster with the big positional values.
>>
>>Not many positions get the high 'compensation' values, but the ones
>>getting them definitely influence the move choice and especially
>>the branching factor.
>>
>>I have also tried a lot of other values than alfa or beta in the past.
>>The most unsafe is of course returning the material value, then material
>>value +/- 5 pawns, etcetera.
>>
>>So lazy eval not only makes my search less 'pure', it is also bad for
>>the branching factor and for tactics.
>>
>>Now what is it good for then?
>
>Ok. I made a little experiment with WAC, because you mentioned it. Search time
>is 5 seconds, hash 30MB (which gets filled to only about 25%). AMD K6-2 475
>MHz.
>
>Without lazy eval:
>
>found 285, average finished search depth: 7.35, average knps: 115.2
>
>With what I would call fail hard lazy eval:
>
>found 290, average finished depth: 7.74, average knps: 176.4
>found all 285 solutions of the former.
>
>With what I have called fail soft lazy eval:
>
>found 291, average finished depth: 7.79, average knps: 174.6
>found one additional solution compared to the former.
>
>With fail soft I have worse hash table efficiency (ratio probe/find 42% vs.
>46%). This seems to indicate to me that there are better bounds in the hash
>table, so that the same positions have been visited less often. In all the
>positions I have checked, the fail soft variant needed fewer nodes to find the
>solution than fail hard.
>
>-- Dieter
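Dieter's earlier suggestion, es = s + largest_evalscore[side], can be sketched as below. All names here are hypothetical, and the bound is only as good as the positional scores actually seen so far in the search, which is exactly the point of contention in this thread.

```c
#include <assert.h>

/* Sketch of the adaptive fail-soft bound Dieter suggests: track the
   largest positional score seen so far for each side, and use it in
   place of a fixed lazy margin.  Names are made up for illustration. */

static int largest_positional[2];  /* running maxima, start at 0 */

/* Call after every full evaluation to update the running maximum. */
void record_positional(int side, int positional_score)
{
    if (positional_score > largest_positional[side])
        largest_positional[side] = positional_score;
}

/* material + the largest positional score seen so far is an
   (empirical) upper bound on the real score; if it is <= alfa, it
   can be returned as a fail-soft lazy exit. */
int lazy_upper_bound(int side, int material)
{
    return material + largest_positional[side];
}
```

The appeal is that the margin adapts to what the evaluation has actually produced, instead of a hand-tuned constant; the risk, as discussed above, is a large positional score appearing later than the lazy cutoffs that assumed it could not.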




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.