Computer Chess Club Archives


Subject: Re: When to do a null move search - an experiment

Author: rasjid chan

Date: 21:44:34 04/26/04



On April 26, 2004 at 20:07:00, Uri Blass wrote:

>On April 26, 2004 at 13:41:53, Vincent Diepeveen wrote:
>
>>On April 26, 2004 at 12:14:33, José Carlos wrote:
>>
>>>On April 26, 2004 at 11:57:43, Vincent Diepeveen wrote:
>>>
>>>>On April 26, 2004 at 11:48:35, José Carlos wrote:
>>>>
>>>>>On April 26, 2004 at 11:32:26, Tord Romstad wrote:
>>>>>
>>>>>>On April 26, 2004 at 10:39:42, José Carlos wrote:
>>>>>>
>>>>>>>  An interesting experiment, of course. But I think your conditions are rather
>>>>>>>different from 'most' programs. I mean:
>>>>>>>  - You allow any number of null moves in a row (most programs don't do even
>>>>>>>two)
>>>>>>
>>>>>>This has no importance, I think.  My experience is that I almost always get the
>>>>>>same score and PV when I enable/disable several null moves in a row, and that
>>>>>>the difference in number of moves searched is *very* tiny.
>>>>>
>>>>>
>>>>>  You're probably right, as you've tested and I speak from intuition, but at
>>>>>first sight, it seems that allowing several null moves in a row will increase
>>>>>your ratio of null-move tries to total nodes searched, and thus that avoiding
>>>>>unnecessary null moves would be a good idea.
>>>>
>>>>In *all* experiments I did with nullmove and a program not using *any* forward
>>>>pruning other than nullmove, the best thing was to *always* nullmove.
>>>
>>>
>>>  Yes, that's what other programmers (including me) also said in the thread we
>>>had last week. That's pretty intuitive. With no forward pruning (or very
>>>little) other than null move, the cost of not trying a null move that would
>>>have produced a cutoff is terrible compared to the benefit of saving a useless
>>>null move try. So avoiding the null move, in this case, must be reserved for
>>>the very few cases where you're 99.99% certain you'll fail low... if any.
>>
>>99.99% means 1 in 10k nodes.
>
>No
>
>You can be 99.99% sure about fail low more often than 1 in 10k nodes.
>
>>
>>So always doing nullmove is cheaper, because in a lot of cases the
>>transposition table is doing its good job, and in the other cases you avoid
>>searching the more than 10k nodes you would otherwise search.
>>
>>>  Gothmog is very different from that 'paradigm' (he does a lot of forward
>>>pruning and applies many ideas he has commented on here), hence it works
>>>pretty well for him.
>>
>>I get the impression the evaluation function plays a major role in whether
>>something is useful or not.
>>
>>Checks in qsearch are also a typical example of this.
>>
>>>
>>>>I invented double nullmove to prove that nullmove gives the same results as a
>>>>normal full-width search for a depth n which I may pick, and I use it as it
>>>>finds zugzwangs, which I am sure is very helpful, because the weakest link
>>>>counts.
>>>>
>>>>So double nullmove always completely outgunned doing a single nullmove, then
>>>>disallowing a nullmove, and then allowing the next one.
>>>
>>>  I tried double null move some time ago, and it didn't work for me. Probably I
>>>did something wrong, but I recall an old post (see the archives) from C. Theron
>>>where he gave some reasons why double null move should not work. I myself
>>>didn't invest too much time, though, as I had much weaker points to fix in my
>>>program first.
>>
>>Christophe didn't post that it doesn't work, AFAIK.
>>
>>Further, I must remind you that the majority of commercial programmers posting
>>here are not busy letting you know what works for them or doesn't.
>>
>>To quote Johan: "don't inform the amateurs".
>
>What reason do you have to tell others what works for you and what does not
>work for you?
>
>You do not plan to inform the amateurs about better code for tablebases than
>the Nalimov tablebases, so I do not see you as a person who tries to help the
>amateurs.
>
>>
>>I remember that Christophe also posted that evaluation function is not so
>>important.
>>
>>His latest postings here, however, made more sense than the crap posted before
>>that.
>

>I understand that you claim that basically Christophe's claim that most of the
>improvement in tiger came from better search and not from better evaluation was
>disinformation.

Firstly, there is not that BIG a stake in disinformation, and posting
here is also just normal human behaviour that does not require
asking "...why do I post?". You might as well ask why I talk.

I think Christophe was quite clear about the reasons why chess programming
is NOT about evaluation (barring a dumb evaluation). After pawn structures,
passed pawns, etc., it is very difficult to improve on it further. The curve
for evaluation is logarithmic in elo-increase per code-increase, plus huge
overhead; the very reverse of exponential. Search has almost no trend
patterns, and search improvements usually have no overhead; you just need to
be smarter than the rest. Assume your opponent searches on average 3 plies
ahead. How do you write an evaluation that can see 3 plies ahead? Evaluation
is horizon-dumb.

Rasjid

>I see no reason that we should believe that the things that you post are not
>disinformation.
>
>Uri





Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.