Computer Chess Club Archives


Subject: Re: When to do a null move search - an experiment

Author: Robert Hyatt

Date: 09:04:09 04/28/04

On April 28, 2004 at 10:41:08, Vincent Diepeveen wrote:

>On April 28, 2004 at 06:06:37, Vasik Rajlich wrote:
>
>>On April 27, 2004 at 21:05:11, Vincent Diepeveen wrote:
>>
>>>On April 27, 2004 at 17:10:16, Dann Corbit wrote:
>>>
>>>>On April 27, 2004 at 06:10:04, Vasik Rajlich wrote:
>>>>
>>>>>On April 27, 2004 at 01:38:49, Uri Blass wrote:
>>>>>
>>>>>>On April 27, 2004 at 00:44:34, rasjid chan wrote:
>>>>>>
>>>>>>>On April 26, 2004 at 20:07:00, Uri Blass wrote:
>>>>>>>
>>>>>>>>On April 26, 2004 at 13:41:53, Vincent Diepeveen wrote:
>>>>>>>>
>>>>>>>>>On April 26, 2004 at 12:14:33, José Carlos wrote:
>>>>>>>>>
>>>>>>>>>>On April 26, 2004 at 11:57:43, Vincent Diepeveen wrote:
>>>>>>>>>>
>>>>>>>>>>>On April 26, 2004 at 11:48:35, José Carlos wrote:
>>>>>>>>>>>
>>>>>>>>>>>>On April 26, 2004 at 11:32:26, Tord Romstad wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>>On April 26, 2004 at 10:39:42, José Carlos wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>>  An interesting experiment, of course. But I think your conditions are rather
>>>>>>>>>>>>>>different from 'most' programs. I mean:
>>>>>>>>>>>>>>  - You allow any number of null moves in a row (most programs don't allow even
>>>>>>>>>>>>>>two)
>>>>>>>>>>>>>
>>>>>>>>>>>>>I don't think this matters much.  My experience is that I almost always get the
>>>>>>>>>>>>>same score and PV whether I enable or disable several null moves in a row, and
>>>>>>>>>>>>>that the difference in the number of nodes searched is *very* tiny.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>  You're probably right, as you've tested this and I speak from intuition. But at
>>>>>>>>>>>>first sight, it seems that allowing several null moves in a row
>>>>>>>>>>>>will increase your ratio of null-move tries to total nodes searched, and thus
>>>>>>>>>>>>that avoiding unnecessary null moves would be a good idea.
>>>>>>>>>>>
>>>>>>>>>>>In *all* experiments I did with nullmove in a program not using *any* forward
>>>>>>>>>>>pruning other than nullmove, the best thing was to *always* nullmove.
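
For reference, a minimal sketch of the recursive null-move pruning being argued
about, in C. It assumes a fail-hard alpha-beta search; quiesce(), in_check()
and the null-move make/unmake calls are hypothetical helpers, not taken from
any particular program:

/* Assumed engine helpers (illustrative names): */
int  quiesce(int alpha, int beta);
int  in_check(void);
void make_null_move(void);
void unmake_null_move(void);

#define R 2   /* null-move depth reduction, commonly 2 or 3 */

int search(int alpha, int beta, int depth, int allow_null)
{
    if (depth <= 0)
        return quiesce(alpha, beta);

    /* "always nullmove": try it at every node where it is legal,
       i.e. not in check and not directly after another null move */
    if (allow_null && !in_check()) {
        int score;
        make_null_move();
        score = -search(-beta, -beta + 1, depth - 1 - R, 0);
        unmake_null_move();
        if (score >= beta)
            return beta;   /* even after passing we fail high: prune */
    }

    /* ... generate and search the real moves as usual ... */
}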
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>  Yes, that's what other programmers (including me) also said in the thread we
>>>>>>>>>>had last week. That's pretty intuitive. With no forward pruning (or
>>>>>>>>>>very little) other than null move, the cost of not trying a null move that would
>>>>>>>>>>have produced a cutoff is terrible compared to the benefit of saving a useless
>>>>>>>>>>null move try. So avoiding the null move, in this case, should happen only in the
>>>>>>>>>>very few cases where you're 99.99% certain you'll fail low... if any.
>>>>>>>>>
>>>>>>>>>99.99% means being wrong at 1 in 10k nodes.
>>>>>>>>
>>>>>>>>No
>>>>>>>>
>>>>>>>>You can be 99.99% sure about a fail low more often than at 1 in 10k nodes.
>>>>>>>>
>>>>>>>>>
>>>>>>>>>So always doing nullmove is cheaper, because in a lot of cases the
>>>>>>>>>transposition table does its job well, and in the other cases the cutoff saves
>>>>>>>>>you a search of more than 10k nodes.
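
To make that tradeoff explicit with rough, purely illustrative numbers: let
c_try be the average cost of a failed null-move try (a reduced-depth search)
and c_miss the cost of the full-width search you pay when you skip a null move
that would have cut off. Skipping at 99.99% fail-low certainty only pays off
when 0.9999 * c_try > 0.0001 * c_miss, i.e. when c_miss is less than roughly
10,000 times c_try. So if hash hits make a failed try cost on the order of a
node, the break-even point is a missed search of about 10k nodes; beyond that,
as argued above, always trying wins even at that level of certainty.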
>>>>>>>>>
>>>>>>>>>>  Gothmog is very different from that 'paradigm' (he does a lot of forward
>>>>>>>>>>pruning and applies many ideas he has commented on here), hence it works pretty
>>>>>>>>>>well for him.
>>>>>>>>>
>>>>>>>>>I get the impression the evaluation function plays a major role in whether
>>>>>>>>>something is useful or not.
>>>>>>>>>
>>>>>>>>>Checks in qsearch are also a typical example of this.
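
"Checks in qsearch" here means also generating checking moves, usually only in
the first ply or so of the quiescence search, rather than captures only. A
rough sketch along the lines of the search above, with an extra qply argument;
evaluate() and generate_checks() are hypothetical helpers:

int quiesce(int alpha, int beta, int qply)
{
    int score = evaluate();          /* stand-pat score */
    if (score >= beta)
        return beta;
    if (score > alpha)
        alpha = score;

    /* ... search captures as usual ... */

    /* near the top of the qsearch, also try checking moves;
       whether this gains anything depends heavily on how much
       the eval already understands about the position */
    if (qply == 0) {
        /* ... generate_checks() and search those moves too ... */
    }

    return alpha;
}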
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>I invented double nullmove to prove that nullmove gives the same results as a
>>>>>>>>>>>normal fullwidth search for a depth n which I may pick, and I use it as it finds
>>>>>>>>>>>zugzwangs, and I am sure that is very helpful, because the weakest link counts.
>>>>>>>>>>>
>>>>>>>>>>>So double nullmove always completely outgunned doing a single nullmove, then
>>>>>>>>>>>disallowing a nullmove, and then allowing the next one.
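
A sketch of that double-nullmove scheme, continuing the hypothetical search()
above: replace the allow_null flag with a counter, so two consecutive null
moves are allowed but a third is not. After two nulls the same side is to move
again at reduced depth, so a zugzwang shows up as a fail low instead of being
masked by the null move:

int search(int alpha, int beta, int depth, int nulls_in_row)
{
    if (depth <= 0)
        return quiesce(alpha, beta);

    if (nulls_in_row < 2 && !in_check()) {
        int score;
        make_null_move();
        score = -search(-beta, -beta + 1, depth - 1 - R,
                        nulls_in_row + 1);
        unmake_null_move();
        if (score >= beta)
            return beta;
    }

    /* ... real moves are searched with nulls_in_row reset to 0 ... */
}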
>>>>>>>>>>
>>>>>>>>>>  I tried double null move some time ago, and it didn't work for me. Probably I
>>>>>>>>>>did something wrong, but I recall an old post (see the archives) from C. Theron
>>>>>>>>>>where he gave some points on why double null move should not work. I didn't
>>>>>>>>>>invest too much time in it myself, though, as I had much weaker points to fix in
>>>>>>>>>>my program first.
>>>>>>>>>
>>>>>>>>>Christophe didn't post that it doesn't work, AFAIK.
>>>>>>>>>
>>>>>>>>>Further, I must remind you that the majority of commercial programmers posting
>>>>>>>>>here are not busy letting you know what works for them or what doesn't.
>>>>>>>>>
>>>>>>>>>To quote Johan: "don't inform the amateurs".
>>>>>>>>
>>>>>>>>What reason do you have to tell others what works for you and what does not work
>>>>>>>>for you?
>>>>>>>>
>>>>>>>>You do not plan to inform the amateurs about better code for tablebases than the
>>>>>>>>Nalimov tablebases, so I do not see you as a person who tries to help the amateurs.
>>>>>>>>
>>>>>>>>>
>>>>>>>>>I remember that Christophe also posted that the evaluation function is not so
>>>>>>>>>important.
>>>>>>>>>
>>>>>>>>>His latest postings here made more sense, however, than the crap posted before
>>>>>>>>>that.
>>>>>>>>
>>>>>>>>I understand that you claim that Christophe's claim, that most of the
>>>>>>>>improvement in Tiger came from better search and not from better evaluation, was
>>>>>>>>basically disinformation.
>>>>>>>
>>>>>>>Firstly, there is not that BIG a stake in disinformation, and posting
>>>>>>>here is also just normal human behaviour that does not require
>>>>>>>asking "...why do I post?". One might as well ask why I talk at all.
>>>>>>>
>>>>>>>I think Christophe was quite clear about the reasons why chess programming
>>>>>>>is NOT about evaluation (not dumb evaluation, that is). After pawn structures,
>>>>>>>passed pawns etc., it is very difficult to improve on it further. The curve for
>>>>>>>evaluation is logarithmic in elo-increase per code-increase, plus huge overhead;
>>>>>>>the very reverse of exponential. Search has almost no such trend pattern, and
>>>>>>>search improvements usually have no overhead; you just need to be smarter
>>>>>>>than the rest. Assume your opponent searches on average 3 plies ahead.
>>>>>>>How do you write a good evaluation that can see 3 plies ahead? Evaluation is
>>>>>>>horizon-dumb.
>>>>>>>
>>>>>>>Rasjid
>>>>>>
>>>>>>I did not claim that Christophe claimed wrong things.
>>>>>>It is Vincent who claimed that.
>>>>>>
>>>>>>I prefer not to talk about the top programs.
>>>>>>I can only say that it is clear to me that I can gain much from search
>>>>>>improvements.
>>>>>>
>>>>>>Certainly searching 3 plies deeper, or doing something equivalent, can help
>>>>>>significantly, but the problem is how to do it.
>>>>>>If you are optimistic about doing it at no price, by intelligent extensions and
>>>>>>reductions and better move ordering, then it is clear that going for search is
>>>>>>the right direction.
>>>>>>
>>>>>>If you are not optimistic even about getting something equivalent to 1 ply
>>>>>>deeper, then evaluation is the right direction.
>>>>>>
>>>>>>Uri
>>>>>
>>>>>I don't think you need objective answers to these questions.
>>>>>
>>>>>You just need a game plan.
>>>>>
>>>>>A plain, reasonably tuned eval combined with a state-of-the-art selective search
>>>>>seems like a perfectly reasonable game plan to me.
>>>>>
>>>>>Ditto for a plain search combined with a state-of-the-art evaluation.
>>>>
>>>>Bruce Moreland (whose program Ferret was at one time among the top two or three
>>>>in the world) found it a great annoyance that, as he improved his evaluation, he
>>>>discovered he was then being outsearched.
>>>>
>>>>I think the lesson is simple:
>>>>If your new smarter eval makes the program stronger, then keep the new
>>>>evaluation terms.  If not, rip them out.
>>>
>>>I feel search is overrated far too much. It's so hard to make a good eval; only
>>>when you have a really well-tested and tuned eval does making a good search
>>>become important.
>>>
>>>Even searching fullwidth I can beat Crafty.
>>>
>>>In fact I saw Diep on a P4 2.8GHz beat Crafty on its dual Xeon in blitz lately
>>>at ICC. I was pretty amazed, because I always say that you need at least 12-14
>>>plies to search outside the tactical barrier.
>>>
>>>Perhaps I'll change that back to 10 plies one day :)
>>
>>Eval is hard, true. So is selective search.
>
>The difference is that eval brings more points. Selective search doesn't
>(assuming you use nullmove and have a pretty efficient search; of some programs
>we cannot say that, though).
>
>>Moreland's approach is definitely not the way to end up with a great eval. When
>>your eval is cheap, you can't just add one expensive thing to it and expect it
>>to pay off. One expensive idea might cost you 20% of your nodes. With this
>>approach, if you're testing honestly, you'll always end up throwing out the
>>expensive stuff.
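
For a sense of scale on that 20%, using rough figures only: with an effective
branching factor around 2, searching 20% fewer nodes costs roughly
log2(1/0.8), about 0.3 of a ply, which at typical depths is commonly reckoned
at somewhere around 15-25 elo. A new eval term therefore has to earn back at
least that much before it shows up as an improvement in honest testing.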
>
>I'm not commenting here on Bruce in a negative sense. In the days that Ferret
>was strong, chess programs simply didn't search deep enough yet. Around 1997
>search dominated because it was hard to get beyond 10 plies.

This is wrong.  In 1996 in Jakarta, Crafty and Ferret were both hitting 12-13
plies on every search in the middlegame...  Both running on Pentium Pro 200s...


>
>So his approach in those days made sense.
>
>>If you want a program with a great eval, you have to commit to it, let
>>everything slow down, and go from there.
>
>Big eval also means that a lot of dubious search stuff no longer works.
>
>Correctness of search and the worst-case behaviour of forward pruning play a big
>role.
>
>>Another thing to keep in mind: a few years ago, it might well have been the case
>>that allowing oneself to be outsearched was a death wish. This may be changing now.
>
>When I posted in 1997 at RGCC that in the future just searching deeper wouldn't
>be holy anymore, I remember that all the internet guys laughed out loud at me
>(and programmers hardly posted there anyway).
>
>>Chess Genius dominated computer chess with a very unusual selective algorithm
>>that worked great at low depths. Eventually, it became irrelevant. As hardware
>>changes, all of these tradeoffs change.
>
>In the days when scientific programs (Zugzwang) searched 100 nodes a second on
>512 processors, Genius was already searching on a single-CPU 10MHz machine at,
>what was it, 2000 nodes a second or so?
>
>Of course using very dubious selective search and dubious qsearch methods.
>
>Even today I do not know 100% for sure what he was doing, but I know that on a
>10MHz machine Genius would beat Diep, because Diep, being in C, would search 3
>plies on it and Genius would get a tactical 10 plies and completely outgun it.
>
>The same holds true for Rebel. His dubious searching methods were of course
>genius finds around 1991.
>
>This has changed in 2004, however.
>
>>Vas


