Author: Dann Corbit
Date: 14:10:16 04/27/04
On April 27, 2004 at 06:10:04, Vasik Rajlich wrote:

>On April 27, 2004 at 01:38:49, Uri Blass wrote:
>
>>On April 27, 2004 at 00:44:34, rasjid chan wrote:
>>
>>>On April 26, 2004 at 20:07:00, Uri Blass wrote:
>>>
>>>>On April 26, 2004 at 13:41:53, Vincent Diepeveen wrote:
>>>>
>>>>>On April 26, 2004 at 12:14:33, José Carlos wrote:
>>>>>
>>>>>>On April 26, 2004 at 11:57:43, Vincent Diepeveen wrote:
>>>>>>
>>>>>>>On April 26, 2004 at 11:48:35, José Carlos wrote:
>>>>>>>
>>>>>>>>On April 26, 2004 at 11:32:26, Tord Romstad wrote:
>>>>>>>>
>>>>>>>>>On April 26, 2004 at 10:39:42, José Carlos wrote:
>>>>>>>>>
>>>>>>>>>>An interesting experiment, of course. But I think your conditions are rather different from 'most' programs. I mean:
>>>>>>>>>>- You allow any number of null moves in a row (most programs don't even do two)
>>>>>>>>>
>>>>>>>>>This has no importance, I think. My experience is that I almost always get the same score and PV when I enable/disable several null moves in a row, and that the difference in the number of moves searched is *very* tiny.
>>>>>>>>
>>>>>>>>You're probably right, as you've tested and I speak from intuition, but at first sight it seems that allowing several null moves in a row will increase your ratio of null-move tries to total nodes searched, and thus that avoiding unnecessary null moves would be a good idea.
>>>>>>>
>>>>>>>In *all* experiments I did with null move in a program not using *any* forward pruning other than null move, the best thing was to *always* null move.
>>>>>>
>>>>>>Yes, that's what other programmers (including me) also said in the thread we had last week. That's pretty intuitive. With no other forward pruning (or very little) besides null move, the cost of not trying a null move that would have produced a cutoff is terrible compared to the benefit of saving a useless null move try.
>>>>>>So avoiding null move, in this case, must be limited to the very few cases where you're 99.99% certain you'll fail low... if any.
>>>>>
>>>>>99.99% means 1 in 10k nodes.
>>>>
>>>>No.
>>>>
>>>>You can be 99.99% sure about a fail low more often than 1 in 10k nodes.
>>>>
>>>>>
>>>>>So always doing the null move is cheaper, because in a lot of cases the transposition table does its good job, and in the other cases you would search more than the 10k nodes which you now avoid searching.
>>>>>
>>>>>>Gothmog is very different from that 'paradigm' (he does a lot of forward pruning and applies many ideas he has commented on here), hence it works pretty well for him.
>>>>>
>>>>>I get the impression the evaluation function plays a major role in whether something is useful or not.
>>>>>
>>>>>Checks in qsearch are also a typical example of this.
>>>>>
>>>>>>
>>>>>>>I invented double null move to prove that null move gives the same results as a normal full-width search for a depth n of my choosing, and I use it because it finds zugzwangs; I am sure that is very helpful, because the weakest link in the chain counts.
>>>>>>>
>>>>>>>So double null move always completely outgunned doing a single null move, then disallowing a null move, and then allowing the next one.
>>>>>>
>>>>>>I tried double null move some time ago, and it didn't work for me. Probably I did something wrong, but I recall an old post (see the archives) from C. Theron where he gave some reasons why double null move should not work. I didn't invest too much time myself, though, as I had much weaker points to fix in my program first.
>>>>>
>>>>>Christophe didn't post that it doesn't work, AFAIK.
>>>>>
>>>>>Further, I must remind you that the majority of the commercial programmers posting here are not busy letting you know what works or doesn't work for them.
>>>>>
>>>>>To quote Johan: "don't inform the amateurs."
>>>>
>>>>What reason do you have to tell others what works for you and what does not work for you?
>>>>
>>>>You do not plan to inform the amateurs about better tablebase code than the Nalimov tablebases, so I do not see you as a person who tries to help the amateurs.
>>>>
>>>>>
>>>>>I remember that Christophe also posted that the evaluation function is not so important.
>>>>>
>>>>>His latest postings here made more sense, however, than the crap posted before that.
>>>>
>>>
>>>>I understand you to be claiming that Christophe's statement, that most of the improvement in Tiger came from better search and not from better evaluation, was basically disinformation.
>>>
>>>Firstly, there is not that BIG a stake in disinformation, and posting here is also just normal human behaviour that does not require asking "...why do I post?". Then also ask why I talk.
>>>
>>>I think Christophe was quite clear about the reasons why chess programming is NOT about evaluation (not dumb evaluation). After pawn structures, passed pawns, etc., it is very difficult to improve on it. For evaluation, the curve of Elo increase against code increase is logarithmic, with huge overhead: the very reverse of exponential. Search has almost no trend patterns, and search improvements usually have no overhead; you just need to be smarter than the rest. Assume your opponent searches on average 3 plies ahead. How do you write an evaluation that can see 3 plies ahead? Evaluation is horizon-dumb.
>>>
>>>Rasjid
>>
>>I did not claim that Christophe claimed wrong things.
>>It is Vincent who claimed it.
>>
>>I prefer not to talk about the top programs.
>>I can only say that it is clear to me that I can get much from search improvements.
>>
>>Certainly searching 3 plies deeper, or doing something equivalent, can help significantly, but the problem is how to do it.
>>If you are optimistic about doing it at no cost, through intelligent extensions and reductions and better move ordering, then it is clear that going for search is the right direction.
>>
>>If you are not optimistic even about getting something equivalent to 1 ply deeper, then evaluation is the right direction.
>>
>>Uri
>
>I don't think you need objective answers to these questions.
>
>You just need a game plan.
>
>A plain, reasonably tuned eval combined with a state-of-the-art selective search seems like a perfectly reasonable game plan to me.
>
>Ditto for a plain search combined with a state-of-the-art evaluation.

Bruce Moreland (whose program Ferret was at one time among the top two or three in the world) ran into a great annoyance as he improved his evaluation: he discovered that he was then being outsearched. I think the lesson is simple: if your new, smarter eval makes the program stronger, keep the new evaluation terms. If not, rip them out.