Computer Chess Club Archives



Subject: Re: Root move ordering - an experiment

Author: Robert Hyatt

Date: 11:04:06 09/28/04



On September 28, 2004 at 13:58:05, Stuart Cracraft wrote:

>On September 28, 2004 at 10:45:07, Robert Hyatt wrote:
>
>>On September 28, 2004 at 05:17:26, Peter Fendrich wrote:
>>
>>>On September 27, 2004 at 23:45:54, Stuart Cracraft wrote:
>>>
>>>>I experimented with reordering the root move list at iterative depth
>>>>iply > 1, where 1 is the root ply: the root moves are sorted by the
>>>>results of iteration iply-1, using each move's total subtree node count,
>>>>defined as the number of entries into the quiescence and main search
>>>>routines.
>>>>
>>>>On the first iteration I didn't sort the root moves by quiescence node
>>>>count but by my normal scheme; I did try quiescence counts there, and it
>>>>was worse. I felt this gave the method above its best chance.
>>>>
>>>>I sorted the moves at the root ply for iply > 1 in the following ways,
>>>>one for each of the 7 parts of the experiment.
>>>>
>>>>   sort by normal method (history heuristic, mvv/lva, see, etc.)
>>>>   sort exactly by subtree node count, nothing else
>>>>   sort by subtree node count added to normal score (hh, mvv/lva, see, etc.)
>>>>   same as previous but node count x 10 before addition
>>>>   same as previous but node count x 100 before addition
>>>>   same as previous but node count x 1000 before addition
>>>>   same as previous but node count x 10000 before addition
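
A minimal sketch of the pure node-count variant (the second item in the
list above) might look like this in C; the RootMove structure, the counter
types, and the tie-break are illustrative assumptions, not Stuart's actual
code:

    #include <stdlib.h>

    /* Hypothetical per-root-move bookkeeping: nodes is the size of the
       subtree searched for this move during iteration iply-1 (counted
       as entries into the main search plus the quiescence search), and
       score is the normal ordering score (history, SEE, etc.).         */
    typedef struct {
        int move;             /* encoded move                           */
        unsigned long nodes;  /* subtree node count from last iteration */
        int score;            /* normal ordering score                  */
    } RootMove;

    /* Largest previous subtree first: a big subtree usually means the
       move was hard to refute.  Ties fall back to the normal score.   */
    static int by_subtree_nodes(const void *a, const void *b) {
        const RootMove *x = a, *y = b;
        if (x->nodes != y->nodes)
            return (y->nodes > x->nodes) ? 1 : -1;
        return y->score - x->score;
    }

    void sort_root_moves(RootMove *list, int n) {
        qsort(list, n, sizeof(RootMove), by_subtree_nodes);
    }

The blended variants in the list would simply sort on
score + m * nodes for a multiplier m of 1, 10, 100, 1000, or 10000 instead.
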
>>>>
>>>>The results, measured by the number solved on Win-at-Chess, varied
>>>>from 250 for the first method in the list above down to 234 for the
>>>>last. Most bunched up between 244 and 247; only the first reached 250,
>>>>my current best on WAC with everything hand-tuned.
>>>>
>>>>For me, I'm convinced that this style of root sorting is slightly worse
>>>>for my short searches than what I am using: a combination of the history
>>>>heuristic, see(), and centrality with various bonuses, about half a page
>>>>of code sprinkled about.
>>>>
>>>>The advantage of sorting the root by subtree node count is simplicity.
>>>>It eliminates about half a page of code and introduces about a quarter
>>>>page, for only slightly worse results (within 1-2% of my current
>>>>result), so that is good.
>>>>
>>>>Still, I think I'll leave it #ifdef'ed out for now and use it as a
>>>>baseline that can only be improved upon by hand-tuning my current
>>>>methods and others yet to be discovered.
>>>>
>>>>Stuart
>>>
>>>I've noticed that you often refer to WAC and also do very quick searches.
>>>If you get 247 in one test and 250 in another, that doesn't mean a thing
>>>unless you examine the positions that changed. Very often you will find
>>>that the difference is due to random coincidence.
>>>I'm sure you could get such differences just by making some innocent
>>>change somewhere in the code...
>>
>>Correct.  Change the move ordering and you change the order things are
>>stored in the hash table, and occasionally the best move or score changes
>>as a result.
>>
>>One-second WAC runs are not the way to test for performance, particularly
>>when 1 sec really means 1 sec +/- N, where N is a non-trivial amount of
>>quantization error in timing...
>>
>>
>>
>>>There will always be some part of your move ordering (in the tree) that
>>>is effectively random, and the same goes for which positions and moves
>>>happen to stay in the hash table and the killer and history lists.
>>>/Peter
>
>I don't change the hash table ordering. It is deterministic between
>program runs because the Zobrist keys are fixed. The test suite is fixed.
>The system itself is running nothing but the OS. The OS is enough to
>introduce variance, and that variance is definitely seen, but it is small.

Wrong.  _Any_ time you change move ordering, you change hashing order.  And
we are talking about ordering things differently at the root.  Think about it
for a minute and you'll understand the point...
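
The mechanism is easy to see with a toy always-replace table; the four-slot
table and the two colliding keys below are contrived for illustration, but a
real transposition table behaves the same way on a vastly larger scale:

    #include <stdio.h>

    #define TT_SIZE 4   /* tiny on purpose so a collision is guaranteed */

    typedef struct { unsigned long long key; int depth, score; } TTEntry;
    static TTEntry tt[TT_SIZE];

    /* Always-replace store: whichever position is searched *last* owns
       the slot.  Change the order positions are visited and a different
       entry survives, so later probes hit or miss differently, the tree
       changes shape, and occasionally the root move or score changes.  */
    static void tt_store(unsigned long long key, int depth, int score) {
        TTEntry *e = &tt[key & (TT_SIZE - 1)];
        e->key = key; e->depth = depth; e->score = score;
    }

    int main(void) {
        unsigned long long a = 0x10, b = 0x20;   /* both map to slot 0 */

        tt_store(a, 6, 30); tt_store(b, 4, -10);  /* order 1: b survives */
        printf("order 1: slot 0 holds key %llx\n", tt[0].key);

        tt_store(b, 4, -10); tt_store(a, 6, 30);  /* order 2: a survives */
        printf("order 2: slot 0 holds key %llx\n", tt[0].key);
        return 0;
    }
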


>
>It is about as deterministic as it gets, and my variance is no more than
>1-3 problems on WAC from run to run of the identical code.

If you can run the same test set, same program, and get different results, it
points out a _serious_ flaw in your testing methodology.  Search to fixed depth
and that will go away.  Or search for 1 second but _always_ finish the last
iteration.  The variance will disappear as well.
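
A sketch of the "always finish the last iteration" suggestion, assuming a
typical iterative-deepening driver (Search, TimeUsed, and MAX_DEPTH are
placeholder names, not any particular engine's API): the clock is checked
only between iterations, so an iteration that starts always runs to
completion and its result is repeatable.

    /* Assumed engine plumbing, declared here only for illustration:  */
    typedef struct Position Position;
    extern int Search(Position *pos, int depth);  /* full search to depth */
    extern int TimeUsed(void);                    /* elapsed ms this move */
    #define MAX_DEPTH 64

    int IterativeDeepening(Position *pos, int time_limit_ms) {
        int best_move = 0;
        for (int depth = 1; depth <= MAX_DEPTH; depth++) {
            if (TimeUsed() >= time_limit_ms)  /* decide before starting,   */
                break;                        /* never abort mid-iteration */
            best_move = Search(pos, depth);   /* always runs to completion */
        }
        return best_move;
    }

In practice two runs can still complete a different number of iterations if
timing jitter straddles an iteration boundary; searching to a fixed depth
removes even that.
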





>
>That's about as good as it gets while still giving me a reasonable
>turnaround on testing.
>
>I do plan to start fixed-depth searches for WAC, and I have a tool that
>counts how many searches had their node counts decrease without a good
>solution going bad, to use as an additional measure.
>However, settling on a single fixed depth that fits the various WAC
>problems isn't likely, and I've been pulling away from doing it for that
>reason. I could record the last full iteration searched for each problem
>and use that as the fixed depth for that problem, and then apply the above
>tool, since I already collect every test problem's solution, node count,
>and various other statistics like hash table accuracy, beta cutoff
>percentage, etc.
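
A sketch of the comparison tool Stuart describes, assuming each run dumps
one plain-text record per position (id, solved flag, node count); the file
format and field names here are invented for illustration:

    #include <stdio.h>

    /* One record per WAC position, e.g. "WAC.141 1 83213":
       position id, solved (1/0), total nodes searched.      */
    typedef struct { char id[16]; int solved; long nodes; } Record;

    static int read_record(FILE *f, Record *r) {
        return fscanf(f, "%15s %d %ld", r->id, &r->solved, &r->nodes) == 3;
    }

    int main(int argc, char **argv) {
        if (argc < 3) { fprintf(stderr, "usage: cmp base test\n"); return 1; }
        FILE *base = fopen(argv[1], "r"), *test = fopen(argv[2], "r");
        if (!base || !test) { fprintf(stderr, "cannot open input\n"); return 1; }
        Record a, b;
        int shrank = 0, lost = 0;

        /* Both files are assumed to list the same positions in order. */
        while (read_record(base, &a) && read_record(test, &b)) {
            if (a.solved && !b.solved) {     /* a good solution went bad  */
                printf("REGRESSION %s\n", b.id);
                lost++;
            } else if (b.nodes < a.nodes) {  /* smaller tree, nothing lost */
                shrank++;
            }
        }
        printf("%d searches shrank, %d solutions lost\n", shrank, lost);
        fclose(base); fclose(test);
        return 0;
    }
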
>
>But it's not of much use if the overall solution rate drops, which is why
>I've been going by the solution rate. Now that the solution rate has been
>static, it's probably time to dig in and use additional measures.
>
>Stuart


