Computer Chess Club Archives



Subject: Re: Root move ordering - an experiment

Author: J. Wesley Cleveland

Date: 10:50:34 09/28/04



On September 28, 2004 at 10:45:07, Robert Hyatt wrote:

>On September 28, 2004 at 05:17:26, Peter Fendrich wrote:
>
>>On September 27, 2004 at 23:45:54, Stuart Cracraft wrote:
>>
>>>I experimented with reordering the root moves at iterative depth iply > 1
>>>(where iply = 1 is the root iteration), sorting by the results of iteration
>>>iply-1: the total node counts of the quiescence and main searches, defined
>>>as the number of entries into each of those routines.
>>>
>>>I didn't sort the root moves by quiescence count on the first iteration,
>>>but by my normal scheme; I did try quiescence counts there too and it was
>>>worse. I felt this gave the method above its best chance.
>>>
>>>I sorted moves at the root ply for iply > 1 in the following way
>>>for 7 different parts to the experiment.
>>>
>>>   sort by normal method (history heuristic, mvv/lva, see, etc.)
>>>   sort exactly by subtree node count, nothing else
>>>   sort by subtree node count added to normal score (hh, mvv/lva, see, etc.)
>>>   same as previous but node count x 10 before addition
>>>   same as previous but node count x 100 before addition
>>>   same as previous but node count x 1000 before addition
>>>   same as previous but node count x 10000 before addition
>>>
>>>The results, measured by the number right on Win-at-Chess, varied from
>>>250 for the first in the list above down to 234 for the last. Most
>>>bunched up between 244 and 247; the first, at 250, is my current best
>>>on WAC with everything hand-tuned.
>>>
>>>For me, I'm convinced that this style of root sorting is slightly
>>>worse for my short searches than what I am using: a combination of
>>>history heuristic, see(), and centrality with various bonuses, about
>>>half a page of code sprinkled about.
>>>
>>>The advantage of sorting the root moves by subtree node count is its
>>>simplicity. It eliminates about half a page of code and introduces
>>>about a quarter page for only slightly worse results (within 1-2% of
>>>my current result), so that is good.
>>>
>>>Still, I think I'll leave it #ifdefed out for now and use it as a
>>>baseline that can only be improved upon by hand-tuning my current
>>>methods and others yet to be discovered.
>>>
>>>Stuart
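
[The blended root ordering described above can be sketched roughly as follows.
This is a minimal illustration with hypothetical names and made-up scores, not
code from any actual engine: each root move carries a "normal" ordering score
(history heuristic, MVV/LVA, SEE, ...) plus the node count of its subtree from
the previous iteration, and a multiplier blends the two.

```python
def order_root_moves(moves, multiplier):
    """Sort root moves for the next iteration, best candidate first.

    moves: list of (move, normal_score, subtree_nodes) tuples
    multiplier: 0 sorts by the normal score alone; 1, 10, 100, ...
    blends in the subtree node count from the previous iteration.
    """
    def key(entry):
        _move, normal_score, subtree_nodes = entry
        return normal_score + multiplier * subtree_nodes
    return sorted(moves, key=key, reverse=True)

# Made-up scores and node counts, purely for illustration.
candidates = [("e2e4", 500, 12000), ("d2d4", 800, 9000), ("g1f3", 300, 15000)]
print([m for m, _, _ in order_root_moves(candidates, 0)])
# -> ['d2d4', 'e2e4', 'g1f3']  (normal score decides)
print([m for m, _, _ in order_root_moves(candidates, 100)])
# -> ['g1f3', 'e2e4', 'd2d4']  (subtree node count dominates)
```

Note that as the multiplier grows, the node count swamps the normal score, so
the highest-multiplier variants approach the pure node-count sort. -ed.]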
>>
>>I've noticed that you often refer to WAC and also do very quick searches.
>>If you get 247 in one test and 250 in another, that doesn't mean a thing
>>unless you examine the positions that changed. Very often you will find
>>that the difference is due to random coincidence.
>>I'm sure you could get such differences just by making some innocent
>>change somewhere in the code...
>
>Correct.  Change the move ordering.  Change the hash table order of storing
>things.  Occasionally change the best move or score as a result.
>
>1-second WAC runs are not the way to test for performance, particularly when
>1 sec really means 1 sec +/- N, where N is a non-trivial amount of
>quantization error in timing...
>

Wouldn't the right way to test this be to search to a fixed depth and compare
node counts? You would expect to get the same results except for equal evals,
or null-move/hash-table effects, or extension weirdness.
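
[That fixed-depth methodology can be shown on a hand-built toy game tree (a
sketch, not engine code): searching the same position to the same depth under
two different root orderings, alpha-beta must return the same score, and only
the node count changes.

```python
def negamax(tree, alpha, beta, counter):
    """Negamax with alpha-beta pruning over a nested-list game tree.

    Leaves are ints (scores from the side to move); interior nodes are
    lists of children, whose order plays the role of move ordering.
    counter is a one-element list used to count nodes visited.
    """
    counter[0] += 1
    if isinstance(tree, int):           # leaf: static evaluation
        return tree
    best = -10**9
    for child in tree:                  # child order = move ordering
        score = -negamax(child, -beta, -alpha, counter)
        best = max(best, score)
        alpha = max(alpha, best)
        if alpha >= beta:               # beta cutoff
            break
    return best

# Same depth-2 tree, best root move searched first vs. last.
good_order = [[6, 9], [3, 5], [1, 2]]
bad_order = [[1, 2], [3, 5], [6, 9]]

for tree in (good_order, bad_order):
    nodes = [0]
    score = negamax(tree, -10**9, 10**9, nodes)
    print(score, nodes[0])
# -> 6 8   (good ordering: earlier cutoffs)
# -> 6 10  (bad ordering: more nodes, same score)
```

In a real engine the score match is only expected up to the caveats above:
equal evals, null move, hash-table contents, and extensions can all make two
fixed-depth searches disagree. -ed.]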




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.