Computer Chess Club Archives


Subject: Re: Root move ordering - an experiment

Author: Stuart Cracraft

Date: 10:47:36 09/28/04


On September 28, 2004 at 09:07:05, Jan K. wrote:

>On September 28, 2004 at 05:17:26, Peter Fendrich wrote:
>
>>On September 27, 2004 at 23:45:54, Stuart Cracraft wrote:
>>
>>>I experimented with reordering the root moves at iterative depth iply > 1,
>>>where 1 is the root iteration, sorting them by the results of iteration iply-1:
>>>the total node count of the quiescence and main searches, defined as the
>>>number of entries into each of those subroutines below the root move.
>>>
>>>I didn't sort the root node on the first iteration by quiescence, but instead
>>>by my normal scheme, though I tried quiescence there and it was worse. I felt
>>>this gave the above method a better chance.
>>>
>>>I sorted moves at the root ply for iply > 1 in the following ways,
>>>one per part of the 7-part experiment (a rough sketch follows the list):
>>>
>>>   sort by normal method (history heuristic, mvv/lva, see, etc.)
>>>   sort exactly by subtree node count, nothing else
>>>   sort by subtree node count added to normal score (hh, mvv/lva, see, etc.)
>>>   same as previous but node count x 10 before addition
>>>   same as previous but node count x 100 before addition
>>>   same as previous but node count x 1000 before addition
>>>   same as previous but node count x 10000 before addition
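
To make the blended variants concrete, here is a rough C sketch of the idea.
The ROOTMOVE struct, the node_count counter, and NODE_WEIGHT are illustrative
assumptions rather than the actual code from either engine; "score" stands for
whatever the normal scheme (history heuristic, mvv/lva, see) produces.

    #include <stdlib.h>

    /* Per-root-move bookkeeping.  "score" is whatever the normal ordering
       scheme produces (history heuristic, mvv/lva, see, ...); "subtree" is
       the number of main-search plus quiescence entries spent below the
       move on the previous iteration. */
    typedef struct {
        int  move;
        long score;
        long subtree;
    } ROOTMOVE;

    /* Incremented at the top of both search() and quiesce(); differencing
       it before and after a root move gives that move's subtree count. */
    extern long node_count;

    /* 0 = pure heuristic sort; 1, 10, 100, ... = the blended sorts. */
    #define NODE_WEIGHT 10

    static int cmp_root(const void *a, const void *b)
    {
        const ROOTMOVE *x = (const ROOTMOVE *)a;
        const ROOTMOVE *y = (const ROOTMOVE *)b;
        long kx = x->score + NODE_WEIGHT * x->subtree;
        long ky = y->score + NODE_WEIGHT * y->subtree;
        return (ky > kx) - (ky < kx);      /* biggest key first */
    }

    /* Re-sort the root move list between iterations, for iply > 1. */
    void sort_root_moves(ROOTMOVE rm[], int nmoves)
    {
        qsort(rm, nmoves, sizeof rm[0], cmp_root);
    }

NODE_WEIGHT values of 1, 10, 100, 1000 and 10000 correspond to the blended
variants above; the pure node-count sort simply drops the score term, and
iteration 1 keeps the normal ordering.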
>>>
>>>The results, measured by the number correct on Win-at-Chess, varied from
>>>250 for the first method in the list above to 234 for the last.
>>>Most bunched up between 244 and 247; the exception was the first method at 250,
>>>my current best on WAC with everything hand-tuned.
>>>
>>>For me, I'm convinced that this style of root-ply sorting is
>>>slightly worse for my short searches than what I am using:
>>>a combination of the history heuristic, see(), and centrality with
>>>various bonuses, about half a page of code sprinkled about.
>>>
>>>The advantage of sorting the root node by subtree count is the simplicity.
>>>It eliminates about half a page of code and introduces
>>>about a quarter page of code for only slightly worse results
>>>(within 1-2% of my current result), so that is good.
>>>
>>>Still, I think I'll leave it #ifdefed out for now and use it as
>>>a baseline to improve on by hand-tuning my current methods
>>>and others yet to be discovered.
>>>
>>>Stuart
>>
>>I've noticed that you often refer to WAC and also do very quick searches.
>>If you get 247 in one test and 250 in another, that doesn't mean a thing
>>unless you examine the positions that changed.
>
>Exactly what I tell him every time... By the way, when running WAC at 1s my
>results easily differ by 10 or more positions. Sometimes I run the test in the
>background, sometimes I play some music with Winamp,
>QueryPerformanceCounter (which I use to measure time) seems less precise in w98
>than in w2k, and I don't call it as often as I should when the time period is
>this short and every ms matters... a million things can change a 1s WAC result.
>

Mine don't differ by 10 positions; 1-3 is the variance I see.

I shut down every Windows process I can in the system tray, and all other
applications are killed manually from the task list. The result is
exactly one process, Explorer, left in the task list, and I obviously can't
kill that, for the OS's sake.

Having done this, I get nearly repeatable runs (within 1% of the total).
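
For reference, the timing Jan mentions is just the QueryPerformanceCounter /
QueryPerformanceFrequency pair. A minimal sketch of elapsed-time measurement
with it (Win32 only; the function names here are illustrative, not code from
either engine):

    #include <windows.h>

    /* QueryPerformanceFrequency() is fixed for the life of the system,
       so it only needs to be read once at startup. */
    static LARGE_INTEGER qpc_freq;

    void timer_init(void)
    {
        QueryPerformanceFrequency(&qpc_freq);
    }

    /* Elapsed wall-clock milliseconds since 'start' was captured with
       QueryPerformanceCounter(&start). */
    double elapsed_ms(LARGE_INTEGER start)
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        return (double)(now.QuadPart - start.QuadPart) * 1000.0
               / (double)qpc_freq.QuadPart;
    }

How often elapsed_ms() gets polled during the search is exactly what makes
1-second runs sensitive, as Jan points out.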

Stuart


