Computer Chess Club Archives



Subject: Re: new idea on managing time using depth reduction at root

Author: scott farrell

Date: 13:36:46 02/08/03



On February 08, 2003 at 11:12:38, Miguel A. Ballicora wrote:

>On February 08, 2003 at 00:50:22, scott farrell wrote:
>
>>My idea relies on an underlying use of PVS (principal variation search),
>>an aspiration window, and IID (internal iterative deepening).
>>
>>When searching the root position, I search the first move as per normal.
>>
>>Then I search all remaining root moves at depth-1, so basically every other
>>move gets a reduction. This speeds up the already fast PVS searches. If any of
>>the PVS searches at the root fails high and requires a re-search, I re-search
>>without the reduced depth.
>>
>>If at any time I fail low at the root without already having a PV move, I
>>panic, add time, etc., and don't do any depth reductions.
>>
>>If I fail high, I often just take the move if I'm near the time allowed for
>>this move, especially if its value is higher than that of our last move on the
>>board.
>>
>>The idea is based on a few thoughts:
>>"why do you try ply 9 when you already have a nice move on ply 8"
>>"are you trying to ensure the move is safe?"
>>"are you trying to find a better move?"
>>
>>I think proving your move is safe is far more important, and that is what my
>>idea above does. It spends more time checking that your move is safe rather
>>than looking for a better move. This really helps when there are two candidate
>>moves very close in score, and your engine spends lots of time flip-flopping
>>between them as best. My idea disregards the second move until it can be shown
>>to be much better.
>>
>>When you have finished searching, say, ply 8, you have really only searched
>>the best move to ply 8 and everything else to ply 7, unless you found a
>>problem with the best move and panicked.
>>
>>In positions where you have one clear good move, things get an amazing boost.
>>In positions with lots of traps, it takes just as long as normal (i.e. no
>>slower), but finds the traps more quickly, and during a game gives you the
>>fail low more quickly so you can allocate more time.
>>
>>I implemented this in chompster, and it seems to have had no drawbacks. It has
>>been averaging around 2450 on ICC in recent weeks, pretty good for Java!
>>
>>This will be especially useful for 'newer' engines that aren't very good at
>>time management and only search full plies at the moment; this sort of allows
>>you to search partial plies when it is safe to do so.
>>
>>Let me know what you think, and if you might give it a try in your engine.
>>
>>Scott
>
>Some months ago, I tried this almost exactly as you described, and I did not
>like the results. I did not discard the idea, because it looks logical to me,
>and I thought that there was something wrong with the implementation (or maybe
>I was not patient enough to do good tests). I think there is lots of room for
>improvement. I still have this #undef in my code. I promise that I will try it
>again and post the results when I have some time. My "to-do" list is huge, my
>"done" list does not grow too much, and I spend all my free time reading CCC :-)
>
>Miguel

Thanks for your comments.

I really do believe this has legs.

I am interested to find out your results. Make sure I see them; I sometimes
miss CCC posts, so if I don't comment, please email me.

Thanks
Scott
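The scheme quoted above (first root move at full depth, the remaining moves probed with a null window at reduced depth, and a full-depth re-search on any fail-high) can be sketched roughly as below. The `Board` interface, the `ToyBoard` test tree, and the plain negamax used below the root are hypothetical stand-ins for illustration, not Chompster's actual code; the aspiration window, IID, and panic/time-allocation logic from the post are omitted for brevity.

```java
import java.util.List;

// Sketch of root-level depth reduction. Board, ToyBoard, and the plain
// negamax are assumptions for this example, not the engine's real code.
public class RootSearch {
    static final int INF = 1_000_000;

    interface Board {
        List<Integer> moves();     // legal moves
        Board make(int move);      // position after a move
        int evaluate();            // static eval, side-to-move's view
    }

    // Plain fail-soft negamax, standing in for a full PVS search.
    static int search(Board b, int depth, int alpha, int beta) {
        if (depth == 0 || b.moves().isEmpty()) return b.evaluate();
        int best = -INF;
        for (int m : b.moves()) {
            int score = -search(b.make(m), depth - 1, -beta, -alpha);
            if (score > best) best = score;
            if (best > alpha) alpha = best;
            if (alpha >= beta) break;              // cutoff
        }
        return best;
    }

    // First root move at full depth; every other move gets a reduced
    // null-window probe, re-searched at full depth only on fail-high.
    // Assumes depth >= 2. Returns the best root move.
    static int rootSearch(Board b, int depth) {
        List<Integer> moves = b.moves();
        int bestMove = moves.get(0);
        int alpha = -search(b.make(bestMove), depth - 1, -INF, INF);
        for (int i = 1; i < moves.size(); i++) {
            int m = moves.get(i);
            // reduced probe: one ply shallower than the first move got
            int score = -search(b.make(m), depth - 2, -alpha - 1, -alpha);
            if (score > alpha)                     // fail high: drop the reduction
                score = -search(b.make(m), depth - 1, -INF, INF);
            if (score > alpha) { alpha = score; bestMove = m; }
        }
        return bestMove;
    }

    // Tiny fixed two-ply tree so the sketch actually runs.
    static class ToyBoard implements Board {
        static final int[][] LEAF = {{5, 2, 8}, {4, 9, 6}, {1, 7, 3}};
        static final int[] GUESS = {2, 5, 0};      // static eval after each root move
        final int ply, rootMove, reply;
        ToyBoard(int ply, int rootMove, int reply) {
            this.ply = ply; this.rootMove = rootMove; this.reply = reply;
        }
        public List<Integer> moves() { return ply >= 2 ? List.of() : List.of(0, 1, 2); }
        public Board make(int m) {
            return ply == 0 ? new ToyBoard(1, m, -1) : new ToyBoard(2, rootMove, m);
        }
        public int evaluate() {
            if (ply == 2) return LEAF[rootMove][reply];  // our side to move again
            if (ply == 1) return -GUESS[rootMove];       // opponent to move
            return 0;
        }
    }

    public static void main(String[] args) {
        // minimax value per root move = min of each LEAF row: 2, 4, 1
        System.out.println("best root move: " + rootSearch(new ToyBoard(0, -1, -1), 2));
    }
}
```

On the toy tree, move 1's reduced probe fails high against the first move's score of 2, triggers the full-depth re-search, and wins with a value of 4; move 2's probe fails low and is never re-searched, which is exactly the saving the post describes.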




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.