Computer Chess Club Archives


Subject: Re: new idea on managing time using depth reduction at root

Author: Robert Hyatt

Date: 18:59:48 02/08/03



On February 08, 2003 at 16:34:33, scott farrell wrote:

>On February 08, 2003 at 10:31:37, Robert Hyatt wrote:
>
>>On February 08, 2003 at 00:50:22, scott farrell wrote:
>>
>>>My idea relies on an underlying use of PVS (principal variation search),
>>>an aspiration window, and IID (internal iterative deepening).
>>>
>>>When searching the root position, I search the first move as per normal.
>>>
>>>Then I search all remaining root moves at depth-1; basically everything gets a
>>>reduction. This speeds up the already fast PVS search. If any of the PVS
>>>searches at the root fails high and requires a re-search, I re-search without
>>>the reduced depth.
>>>
>>>If at any time I fail low at root without a PV move already, then I panic,
>>>add time, etc., and don't do any depth reductions.
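The scheme described above can be sketched on a toy game tree. This is an illustrative sketch only, not code from chompster or Crafty: the tree representation, the static-eval stub, and the function names are all invented for the example.

```python
INF = 10 ** 6

# Toy game tree: a node is either a leaf score (from the perspective of
# the side to move there) or a dict mapping moves to child nodes.
def negamax(node, depth, alpha, beta):
    if isinstance(node, int):
        return node
    if depth == 0:
        return 0  # static-eval stub for this toy
    best = -INF
    for child in node.values():
        score = -negamax(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff
    return best

def root_search(tree, depth):
    moves = list(tree)
    # First root move: full depth, full window.
    best_move = moves[0]
    best = -negamax(tree[best_move], depth - 1, -INF, INF)
    for move in moves[1:]:
        # Remaining root moves: reduced depth, zero window.
        score = -negamax(tree[move], depth - 2, -(best + 1), -best)
        if score > best:
            # Fail high: re-search without the reduction, open window.
            score = -negamax(tree[move], depth - 1, -INF, -best)
        if score > best:
            best, best_move = score, move
    return best_move, best
```

Note the second `if score > best`: when the full-depth re-search comes back below the first move's score, the reduced-depth fail high is discarded as phoney and the old best move is kept.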
>>
>>
>>
>>The question that has to be asked is this:
>>
>>
>>"what justifies comparing the results of a depth D+1 search for the first
>>move at the root to a depth D search for the remainder of the root moves?"
>>
>>All this does is introduce horizon effects on the remaining moves as they are
>>searched less deeply.
>>
>>If you are doing null-move properly, the rest of the root moves take only a
>>small fraction of the time (combined) that the first root move requires to
>>search.
>>
>>
>>
>>>
>>>If I fail high, I often just take the move if it's near the time allowed for
>>>this move, especially if its value is more than the score of our last move on
>>>the board.
>>
>>
>>This is yet another problem.  You just failed high on a search one ply less
>>deep than the search you are failing high over.  What happens if you were to
>>re-search this move with the right depth and it fails low way below the
>>first move's score?
>
>As per normal PVS you take the second search as the true value, having searched
>with the true alpha/beta bounds; this is no different from normal PVS issues.
>


Think a bit longer.  What if you fail high and you have no more time?  What
did you just store in the hash table?  A fail high with the wrong depth.  Not
to mention the fact that you now have a move searched one ply shallower, that
you _think_ is better than the best move so far, which was searched one whole
ply deeper.  How does a fail high at depth D compare to a true score at depth
D+1?  It doesn't...  Now you are forced to complete the fail-high search to
see if it is real or phoney, because you can certainly fail high at depth D,
but the re-search to depth D+1 can fail miserably low because it sees more
tactics.
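The point is easy to reproduce on a toy tree: a search one ply shallower can report a score that the one-ply-deeper search refutes outright. A hypothetical sketch, with all node names and scores invented for illustration:

```python
INF = 10 ** 6

# A node is (static_eval_from_side_to_move, {move: child_node}).
def negamax(node, depth, alpha, beta):
    static_eval, children = node
    if depth == 0 or not children:
        return static_eval
    best = -INF
    for child in children.values():
        score = -negamax(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff
    return best

# Invented "queen grab" line: one ply deep it scores +9 for us, but the
# opponent's refutation two plies later leaves us at -5.
LEAF = (5, {})                                       # opponent ends up +5
AFTER_REFUTATION = (-5, {'reply': LEAF})             # our move, we stand -5
AFTER_GRAB = (-9, {'refutation': AFTER_REFUTATION})  # opponent down a queen
ROOT = (0, {'grab': AFTER_GRAB})

shallow = negamax(ROOT, 1, -INF, INF)  # the grab looks winning
deep = negamax(ROOT, 3, -INF, INF)     # one more ply sees the trap
```

Here `shallow` is +9 while `deep` is -5: a fail high at depth D says nothing about the depth D+1 score, which is why the re-search has to be completed before trusting the move.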



>>
>>
>>
>>>
>>>The idea is based on a few thoughts:
>>>"why do you try ply 9 when you already have a nice move on ply 8"
>>>"are you trying to ensure the move is safe?"
>>>"are you trying to find a better move?"
>>
>>The latter.
>>
>
>This is interesting. The whole objective of this idea is to play safer, and
>look less for better moves. If this premise is wrong, or doesn't hold for
>Crafty, then this whole idea isn't going to hold water.
>
>>
>>
>>>
>>>I think proving your move is safe is far more important, and that is what my
>>>idea above does. It spends more time checking that your move is safe, rather
>>>than looking for a better move. This really helps when there are two candidate
>>>moves that are very close in score, and your engine spends lots of time
>>>flip-flopping between them as best. My idea disregards the second until it can
>>>be shown to be much better.
>>>
>>>When you have finished searching, say, ply 8, you have really only searched
>>>ply 8 for the best move, and ply 7 for everything else, unless you found a
>>>problem with the best move and panicked.
>>>
>>>In positions where you have one clear good move, things really get an amazing
>>>boost. In positions with lots of traps, it takes just as long as normal (i.e.
>>>no slower), but finds the traps more quickly, and during a game gives you the
>>>fail low more quickly in order to allocate more time.
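The time-allocation half of that claim can be sketched with a trivial rule. The function name, margin, and panic factor below are invented for illustration; they are not taken from chompster or Crafty:

```python
def allot_time(base_ms, prev_score, root_score, panic_factor=4, margin=30):
    """Time budget for this move, in milliseconds: extend it when the
    root search fails low against the previous iteration's score."""
    if root_score <= prev_score - margin:
        # Fail low at the root: the intended move is in trouble, so
        # spend extra time looking for an alternative.
        return base_ms * panic_factor
    return base_ms
```

With these numbers, `allot_time(1000, 50, 45)` stays at 1000 ms, while `allot_time(1000, 50, -20)` panics to 4000 ms. The earlier the fail low arrives, the more of that extra budget is actually usable.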
>>
>>
>>
>>>
>>>I implemented this in chompster, and it seems to have had no drawbacks. It has
>>>been averaging around 2450 on ICC in recent weeks, pretty good for Java!
>>>
>>>This will be especially useful for 'newer' engines that aren't very good at
>>>time management, and only search full plies at the moment - this sort of
>>>allows you to search partial plies when it is safe to do so.
>>>
>>>Let me know what you think, and if you might give it a try in your engine.
>>>
>>>Scott




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.