Computer Chess Club Archives



Subject: Re: Hashtable size: diminishing returns?

Author: Andrew Dados

Date: 20:30:11 03/28/00



On March 28, 2000 at 22:06:42, Dave Gomboc wrote:

>On March 28, 2000 at 17:19:34, Andrew Dados wrote:
>
>>On March 28, 2000 at 16:41:17, Dave Gomboc wrote:
>>
>>>On March 28, 2000 at 13:27:59, Andrew Dados wrote:
>>>
>>>>On March 28, 2000 at 13:14:59, Dave Gomboc wrote:
>>>>
>>>>>On March 28, 2000 at 12:17:14, Tom Kerrigan wrote:
>>>>>
>>>>>>On March 28, 2000 at 11:08:59, Bas Hamstra wrote:
>>>>>>
>>>>>>>Question is: is move ordering very important near the root and less important
>>>>>>>near the leaves? Bob Hyatt says so. One mistake near the root costs millions of
>>>>>>>nodes. On the other hand almost all nodes are leaf nodes, so bad ordering near
>>>>>>>the leaves costs millions of nodes too.
>>>>>>
>>>>>>I think so. At least in my program, hash table size does not matter much. (See
>>>>>>previous posts in this thread.)
>>>>>>
>>>>>>>Has anyone measured if you search somewhat deeper if you do extensive ordering
>>>>>>>near the root and cheap (just MVV/LVA) ordering near the leaves and in the
>>>>>>>qsearch?
>>>>>>
>>>>>>Because SEE only improves things by ~10% over MVV/LVA, I don't think doing SEE
>>>>>>at the root and MVV/LVA otherwise would make much sense. You can do searches to
>>>>>>improve move ordering, which is the idea behind internal iterative deepening.
>>>>>>So yes, people are doing this, and I believe it works, although I haven't tried
>>>>>>it myself.
>>>>>>
>>>>>>-Tom
>>>>>
>>>>>I have yet to figure out why people who use iterative deepening and understand
>>>>>why it works do not also use internal iterative deepening.  Could somebody
>>>>>enlighten me?
>>>>>
>>>>>Dave
>>>>
>>>>It will only work when the node you do it at will fail high; otherwise it's a
>>>>waste of nodes... so e.g. Bob does it along the PV, and only when there is no
>>>>good move from the HT... Some programs have a 'double null move', which
>>>>accomplishes pretty much the same thing.
>>>>
>>>>-Andrew-
>>>
>>>(I wasn't suggesting to use it everywhere.)
>>>
>>>I don't understand how double null-move accomplishes pretty much the same (no
>>>doubt because I'm not exactly sure what you mean by "double null move" ;-).
>>>Let's say you have depth 7 left to go, and you see that you don't have a good move
>>>from the hash table.  IID will pick a move using searches of depths 1-7... and
>>>the searches at low depths are no more wasted than they are with normal
>>>iterative deepening.  What will "double null move" do in this circumstance?
>>>
>>>Dave
>>
>>The idea of IID is to get a good move to try first. If there is a move from the
>>HT which failed high on the previous iteration, then IID is useless here (it
>>will get the same info as stored in the HT). The same goes for a fail low on
>>the previous iteration (IID will get a fail low here as well, with no good
>>move).
>
>Like I said, I didn't suggest using IID everywhere!  I take that for granted,
>but I should have been more explicit, I guess.  More concretely: let's take the
>situation where the hash table lookup comes up with no information.  That's a
>basic situation where I think IID can be useful.  Let's also assume that there
>is a reasonable amount of depth left to go.  (People can always come up with
>tricks if there's hardly any depth left -- which isn't relevant to the main
>debate.)
>
>>If there is no data in the HT (the entry was overwritten) we don't know whether
>>this node will get a FL (IID will just waste nodes here) or a FH (IID can be
>>useful to get a good move, but maybe our move ordering is good enough not to
>>need IID).
>
>So in the case where there is going to be a fail-low, you think that doing IID
>will use more time than just searching every move immediately at the required
>depth?  Why?  (If that's not what you meant, then I misunderstood you.)

That's what I meant, yes. You *have to* search all the moves at that ply to full
depth anyway, so searching first to depth 1, then 2, etc. is not helpful at all
at a fail-low node. You'll waste all those nodes.

>
>I specifically disbelieve this, for the same reason that I disbelieve that
>iteratively deepening from ply 1 to ply 7 "will just waste nodes" compared to
>starting out with a ply 7 search.
>

There is a huge difference between IID at the root and in the middle of an
alpha-beta search, namely: at the root you need to come up with a best move and
a score, while anywhere else but on the PV you only need a bound. That argues
against using IID anywhere else (unless, I repeat, you can be sure your search
will fail inside the given window bounds).
You can simply modify any freeware engine to check node counts.

>>Either way, IID in *most* cases will not save us anything, but will add useless
>>nodes. If we could be sure that a given node will fail high (or we are searching
>>a non-zero-width window), IID would probably save us nodes by giving us a good
>>move to search first. (Note Bob estimates some 10% fewer nodes in many positions
>>when doing IID along the PV, where we can be fairly sure the score will fall
>>within the window.)
>>
>>The double null trick is to allow 2 null moves in a row (but not 3). The idea
>>is to detect zugzwang positions while still keeping the advantage of null-move
>>pruning. Practically it's then IID with depth reduced by (2*null_reduction).
>
>It's not practically IID at all, if IID isn't implemented.
>
>Dave

Well, you skipped 2 moves: same side to move and same position, but the depth is
reduced now... how far is that from IID?

-Andrew-




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.