Computer Chess Club Archives



Subject: Re: Iterative deepening -- Why add exactly one ply?

Author: Vincent Diepeveen

Date: 07:56:50 05/29/04



On May 29, 2004 at 07:04:50, Vasik Rajlich wrote:

>On May 28, 2004 at 14:17:57, Vincent Diepeveen wrote:
>
>>On May 27, 2004 at 11:57:33, Vasik Rajlich wrote:
>>
>>>On May 27, 2004 at 11:22:40, Vincent Diepeveen wrote:
>>>
>>>>On May 27, 2004 at 06:39:23, Vasik Rajlich wrote:
>>>>
>>>>>On May 26, 2004 at 13:49:38, Tord Romstad wrote:
>>>>>
>>>>>>On May 26, 2004 at 13:34:23, Robert Hyatt wrote:
>>>>>>
>>>>>>>I have used "IID" for years, but in a very restricted way, namely to handle the
>>>>>>>case along the PV where I have no hash move.  I've never tried it _everywhere_
>>>>>>>before, so have no data.  But I intend to try to see if it is something that
>>>>>>>could work, or if it is a waste...
>>>>>>
>>>>>>I am fairly sure you will find that _everywhere_ is a waste.  It is probably
>>>>>>not worth doing near the leaves, when you have a hash table move to search, or
>>>>>>when a fail-low is most likely.  Perhaps you should also use a somewhat bigger
>>>>>>reduction factor than in your "along-the-PV IID".
>>>>>>
>>>>>>Note that it could also be interesting to look for good ways to make use of the
>>>>>>return value of the internal search.  It gives a reasonably reliable estimate
>>>>>>of the value of a full-depth search, and can be useful as an ingredient in
>>>>>>pruning tricks.  The most obvious (and entirely risk-free) case is when the
>>>>>>reduced-depth search returns a mate score.  When this happens, it is clearly
>>>>>>not necessary to do a full-depth search.
>>>>>>
>>>>>>Tord
>>>>>
>>>>>Yes, there is lots of room for playing with IID.
>>>>>
>>>>>Note that 95% of all nodes fail high in some way, so you can be pretty
>>>>>aggressive.
>>>>
That sounds very high.
>>>
>>>Ok - just checked - it's more like 93-94%, and I'm doing MTD (f).
>>
>>mtd versus pvs is rather irrelevant for this IMHO.
>>
>
>Maybe. In MTD (f) you have more [fail highs + fail lows]. I guess the total fail
>rate should still be pretty high in PVS.

It's not statistically significant enough to see a difference there.

>>More important is whether you do pruning, in whatever form, in the last few
>>plies.
>>
>>It's not amazing to me that the MIT guys go MTD, by the way :)
>
>:)
>
>I won't claim a 15% speedup from it though like Don Dailey ...

I've never seen Don make any claims about MTD except that it worked for his
beancounter, where even king safety could deliver at most 0.20 of a pawn or so,
if it had any king safety at all.

Basically the only proof we have about MTD is that it performs better than
plain alpha-beta on random trees without a qsearch, and only at very small
search depths, as the tests performed by Aske Plaat show.

In DIEP pawn = 1000, which already rules out using MTD: with such a
fine-grained evaluation the zero-window re-searches take far longer to
converge, so any discussion in that respect is useless.
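To illustrate the granularity point: a toy sketch of the MTD(f) driver, after Aske Plaat's description. The fail-hard probe with a known true score is an illustrative stand-in for a real zero-window search, not DIEP's or any engine's actual code:

```python
def mtdf(first_guess, probe):
    """Converge on the true score with a sequence of zero-window probes."""
    g = first_guess
    lower, upper = float("-inf"), float("inf")
    passes = 0
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = probe(beta - 1, beta)  # zero-window search around beta
        if g < beta:
            upper = g  # probe failed low: true score <= g
        else:
            lower = g  # probe failed high: true score >= g
        passes += 1
    return g, passes

def fail_hard_probe(true_score):
    # Stand-in for alphabeta(pos, beta - 1, beta, depth) against a known
    # true score, returning only the window bound as a strictly fail-hard
    # search would.
    def probe(alpha, beta):
        return beta if true_score >= beta else alpha
    return probe
```

With pawn = 100, misestimating the first guess by half a pawn costs about 51 probes in this fail-hard worst case; with pawn = 1000 the same misestimate costs about 501. A fail-soft search converges faster, but the scaling with evaluation granularity remains.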

>>
>>I bet you like opteron. MTD must use hashtable everywhere, also in qsearch.
>>
>>How much does it speed you up the opteron/a64 ?
>>
>
>Yeah, my opteron speedup is huge. Over 2x the NPS going from a same-GHz K7, i.e.
>an XP 2600+ (2.06 GHz), to an AMD64 3400+ (2.2 GHz), with a 64-bit compile using
>the crappy 2003 DDK MS compiler.

130k NPS on the AMD64 with the February 2003 SDK.
135k NPS with .NET 2003.

I'll soon test ICC8 with some hacking too.

>I don't understand though what this has to do with MTD (f). Yes, I hash

I do not see any correlation between IID and MTD versus IID and PVS.

>everywhere, and do ETC in the top layer of my search. Yes, an MTD (f) engine
>should spend more time waiting for memory than a PVS engine. However - memory
>latency speedup is less than 2x from k7 to k8, so you should be waiting for
>memory more on k8.

Memory latency is 2.5 to 3 times better, and more memory lines can be opened
simultaneously.

Please keep in mind that when I say memory latency, I mean the average random
access latency to memory when using a big hashtable (say 400MB).

Software benchmarks like linbench and such pump a few gigabytes sequentially
through the machine and then divide that by the time taken. That gives you
bandwidth, and 1/bandwidth = latency, they claim.

I would prefer to call that 'streaming latency'. Its official full name,
though, is 'cross-bandwidth latency'.

For chess software that cross-bandwidth latency is completely irrelevant.
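The distinction can be made concrete with the standard way of measuring true random-access latency: a pointer chase, where each load depends on the previous one, so the hardware cannot prefetch it the way it can a sequential stream. A minimal sketch; the sizes and names are illustrative, not from any particular benchmark:

```python
import random

def make_chase(n, seed=1):
    """Build one random cycle over n slots so that each lookup depends on
    the result of the previous one -- the access pattern of random probes
    into a big hash table."""
    order = list(range(n))
    random.Random(seed).shuffle(order)
    nxt = [0] * n
    for i in range(n):
        nxt[order[i]] = order[(i + 1) % n]
    return nxt

def chase(nxt, steps):
    """Follow the dependent chain. A real benchmark would time this loop
    and divide by `steps` to get the true random-access latency, instead
    of dividing a sequential transfer time by bytes moved."""
    i = 0
    for _ in range(steps):
        i = nxt[i]
    return i
```

Because every step waits on the previous load, timing this loop over an array much larger than the caches measures random-access latency, while a streaming benchmark over the same array measures only bandwidth.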

>>>>
>>>>>The IID principle can also apply to some additional situations:
>>>>
>>>>>1) You have a hash move, but it's at depth-2 rather than depth-1. You can do
>>>>>another IID layer in this case.
>>>>
>>>>In that case the hash move works better, of course.
>>>>
>>>>>2) Your fail-high hash move (for some engines the only possible kind of hash
>>>>>move) fails low. Here you can do IID to get an alternative move.
>>>>
>>>>This is highly unlikely, as your IID is at depth-i where i > 0.
>>>>
>>>>So most likely that hash move is already from a depth j >= depth - i, which
>>>>makes IID a complete waste of your time.
>>>
>>>I meant an IID where the move that already failed low is thrown out. You want
>>>the second-best move at the reduced depth.
>>
>>Use double nullmove. It works better than IID, and you get the best move as
>>the first move anyway :)
>
>The depth reduction is too high. More experiments are needed - but it would be
>quite a coincidence if the best IID depth reduction just happened to be exactly
>twice the best null move depth reduction.
>>
>>>Usually, you will waste a few nodes this way of course. The idea is to avoid
>>>the worst-case scenario of doing a full search through a bunch of other moves
>>>before finding the fail-high move.
>>
>>You can add 1000 conditions, but if something doesn't work in general, it won't
>>work with 1000 conditions either. It just becomes harder to test in a way that
>>allows objective, statistically significant conclusions about whether it works
>>or not.
>>
>
>In Rybka, IID works. Further, I haven't found any conditions which make it work
>better, although I didn't try anything really fancy - just some comparisons
>between current eval and the bound. Anyway, I read your reply to Tord, and will
>keep retesting as the engine evolves.

I didn't find a single condition under which it works for DIEP. It's just a
waste of system time IMHO.
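For readers following the thread: a toy sketch of the IID idea being debated. The reduction R = 2, the branching factor, and the toy evaluation are illustrative assumptions, not DIEP's or Rybka's actual parameters. Note that with a full window at the root, IID can only change move ordering, never the returned value:

```python
R = 2  # illustrative IID depth reduction

def evaluate(pos):
    # Deterministic toy leaf evaluation (tuples of ints hash stably).
    return hash(pos) % 201 - 100

def search(pos, alpha, beta, depth, use_iid):
    """Fail-soft alpha-beta negamax over a toy tree with 3 moves per node."""
    if depth == 0:
        return evaluate(pos), None
    best_move = None  # in a real engine this would come from the hash table
    if use_iid and best_move is None and depth > R + 1:
        # Internal iterative deepening: no hash move, so run a cheap
        # reduced-depth search just to get a move worth trying first.
        _, best_move = search(pos, alpha, beta, depth - R, use_iid)
    moves = [1, 2, 3]  # toy branching factor
    if best_move in moves:
        moves = [best_move] + [m for m in moves if m != best_move]
    best, chosen = -10**9, None
    for m in moves:
        score, _ = search(pos + (m,), -beta, -alpha, depth - 1, use_iid)
        score = -score
        if score > best:
            best, chosen = score, m
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # fail high
    return best, chosen
```

Whether the reduced-depth search pays for itself in better move ordering is exactly the point of disagreement in this thread.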

>>>>
>>>>>And - as Tord mentioned - an IID search can be turned into the final
>>>>>reduced-depth search, based on its result.
>>>>>Vas
>>>>
>>>>Depth reducing the current search?
>>>>
>>>>Sounds like a rather bad idea to me.
>>>
>>>Well that's the million dollar question, isn't it?
>>
>>Seems there are two camps.
>>
>>I'm currently in the camp that has tried both worlds and concluded that depth
>>reduction with nullmove is already enough.
>>
>>I can imagine that in the last few plies some types of forward pruning somehow
>>work, though so far I could not prove that.
>>
>>I have a hard time believing that forward pruning in the entire tree is going to
>>beat the nullmove pruning.
>>
>>We are both titled chess players, and I simply see that the few mistakes
>>today's engines make are usually dubious moves caused by bugs in the forward
>>pruning.
>>
>>Shredder is the clearest example.
>
>Yes Shredder has some blind spots, but it can also search really deep,
>especially when it's attacking. It's always nice to search deeper in the
>critical lines. Anyway - I'm still checking out both camps.

Well, it's not so hard to add 7 plies to your search depth because your
'selective search' might see 7 more (which in fact it does in DIEP).

I prefer a 14-ply search depth with just nullmove over an 18-ply one where all
your search lines risk being depth-reduced, the last few plies use
supernullmove, and the qsearch uses lazy evaluation :)

With forward pruning at every ply, like Shredder seems to do, you only see
faster what it sees anyway. What your eval doesn't see, the search won't find
either, because you nonstop shorten such lines more than my '14 ply search
depth' does.
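A toy sketch of the null-move pruning preferred here. R = 2, the branching factor, and the constant evaluation are illustrative assumptions chosen so the pruning behaviour is easy to follow, not any engine's actual parameters:

```python
R = 2  # illustrative null-move depth reduction

def evaluate(pos):
    # Constant toy evaluation so the cutoffs are easy to verify by hand;
    # a real engine calls its static evaluation here.
    return 50

def search(pos, alpha, beta, depth, counter, use_null=True, can_null=True):
    counter[0] += 1  # count visited nodes
    if depth == 0:
        return evaluate(pos)
    # Null move: hand the opponent a free move; if a reduced-depth search
    # still stands at or above beta, assume this node fails high.
    if use_null and can_null and depth > R:
        score = -search(pos + (0,), -beta, -beta + 1, depth - 1 - R,
                        counter, use_null, can_null=False)
        if score >= beta:
            return score  # null-move cutoff
    best = -10**9
    for move in (1, 2, 3):  # toy branching factor
        score = -search(pos + (move,), -beta, -alpha, depth - 1,
                        counter, use_null)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # fail high
    return best
```

In this toy tree a zero-window search at depth 4 visits 5 nodes with the null-move probe versus 17 without it, while returning the same fail-high score; a real engine must additionally guard against zugzwang, which is what the double-nullmove idea mentioned above addresses.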

>The key is to think of the future - because it will soon be here. I really don't
>care which search misses more tactics on some 32 bit 1 GHz machine ...
>Vas

The last time Shredder ran on a 1 GHz machine at a world championship was
London 2000, so those days are long gone.

>>
>>>Vas




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.