Computer Chess Club Archives



Subject: Re: Checks in the Qsearch

Author: Uri Blass

Date: 11:00:31 07/10/02



On July 10, 2002 at 11:14:54, Robert Hyatt wrote:

>On July 10, 2002 at 02:03:01, Christophe Theron wrote:
>
>>On July 09, 2002 at 12:48:18, Robert Hyatt wrote:
>>
>>>On July 09, 2002 at 03:40:31, Christophe Theron wrote:
>>>
>>>>On July 08, 2002 at 23:32:09, Robert Hyatt wrote:
>>>>
>>>>>On July 08, 2002 at 13:58:51, Christophe Theron wrote:
>>>>>
>>>>>>On July 08, 2002 at 11:39:40, Robert Hyatt wrote:
>>>>>>
>>>>>>>On July 08, 2002 at 00:21:23, Christophe Theron wrote:
>>>>>>>
>>>>>>>>On July 07, 2002 at 23:53:16, Robert Hyatt wrote:
>>>>>>>>
>>>>>>>>>On July 07, 2002 at 23:42:03, Omid David wrote:
>>>>>>>>>
>>>>>>>>>>On July 07, 2002 at 21:43:47, Robert Hyatt wrote:
>>>>>>>>>>
>>>>>>>>>>>On July 07, 2002 at 16:47:33, Omid David wrote:
>>>>>>>>>>>
>>>>>>>>>>>>On July 07, 2002 at 16:36:57, Robert Hyatt wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>>On July 07, 2002 at 11:48:27, Omid David wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>>On July 06, 2002 at 23:23:28, Robert Hyatt wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>On July 06, 2002 at 22:29:44, Omid David wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>On July 06, 2002 at 10:20:17, Robert Hyatt wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>On July 06, 2002 at 01:07:36, Ricardo Gibert wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>Okay, but so what?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>So perhaps the idea of "forward pruning" is foreign to us as well...
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>I see no logical difference between deciding which moves are interesting and
>>>>>>>>>>>>>>>>>>worth looking at and deciding which moves are not interesting and not worth
>>>>>>>>>>>>>>>>>>looking at. It looks to me like 2 sides of the same coin, so your speculation
>>>>>>>>>>>>>>>>>>that "perhaps the idea of "forward pruning" is foreign to us as well..." does
>>>>>>>>>>>>>>>>>>not seem to be of any consequence.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>However, that has been _the point_ of this entire thread:  Is DB's search
>>>>>>>>>>>>>>>>>inferior because it does lots of extensions, but no forward pruning?  I
>>>>>>>>>>>>>>>>>simply said "no, the two can be 100% equivalent".
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>Just a quick point: the last WCCC winner that *didn't* use forward pruning
>>>>>>>>>>>>>>>>was Deep Thought in 1989. Since then, forward-pruning programs have won every
>>>>>>>>>>>>>>>>WCCC championship...
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>In 1992 no "supercomputer" played.  In 1995 Deep Thought had bad luck and lost
>>>>>>>>>>>>>>>a game it probably wouldn't have lost had it been replayed 20 times.   No
>>>>>>>>>>>>>>>"supercomputer" (those are the programs that likely relied more on extensions
>>>>>>>>>>>>>>>than on forward pruning due to the hardware horsepower they had) has played
>>>>>>>>>>>>>>>since 1995...
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>I'm not sure that means a lot, however.  I.e., I don't think that in 1995 Fritz
>>>>>>>>>>>>>>>was a wild forward pruner either, unless you include null move.  Then you
>>>>>>>>>>>>>>>would have to include a bunch of supercomputer programs including Cray Blitz
>>>>>>>>>>>>>>>as almost all of us used null-move...
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>I personally consider null-move pruning a form of forward pruning, at least with
>>>>>>>>>>>>>>R > 1. I believe Cray Blitz used R = 1 at that time, right?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>I believe that at that point (1989) everybody was using null-move with R=1.
>>>>>>>>>>>>>It is certainly a form of forward pruning, by effect.
>>>>>>>>>>>>
>>>>>>>>>>>>Yes, and today most programs use at least R=2... The fact is that new ideas in
>>>>>>>>>>>>null-move pruning didn't cause this change of attitude; programmers just
>>>>>>>>>>>>accepted taking more risks!
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>I think it is more hardware related.  Murray Campbell mentioned R=2 in the
>>>>>>>>>>>first null-move paper I ever read.  He tested with R=1, but mentioned that
>>>>>>>>>>>R=2 "needs to be tested".  I think R=2 at 1980's speeds would absolutely
>>>>>>>>>>>kill micros.  It might even kill some supercomputers.  Once the raw depth
>>>>>>>>>>>with R=2 hits 11-12 plies minimum, the errors begin to disappear and it starts
>>>>>>>>>>>to play reasonably.  But at 5-6-7 plies, forget about it.
>>>>>>>>>>
>>>>>>>>>>So using a fixed R=3 seems to be possible in the near future with faster hardware,
>>>>>>>>>>doesn't it?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>Very possibly.  Or perhaps going from 2~3 as I do now to 3~4 or even 4~5 for
>>>>>>>>>all I know...  I should say that going from 2 to 3 is not a huge change.  Bruce
>>>>>>>>>and I ran a match a few years ago, Ferret vs. Crafty, with Ferret using
>>>>>>>>>pure R=2 and then pure R=3.  We didn't notice any particular difference
>>>>>>>>>at that time.  It played about the same, searched about the same depth, etc...
>>>>>>>>
>>>>>>>>
>>>>>>>>Increasing R is pointless after 3.
>>>>>>>>
>>>>>>>>Because instead of having a null move search using 5% of your time (just an
>>>>>>>>example, I do not know the exact value), it will use only 2% or 3%.
>>>>>>>>
>>>>>>>>The speed increase is ridiculous, and the risks are getting huge.
>>>>>>>>
>>>>>>>>The only thing you can gain by increasing R beyond that is pushing the percentage of
>>>>>>>>the search spent on null moves close to 0. So a potential 2% or 3% increase in
>>>>>>>>speed.
>>>>>>>>
>>>>>>>>And a big potential to overlook easy combinations everywhere in the tree.
>>>>>>>>
>>>>>>>>That's why I believe that working on R>3 is a waste of time.
>>>>>>>>
>>>>>>>>
>>>>>>>>    Christophe
>>>>>>>
>>>>>>>
>>>>>>>You are overlooking _the_ point here.  At present, doing 12-14 ply searches,
>>>>>>>R>3 doesn't make a lot of difference.  But in the future, when doing (say)
>>>>>>>18 ply searches, R=4 will offer a lot more in terms of performance.  Same as
>>>>>>>R=3 did when we got to 12-14 plies...  _then_ it might make sense to up R
>>>>>>>once again.
>>>>>>
>>>>>>
>>>>>>
>>>>>>It doesn't matter to what depth you are searching.
>>>>>>
>>>>>>Increasing R can in the best case only give a minor speedup.
>>>>>>
>>>>>>The speedup you can get by increasing R is limited to a few percent.
>>>>>
>>>>>
>>>>>No it isn't... This is _still_ an exponential problem.  The problem at
>>>>>present, with R=4, is that there are not many nodes where that is better
>>>>>than R=3 since either one takes depth to zero.  When you start averaging
>>>>>18 plies, then R=4 has 14 plies to influence the search, not just 8 as it
>>>>>does with 12 ply searches...
>>>>>
>>>>>Just try R=1, then R=2, then R=3 for shallow and then deep and then deeper
>>>>>searches.  R=3 doesn't do much for blitz at all.  For longer games, it begins
>>>>>to make a difference.  I suspect R=4 will do the same thing.  Again, it is
>>>>>not just a "few percent" when you refute a move with a depth D search vs a
>>>>>depth D-3 search vs a depth D-4 search.  The D-4 search will take 1/3 the
>>>>>time of the D-3 search.  That is pretty significant.
>>>>
>>>>
>>>>
>>>>You are right in the case where the search following the null move fails high.
>>>>
>>>>The speedup is limited to a small percentage in the case where the null move
>>>>search does not fail high.
>>>>
>>>>So it all depends on the percentage of fail high and fail low after a null move.
>>>>
>>>>
>>>>
>>>>    Christophe
>>>>
>>>
>>>
>>>No.  Why should it?  If you do a null-move search, does it _matter_
>>>whether it fails high or fails low?  You _still_ had to do it.  So the
>>>issue is "how big is the tree?"
>>>
>>>Here is some sample data from crafty:
>>>
>>>nodes searched: 9.25M
>>>nodes searched below a NULL move: 7.75M
>>>null_move searches failing high: 1.02M
>>>null_move searches failing low: .235M
>>>
>>>Another position:
>>>
>>>nodes searched: 15M
>>>nodes searched below a NULL move: 5.5M
>>>null_move searches failing high: 2.0M
>>>null_move searches failing low: .5M
>>>
>>>That is why I said "this is not about a few percentage points."
>>>
>>>First position re-searched with R=4, just for fun:
>>>
>>>Nodes:  4.7M
>>>below NULL: 3.3M
>>>fail high: .624M
>>>fail low: .138M
>>>
>>>Going from R=2~3 to R=4 reduced the search time by 50%.
>>>
>>>_very_ non-trivial...
>>
>>
>>
>>(I tried to post this message earlier but ran into server errors.)
>>
>>
>>Yes, you are right.
>>
>>It is due to the high number of null move searches failing high (in your test
>>run there are 5 null move searches failing high for one failing low).
>>
>>The searches that fail low are followed by a normal search (going up to the
>>normal depth), and that's why in this case the speedup is bounded. Doing a D-4
>>ply search instead of a D-3 ply search just before a D ply search is a very small
>>saving.
>>
>>On the other hand if you get a fail high you have saved a lot of time by doing a
>>D-4 search instead of D-3. That's right.
>
>Notice that over the two positions I posted, fail highs are way in the
>majority...  otherwise null-move would be worthless in the first place.
>But even when they don't fail high, an R=4 null-move search fails low faster than
>an R=3 search will, by roughly a factor of 3.
>
>Note that I ran an R=4 search just for fun and it was 2x faster overall than R=2~3.
>That's certainly worth looking into...  2x is almost another ply.
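
Since R keeps coming up, here is a rough structural sketch of the null-move step
being discussed, in C.  This is not Crafty's or anyone else's actual code: the
Position type and the helpers (make_null_move(), null_move_ok(), quiesce(), and
so on) are placeholders I made up just to show where the reduction R sits, and
why a fail high is cheap while a fail low still pays for the normal search.

/* Structural sketch only: the type and helper functions below are
 * placeholders, not any real engine's API.  R is the null-move
 * depth reduction being debated (R=2, R=3, ...). */

typedef struct Position Position;

extern int  quiesce(Position *p, int alpha, int beta);
extern int  generate_moves(Position *p, int *moves);
extern void make_move(Position *p, int move);
extern void unmake_move(Position *p, int move);
extern void make_null_move(Position *p);     /* side to move just passes */
extern void unmake_null_move(Position *p);
extern int  null_move_ok(Position *p);       /* e.g. not in check, enough material */

#define R 3                                  /* null-move depth reduction */

int search(Position *p, int alpha, int beta, int depth)
{
    int v, best, n, i;
    int moves[256];

    if (depth <= 0)
        return quiesce(p, alpha, beta);

    /* Null-move pruning: let the opponent move twice in a row and search
     * the result to a reduced depth.  If even that "free move" search
     * comes back >= beta (fails high), the node is refuted cheaply.
     * If it fails low, we fall through and still pay for the normal
     * full-depth search below, so the reduced search was pure overhead. */
    if (null_move_ok(p)) {
        make_null_move(p);
        v = -search(p, -beta, -beta + 1, depth - 1 - R);
        unmake_null_move(p);
        if (v >= beta)
            return v;                        /* fail high: prune this node */
    }

    /* otherwise pay for the normal full-depth search */
    best = -32767;
    n = generate_moves(p, moves);
    for (i = 0; i < n; i++) {
        make_move(p, moves[i]);
        v = -search(p, -beta, -alpha, depth - 1);
        unmake_move(p, moves[i]);
        if (v > best)
            best = v;
        if (v > alpha)
            alpha = v;
        if (alpha >= beta)
            break;                           /* ordinary beta cutoff */
    }
    return best;
}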

The question is how much speed improvement you get from R=4 relative to R=3.
I suspect that most of the improvement comes from R=3 relative to R=2~3.
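
To put rough numbers on that, here is a toy model (my own back-of-the-envelope
figures, not measurements from any engine).  Assume a uniform tree with an
effective branching factor of about 3, which is roughly what Bob's "1/3 the
time" figure implies, so that a depth-d search costs on the order of 3^d nodes:

/* Toy cost model, not engine data: uniform tree with an assumed
 * effective branching factor EBF, so a depth-d search costs ~EBF^d nodes. */
#include <stdio.h>
#include <math.h>

#define EBF 3.0                        /* assumed effective branching factor */

static double nodes(int depth)
{
    return pow(EBF, depth > 0 ? depth : 0);
}

int main(void)
{
    int D = 14;                        /* nominal remaining depth at this node */
    double full = nodes(D - 1);        /* cost of the normal search below here */
    int R;

    for (R = 2; R <= 4; R++) {
        double nm = nodes(D - 1 - R);  /* cost of the reduced null-move search */
        printf("R=%d: reduced null-move search costs about %.1f%% of the full search\n",
               R, 100.0 * nm / full);
    }
    return 0;
}

With these assumptions the fail-high refutation gets about 3x cheaper with every
extra ply of reduction (Bob's exponential point), but the fail-low overhead only
drops from about 4% to about 1% of the work at the node (Christophe's bounded
saving).  Which effect dominates depends on the fail-high/fail-low ratio and on
how deep the searches get, so real measurements like the ones above are what
settle it.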

Uri


