Computer Chess Club Archives


Subject: Re: Checks in the Qsearch

Author: Robert Hyatt

Date: 10:30:33 07/08/02

On July 08, 2002 at 12:29:17, Uri Blass wrote:

>On July 08, 2002 at 11:39:40, Robert Hyatt wrote:
>
>>On July 08, 2002 at 00:21:23, Christophe Theron wrote:
>>
>>>On July 07, 2002 at 23:53:16, Robert Hyatt wrote:
>>>
>>>>On July 07, 2002 at 23:42:03, Omid David wrote:
>>>>
>>>>>On July 07, 2002 at 21:43:47, Robert Hyatt wrote:
>>>>>
>>>>>>On July 07, 2002 at 16:47:33, Omid David wrote:
>>>>>>
>>>>>>>On July 07, 2002 at 16:36:57, Robert Hyatt wrote:
>>>>>>>
>>>>>>>>On July 07, 2002 at 11:48:27, Omid David wrote:
>>>>>>>>
>>>>>>>>>On July 06, 2002 at 23:23:28, Robert Hyatt wrote:
>>>>>>>>>
>>>>>>>>>>On July 06, 2002 at 22:29:44, Omid David wrote:
>>>>>>>>>>
>>>>>>>>>>>On July 06, 2002 at 10:20:17, Robert Hyatt wrote:
>>>>>>>>>>>
>>>>>>>>>>>>On July 06, 2002 at 01:07:36, Ricardo Gibert wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>Okay, but so what?
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>So perhaps the idea of "forward pruning" is foreign to us as well...
>>>>>>>>>>>>>
>>>>>>>>>>>>>I see no logical difference between deciding which moves are interesting and
>>>>>>>>>>>>>worth looking at and deciding which moves are not interesting and not worth
>>>>>>>>>>>>>looking at. It looks to me like 2 sides of the same coin, so your speculation
>>>>>>>>>>>>>that "perhaps the idea of "forward pruning" is foreign to us as well..." does
>>>>>>>>>>>>>not seem to be of any consequence.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>However, that has been _the point_ of this entire thread:  Is DB's search
>>>>>>>>>>>>inferior because it does lots of extensions but no forward pruning?  I
>>>>>>>>>>>>simply said "no, the two can be 100% equivalent".
>>>>>>>>>>>
>>>>>>>>>>>Just a quick point: The last winner of WCCC which *didn't* use forward pruning
>>>>>>>>>>>was Deep Thought in 1989. Since then, forward pruning programs won all WCCC
>>>>>>>>>>>championships...
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>In 1992 no "supercomputer" played.  In 1995 Deep Thought had bad luck and lost
>>>>>>>>>>a game it probably wouldn't have lost had it been replayed 20 times.   No
>>>>>>>>>>"supercomputer" (those are the programs that likely relied more on extensions
>>>>>>>>>>than on forward pruning due to the hardware horsepower they had) has played
>>>>>>>>>>since 1995...
>>>>>>>>>>
>>>>>>>>>>I'm not sure that means a lot, however.  I.e., I don't think that in 1995 Fritz
>>>>>>>>>>was a wild forward pruner either, unless you include null move.  Then you
>>>>>>>>>>would have to include a bunch of supercomputer programs, including Cray Blitz,
>>>>>>>>>>as almost all of us used null-move...
>>>>>>>>>
>>>>>>>>>I personally consider null-move pruning a form of forward pruning, at least with
>>>>>>>>>R > 1. I believe Cray Blitz used R = 1 at that time, right?
>>>>>>>>
>>>>>>>>
>>>>>>>>I believe that at that point (1989) everybody was using null-move with R=1.
>>>>>>>>It is certainly a form of forward pruning, by effect.
>>>>>>>
>>>>>>>Yes, and today most programs use at least R=2... The fact is that new ideas in
>>>>>>>null-move pruning didn't cause this change of attitude; programmers just
>>>>>>>accepted taking more risks!
>>>>>>
>>>>>>
>>>>>>I think it is more hardware related.  Murray Campbell mentioned R=2 in the
>>>>>>first null-move paper I ever read.  He tested with R=1, but mentioned that
>>>>>>R=2 "needs to be tested".  I think R=2 at 1980's speeds would absolutely
>>>>>>kill micros.  It might even kill some supercomputers.  Once the raw depth
>>>>>>with R=2 hits 11-12 plies minimum, the errors begin to disappear and it starts
>>>>>>to play reasonably.  But at 5-6-7 plies, forget about it.
>>>>>
>>>>>So using a fixed R=3 seems to be possible in the near future with faster
>>>>>hardware, doesn't it?
>>>>
>>>>
>>>>Very possibly.  Or perhaps going from 2~3 as I do now to 3~4 or even 4~5 for
>>>>all I know...  I should say that going from 2 to 3 is not a huge change.  Bruce
>>>>and I ran a match a few years ago with him using Ferret vs Crafty with Ferret
>>>>using pure R=2, and then pure R=3.  We didn't notice any particular difference
>>>>at that time.  It played about the same, searched about the same depth, etc...
>>>
>>>
>>>Increasing R is pointless after 3.
>>>
>>>Because instead of having a null move search using 5% of your time (just an
>>>example, I do not know the exact value), it will use only 2% or 3%.
>>>
>>>The speed increase is ridiculous, and the risks are getting huge.
>>>
>>>The only thing you can get by increasing R after that is having a percentage of
>>>search spent in null move close to 0. So a potential of 2% or 3% increase in
>>>speed.
>>>
>>>And a big potential to overlook easy combinations everywhere in the tree.
>>>
>>>That's why I believe that working on R>3 is a waste of time.
>>>
>>>
>>>    Christophe
>>
>>
>>You are overlooking _the_ point here.  At present, doing 12-14 ply searches,
>>R>3 doesn't make a lot of difference.  But in the future, when doing (say)
>>18 ply searches, R=4 will offer a lot more in terms of performance.  Same as
>>R=3 did when we got to 12-14 plies...  _then_ it might make sense to up R
>>once again.
>
>I do not know.
>I did not investigate different R's, but I suspect that a constant R may be a
>bad idea and that R should be a function of the position.
>
>I do not see a reason to use R=4 in the future and not to use it today under
>the same conditions.
>
>Uri


I adjust R between 2 and 3 already.
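
For reference, here is a minimal C sketch of how R enters a null-move search,
with the reduction adjusted between 2 and 3 as described above.  The Position
type, the search()/make_null_move()/unmake_null_move()/in_check() helpers, and
the depth threshold used to pick R are illustrative assumptions, not Crafty's
actual code:

/* Minimal sketch of null-move pruning with the reduction R adjusted
   between 2 and 3.  All names below are placeholders for whatever the
   engine provides; the depth threshold used to pick R is an assumption
   for illustration only. */

typedef struct Position Position;      /* engine-specific board state */

int  search(Position *pos, int alpha, int beta, int depth);
void make_null_move(Position *pos);
void unmake_null_move(Position *pos);
int  in_check(const Position *pos);

static int null_move_R(int depth)
{
    return (depth >= 7) ? 3 : 2;       /* more remaining depth -> bigger R */
}

/* Returns nonzero if the null-move search fails high, i.e. the side to
   move can pass and still reach beta, so the node can be pruned without
   searching any real moves. */
static int null_move_cutoff(Position *pos, int beta, int depth)
{
    int R, score;

    if (in_check(pos))                 /* never try a null move in check  */
        return 0;

    R = null_move_R(depth);
    if (depth - 1 - R < 1)             /* not enough depth left to reduce */
        return 0;

    make_null_move(pos);               /* give the opponent a free move   */
    score = -search(pos, -beta, -beta + 1, depth - 1 - R);
    unmake_null_move(pos);

    return score >= beta;
}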

The reason to use R=4 in the future is easy:  Which would you rather do to
reject a move at the current ply...  (a) a search to depth D, or (b) a search
to depth D-3, or (c) a search to depth D-4?  That is what the R value is all
about.  And it makes a significant difference at deeper depths.


