Computer Chess Club Archives



Subject: Re: Checks in the Qsearch

Author: Robert Hyatt

Date: 13:38:50 07/07/02



On July 07, 2002 at 11:42:54, Uri Blass wrote:

>On July 07, 2002 at 10:10:07, Robert Hyatt wrote:
>
>>On July 07, 2002 at 01:44:27, Uri Blass wrote:
>>
>>>On July 06, 2002 at 23:31:15, Robert Hyatt wrote:
>>>
>>>>On July 06, 2002 at 18:48:07, Uri Blass wrote:
>>>>
>>>>>On July 06, 2002 at 17:19:21, Robert Hyatt wrote:
>>>>><snipped>
>>>>>>
>>>>>>OK...  first, me, in Cray Blitz.  1994.  GCP modified Crafty and used the
>>>>>>Hsu-definition of SE in doing so.
>>>>>
>>>>>And what was the result?
>>>>
>>>>It was tactically significantly stronger with it than without.  Unfortunately
>>>>we had a severe bug in 1994 and did poorly.  Harry had limited the max
>>>>search depth to 64 plies to match the older Cray vector length (newer machines
>>>>had 128-word vectors, but he wanted it to run on older machines too, since he
>>>>had access to several of them).  He also took out the MAXPLY check.  That
>>>>never caused a problem through 1993, as the search extensions were pretty "sane"
>>>>and didn't go that deep.  But in a few cases in 1994, singular extensions
>>>>drove the search beyond 64 plies, with devastating effects on the chess board
>>>>and on the alpha/beta scores that were backed up as a result.  We never had a
>>>>chance to play real games to see how deep the SE stuff could extend, which was
>>>>a common problem with Cray access back then...
>>>>
>>>>Crafty I am not sure about yet.  Mike Byrne has been playing with this
>>>>further and seems to like the results he is getting.  I have not yet looked
>>>>at the changes he is using, but I will when I have time.  It does seem to be
>>>>very good tactically, producing some good WAC test scores in very short time
>>>>limits.
>>>>
>>>>>
>>>>>Is the new crafty with singular extensions better?
>>>>>I guess it is worse because you do not use it in games.
>>>>
>>>>I never found a version I liked.  However, I am not sure that aggressive
>>>>null-move mixes well with cute extensions.  They almost work against each
>>>>other without some controls to limit the interaction...
>>>>
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>><snipped>
>>>>>>>There are programmers who use singular extensions, but I know of no
>>>>>>>programmer of one of the top programs of today that uses them in the way
>>>>>>>that Deep Thought used them.
>>>>>>
>>>>>>
>>>>>>So?  They choose to implement a less-than-optimal version to control the
>>>>>>computational cost.
>>>>>
>>>>>Deep Thought already used singular extensions in the past, and the top
>>>>>programs of today that search a similar number of nodes do not use them in
>>>>>the same way, because they prefer to play better rather than use "optimal"
>>>>>extensions.
>>>>
>>>>How can you possibly say "play better"?  The current SE approach used in
>>>>Ferret, and the way I did it originally for Crafty, is simply a "cheapo
>>>>version" that misses things, extension-wise, that Deep Thought would not
>>>>miss.  But the cost was more palatable for the much slower hardware we have
>>>>to use.  However, HiTech used it at 150K nodes per second, so it worked for
>>>>them.  And it worked fine in Cray Blitz as well...
>>>
>>>You always give examples that are not about the default version of the top
>>>programs of today (Cray Blitz and HiTech are history, and Crafty does not use
>>>the GCP version because you did not like the result).
>>>
>>>It is possible that some people believed that it is better and did not have
>>>time to compare results with and without it.
>>>
>>>I want an example of one of the top programs of today that uses it, not some
>>>GCP version of Crafty that is not the best Crafty.
>>
>>
>>Wchess, from Dave Kittinger.  It implemented half of Hsu's algorithm, namely
>>the PV-singular test.
>>
>>
>>
>>>
>>>My point is that if the algorithms they used in Deep Thought are not
>>>inferior, then it is logical to expect one of the top programs of today to
>>>use them (they already search a similar number of nodes).
>>
>>
>>The "singular extension algorithm" is precisely defined.  At any node in the
>>tree, you simply prove that one move is better than all other moves by a
>>window "S".  What Bruce does, and what I tried, was a very limited subset of
>>that.  It was not as accurate as a proper implementation.  It was also not as
>>expensive.  But the main point is that it was not as accurate.
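That window-"S" test is concrete enough to sketch in code.  The following is only an illustrative Python sketch under invented names (`MARGIN_S`, `singular_move`, the `search` and `make` callbacks); it is not Hsu's, Crafty's, or any real engine's implementation, and a real engine would use reduced-depth, null-window re-searches rather than scoring every move at full width:

```python
# Illustrative sketch of the singularity test described above: a move
# is "singular" if every alternative scores at least S below it.  The
# names and margin value are invented; no real engine code is implied.

MARGIN_S = 50  # the singular window "S", in centipawns (arbitrary here)

def singular_move(position, moves, depth, search):
    """Return the singular move at this node, or None.

    `search(child_position, depth)` is assumed to return the score of
    the position reached by a move, from the moving side's view.  A
    real implementation would use null-window searches at reduced
    depth; scoring every move fully is only for clarity.
    """
    if len(moves) < 2:
        return moves[0] if moves else None

    # Score every move and sort best-first.
    scored = sorted(((search(position.make(m), depth), m) for m in moves),
                    reverse=True)
    best_score, best_move = scored[0]

    # Singular only if *all* alternatives fail low against best - S.
    if all(score <= best_score - MARGIN_S for score, _ in scored[1:]):
        return best_move
    return None
```

With a 50-point margin, a move scoring 100 against alternatives of 20 and 10 is singular (and would be extended); against an alternative of 80 it is not.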
>>
>>
>>>
>>>If the algorithm helps to find some tactics but, because of smaller depth,
>>>misses other things that matter more in games than in test positions, then
>>>the algorithm is inferior.
>>>
>>>Uri
>>
>>They tested SE enough to _prove_ that it didn't hurt, if you read their
>>paper.
>
>It is possible that it did not hurt their program because they did not do
>pruning.
>
>My point is that their algorithm does hurt the programs of today, which means
>that their search is inferior relative to the search of the top programs of
>today.
>
>Uri


And I _still_ say that is simply a badly flawed assumption.  One more time:

You can accomplish the same thing with forward pruning or with selective
extensions.  They chose the latter.  Others are choosing the former.  Some
of us do _both_.  I see _nothing_ that suggests that any of the three
approaches is better than either of the other two.  They should, in theory,
all be equivalent...
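To illustrate the point that pruning and extending are two ways of reshaping the same tree, here is a toy negamax sketch with a forward-pruning hook and an extension hook in the same loop.  Everything in it, the `Node` class, the margin, the `forcing` flag, is invented for this illustration; it is not Crafty's, Ferret's, or Deep Thought's search:

```python
# Toy negamax showing where the two mechanisms plug in.  All names and
# parameters are invented for this sketch; no real engine is implied.

FUTILITY_MARGIN = 100  # arbitrary pruning margin for the sketch

class Node:
    """A hand-built game-tree node: static eval plus child nodes."""
    def __init__(self, eval, moves=(), forcing=False):
        self.eval = eval          # static evaluation, mover's view
        self.moves = list(moves)  # child positions reachable in one move
        self.forcing = forcing    # "this move deserves an extension"

def negamax(node, depth, alpha, beta):
    if depth <= 0 or not node.moves:
        return node.eval

    # Forward pruning: near the leaves, if the static eval is already
    # far above beta, cut the whole subtree (a stand-in for null-move
    # or futility pruning).
    if depth <= 2 and node.eval - FUTILITY_MARGIN >= beta:
        return node.eval

    best = -10**9
    for child in node.moves:
        # Selective extension: search a "forcing" move one ply deeper,
        # the complementary way of redistributing the same effort.
        ext = 1 if child.forcing else 0
        score = -negamax(child, depth - 1 + ext, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # ordinary alpha-beta cutoff
    return best
```

In this sketch a forcing move at the horizon gets searched one ply deeper and its refutation is seen, while a quiet node far above beta is cut off without any search; the two hooks spend and save the same currency, which is why, in theory, the approaches can come out equivalent.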


