Computer Chess Club Archives



Subject: Re: Feng-Hsiung Hsu's talk at Microsoft

Author: Robert Hyatt

Date: 11:42:33 10/08/02



On October 08, 2002 at 14:16:38, Vincent Diepeveen wrote:

>On October 08, 2002 at 10:48:52, Robert Hyatt wrote:
>
>>On October 08, 2002 at 08:04:58, Vincent Diepeveen wrote:
>>
>>>On October 08, 2002 at 00:52:38, Eugene Nalimov wrote:
>>>
>>>Thanks for your explanation, Eugene.
>>>It's very consistent with what is mentioned in the paper.
>>>
>>>Note that I'm amazed he's still defending not using nullmove. I remember
>>>the many postings from Bob in 1997 saying how dubious nullmove was.
>>>Nowadays, with more powerful processors, reality has proven otherwise.
>>
>>
>>No it hasn't, really.  There are two ways to accomplish this "thing".  You
>>can reduce the depth on uninteresting moves (null-move) or you can increase
>>the depth on interesting moves (selective extensions).  The two approaches
>>are theoretically _identical_...
>
>Not at all, Bob. See some of my old postings explaining the difference.
>There is a huge difference between storing something of depth
>n in the hashtable and storing something of depth 1 which later gets
>extended to n > 1.
>


Vincent, they are the _same_.  You can say all you want, but that won't make it
become fact.  Selective extensions (not just SE, but selective extensions in
general) are just as effective as selective depth reductions (null-move).  It is
trivially provable and well-known in theory.  They are opposite approaches that
produce identical results when both are done with the same goal in mind...
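
For reference, the null-move mechanism under discussion looks roughly like
this in C.  This is a minimal sketch under assumptions: the reduction R and
the helpers (quiesce, in_check, has_pieces, make_null, unmake_null) are
illustrative names, not code from Crafty, Diep, or Deep Blue:

    /* Null-move pruning sketch: give the opponent a free extra move.  If a
       reduced-depth search still fails high, the node is almost certainly
       >= beta and the whole subtree can be cut off. */
    #define R 2   /* typical depth reduction, assumed */

    int search(Position *pos, int alpha, int beta, int depth) {
        if (depth <= 0)
            return quiesce(pos, alpha, beta);

        /* skip the null move when in check, or when the side to move has
           only king and pawns left (zugzwang danger) */
        if (!in_check(pos) && has_pieces(pos, pos->side_to_move)) {
            make_null(pos);
            int score = -search(pos, -beta, -beta + 1, depth - 1 - R);
            unmake_null(pos);
            if (score >= beta)
                return beta;          /* null-move cutoff */
        }

        /* ... normal move loop goes here ... */
        return alpha;
    }

The extension view is the mirror image: instead of spending fewer plies on
the many uninteresting moves, you spend extra plies on the few interesting
ones, which is why the two come out the same in theory.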


>Apart from that, SE simply doesn't catch all those moves. Instead
>it sees things only when it's already too late. The only reason I use SE is
>to solve tactical tricks from the test sets quicker, and because I already
>search not much deeper than 10 to 11 plies. Above that, SE is too expensive.

If you don't believe it helps in games, why do it?

wait, don't answer that...
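
For anyone following along, the test behind singular extensions looks
roughly like the sketch below.  The margin, the halved re-search depth, and
all helper names are assumptions for illustration, not Hsu's hardware logic
or Diep's code:

    /* Singular-extension sketch: if every alternative to the best move
       fails well below the best move's score in a cheaper reduced-depth
       search, the best move is "singular" and is searched one ply deeper. */
    int is_singular(Position *pos, Move best, int best_score, int depth) {
        const int margin = 50;          /* centipawns, assumed */
        int target = best_score - margin;
        Move list[256];
        int n = generate_moves(pos, list);
        for (int i = 0; i < n; i++) {
            if (moves_equal(list[i], best))
                continue;               /* test only the alternatives */
            make_move(pos, list[i]);
            int score = -search(pos, -target, -target + 1, depth / 2);
            unmake_move(pos, list[i]);
            if (score >= target)
                return 0;               /* a rival exists: not singular */
        }
        return 1;                       /* no rival: extend the best move */
    }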

>
>Best regards,
>Vincent
>
>>
>>
>>
>>>
>>>Note 2: in Diep I combine SE with nullmove. It is indeed not easy, but
>>>a better branching factor is simply too important not to use nullmove
>>>nowadays :)
>>>
>>>The forward pruning he referred to is "no progress" pruning, and it is
>>>described in his paper.
>>
>>Not quite.  It came from the Kaissa group 25 years ago.  "No progress"
>>is something else.
>>
>>
>>>
>>>Did he also use that forward pruning in software?
>>>I ask this for 2 reasons, the most important one being that
>>>you explicitly mention it was used in hardware. The paper also
>>>explicitly refers to that.
>>>
>>>Note that it's not trivial to implement no progress pruning in hardware.
>>>
>>>It takes at least 2 extra hardware clock cycles, which is expensive,
>>>because each extra clock cycle means you get something like half a million
>>>nodes a second less.
>>
>>Actually it would be 200K nodes a second less.  It takes 10 cycles to search a
>>node.
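
(Working that arithmetic out: assuming roughly a 24 MHz chip clock, 10
cycles per node gives 24M/10 = 2.4M nodes per second.  One extra cycle
gives 24M/11 = ~2.18M, so each added cycle costs roughly 200K nodes per
second, not half a million.)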
>>
>>But there is _nothing_ that says that doing this would take _any_ additional
>>time.  It might well be slipped into the hardware at places where there are
>>already "synch delays", so that it might not slow things down at all...
>>
>>
>>
>>
>>
>>>
>>>Obviously I do a lot extra in Diep for open files. It's not slowing
>>>me down 2 to 3 times, and I'm evaluating thousands of patterns, not
>>>only 20 to 30.
>>>
>>>It's more like a 10 to 20 times slowdown :)
>>>
>>>However, it is a near-logarithmic slowdown. You can evaluate patterns
>>>in a binary fashion:
>>>
>>>if( general pattern criterion ) {
>>>  do these n/2 patterns
>>>}
>>>else {
>>>  do the other n/2 patterns
>>>}
>>>
>>>The only thing that really eats system time in Diep is scanning.
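
A minimal C sketch of that binary dispatch, with made-up predicate and
pattern-group names just to show the shape:

    /* Gate the expensive pattern groups behind one cheap general test, so
       any single call evaluates only about half of the n patterns.
       Applying the same split recursively inside each branch makes the
       dispatch cost grow with log(n) rather than n. */
    int evaluate_patterns(const Position *pos) {
        int score = 0;
        if (has_open_files(pos)) {                       /* cheap test */
            score += eval_open_file_patterns(pos);       /* first n/2 */
        } else {
            score += eval_closed_position_patterns(pos); /* other n/2 */
        }
        return score;
    }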
>>>
>>>>Wrong.
>>>>
>>>>Today I attended the talk Feng-Hsiung Hsu gave at Microsoft. Here are some
>>>>points from memory:
>>>>
>>>>They used forward pruning in the hardware, and according to Hsu it gives them
>>>>a 5x-10x speedup. He wrote about that in the book, too, but without any
>>>>details. In the talk he called that pruning "analogy cutoff" and mentioned
>>>>that "if a move is useless in some position, it is also useless in a similar
>>>>position". In the book he writes "it can be done in the hardware as long as
>>>>it does not have to be 100% correct".
>>>>
>>>>They used null-move threat detection, as well as not only singular
>>>>extensions but also extensions on positions with only 2 or 3 good replies.
>>>>They used fractional extensions. He also said that their Q-search is much
>>>>more powerful than the one usually used in software-only programs.
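
Fractional extensions are usually implemented by counting depth in units
smaller than one ply; here is a sketch of the common software scheme, where
the quarter-ply unit and the trigger weights are assumptions, not Deep
Blue's actual values:

    /* Fractional extensions: depth is counted in quarter-plies (assumed),
       so several weak extension triggers can add up to one full ply. */
    #define ONE_PLY 4

    int extension_amount(const Position *pos, Move m) {
        int ext = 0;
        if (gives_check(pos, m))          ext += 3;   /* 3/4 ply */
        if (is_recapture(pos, m))         ext += 2;   /* 1/2 ply */
        if (is_pawn_push_to_7th(pos, m))  ext += 2;   /* 1/2 ply */
        return ext > ONE_PLY ? ONE_PLY : ext;         /* cap at one ply */
    }

    /* the caller then recurses with depth - ONE_PLY + extension_amount() */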
>>>>
>>>>Hsu gave some details on why they don't use null-move:
>>>>(1) He thinks that singular extensions and null-move gave more-or-less the
>>>>same rating gain (100-200 points IIRC). Null move allows you to search 3-4
>>>>plies deeper *everywhere*, but Hsu thinks that they have enough depth. He
>>>>thinks that at the highest level it's absolutely necessary to analyze
>>>>critical lines *much* deeper, and singular extensions do exactly that.
>>>>(2) It's very hard to implement both null move and singular extensions in
>>>>the same program.
>>>>(3) When they played commercial programs that use null-move, those programs
>>>>regularly played themselves into zugzwang [by first pushing bad events over
>>>>the horizon], and Hsu thinks that is a direct result of using null move.
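
(On point (3): software programs normally guard against exactly that.  A
sketch of the usual guard, with illustrative helper and field names:)

    /* Null move is unsound in zugzwang-prone positions, so most programs
       simply refuse to try it there. */
    int null_move_allowed(const Position *pos) {
        return !in_check(pos)                               /* can't pass */
            && !only_king_and_pawns(pos, pos->side_to_move) /* zugzwang */
            && !pos->last_move_was_null;                    /* no double null */
    }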
>>>>
>>>>When asked by Tom, Hsu gave 2 examples of evaluation terms that are very
>>>>expensive to compute:
>>>>(1) Rooks on open files -- when they tried to analyze super-GM games, it
>>>>looked like those GMs value a rook on an open file much less than usually
>>>>thought. It turned out that you need not only a rook on the open file, you
>>>>also need to be able to control the 7th and 8th ranks. To properly calculate
>>>>"will you control those ranks or not" you need a lot of computing power.
>>>>(2) King safety -- before castling they calculated 3 values (for the center,
>>>>left, and right king positions after castling), and to properly calculate
>>>>control over those squares you need a lot of power.
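
As a rough illustration of why example (1) is expensive in software, here
is a bitboard-style C sketch.  The helper names and bonus values are
invented for illustration and are certainly not Deep Blue's actual terms:

    /* A rook earns the full open-file bonus only if it can penetrate:
       besides the open file itself, we require that our attackers on the
       7th-rank entry square outnumber the enemy defenders.  Counting
       attackers and defenders is the costly part. */
    int rook_open_file_bonus(const Position *pos, int rook_sq, int side) {
        int file = rook_sq & 7;
        if ((pos->pawns[side] | pos->pawns[side ^ 1]) & file_mask(file))
            return 0;                      /* file is not open */
        int entry = seventh_rank_square(file, side);
        if (attackers(pos, entry, side) > attackers(pos, entry, side ^ 1))
            return 30;                     /* rook can penetrate */
        return 10;                         /* open file, but no entry */
    }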
>>>>
>>>>[Note: here I disagree with Tom. Yes, those calculations can be done in
>>>>software, but assuming that there are ~20 such factors, and each slows the
>>>>software search by ~10-20%, the resulting slowdown of 3x or more looks like
>>>>too much to me.]
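
(The arithmetic behind that estimate: added linearly, 20 factors at 10-20%
each give a 3x-5x slowdown; compounding multiplicatively it is 1.1^20 =
~6.7x up to 1.2^20 = ~38x, which also matches Vincent's 10-20x figure
earlier in the thread.)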
>>>>
>>>>Thanks,
>>>>Eugene
>>>>
>>>>On October 07, 2002 at 10:05:27, Gian-Carlo Pascutto wrote:
>>>>
>>>>>On October 06, 2002 at 21:21:17, Robert Hyatt wrote:
>>>>>
>>>>>>Lack of null-move is not a known deficiency.  I can name a pretty
>>>>>>successful program or two (commercial) that didn't rely on it...
>>>>>
>>>>>They use other ways of forward pruning that replace nullmove. The DB
>>>>>team did nothing of that sort.
>>>>>
>>>>>--
>>>>>GCP


