Computer Chess Club Archives



Subject: Re: DB NPS (anyone know the position used)?

Author: Peter W. Gillgasch

Date: 20:38:06 01/25/00


On January 25, 2000 at 22:27:40, Ernst A. Heinz wrote:

>I understand exactly what you mean, but I am not so
>sure about your conclusions.
>
>  1.  The static evaluation hardware has "fast" and "slow"
>      parts which differ considerably in execution cycles.
>      The "slow" part is guarded by a futility-style lazy
>      mechanism which leads to differences in execution time
>      per node.
>
>  2.  The shape of the search tree itself leads to differences
>      in execution time per node because the number of moves
>      followed before you get a cutoff varies.
>
>(1) + (2) ==> search speed as measured in nodes per second
>              may differ significantly in some positions even
>              for chess hardware such as Deep Blue!

I have thought about both issues before posting.

Ad 1: This was true for Deep Thought, where the difference
      between the fast and the slow eval was noticeable
      due to the sequential properties introduced by
      the Xilinx devices looking at the position on a
      file-by-file basis. Since DBII is not hampered
      by issues like FPGA capacity, this was the first
      bottleneck to be removed. As Dave Footland
      has reported, they had interesting things in their
      evaluation. In the light of this, and of things
      Hans has said, it is extremely unlikely
      that they ever take a "slow eval" in DB, since
      (a) there is probably no speed gain in doing so,
      and (b) things like pins and king safety can add
      up quite a bit. Taking a "slow eval" makes no
      sense in a machine which knows that the queen is
      pinned and will be lost for a bishop, or that
      there is a mate in one against your king.
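The futility-style lazy mechanism under discussion can be
sketched roughly as follows. This is a minimal illustration
of the general technique, not Deep Blue's (or Deep
Thought's) actual design; the margin value and the stubbed
eval terms are purely hypothetical.

```c
#define LAZY_MARGIN 300  /* hypothetical margin, centipawns */

/* "fast" part: material and other cheap terms (stubbed) */
static int eval_fast(int material) { return material; }

/* "slow" part: pins, king safety, etc. (stubbed) */
static int eval_slow(void) { return 0; }

int evaluate(int material, int alpha, int beta)
{
    int score = eval_fast(material);

    /* If the fast score lies so far outside the window
     * that the slow terms could not bring it back,
     * skip the slow part entirely. */
    if (score + LAZY_MARGIN <= alpha ||
        score - LAZY_MARGIN >= beta)
        return score;            /* "lazy" exit */

    return score + eval_slow();  /* full evaluation */
}
```

The point made above is exactly that the slow terms (a
pinned queen, a mate threat) can exceed any such margin,
which is why skipping them is a design error.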

      Of course, we have no proof of whether FHH
      repeated that "slow eval" design error in DB.

Ad 2: Nodes are counted after generating a move, executing
      it, evaluating the resulting position, retracting
      the move, and routing the score through alpha-beta
      or a variant thereof. It is obvious that with a
      hardware one-by-one generator there is no cost
      difference between "give me the first move" and
      "give me the next move". The timing of the inner
      loop stays the same, no matter whether the first
      move produces a cutoff or the last one does. The
      time spent in a father node that expands too many
      siblings, compared with a perfectly ordered tree,
      should be offset by each superfluous sibling
      incrementing the node count in constant time.
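The accounting argument above can be sketched like this:
every expanded sibling costs the same and bumps the node
counter exactly once, whether it is the first move that
cuts off or the last one that fails low. The generator
interface here is hypothetical, not the DB hardware
protocol; scores stand in for full recursive searches.

```c
static long nodes = 0;

/* hypothetical one-by-one generator: yields the score of
 * one sibling per call, or -1 when exhausted */
static int next_move_score(const int *scores, int n, int *idx)
{
    if (*idx >= n) return -1;
    return scores[(*idx)++];
}

/* search one node: constant cost per expanded sibling */
int search_node(const int *scores, int n, int alpha, int beta)
{
    int idx = 0, s;
    while ((s = next_move_score(scores, n, &idx)) != -1) {
        nodes++;                 /* one count per sibling */
        if (s >= beta)
            return beta;         /* cutoff: stop expanding */
        if (s > alpha)
            alpha = s;
    }
    return alpha;
}
```

With scores {10, 200, 5} and beta = 100, the second sibling
produces the cutoff, so exactly two nodes are counted.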

      In software like DarkThought, with its staged move
      generation scheme, there is potentially a significant
      difference in cost, depending on how well the move
      ordering heuristics fit the position. If you need to
      expand siblings beyond the killer move generation
      zone, you pay dearly. In a one-by-one setting it
      does not matter at all.
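The cost asymmetry of staged generation can be made
concrete with a toy model. The stage names and relative
costs below are illustrative only, not DarkThought's
actual code: the point is that a cutoff past the killer
stage has already paid for an expensive quiet-move
generation pass.

```c
enum stage { HASH_MOVE, CAPTURES, KILLERS, QUIET_MOVES };

/* hypothetical relative cost of generating each stage */
static const int stage_cost[] = { 1, 10, 2, 40 };

/* total generation cost paid before a cutoff in stage s */
int cost_until(enum stage s)
{
    int total = 0;
    for (int i = HASH_MOVE; i <= (int)s; i++)
        total += stage_cost[i];
    return total;
}
```

In this model a cutoff in the killer stage costs 13 units,
while one that falls among the quiet moves costs 53 --
whereas a one-by-one hardware generator pays the same per
move regardless of where the cutoff occurs.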

-- Peter


