Computer Chess Club Archives



Subject: Re: DB Chip will kill all commercial programs or.....

Author: Robert Hyatt

Date: 14:06:07 05/14/99



On May 14, 1999 at 15:24:38, Dave Gomboc wrote:

>On May 14, 1999 at 15:02:28, Robert Hyatt wrote:
>
>>On May 14, 1999 at 14:47:20, Dave Gomboc wrote:
>>
>>>On May 14, 1999 at 13:39:30, Robert Hyatt wrote:
>>>
>>>>On May 14, 1999 at 12:40:35, Dave Gomboc wrote:
>>>>
>>>>>On May 14, 1999 at 10:00:03, Robert Hyatt wrote:
>>>>>
>>>>>>On May 13, 1999 at 23:00:53, Eelco de Groot wrote:
>>>>>>
>>>>>>>
>>>>>>>Robert, Mr. Hyatt, thanks for all the new info on the 'Deep Blue for consumers'
>>>>>>>chip! Does Mr. Hsu already have a name for it? I suppose you could call it 'Baby
>>>>>>>Blue', but maybe that is too innocent a name for this monster... (A topic for
>>>>>>>the polls, maybe, choosing a good name?) Regarding your thoughts on 'guts', I
>>>>>>>am not a programmer, but does not the 'soul' of a program reside for a large
>>>>>>>part in its positional understanding also? Since the chip can be operated in
>>>>>>>parallel to a software program, could it not be used mainly for a deep tactical
>>>>>>>evaluation? Letting the program do a 1-ply search on all the positional features
>>>>>>>Deep Blue is not very good at, while the chip does a 4-ply, mainly tactical,
>>>>>>>search? It would be up to the programmer then to decide how much weight each of
>>>>>>>the two evaluations must get to retain the original character of the program. Am
>>>>>>>I making any sense here?
>>>>>>>
>>>>>>
>>>>>>Yes... but the problem here is that this is what programs like Fritz/Nimzo/etc.
>>>>>>do to an extent.  They do a lot of work at the root of the tree, and then have
>>>>>>a very primitive evaluation at the tips.  And they make gross positional
>>>>>>mistakes as a result.  The _right_ way to search is a good search, followed by
>>>>>>a _full_ positional evaluation.  And that is _very_ slow (which is why the fast
>>>>>>programs don't do this).  DB _does_, however, because they do the eval in
>>>>>>hardware and the cost is minimal compared to our cost.
>>>>>
>>>>>"_Right_" depends on what works the best.  If you find assumptions that carry
>>>>>over to all of the leaf positions that matter, and save yourself from the cost
>>>>>of eval at each one of them, you will be much faster.  Sometimes a leaf position
>>>>>that matters will get hit, and you get toasted up.  Tough one. :)  Zobrist
>>>>>hashing is no different.  I don't think it is categorically an error to do such
>>>>>a thing.
>>>>>
>>>>
>>>>I can't think of a single thing that I can evaluate at the root, and then
>>>>expect for that to still hold 20 plies into the tree.  Not one single thing.
>>>>Not even the fact that we are in an opening, or middlegame, or endgame position,
>>>>because a _lot_ can happen 20 plies from the root.  And if you watch Crafty play
>>>>on ICC, in blitz games it generally searches 9-10 plies all the time, except
>>>>for when it reaches simpler endgames where this goes up to 15-20.  And for those
>>>>9-10 ply searches, the PV is often 20+ moves long.  What would you notice at the
>>>>root and expect that it _still_ applies that far away from the root?  Very
>>>>little, IMHO.
>>>
>>>Good argument, but what if we decide to do this, e.g., 2 or 4 plies above the
>>>leaves
>>>instead of at the root?  Now the error is reduced, and the time savings are
>>>still mostly present.
>>>
>>>Dave
>>
>>
>>How much error will you accept?  I.e., most of my 12-ply searches have PVs
>>that are much longer... say an average of 20 plies.  Is searching 20 plies
>>beyond the 'root eval' bad?  I think so.  Is only searching 16 plies beyond
>>a 4-ply "root eval" bad?  I _still_ think so.  And if you run that eval out
>>to 10 plies the computational cost starts showing up...
>
>I'm not sure if you misunderstood me or not.  I did not suggest preprocessing at
>4 plies from the root (searching 16 plies beyond the 4-ply "root eval"), but
>preprocessing when depth_remaining = 4.  Sure, the computational cost starts
>showing up, but I am thinking it should still be significantly less than always
>evaluating when depth_remaining = 0.  I don't know if the error rate would be
>tolerable or not, but it seems worth a try, anyway.
>
>I can see this not being worth it if you are able to effectively hash common
>subterms (like you do for pawn structure): this would already capture the
>overlapping computation.  Are there terms in a typical chess eval that are not
>readily amenable to hashing as subterms yet cost much more to assess than to
>determine if they need assessment?  I am thinking that these are the terms that
>would be the candidates for preprocessing slightly above the leaf node level.
>
>Dave
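
For concreteness, here is roughly what that "preprocess at depth_remaining = 4"
idea looks like in a plain negamax search.  This is a hedged sketch, not code
from Crafty or anywhere else: Search, Evaluate, Material, GenerateMoves,
MakeMove, UnmakeMove, and PREPROCESS_DEPTH are all invented for illustration.

  /* Sketch: do the expensive positional evaluation once when
     depth_remaining reaches PREPROCESS_DEPTH, then reuse that score
     (material-adjusted) at every leaf underneath it. */

  #define PREPROCESS_DEPTH 4

  typedef struct Position Position;     /* opaque board state (assumed) */
  typedef int Move;

  int  Evaluate(Position *p);           /* full eval, side-to-move view */
  int  Material(Position *p);           /* material only, same view     */
  int  GenerateMoves(Position *p, Move *list);
  void MakeMove(Position *p, Move m);
  void UnmakeMove(Position *p, Move m);

  /* at the root, call Search(p, alpha, beta, depth, 0); the last
     argument is ignored until depth falls to PREPROCESS_DEPTH */
  int Search(Position *p, int alpha, int beta, int depth, int positional) {
    if (depth == PREPROCESS_DEPTH)
      positional = Evaluate(p) - Material(p);  /* positional terms only */
    if (depth == 0)
      return positional + Material(p);         /* cheap leaf "eval"     */
    Move moves[256];
    int n = GenerateMoves(p, moves);
    for (int i = 0; i < n; i++) {
      MakeMove(p, moves[i]);
      /* negamax: flip the cached positional score with the side to move */
      int score = -Search(p, -beta, -alpha, depth - 1, -positional);
      UnmakeMove(p, moves[i]);
      if (score >= beta) return beta;
      if (score > alpha) alpha = score;
    }
    return alpha;
  }

The weak point is the stale 'positional' term: nothing that happens in the
last PREPROCESS_DEPTH plies can ever change it.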


The problem with hashing is collisions/overwriting.  Pawns work well because
(a) good programs evaluate pawn structure pretty well, so that wild pawn pushes
get quickly cut off, and (b) pawns move infrequently compared with the rest
of the pieces.
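
In code, the pawn hash looks roughly like this.  This is a minimal sketch with
invented names and sizes (PawnKey, PawnEntry, PAWN_TABLE_SIZE, ...), not code
from Crafty or any other program:

  /* Pawn hash sketch: the signature is built from pawn placement only,
     so any two positions with the same pawns share one cached score. */

  #include <stdint.h>

  #define PAWN_TABLE_SIZE (1 << 16)    /* entries; must be a power of 2 */

  typedef struct {
    uint64_t key;                      /* pawn-only Zobrist signature   */
    int      score;                    /* cached pawn-structure score   */
  } PawnEntry;

  uint64_t  pawn_random[2][64];        /* random 64-bit numbers, filled
                                          once at startup               */
  PawnEntry pawn_table[PAWN_TABLE_SIZE];

  /* XOR one random number per pawn on the board.  Pieces and kings are
     deliberately left out of the signature. */
  uint64_t PawnKey(int pawns[2][64]) {
    uint64_t key = 0;
    for (int c = 0; c < 2; c++)
      for (int sq = 0; sq < 64; sq++)
        if (pawns[c][sq])
          key ^= pawn_random[c][sq];
    return key;
  }

  int EvaluatePawns(int pawns[2][64]) {
    uint64_t key = PawnKey(pawns);
    PawnEntry *e = &pawn_table[key & (PAWN_TABLE_SIZE - 1)];
    if (e->key == key)
      return e->score;                 /* hit: skip the whole analysis  */
    int score = 0;                     /* ...full pawn-structure
                                          analysis would go here...     */
    e->key = key;
    e->score = score;
    return score;
  }

In practice the signature is updated incrementally as pawns move rather than
recomputed from scratch, and because pawns move so rarely the hit rate is
enormous; the expensive analysis almost never actually runs.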

But notice that hashing pawn scores using only the pawn positions to form the
hash signature means you can _only_ evaluate pawns, not pieces.  If you
want to evaluate passed pawns with a king supporting them as being better,
you can't hash that using only the pawn positions, of course, since the king
has to be factored in.  And many piece terms interact with other piece
terms, making hashing difficult if not impossible... because you would need
to hash all of the piece and pawn positions, and there isn't anywhere near
enough memory to hold that...
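
One common compromise, again a hedged sketch with invented names: keep the
king-dependent part out of the cached score, but let the pawn-hash entry
record which pawns are passed, so the full evaluation can add the king term
cheaply for every position.

  #include <stdint.h>
  #include <stdlib.h>

  typedef struct {
    uint64_t key;
    int      score;      /* pawn-only terms: isolated, doubled, ...     */
    uint8_t  passed[2];  /* one bit per file that has a passed pawn     */
  } PawnHashEntry;

  static int Distance(int a, int b) {  /* Chebyshev (king) distance     */
    int df = abs((a & 7) - (b & 7));
    int dr = abs((a >> 3) - (b >> 3));
    return df > dr ? df : dr;
  }

  /* Called from the full eval after the pawn-hash probe.  The kings are
     not in the pawn signature, so this term is recomputed every time.
     Scores are from white's point of view; front_sq[c][f] holds the
     lead passed pawn of color c on file f (an assumed helper array). */
  int PassedPawnKingSupport(PawnHashEntry *e, int king_sq[2],
                            int front_sq[2][8]) {
    int bonus = 0;
    for (int c = 0; c < 2; c++)
      for (int f = 0; f < 8; f++)
        if (e->passed[c] & (1 << f)) {
          int d = Distance(king_sq[c], front_sq[c][f]);
          bonus += (c == 0 ? 1 : -1) * (8 - d);  /* nearer king, better */
        }
    return bonus;        /* added to e->score by the caller             */
  }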

As far as evaluating 4 plies from the 'leaf' positions goes, do you want to
evaluate a position as good, and then let me initiate a sequence of captures
that leaves you with isolated pawns, a rook on your 2nd rank, and a knight in
a big hole at e3?

Even 4-5 ply searches screw up when the evaluation is called only at the
leaf positions and the score is then adjusted by a capture search that changes
only the material value...
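
That failure mode, one more hedged sketch (Quiesce and its helpers are
invented names): the quiescence search starts from a stale positional score
and adjusts material only, so the isolated pawns, the rook on the 2nd, and
the knight on e3 never show up in the number it returns.

  typedef struct Position Position;   /* opaque board state (assumed)  */
  typedef int Move;

  int  Material(Position *p);         /* side-to-move perspective      */
  int  GenerateCaptures(Position *p, Move *list);
  void MakeMove(Position *p, Move m);
  void UnmakeMove(Position *p, Move m);

  int Quiesce(Position *p, int alpha, int beta, int positional) {
    int stand_pat = positional + Material(p);  /* 'positional' is stale */
    if (stand_pat >= beta) return beta;
    if (stand_pat > alpha) alpha = stand_pat;

    Move captures[64];
    int n = GenerateCaptures(p, captures);
    for (int i = 0; i < n; i++) {
      MakeMove(p, captures[i]);
      /* only Material() changes below this point; every positional
         concession the captures create is invisible to this search */
      int score = -Quiesce(p, -beta, -alpha, -positional);
      UnmakeMove(p, captures[i]);
      if (score >= beta) return beta;
      if (score > alpha) alpha = score;
    }
    return alpha;
  }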


