Computer Chess Club Archives


Subject: Re: Selective Searching

Author: Robert Hyatt

Date: 14:09:28 02/08/98

On February 08, 1998 at 14:10:44, Don Dailey wrote:

>Hi Bob,
>
>>it was "debunked" because of hardware issues.  Chess 4.6 searched 6
>>plies full-width on a Cyber 176.  Mac Hack could barely search 5
>>plies, with no q-search, using severely tapered forward pruning,
>>because of hardware speeds in the 60's...  i.e., Greenblatt couldn't
>>even do a 2-ply search full-width at those speeds.
>>
>>What selectivity is all about is being careful at the root by going
>>full-width, and taking risks farther out in the tree where you
>>believe that even if you make a mistake, you have time to catch it in
>>the full-width part of the search next time and "fix" it.
>>
>>It's a trade-off...  do you want to find deeper tactical things?  Or
>>do you want to find deeper positional things?  Selectivity overlooks
>>things.  No way to avoid it.  The question is, does what it finds
>>offset what it overlooks?  Good question.  But your estimate of +100
>>is simply a wild guess.  I'd suspect that a good brute-force program
>>will play just as well as a good selective searcher.  They will each
>>find things the other overlooks.  The selective searcher will find
>>deeper things, the full-width searcher will find things the selective
>>searcher pruned away in error.
>>
>>Take your pick.  I tend to err on the side of simpler algorithms...
>>fewer bugs means more wins...
>
>
>
>You need to expand on your statement that a good brute-force program
>will play just as well as a good selective searcher.  I don't agree
>with this unless you are talking semantics (like calling Richard
>Lang's program full-width because he does a few full-width plies).
>Marty's program may be the closest thing among the really strong
>ones, but I think he just puts all the selectivity in the quiescence
>search.
>
>- Don

Here's what I "think":

A full-width search is going to hit N plies deep, using traditional
ideas like iterative deepening, etc.
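
To make that concrete, here's a minimal sketch of the iterative-deepening
driver I mean.  search() and out_of_time() are stand-ins for the engine's
real alpha-beta search and clock check; the names are hypothetical, not
taken from any actual program:

#define MAX_DEPTH 64
#define INF 1000000

extern int search(int depth, int alpha, int beta); /* full-width alpha-beta */
extern int out_of_time(void);                      /* clock check */

int iterate(void) {
    int best = 0;
    /* Search to depth 1, then 2, then 3...  Each iteration seeds the
       move ordering for the next one, and there is always a completed
       result in hand when the clock runs out. */
    for (int depth = 1; depth <= MAX_DEPTH; depth++) {
        int score = search(depth, -INF, +INF);
        if (out_of_time())
            break;          /* discard the unfinished iteration */
        best = score;
    }
    return best;
}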

A selective program is going to reach depth N+X, where X depends on how
aggressively it prunes in the forward direction, i.e., null-move is one
way, static analysis is another, etc.
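
For the null-move flavor, here is a sketch of how the pruning sits
inside a negamax search.  The move primitives (make_move(),
unmake_move(), and so on), the 256-move buffer, and the reduction R=2
are illustrative assumptions, not code from Crafty or anyone else:

typedef int Move;

extern int  in_check(void);
extern void make_null_move(void);
extern void unmake_null_move(void);
extern int  generate_moves(Move *moves);
extern void make_move(Move m);
extern void unmake_move(Move m);
extern int  quiesce(int alpha, int beta);   /* captures-only search */

#define R 2                                 /* null-move depth reduction */

int search(int depth, int alpha, int beta) {
    if (depth <= 0)
        return quiesce(alpha, beta);

    /* Null move: hand the opponent a free move and search shallower.
       If we still score >= beta, the position is almost surely good
       enough to cut off without searching any real move.  This is
       exactly the risk described above: zugzwang positions and deep
       threats can slip through the reduced search. */
    if (!in_check()) {
        make_null_move();
        int score = -search(depth - 1 - R, -beta, -beta + 1);
        unmake_null_move();
        if (score >= beta)
            return beta;
    }

    Move moves[256];
    int n = generate_moves(moves);
    for (int i = 0; i < n; i++) {
        make_move(moves[i]);
        int score = -search(depth - 1, -beta, -alpha);
        unmake_move(moves[i]);
        if (score >= beta)
            return beta;                    /* fail-hard cutoff */
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}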

When I talk about "selective" vs. "brute-force", however, the idea is
this: selectively throwing out moves (or selectively including them in
the search) requires a finite amount of computation, and making that
decision more accurate ramps the requirement up.  What if it ramps up
enough that the very selective program does just as much work deciding
what to prune as the full-width program spends searching those moves
that are no good?
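
To show where that cost lives, here is the same move loop with a
hypothetical static cull test, looks_safe_to_prune(), spliced in
(reusing the primitives from the sketch above).  Every move near the
horizon pays for the test whether it gets pruned or not, and the more
reliable the test has to be, the more it starts to resemble the shallow
search it was meant to replace:

extern int looks_safe_to_prune(Move m);   /* hypothetical static analysis */

int selective_search(int depth, int alpha, int beta) {
    if (depth <= 0)
        return quiesce(alpha, beta);

    Move moves[256];
    int n = generate_moves(moves);
    for (int i = 0; i < n; i++) {
        /* The cull test itself costs time on *every* candidate move.
           A cheap test throws away good moves; an accurate one does
           nearly as much work as searching the move would have. */
        if (depth <= 2 && looks_safe_to_prune(moves[i]))
            continue;                     /* selective: never searched */
        make_move(moves[i]);
        int score = -selective_search(depth - 1, -beta, -alpha);
        unmake_move(moves[i]);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}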

Something tells me that both converge to the same asymptotic bound at
some point.  I.e., I think it is just as hard to cull moves as it is to
search them, if the goal is to beat world-champion-class players.  My
old selective program looked like a genius in some positions, and like
an idiot in others.  Too many of the latter will kill overall
performance, no matter how many times you look like a genius.  It's far
easier to lose based on one bad move than it is to win after playing
three Fischer-like moves...  Selective programs run the risk of pruning
away a good move here and there, whether the selectivity comes from
null-move (Crafty) or something else (Rebel, etc.)


