Computer Chess Club Archives



Subject: Re: To Christophe Theron, about Chess Tiger and Genius and Rebel

Author: Robert Hyatt

Date: 10:08:44 10/22/99



On October 21, 1999 at 15:55:21, Jeremiah Penery wrote:

>On October 21, 1999 at 12:54:47, Christophe Theron wrote:
>
>>
>>But no program of today does brute force. All the good programs are highly
>>selective.
>>
>>If you are not talking about "true brute force" (simple alpha-beta + QSearch) I
>>don't know what you mean. Everything else is selective.
>
>We must be using different definitions.  I don't consider the use of null-move,
>for example, a selective search.



A word of caution.  "Selective search" has a precise definition in the CS
literature, dating all the way back to Shannon's "Programming a Computer for
Playing Chess" in 1950.

Selective search means, very explicitly, to generate all moves at a node in
the tree, and then to dismiss some of those a priori without any searching of
any kind. This is also called 'forward pruning'.
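In rough code terms, a Shannon-style selective search looks something like the
sketch below. The toy game and every helper name here are invented purely for
illustration; they don't come from any real program:

```python
INF = float("inf")

# Toy game, invented for this sketch: a "position" is an integer, a "move"
# adds a small delta, and the sign flips for the side to move (negamax).
def generate_moves(pos):
    return [-2, -1, 0, 1, 2, 3]          # ALL legal moves at this node

def static_score(pos, move):
    return move                          # cheap a-priori ranking heuristic

def make_move(pos, move):
    return -(pos + move)                 # negate: opponent's point of view

def evaluate(pos):
    return pos

def selective_search(pos, depth, keep=3):
    """Generate all moves, then dismiss most of them a priori (forward
    pruning): only the `keep` best-ranked moves are ever searched."""
    if depth == 0:
        return evaluate(pos)
    moves = sorted(generate_moves(pos),
                   key=lambda m: static_score(pos, m), reverse=True)
    best = -INF
    for move in moves[:keep]:            # moves[keep:] are never searched
        best = max(best, -selective_search(make_move(pos, move),
                                           depth - 1, keep))
    return best
```

The whole point is the `moves[:keep]` slice: everything past it is thrown away
before any searching of any kind, which is exactly what "forward pruning"
means.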

The idea of selective extensions was mentioned by Shannon, in the context of
what he called a "variable depth search," which is exactly what all of our
extensions and null-move reductions actually accomplish.
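To see why a null-move reduction is a variable *depth* device rather than
a-priori move pruning, here is a sketch over the same kind of invented toy
game (integer positions, sign-flipping moves; none of these names come from a
real engine):

```python
INF = float("inf")
R = 2                                    # typical null-move depth reduction

# Toy game, invented for illustration: positions are integers, moves add
# small deltas, and the sign flips for the side to move.
def generate_moves(pos):
    return [-2, -1, 0, 1, 2, 3]

def make_move(pos, move):
    return -(pos + move)

def make_null_move(pos):
    return -pos                          # the side to move simply "passes"

def evaluate(pos):
    return pos

def search(pos, depth, alpha, beta):
    if depth == 0:
        return evaluate(pos)
    # Null-move reduction: let the opponent move twice in a row, searched
    # R plies shallower. If even that fails high, cut off without trying a
    # single real move. The depth varies, but no move is pruned a priori.
    if depth > R:
        if -search(make_null_move(pos), depth - 1 - R, -beta, -beta + 1) >= beta:
            return beta
    for move in generate_moves(pos):
        score = -search(make_move(pos, move), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta                  # fail-hard beta cutoff
        alpha = max(alpha, score)
    return alpha
```

Subtrees that survive the null-move test get the full depth; subtrees refuted
by it are searched R plies shallower, giving the variable-depth tree Shannon
described.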

I personally don't call a "null-move search" a "selective search", because
that misuses a previously established term, any more than I would redefine
"ampere" to mean resistance rather than use the accepted "ohm".  I don't know
of anyone doing what I would call a purely selective search today, although
our q-search is a perfect example of one: we toss out some moves with no
searching of any kind, while keeping others and searching them deeper, all in
a pretty arbitrary way.
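To make the q-search point concrete, here is a sketch of why it is "purely
selective" in Shannon's sense: every quiet move is tossed out unsearched, and
only captures are followed. The toy game is invented for this sketch (a
position is a pair so the capture tree bottoms out); no name here comes from
any real program:

```python
INF = float("inf")

# Invented toy game: a position is (score, captures_left). Quiet moves nudge
# the score; captures swing it harder but consume a finite budget, so the
# capture tree always terminates.
def evaluate(pos):
    score, _ = pos
    return score

def generate_moves(pos):
    _, caps = pos
    moves = [("quiet", d) for d in (-1, 0, 1)]
    if caps > 0:
        moves += [("capture", d) for d in (-3, 2)]
    return moves

def is_capture(move):
    return move[0] == "capture"

def make_move(pos, move):
    score, caps = pos
    kind, delta = move
    return (-(score + delta), caps - 1 if kind == "capture" else caps)

def qsearch(pos, alpha, beta):
    stand_pat = evaluate(pos)            # assume the side to move can stand pat
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for move in generate_moves(pos):
        if not is_capture(move):
            continue                     # quiet moves: dismissed, never searched
        score = -qsearch(make_move(pos, move), -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

The `continue` line is the forward pruning: most moves are dismissed with no
searching of any kind, while captures are searched arbitrarily deeper.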

It makes more sense to keep a common vocabulary when we discuss things, so
that every post doesn't require a personalized "glossary of terms" before we
can communicate.  :)




>  The way I am defining selective search is more like what a human might do.  A
>truly selective program will completely throw out most of the moves _before_
>even really searching them.

This was Shannon's definition as well.



>  For example, in the initial position, a selective searcher might search ONLY
>the moves 1. e4, 1. d4, and 1. Nf3.  Other moves aren't even considered.  This
>same strategy applies to _all_ positions.
>This is the kind of selectivity they were doing in the past.
>
>Programs of today, however, use a more brute-force approach.  They may
>eventually throw out some moves, but only _after_ they are searched and shown to
>be inferior.  This is the difference.
>
>>>>Many programs use this concept differently: they do only N plies selective. For
>>>>example if N=6 that means that a 6 plies deep search will be totally selective.
>>>>On a 8 plies deep search, the first 2 plies (near the root) are going to be
>>>>searched without any selection, then the next 6 are going to be totally
>>>>selective.
>>>
>>>I am suggesting something vaguely time-based.  If you won't have time to
>>>complete iteration 'x' within the time controls, then switch to the faster (to
>>>reach greater depth) selective search.
>>
>>Why not.
>
>Does this mean it sounds like a good idea? :)
>
>>>>This is not difficult to do. But it's difficult to decide (depending on how good
>>>>your selectivity is) the value of N. For example The King uses by default N=6,
>>>>but some testers think that N=10 is not bad.
>>>
>>>I understood it this way:  They do the normal x-ply search, and ADD another
>>>y-ply selective.  So at depth 1, they would do the normal 1-ply brute-force
>>>search, plus the y plies of extensions.  Then 2 + y, etc.  This is how it is
>>>explained on some webpages I've seen, anyway.
>>
>>That's what Genius displays for example. Depth 3 means it is searching 3 plies
>>brute force (of course I am not sure it is really brute force, but I think it
>>is), then 12 plies selective.
>
>Yes, that's what I was thinking.
>
>>>>>>The paradox is that I have found that highly selective systems produce the best
>>>>>>results when time controls are fast, or the computer is slow. That is: when you
>>>>>>cannot reach high depths!!!
>>>>>
>>>>>
>>>>>I don't think this is so much of a paradox.  If you are not using a selective
>>>>>algorithm, you can only reach very low depths, and strength will be quite low.
>>>>>If you are being selective, you can achieve greater depth even on a slower
>>>>>machine, so you will not be as weak.  This is why all the programs (before Chess
>>>>>4.x ?) were selective, because they didn't have the speed to get past a few ply
>>>>>brute-force.
>>>>
>>>>But they did not really know how to be selective, that's why Chess 4.x had
>>>>better results.
>>>
>>>So, you're saying that _none_ of the programs before Chess 4.x knew how to do a
>>>good selective search?  I'm not exactly sure what you mean by this statement -
>>>How is it that they did 'not really know how to be selective'?  Chess 4.x had
>>>good results because it was the first to be fast enough to be able to use a more
>>>brute-force approach and still attain respectable depths.
>>
>>I say they did not know how to be selective because you can do the following
>>experiment:
>>
>>Take SargonV for PC, which is a brute force program (as far as I know). Run it
>>on a PC that is slow enough so you can have it computing 6 to 7 plies at a given
>>time controls.
>>
>>This Sargon can probably be considered a good simulation of Chess 4.x, I think.
>>At least it is considered one of the best brute force programs.
>>
>>Then take a good selective program like Rebel or Genius, and let them play a
>>match at equal time controls.
>>
>>The selective program will CRUSH Sargon.
>>
>>If there had been any good selective program in 1979-1980, Chess 4.x would have
>>had more problems.
>
>If you use my definition of what a _true_ selective searcher is, this will not
>be true.  It is why programs of today are not _truly_ selective.
>
>>>>>>They produce good results also at longer time controls or on fast computers, but
>>>>>>the advantage they give is less easy to see.
>>>>>
>>>>>
>>>>>This is because the advantage selective searchers get in depth on some lines is
>>>>>offset by the greater accuracy of the brute-force search.  When the brute-force
>>>>>search is deep enough, it will eventually be better than the selective search.
>>>>
>>>>I don't think so.
>>>
>>>So a brute-force search with 100 ply would never be better than a selective
>>>search? 1000 ply? :)
>>
>>Right. On some selected position the selective program would have problems, but
>>in a real long match the brute force approach has NO chance.
>
>Selective search techniques (whether yours or my definition of selective search)
>are not perfect - they will miss some things.  However, a brute-force search
>(your definition) will _never_ miss anything.  Because of pruning, a
>selective-searcher can miss things (especially at much higher depth, because by
>then much more of the tree is being pruned) that the brute-force search will be
>able to capitalize on.  At 100-ply search, I think it would be impossible to
>beat a true brute-force (alpha-beta + quiescence only) program.  Draws would
>happen, surely, but I don't think any wins.  This is because of the inherent
>inaccuracy in pruning methods used for 'selective search'.
>
>>Unless you let it see deep enough to see the end of the game. But I'm not sure
>>discussing this case makes any sense. :)
>
>Actually, that's sort of what I'm doing. :/  100-ply isn't the end of the game,
>necessarily, but it's probably deep enough that it doesn't matter.
>
>Jeremiah




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.