Author: Christophe Theron
Date: 15:26:22 10/21/99
On October 21, 1999 at 15:55:21, Jeremiah Penery wrote:
>We must be using different definitions. I don't consider the use of null-move,
>for example, a selective search.
> The way I am defining selective search is more like what a human might do. A
>truly selective program will completely throw out most of the moves _before_
>even really searching them.
> For example, in the initial position, a selective searcher might search ONLY
>the moves 1. e4, 1. d4, and 1. Nf3. Other moves aren't even considered. This
>same strategy applies to _all_ positions.
>This is the kind of selectivity they were doing in the past.
>
>Programs of today, however, use a more brute-force approach. They may
>eventually throw out some moves, but only _after_ they are searched and shown to
>be inferior. This is the difference.
There is no difference. It does not matter what you do to prune away moves. If
your definition is simply that to be selective you must not try the move before
you prune it away, I don't buy it.
You can consider the selection method to be a "black box". You don't have to
know what happens inside to say that a program is selective or not.
For example, you might not actually call "makemove" in your selection
algorithm, but what you do may act exactly as if you had made the move and
looked at what happens afterwards.
You can define selection the way you propose, but it is too restricted to be
useful.
Null move, for example, could be implemented to some extent without actually
making the move. In that implementation you would call it selection; in the
classical implementation you would say: "hey, this is no selection at all".
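For what it's worth, here is roughly what the classical implementation looks
like, as a minimal sketch in C. All the names (Position, qsearch, in_check,
make_null_move, unmake_null_move, NULL_R) are illustrative assumptions, not
taken from any real engine:

/* Null-move pruning sketch. The helpers are assumed, not a real API. */
typedef struct Position Position;
int  qsearch(Position *pos, int alpha, int beta);
int  in_check(const Position *pos);
void make_null_move(Position *pos);
void unmake_null_move(Position *pos);

#define NULL_R 2                 /* the usual depth reduction */

int search(Position *pos, int depth, int alpha, int beta)
{
    if (depth <= 0)
        return qsearch(pos, alpha, beta);

    /* Give the opponent a free extra move. If the score is still
       >= beta even after doing nothing, this node is almost surely
       good enough, so we prune it without trying a single real move. */
    if (!in_check(pos) && depth > NULL_R) {
        make_null_move(pos);
        int score = -search(pos, depth - 1 - NULL_R, -beta, -beta + 1);
        unmake_null_move(pos);
        if (score >= beta)
            return beta;         /* "selected away" without being searched */
    }

    /* ... the normal alpha-beta move loop would follow here ... */
    return alpha;                /* placeholder for the rest of the search */
}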
I think you are going to be upset, but I consider all the following to be
selection:
* null move and related techniques
* extensions
* QSearch
Selectivity can be additive or subtractive. You restrict selectivity to
"immediate subtractive", but in fact doing an extension is subtractive
selectivity, because you are going to look at everything else less deeply.
QSearch is additive selectivity, because you generate only some moves, not all
(see the sketch below). With your definition of selectivity you miss all these
successful techniques.
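To make the additive side concrete, here is a quiescence search sketch in C,
again with assumed helper names (evaluate, generate_captures, make_move,
unmake_move). The whole point is that it generates only captures, so every
quiet move is selected away by construction:

typedef struct Position Position;
typedef int Move;
int  evaluate(const Position *pos);
int  generate_captures(Position *pos, Move *list);   /* captures only! */
void make_move(Position *pos, Move m);
void unmake_move(Position *pos, Move m);

int qsearch(Position *pos, int alpha, int beta)
{
    /* "Stand pat": the side to move may simply decline all captures. */
    int score = evaluate(pos);
    if (score >= beta)
        return beta;
    if (score > alpha)
        alpha = score;

    Move list[256];
    int n = generate_captures(pos, list);    /* some moves, not all */
    for (int i = 0; i < n; i++) {
        make_move(pos, list[i]);
        score = -qsearch(pos, -beta, -alpha);
        unmake_move(pos, list[i]);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}

An extension is the mirror image: give one move an extra ply, and every sibling
line is searched relatively less deeply. That is subtractive selection, only
expressed the other way around.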
Anyway, even humans do much more than your definition of selectivity. Almost
all moves pruned by humans are pruned because the player says: "after I play
this move, the opponent plays that one, and I don't like the position".
Also, all humans use null move: "if I play this, then that, I have a good
position. Now let's see if my opponent can prevent me from playing that after I
have played this".
>>>>Many programs use this concept differently: they make only the last N plies
>>>>selective. For example, if N=6, that means a 6-ply deep search will be totally
>>>>selective. On an 8-ply deep search, the first 2 plies (near the root) are going
>>>>to be searched without any selection, then the next 6 are going to be totally
>>>>selective.
>>>
>>>I am suggesting something vaguely time-based. If you won't have time to
>>>complete iteration 'x' within the time controls, then switch to the faster (to
>>>reach greater depth) selective search.
>>
>>Why not.
>
>Does this mean it sounds like a good idea? :)
Yes. It should be tried by someone.
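Since nobody has tried it yet, here is only a rough sketch in C of how such a
time-based switch could look. Everything here is invented for illustration: the
two search modes, root_search(), elapsed(), and the crude branching-factor
estimate used to predict whether the next full-width iteration will fit:

typedef struct Position Position;
enum Mode { FULL_WIDTH, SELECTIVE };
int    root_search(Position *pos, int depth, enum Mode mode);
double elapsed(void);            /* seconds spent on this move so far */

#define BRANCH 4.0               /* crude effective branching factor */

int think(Position *pos, double budget, int max_depth)
{
    int best = 0;
    double last_iter = 0.0;

    for (int depth = 1; depth <= max_depth; depth++) {
        double start = elapsed();

        /* Predict this iteration's cost from the previous one. If a
           full-width iteration will not fit in the remaining time,
           switch to the faster selective search for the extra depth. */
        enum Mode mode = FULL_WIDTH;
        if (last_iter * BRANCH > budget - elapsed())
            mode = SELECTIVE;

        best = root_search(pos, depth, mode);
        last_iter = elapsed() - start;

        if (elapsed() >= budget)
            break;
    }
    return best;
}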
>>>>This is not difficult to do. But it's difficult to decide (depending on how
>>>>good your selectivity is) what the value of N should be. For example, The King
>>>>uses N=6 by default, but some testers think that N=10 is not bad.
>>>
>>>I understood it this way: They do the normal x-ply search, and ADD another
>>>y-ply selective. So at depth 1, they would do the normal 1-ply brute-force
>>>search, plus the y plies of extensions. Then 2 + y, etc. This is how it is
>>>explained on some webpages I've seen, anyway.
>>
>>That's what Genius displays for example. Depth 3 means it is searching 3 plies
>>brute force (of course I am not sure it is really brute force, but I think it
>>is), then 12 plies selective.
>
>Yes, that's what I was thinking.
>
>>>>>>The paradox is that I have found that highly selective systems produce the best
>>>>>>results when time controls are fast or the computer is slow. That is: when you
>>>>>>cannot reach high depths!!!
>>>>>
>>>>>
>>>>>I don't think this is so much of a paradox. If you are not using a selective
>>>>>algorithm, you can only reach very low depths, and strength will be quite low.
>>>>>If you are being selective, you can achieve greater depth even on a slower
>>>>>machine, so you will not be as weak. This is why all the programs (before
>>>>>Chess 4.x?) were selective: they didn't have the speed to get past a few
>>>>>plies of brute force.
>>>>
>>>>But they did not really know how to be selective; that's why Chess 4.x had
>>>>better results.
>>>
>>>So, you're saying that _none_ of the programs before Chess 4.x knew how to do a
>>>good selective search? I'm not exactly sure what you mean by this statement -
>>>How is it that they did 'not really know how to be selective'? Chess 4.x had
>>>good results because it was the first to be fast enough to be able to use a more
>>>brute-force approach and still attain respectable depths.
>>
>>I say they did not know how to be selective because you can do the following
>>experiment:
>>
>>Take SargonV for PC, which is a brute force program (as far as I know). Run it
>>on a PC that is slow enough that it computes only 6 to 7 plies at the given
>>time controls.
>>
>>This Sargon can probably be considered a good simulation of Chess 4.x, I think.
>>At least it is considered one of the best brute force programs.
>>
>>Then take a good selective program like Rebel or Genius, and let them play a
>>match at equal time controls.
>>
>>The selective program will CRUSH Sargon.
>>
>>If there had been any good selective program in 1979-1980, Chess 4.x would have
>>had more problems.
>
>If you use my definition of what a _true_ selective searcher is, this will not
>be true. It is why programs of today are not _truly_ selective.
>
>>>>>>They produce good results also at longer time controls or on fast computers, but
>>>>>>the advantage they give is less easy to see.
>>>>>
>>>>>
>>>>>This is because the advantage selective searchers get in depth on some lines is
>>>>>offset by the greater accuracy of the brute-force search. When the brute-force
>>>>>search is deep enough, it will eventually be better than the selective search.
>>>>
>>>>I don't think so.
>>>
>>>So a brute-force search with 100 ply would never be better than a selective
>>>search? 1000 ply? :)
>>
>>Right. On some selected positions the selective program would have problems, but
>>in a real long match the brute force approach has NO chance.
>
>Selective search techniques (whether by your definition or mine) are not
>perfect - they will miss some things. However, a brute-force search
>(your definition) will _never_ miss anything. Because of pruning, a
>selective-searcher can miss things (especially at much higher depth, because by
>then much more of the tree is being pruned) that the brute-force search will be
>able to capitalize on. At 100-ply search, I think it would be impossible to
>beat a true brute-force (alpha-beta + quiescence only) program. Draws would
>happen, surely, but I don't think any wins. This is because of the inherent
>inaccuracy in pruning methods used for 'selective search'.
Some thoughts:
* Sometimes it is better to miss some things. It can improve your engine. Funny,
isn't it? But true. Some call this the "positive horizon effect", I think.
* I think we should delay this discussion until computers can reach 100 plies
brute force :)
>>Unless you let it see deep enough to see the end of the game. But I'm not sure
>>discussing about this case makes any sense. :)
>
>Actually, that's sort of what I'm doing. :/ 100-ply isn't the end of the game,
>necessarily, but it's probably deep enough that it doesn't matter.
Right.
Christophe