Computer Chess Club Archives



Subject: Re: To Christophe Theron, about Chess Tiger and Genius and Rebel

Author: Jeremiah Penery

Date: 05:16:29 10/22/99

On October 21, 1999 at 18:26:22, Christophe Theron wrote:

>On October 21, 1999 at 15:55:21, Jeremiah Penery wrote:
>
>>We must be using different definitions.  I don't consider the use of null-move,
>>for example, a selective search.
>>  The way I am defining selective search is more like what a human might do.  A
>>truly selective program will completely throw out most of the moves _before_
>>even really searching them.
>>  For example, in the initial position, a selective searcher might search ONLY
>>the moves 1. e4, 1. d4, and 1. Nf3.  Other moves aren't even considered.  This
>>same strategy applies to _all_ positions.
>>This is the kind of selectivity they were doing in the past.
>>
>>Programs of today, however, use a more brute-force approach.  They may
>>eventually throw out some moves, but only _after_ they are searched and shown to
>>be inferior.  This is the difference.
>
>
>There is no difference. It does not matter what you do to prune away moves. If
>your definition is simply that to be selective you must not try the move before
>you prune it away, I don't buy it.
>
>You can consider the selection method to be a "black box". You don't have to
>know what happens inside to say that a program is selective or not.
>
>For example, you might not really call "makemove" in your selection
>algorithm, but maybe the things you do act exactly as if you were making the
>move and seeing what happens after.
>
>You can define selection the way you propose, but it is too restricted to be
>useful.

Maybe I should say 'full-width' where I've been saying 'brute-force'.  Would
you agree that programs of today are full-width searchers?  My definition of
selective search has been !full-width - that is, programs that examine <=N
moves per position.  Perhaps the true definition lies somewhere in between.  I
think the way you define it would make _all_ programs selective searchers.  I
don't think there has ever been a program (other than one written just for the
sake of doing so) that has not used some sort of extensions, quiescence,
...<whatever techniques you think determine whether a program is a selective
searcher>.  I'm not really willing to buy this.
But, before Chess 4.x, most of the programs were selective searchers in the
style I showed - they did such things as search _only_ x moves/position.
Because of this, these programs could play 49 brilliant moves, even on very slow
hardware, but then play one horrible move, losing the game.
Back then, any sort of full-width search would only reach very shallow depth,
and so it would be very weak.  With their 'true' selective search techniques,
they were able to reach respectable depths.
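
To make this concrete, here is roughly what I mean by a 'true' selective
searcher, as a C sketch (purely hypothetical code, not taken from any real
program - Move, gen_moves, order_moves, make_move, unmake_move and evaluate
are assumed to exist elsewhere in the engine): everything past the first N
moves is thrown away before it is ever searched.

    /* Hypothetical sketch of an old-style selective search: order the
       moves with a cheap heuristic and search only the first N; the
       rest are discarded before they are ever tried. */
    #define N 3                         /* moves considered per position */

    int select_search(int depth, int alpha, int beta)
    {
        Move moves[256];
        int i, count, score;

        if (depth <= 0)
            return evaluate();

        count = gen_moves(moves);
        order_moves(moves, count);      /* cheap heuristic ordering */
        if (count > N)
            count = N;                  /* <=N moves per position, period */

        for (i = 0; i < count; i++) {
            make_move(moves[i]);
            score = -select_search(depth - 1, -beta, -alpha);
            unmake_move(moves[i]);
            if (score >= beta)
                return beta;
            if (score > alpha)
                alpha = score;
        }
        return alpha;
    }

The branching factor is at most N, which is why those old programs reached
respectable depths on slow hardware - and also why the one good move the
heuristic throws away is gone for good.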

>Null move, for example, could be implemented to some extent without actually
>doing the move. In that implementation you would call it selection; in the
>classical implementation you would say: "hey, this is no selection at all".
>
>I think you are going to be upset, but I consider all the following to be
>selection:
>
>* null move and related techniques
>* extensions
>* QSearch
>
>Selectivity can be additive or subtractive. You restrict selectivity to
>"immediate subtractive", but in fact doing an extension is subtractive
>selectivity because you are going to look at everything else less deep.

Why would you look at everything else less deep?  Or do you mean only less deep
than the line being extended?
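
Just so we're talking about the same thing, this is how I picture an extension
inside a normal search (a hypothetical C sketch; in_check, qsearch and the
other helpers are assumed names):

    /* Hypothetical check-extension sketch: lines where the side to move
       is in check get one extra ply, so relative to them every sibling
       line is searched "less deep".  A real engine needs a cap so that
       perpetual checks can't extend forever. */
    int search(int depth, int alpha, int beta)
    {
        Move moves[256];
        int i, count, score;

        if (in_check())
            depth++;                    /* extend: this line goes deeper */
        if (depth <= 0)
            return qsearch(alpha, beta);

        count = gen_moves(moves);
        for (i = 0; i < count; i++) {
            make_move(moves[i]);
            score = -search(depth - 1, -beta, -alpha);
            unmake_move(moves[i]);
            if (score >= beta)
                return beta;
            if (score > alpha)
                alpha = score;
        }
        return alpha;
    }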

> QSearch
>is additive selectivity because you generate only some moves, not all.

I'm pretty much ignoring quiescence - it can be considered almost part of the
evaluation function, since it exists to keep the evaluation from being
completely wrong because of tactics.
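
In other words, something like this at the horizon (a minimal hypothetical
sketch; gen_captures and the other helper names are assumed):

    /* Minimal quiescence search: stand pat on the static evaluation,
       then try captures only, so the evaluation is never taken in the
       middle of a capture sequence. */
    int qsearch(int alpha, int beta)
    {
        Move moves[256];
        int i, count, score;
        int stand_pat = evaluate();

        if (stand_pat >= beta)
            return beta;
        if (stand_pat > alpha)
            alpha = stand_pat;

        count = gen_captures(moves);    /* "additive": only some moves */
        for (i = 0; i < count; i++) {
            make_move(moves[i]);
            score = -qsearch(-beta, -alpha);
            unmake_move(moves[i]);
            if (score >= beta)
                return beta;
            if (score > alpha)
                alpha = score;
        }
        return alpha;
    }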

>With your definition of selectivity you miss all these successful techniques.

I didn't say that with my definition of selectivity one couldn't still use
extensions.  And when depth 0 is reached in the search, quiescence search
would still happen.

>Anyway, even humans do much more than your definition of selectivity. Almost
>all moves pruned by humans are pruned because the guy says: "after I play this
>move the opponent plays this one and I don't like the position".

That's only if the move is considered in the first place - even if there is no
real refutation, many moves simply won't be considered at all for
positional/other reasons.  For example, after 1. e4 c5, a good human probably
wouldn't even consider Qf3, even though there is no refutation - it is simply
a pointless move.  IMO, of course, because I'm not a 'good human'. :)

>Also, all humans use null move: "if I play this, then that, I have a good
>position. Now let's see if my opponent can prevent me from playing that after I
>have played this".

True, humans also do this.
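
In program terms, that "can my opponent prevent it" test is just the classical
null-move implementation (a rough hypothetical sketch with R=2; a real engine
also has to worry about check and zugzwang positions):

    /* Classical null-move pruning sketch: give the opponent two moves
       in a row; if he still cannot get the score below beta, prune the
       whole subtree.  R is the depth reduction; helpers hypothetical. */
    #define R 2

    /* ...inside search(), before the normal move loop: */
    if (depth > R && !in_check()) {
        make_null_move();               /* "pass" - opponent moves again */
        score = -search(depth - 1 - R, -beta, -beta + 1);
        unmake_null_move();
        if (score >= beta)
            return beta;                /* fail high without trying a move */
    }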

>>>>>Many programs use this concept differently: they do only N plies of
>>>>>selectivity. For example, if N=6, that means a 6-ply-deep search will be
>>>>>totally selective. On an 8-ply-deep search, the first 2 plies (near the
>>>>>root) are going to be searched without any selection, then the next 6 are
>>>>>going to be totally selective.
>>>>
>>>>I am suggesting something vaguely time-based.  If you won't have time to
>>>>complete iteration 'x' within the time controls, then switch to the faster (to
>>>>reach greater depth) selective search.
>>>
>>>Why not.
>>
>>Does this mean it sounds like a good idea? :)
>
>
>Yes. It should be tried by someone.
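
For the record, the idea I'm suggesting would look something like this
(hypothetical C sketch; the time-estimate routine and the search functions are
all made-up names for illustration, and a real routine would of course keep
the best move from each iteration):

    /* Time-based switch: iterate full-width as usual, but when the
       next iteration won't finish in the time that is left, fall back
       to a faster selective search to squeeze out extra depth. */
    #define MAX_DEPTH       64
    #define EXTRA_SEL_PLIES  4
    #define INF          30000

    void think(void)
    {
        int depth;

        for (depth = 1; depth <= MAX_DEPTH; depth++) {
            if (estimated_iteration_time(depth) > time_left()) {
                /* no time for this iteration full-width: go selective */
                select_search(depth + EXTRA_SEL_PLIES, -INF, INF);
                break;
            }
            search(depth, -INF, INF);   /* normal full-width iteration */
        }
    }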
>
>
>
>
>>>>>This is not difficult to do. But it's difficult to decide (depending on
>>>>>how good your selectivity is) on the value of N. For example, The King
>>>>>uses N=6 by default, but some testers think that N=10 is not bad.
>>>>
>>>>I understood it this way:  They do the normal x-ply search, and ADD another
>>>>y-ply selective.  So at depth 1, they would do the normal 1-ply brute-force
>>>>search, plus the y plies of extensions.  Then 2 + y, etc.  This is how it is
>>>>explained on some webpages I've seen, anyway.
>>>
>>>That's what Genius displays for example. Depth 3 means it is searching 3 plies
>>>brute force (of course I am not sure it is really brute force, but I think it
>>>is), then 12 plies selective.
>>
>>Yes, that's what I was thinking.
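
So, as I understand it, the scheme would be roughly this (a hypothetical
sketch; select_search is the kind of top-N routine sketched earlier, and the
3 + 12 split is just the Genius display example from above):

    /* Rough sketch of "x plies full-width, then y plies selective":
       the full-width part runs to its nominal depth and, instead of
       dropping into qsearch, hands over to the selective search for
       the remaining plies.  Names and numbers are illustrative only. */
    #define SEL_PLIES 12

    int fw_search(int depth, int alpha, int beta)
    {
        Move moves[256];
        int i, count, score;

        if (depth <= 0)                 /* full-width part finished */
            return select_search(SEL_PLIES, alpha, beta);

        count = gen_moves(moves);
        for (i = 0; i < count; i++) {
            make_move(moves[i]);
            score = -fw_search(depth - 1, -beta, -alpha);
            unmake_move(moves[i]);
            if (score >= beta)
                return beta;
            if (score > alpha)
                alpha = score;
        }
        return alpha;
    }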
>>
>>>>>>>The paradox is that I have found that highly selective systems produce
>>>>>>>the best results when time controls are fast or the computer is slow.
>>>>>>>That is: when you cannot reach high depths!!!
>>>>>>
>>>>>>
>>>>>>I don't think this is so much of a paradox.  If you are not using a selective
>>>>>>algorithm, you can only reach very low depths, and strength will be quite low.
>>>>>>If you are being selective, you can achieve greater depth even on a
>>>>>>slower machine, so you will not be as weak.  This is why all the programs
>>>>>>(before Chess 4.x?) were selective: they didn't have the speed to get
>>>>>>past a few plies of brute force.
>>>>>
>>>>>But they did not really know how to be selective, that's why Chess 4.x had
>>>>>better results.
>>>>
>>>>So, you're saying that _none_ of the programs before Chess 4.x knew how to do a
>>>>good selective search?  I'm not exactly sure what you mean by this statement -
>>>>How is it that they did 'not really know how to be selective'?  Chess 4.x had
>>>>good results because it was the first to be fast enough to be able to use a more
>>>>brute-force approach and still attain respectable depths.
>>>
>>>I say they did not know how to be selective because you can do the following
>>>experiment:
>>>
>>>Take SargonV for PC, which is a brute-force program (as far as I know). Run
>>>it on a PC that is slow enough that you can have it computing 6 to 7 plies
>>>at a given time control.
>>>
>>>This Sargon can probably be considered a good simulation of Chess 4.x, I
>>>think. At least it is considered one of the best brute-force programs.
>>>
>>>Then take a good selective program like Rebel or Genius, and let them play a
>>>match at equal time controls.
>>>
>>>The selective program will CRUSH Sargon.
>>>
>>>If there had been any good selective program in 1979-1980, Chess 4.x would have
>>>had more problems.
>>
>>If you use my definition of what a _true_ selective searcher is, this will not
>>be true.  It is why programs of today are not _truly_ selective.
>>
>>>>>>>They produce good results also at longer time controls or on fast computers, but
>>>>>>>the advantage they give is less easy to see.
>>>>>>
>>>>>>
>>>>>>This is because the advantage selective searchers get in depth on some lines is
>>>>>>offset by the greater accuracy of the brute-force search.  When the brute-force
>>>>>>search is deep enough, it will eventually be better than the selective search.
>>>>>
>>>>>I don't think so.
>>>>
>>>>So a brute-force search with 100 ply would never be better than a selective
>>>>search? 1000 ply? :)
>>>
>>>Right. On some selected position the selective program would have problems, but
>>>in a real long match the brute force approach has NO chance.
>>
>>Selective search techniques (whether by your definition of selective search
>>or mine) are not perfect - they will miss some things.  However, a
>>brute-force search
>>(your definition) will _never_ miss anything.  Because of pruning, a
>>selective-searcher can miss things (especially at much higher depth, because by
>>then much more of the tree is being pruned) that the brute-force search will be
>>able to capitalize on.  At 100-ply search, I think it would be impossible to
>>beat a true brute-force (alpha-beta + quiescence only) program.  Draws would
>>happen, surely, but I don't think any wins.  This is because of the inherent
>>inaccuracy in pruning methods used for 'selective search'.
>
>
>Some thoughts:
>* Sometimes it is better to miss some things. It can improve your engine. Funny,
>isn't it? But true. Some call this the "positive horizon effect", I think.

This is interesting.  Do you know where I can find more information about
this?  My thought is that this could be caused by inaccuracy in the
evaluation/search, but I'm not sure about it.  I recently had a discussion
about how a lower search depth can often result in an ultimately better move.
I think this would be an interesting thing to study.

>* I think we should delay this discussion until computers can reach 100 plies
>brute force :)

Yeah, but it's fun to speculate. :)

Jeremiah


