Computer Chess Club Archives



Subject: Re: To Christophe Theron, about Chess Tiger and Genius and Rebel

Author: Jeremiah Penery

Date: 12:55:21 10/21/99



On October 21, 1999 at 12:54:47, Christophe Theron wrote:

>On October 20, 1999 at 21:49:26, Jeremiah Penery wrote:
>
>>On October 20, 1999 at 21:35:58, Christophe Theron wrote:
>>
>>>On October 20, 1999 at 19:21:02, Jeremiah Penery wrote:
>>>
>>>>On October 20, 1999 at 16:06:29, Christophe Theron wrote:
>>>>
>>>>>On October 20, 1999 at 06:58:33, Francesco Di Tolla wrote:
>>>>>
>>>>>>On October 19, 1999 at 13:15:13, Christophe Theron wrote:
>>>>>>>>What about the, maybe trivial, idea of switching on the high selectivity only
>>>>>>>>at a higher ply number, say after a few plies fully/"classically" calculated?
>>>>>>
>>>>>>Maybe I was not clear: I mean, in a given position, calculate some plies
>>>>>>normally, and further plies with high selectivity.
>>>>>>
>>>>>>Why this: well, you suggest that high selectivity is no longer advantageous
>>>>>>compared to normal approaches (that is what I understood; maybe I'm wrong). If
>>>>>>this is so, it is probably because today's hardware permits you to calculate
>>>>>>enough variations anyway without it, while on the other side you take some
>>>>>>risks. With this approach you would calculate the first plies with less risk
>>>>>>and the same approach as others, while higher plies with higher selectivity
>>>>>>might allow you to spot something others can't, and overcome some horizon
>>>>>>errors. For sure the playing style would be different.
>>>>>>(Please consider I never coded a chess program myself; let me know if I am
>>>>>>misunderstanding.)
>>>>>>
>>>>>>ciao
>>>>>>Franz
>>>>>
>>>>>OK. That's an idea I have tried myself. But in the end I have found that if your
>>>>>selective system (way of pruning away uninteresting moves) is good enough, you
>>>>>can apply it at any ply depth.
>>>>>
>>>>>But the idea makes sense indeed if you have a very risky selective algorithm.
>>>>>You might want to disable it at shallow depths until you manage to refine it
>>>>>and make it less risky.
>>>>
>>>>
>>>>I think it could be good to do something like this:
>>>>-> Search x ply with brute-force (x=10, maybe?)
>>>>-> Instead of continuing with brute-force after this depth, use the selective
>>>>   search.
>>>>
>>>>At some depth (10 in this example; at tournament time controls, 10 may be
>>>>accurate here), the next brute-force ply will take too long to complete, and
>>>>the selective algorithm can help to achieve much greater depth in some lines,
>>>>while still having the 10-ply brute-force search.  And since the brute-force
>>>>search won't be able to find anything new within the time constraints, nothing
>>>>can be hurt by using the selective algorithm, but you can potentially get
>>>>nice gains, especially if this were a 'risky' algorithm.
>>>>This may be quite difficult to do, but I believe it would be an interesting
>>>>experiment of some kind. :)
>>>
>>>I think you would not reach 10 plies on a PC with brute force, maybe not even 9.
>>
>>I don't mean true brute-force (like minimax).  I'm talking about what most
>>programs do today, which is referred to as a brute-force approach.  In a game
>>with average 3min/move, Tiger doesn't get even 10 ply?  I see >10-ply depths
>>often in 20min/game...
>
>But no program of today does brute force. All the good programs are highly
>selective.
>
>If you are not talking about "true brute force" (simple alpha-beta + QSearch), I
>don't know what you mean. Everything else is selective.

We must be using different definitions.  I don't consider the use of null-move,
for example, a selective search.
  The way I am defining selective search is more like what a human might do.  A
truly selective program will completely throw out most of the moves _before_
even really searching them.
  For example, in the initial position, a selective searcher might search ONLY
the moves 1. e4, 1. d4, and 1. Nf3.  Other moves aren't even considered.  This
same strategy applies to _all_ positions.
This is the kind of selectivity they were doing in the past.

Programs of today, however, use a more brute-force approach.  They may
eventually throw out some moves, but only _after_ they are searched and shown to
be inferior.  This is the difference.
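To make the distinction concrete, here is a toy sketch in Python. The move list, heuristic scores, and "true" searched values are all invented for illustration; the point is only *when* moves get discarded:

```python
# Toy illustration of the two philosophies. Each move is a tuple of
# (name, cheap_heuristic_score, value_found_by_full_search); both
# numbers are made up for this example.
MOVES = [("e4", 9, 5), ("d4", 8, 7), ("Nf3", 7, 4), ("a4", 2, 8), ("h4", 1, 1)]

def old_style_selective(moves, keep=3):
    """Old-school selectivity: throw out most moves BEFORE searching them,
    based only on a cheap heuristic. A move like a4 (low heuristic, high
    real value) is never even considered."""
    candidates = sorted(moves, key=lambda m: m[1], reverse=True)[:keep]
    return max(candidates, key=lambda m: m[2])[0]

def modern_approach(moves):
    """The modern 'brute-force' philosophy: search every move, discard
    losers only AFTER their searched value is known."""
    return max(moves, key=lambda m: m[2])[0]
```

Here the old-style searcher picks d4 because a4 was filtered out before any search, while the search-everything approach finds a4.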

>>>Many programs use this concept differently: they do only N plies selective. For
>>>example if N=6 that means that a 6 plies deep search will be totally selective.
>>>On a 8 plies deep search, the first 2 plies (near the root) are going to be
>>>searched without any selection, then the next 6 are going to be totally
>>>selective.
>>
>>I am suggesting something vaguely time-based.  If you won't have time to
>>complete iteration 'x' within the time controls, then switch to the faster (to
>>reach greater depth) selective search.
>
>Why not.

Does this mean it sounds like a good idea? :)
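The time-based switch I'm suggesting could be sketched like this. The two search routines, the cost reporting, and the branching-factor estimate are all stand-ins, not anyone's actual implementation:

```python
import time

def search_with_switch(position, time_budget, brute_force, selective,
                       branching_estimate=4.0):
    """Iterative deepening that falls back to a (faster) selective search
    once the next brute-force iteration is predicted not to fit in the
    remaining time. `brute_force` and `selective` are stand-in callables
    of the form f(position, depth) -> (best_move, seconds_taken)."""
    deadline = time.monotonic() + time_budget
    depth, best, last_cost = 1, None, 0.0
    while True:
        remaining = deadline - time.monotonic()
        # Predict the next iteration's cost from the last one and a crude
        # effective-branching-factor estimate.
        predicted = last_cost * branching_estimate or 0.01
        if predicted > remaining:
            # No time for another full-width iteration: spend what is
            # left on the selective search instead.
            move, _ = selective(position, depth)
            return move, depth
        best, last_cost = brute_force(position, depth)
        depth += 1
```

So instead of a fixed switch-over depth N, the switch happens at whatever depth the clock dictates.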

>>>This is not difficult to do. But it's difficult to decide (depending on how
>>>good your selectivity is) the value of N. For example, The King uses N=6 by
>>>default, but some testers think that N=10 is not bad.
>>
>>I understood it this way:  They do the normal x-ply search, and ADD another
>>y-ply selective.  So at depth 1, they would do the normal 1-ply brute-force
>>search, plus the y plies of extensions.  Then 2 + y, etc.  This is how it is
>>explained on some webpages I've seen, anyway.
>
>That's what Genius displays for example. Depth 3 means it is searching 3 plies
>brute force (of course I am not sure it is really brute force, but I think it
>is), then 12 plies selective.

Yes, that's what I was thinking.

>>>>>The paradox is that I have found that highly selective systems produce the best
>>>>>results when time controls are fast, or the computer is slow. That is: when you
>>>>>cannot reach high depths!!!
>>>>
>>>>
>>>>I don't think this is so much of a paradox.  If you are not using a selective
>>>>algorithm, you can only reach very low depths, and strength will be quite low.
>>>>If you are being selective, you can achieve greater depth even on a slower
>>>>machine, so you will not be as weak.  This is why all the programs (before Chess
>>>>4.x ?) were selective, because they didn't have the speed to get past a few ply
>>>>brute-force.
>>>
>>>But they did not really know how to be selective, that's why Chess 4.x had
>>>better results.
>>
>>So, you're saying that _none_ of the programs before Chess 4.x knew how to do a
>>good selective search?  I'm not exactly sure what you mean by this statement -
>>How is it that they did 'not really know how to be selective'?  Chess 4.x had
>>good results because it was the first to be fast enough to be able to use a more
>>brute-force approach and still attain respectable depths.
>
>I say they did not know how to be selective because you can do the following
>experiment:
>
>Take SargonV for PC, which is a brute-force program (as far as I know). Run it
>on a PC that is slow enough that it computes 6 to 7 plies at a given
>time control.
>
>This Sargon can probably be considered a good simulation of Chess 4.x, I think.
>At least it is considered one of the best brute-force programs.
>
>Then take a good selective program like Rebel or Genius, and let them play a
>match at equal time controls.
>
>The selective program will CRUSH Sargon.
>
>If there had been any good selective program in 1979-1980, Chess 4.x would have
>had more problems.

If you use my definition of what a _true_ selective searcher is, this will not
be true.  It is why programs of today are not _truly_ selective.

>>>>>They produce good results also at longer time controls or on fast computers, but
>>>>>the advantage they give is less easy to see.
>>>>
>>>>
>>>>This is because the advantage selective searchers get in depth on some lines is
>>>>offset by the greater accuracy of the brute-force search.  When the brute-force
>>>>search is deep enough, it will eventually be better than the selective search.
>>>
>>>I don't think so.
>>
>>So a brute-force search with 100 ply would never be better than a selective
>>search? 1000 ply? :)
>
>Right. On some selected position the selective program would have problems, but
>in a real long match the brute force approach has NO chance.

Selective search techniques (whether yours or my definition of selective search)
are not perfect - they will miss some things.  However, a brute-force search
(your definition) will _never_ miss anything.  Because of pruning, a
selective searcher can miss things (especially at much higher depth, because by
then much more of the tree is being pruned) that the brute-force search will be
able to capitalize on.  At 100-ply search, I think it would be impossible to
beat a true brute-force (alpha-beta + quiescence only) program.  Draws would
happen, surely, but I don't think any wins.  This is because of the inherent
inaccuracy in pruning methods used for 'selective search'.
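"True brute force" in the sense used above (plain alpha-beta plus a quiescence search, nothing thrown out before it is searched) can be sketched on a toy game tree. The tree representation and values here are invented for the example:

```python
def alpha_beta(node, depth, alpha, beta, maximizing):
    """Plain fixed-depth alpha-beta: every move is searched; a branch is
    cut off only once it is PROVEN irrelevant, never guessed away.
    `node` is (static_value, children, noisy), where `noisy` marks
    positions (captures/checks) that need quiescence resolution."""
    value, children, noisy = node
    if depth == 0:
        return quiescence(node, maximizing)
    if not children:
        return value
    if maximizing:
        best = -float("inf")
        for child in children:
            best = max(best, alpha_beta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # cutoff: this branch provably cannot matter
        return best
    best = float("inf")
    for child in children:
        best = min(best, alpha_beta(child, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

def quiescence(node, maximizing):
    """Resolve 'noisy' positions past the nominal depth, so the static
    value is only trusted in quiet positions (simplified: no stand-pat)."""
    value, children, noisy = node
    if not noisy or not children:
        return value
    results = [quiescence(c, not maximizing) for c in children]
    return max(results) if maximizing else min(results)
```

On a quiet tree this is exact minimax with provable cutoffs; on a noisy leaf the quiescence search keeps resolving tactics instead of trusting the static value, which is exactly what makes the "never misses anything within its horizon" claim hold.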

>Unless you let it see deep enough to see the end of the game. But I'm not sure
>discussing about this case makes any sense. :)

Actually, that's sort of what I'm doing. :/  100-ply isn't the end of the game,
necessarily, but it's probably deep enough that it doesn't matter.

Jeremiah


