Computer Chess Club Archives

Subject: Re: To Christophe Theron, about Chess Tiger and Genius and Rebel

Author: Jeremiah Penery

Date: 18:49:26 10/20/99

On October 20, 1999 at 21:35:58, Christophe Theron wrote:

>On October 20, 1999 at 19:21:02, Jeremiah Penery wrote:
>
>>On October 20, 1999 at 16:06:29, Christophe Theron wrote:
>>
>>>On October 20, 1999 at 06:58:33, Francesco Di Tolla wrote:
>>>
>>>>On October 19, 1999 at 13:15:13, Christophe Theron wrote:
>>>>>>What about the, maybe trivial, idea of switching on the high selectivity only
>>>>>>at a higher ply number, say after a few plies fully/"classically" calculated?
>>>>
>>>>Maybe I was not clear: I mean, in a given position, calculate some plies
>>>>normally, and further plies with high selectivity.
>>>>
>>>>Why this: well, you suggest that high selectivity is no longer an advantage
>>>>compared to normal approaches (that is what I understood, maybe I'm wrong). If
>>>>this is so, it is probably because today's hardware lets you calculate enough
>>>>variations anyway without it, while on the other hand you take some risks. With
>>>>this approach you would calculate the first plies with less risk and the same
>>>>approach as others, while higher plies with higher selectivity might let you
>>>>spot something others can't, and overcome some horizon errors. For sure the
>>>>playing style would be different.
>>>>(Please consider that I have never coded a chess program myself; let me know if
>>>>I am misunderstanding.)
>>>>
>>>>ciao
>>>>Franz
>>>
>>>OK. That's an idea I have tried myself. But in the end I have found that if your
>>>selective system (way of pruning away uninteresting moves) is good enough, you
>>>can apply it at any ply depth.
>>>
>>>But the idea makes sense indeed if you have a very risky selective algorithm.
>>>You might want to disable it at shallow depths until you manage to refine it
>>>and make it less risky.
>>
>>
>>I think it could be good to do something like this:
>>-> Search x plies with brute force (x=10, maybe?)
>>-> Instead of continuing with brute force after this depth, use the selective
>>   search.
>>
>>At some depth (10 in this example; at tournament time controls, 10 may be about
>>right), the next brute-force ply will take too long to complete, and the
>>selective algorithm can help reach much greater depth in some lines, while still
>>keeping the 10-ply brute-force search.  And since the brute-force search won't
>>be able to find anything new within the time constraints, nothing is hurt by
>>using the selective algorithm, but you can potentially get nice gains,
>>especially if this were a 'risky' algorithm.
>>This may be quite difficult to do, but I believe it would be an interesting
>>experiment of some kind. :)
>
>I think you would not reach 10 plies on a PC with brute force, maybe not even 9.

I don't mean true brute force (like minimax).  I'm talking about what most
programs do today, which is referred to as a brute-force approach.  In a game
averaging 3 min/move, Tiger doesn't even reach 10 plies?  I often see depths
over 10 plies at 20 min/game...

>Many programs use this concept differently: they make only the last N plies
>selective. For example, if N=6, that means a 6-ply-deep search will be totally
>selective. On an 8-ply-deep search, the first 2 plies (near the root) are
>searched without any selection, then the next 6 are totally selective.
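
(If I follow, that scheme would look roughly like the sketch below.  This is not
anybody's actual code; the types and the helper routines such as
generate_moves() and looks_uninteresting() are just stand-ins for whatever an
engine really uses.)

typedef struct position POSITION;   /* the engine's board representation      */
typedef int MOVE;                   /* whatever move encoding the engine uses */

extern int  generate_moves(POSITION *pos, MOVE *moves);
extern void make_move(POSITION *pos, MOVE m);
extern void unmake_move(POSITION *pos, MOVE m);
extern int  quiesce(POSITION *pos, int alpha, int beta);
extern int  looks_uninteresting(POSITION *pos, MOVE m);

#define SELECTIVE_PLIES 6   /* Christophe's N */

int search(POSITION *pos, int depth, int alpha, int beta)
{
    MOVE moves[256];
    int  i, n, score;

    if (depth <= 0)
        return quiesce(pos, alpha, beta);

    n = generate_moves(pos, moves);
    for (i = 0; i < n; i++) {
        /* Pruning is allowed only in the last SELECTIVE_PLIES of the
           nominal depth; nearer the root every move gets searched. */
        if (depth <= SELECTIVE_PLIES && looks_uninteresting(pos, moves[i]))
            continue;

        make_move(pos, moves[i]);
        score = -search(pos, depth - 1, -beta, -alpha);
        unmake_move(pos, moves[i]);

        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}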

I am suggesting something vaguely time-based.  If you won't have time to
complete iteration 'x' within the time controls, then switch to the selective
search, which reaches greater depth faster.
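
Something like this rough driver loop is what I have in mind (again, every name
here is made up for illustration, and the time prediction is as crude as it
looks):

typedef struct position POSITION;

extern int    root_search(POSITION *pos, int depth, int use_selective);
extern double elapsed_seconds(void);   /* time used so far on this move */

#define BRANCHING 4.0   /* rough guess at the effective branching factor */

void think(POSITION *pos, double time_budget)
{
    int    depth;
    int    use_selective = 0;          /* start out full-width */
    double last_iteration = 0.0, start;

    for (depth = 1; ; depth++) {
        /* If the next full-width iteration probably won't fit in what is
           left of the budget, switch to the selective search instead of
           simply stopping. */
        if (!use_selective &&
            last_iteration * BRANCHING > time_budget - elapsed_seconds())
            use_selective = 1;

        start = elapsed_seconds();
        root_search(pos, depth, use_selective);
        last_iteration = elapsed_seconds() - start;

        if (elapsed_seconds() >= time_budget)
            break;
    }
}

The point is just that the full-width part never gets cut short; the risky
selectivity only runs on depth the brute-force search could not have reached
anyway.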

>This is not difficult to do. But it's difficult to decide (depending on how
>good your selectivity is) on the value of N. For example, The King uses N=6 by
>default, but some testers think that N=10 is not bad.

I understood it this way: they do the normal x-ply search, and ADD another y
selective plies on top.  So at depth 1, they would do the normal 1-ply
brute-force search, plus the y plies of extensions.  Then 2 + y, and so on.
This is how it is explained on some webpages I've seen, anyway.
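
In other words (still only a sketch, with SELECTIVE_EXTRA and the rest invented
by me), the driver would ask for x + y plies and let the search treat only the
last y of them selectively:

#define SELECTIVE_EXTRA 6      /* the "y" selective plies added on top */
#define INFINITE_SCORE  32000

typedef struct position POSITION;

/* same shape as the search() sketch above: the last SELECTIVE_EXTRA plies
   of the nominal depth are the selective ones */
extern int search(POSITION *pos, int depth, int alpha, int beta);

void iterate_x_plus_y(POSITION *pos, int max_depth)
{
    int x;

    for (x = 1; x <= max_depth; x++) {
        /* iteration x really searches x + SELECTIVE_EXTRA plies:
           x full-width near the root, SELECTIVE_EXTRA selective below */
        search(pos, x + SELECTIVE_EXTRA, -INFINITE_SCORE, INFINITE_SCORE);
    }
}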

>>>The paradox is that I have found that highly selective systems produce the
>>>best results when time controls are fast or the computer is slow. That is:
>>>when you cannot reach high depths!!!
>>
>>
>>I don't think this is so much of a paradox.  If you are not using a selective
>>algorithm, you can only reach very low depths, and strength will be quite low.
>>If you are being selective, you can achieve greater depth even on a slower
>>machine, so you will not be as weak.  This is why all the programs (before
>>Chess 4.x?) were selective: they didn't have the speed to get past a few plies
>>of brute force.
>
>But they did not really know how to be selective, that's why Chess 4.x had
>better results.

So you're saying that _none_ of the programs before Chess 4.x knew how to do a
good selective search?  I'm not exactly sure what you mean by this statement:
how is it that they did 'not really know how to be selective'?  Chess 4.x had
good results because it was the first to be fast enough to use a more
brute-force approach and still attain respectable depths.

>>>They also produce good results at longer time controls or on fast computers,
>>>but the advantage they give is less easy to see.
>>
>>
>>This is because the advantage selective searchers get in depth on some lines is
>>offset by the greater accuracy of the brute-force search.  When the brute-force
>>search is deep enough, it will eventually be better than the selective search.
>
>I don't think so.

So a brute-force search with 100 ply would never be better than a selective
search? 1000 ply? :)


