Computer Chess Club Archives



Subject: Re: New Algorithm for "el cheapo Singular Extensions" :)

Author: Vincent Diepeveen

Date: 11:06:54 01/28/05



On January 28, 2005 at 11:42:40, Vasik Rajlich wrote:

Nothing there yet.

diep@xs4all.nl is the address.

>On January 28, 2005 at 07:48:14, Vincent Diepeveen wrote:
>
>>On January 28, 2005 at 04:33:50, Vasik Rajlich wrote:
>>
>>>On January 27, 2005 at 19:56:58, Vincent Diepeveen wrote:
>>>
>>>>On January 27, 2005 at 13:12:09, Vasik Rajlich wrote:
>>>>
>>>>>On January 27, 2005 at 11:16:22, Vincent Diepeveen wrote:
>>>>>
>>>>>>On January 27, 2005 at 04:35:46, Vasik Rajlich wrote:
>>>>>>
>>>>>>>On January 26, 2005 at 08:50:03, Dr. Axel Steinhage wrote:
>>>>>>>
>>>>>>>>Hi all,
>>>>>>>>
>>>>>>>>I registered to this forum just a week ago. However, I have quite some
>>>>>>>>experience in chess programming, although I always did it for myself only. In
>>>>>>>>the late '80s I wrote an assembler program for the Z80 which was on the same
>>>>>>>>level as Colossus 4 at that time. Then I stopped programming for more than a
>>>>>>>>decade. One year ago I restarted with a new engine in ANSI C. I named it
>>>>>>>>"Astimate" and, considering the limited time I can invest in that hobby, I
>>>>>>>>think I am quite far along already. I am very proud of the fact that I never
>>>>>>>>ever looked into someone else's code but wanted to discover everything on my
>>>>>>>>own. Being a scientist by education, I read the important publications though!
>>>>>>>>Doing that, I learned a lot about Singular Extensions (SE), starting out from
>>>>>>>>the first paper of the Deep Blue team up to the various comments by Bob and
>>>>>>>>others here in the forum.
>>>>>>>>It seems to me that so far SE is still only a "nice idea". The problem seems
>>>>>>>>to be finding an efficient implementation. So I sat down for quite some time
>>>>>>>>and tried to come up with an algorithm that works well in practice. Now, I
>>>>>>>>think, I have found one. I made some tests and so far it looks very good, as
>>>>>>>>it finds lots of combinations earlier without adding a lot of overhead. Before
>>>>>>>>going into more testing, I would like to hear the programming gurus' opinion
>>>>>>>>about the idea. So please give your comments. The algorithm works as follows
>>>>>>>>(a C sketch follows this post):
>>>>>>>>
>>>>>>>>I do a normal search (soft NegaScout, PVS, aspiration, verified null move
>>>>>>>>(R=3), hash tables, killers, ...) and keep track of the best and the second
>>>>>>>>best move while testing out all possible moves. When the best and the second
>>>>>>>>best differ by a given margin S, I define the best move as singular. So far,
>>>>>>>>this is well known. But now come two innovations:
>>>>>>>>1. In case of a fail high, the best move may be singular, but I don't know it
>>>>>>>>because I have cut off before searching all moves. I prevent this as follows:
>>>>>>>>in case of a fail high, I check whether the second best move is within the S
>>>>>>>>window. If so, I cut off, because the best move cannot be singular. If not, I
>>>>>>>>go on searching (although I could cut off already!) with reduced depth (R=2).
>>>>>>>>I do this until I have searched all moves or until I have a second best move
>>>>>>>>within S (or another fail high, of course). If all the other moves are outside
>>>>>>>>the S window, I define the move as singular.
>>>>>>>>2. If I found a move to be singular, I do NOT do a re-search. Instead, I store
>>>>>>>>this information in the hash table and prevent this hash entry from being
>>>>>>>>overwritten in the future. In the next depth iteration, I then already know
>>>>>>>>upfront from the hash entry that this move might be singular and extend its
>>>>>>>>maximum depth. Of course, I don't do the singularity search on a move I have
>>>>>>>>already classified as singular.
>>>>>>>>
>>>>>>>>Because of the reduced-depth singularity search after the cutoff and the
>>>>>>>>omission of the re-search, there is practically no overhead other than the
>>>>>>>>extension itself.
>>>>>>>>Of course, this algorithm is "cheapo SE", as it might miss quite a lot of
>>>>>>>>singular moves: first, the reduced depth might not discover a singularity.
>>>>>>>>Second, the "second best" value may be wrong, as it might only be a bound
>>>>>>>>rather than an exact score (I still have to analyse that). Finally, the
>>>>>>>>information that a move is singular stems from the last depth iteration;
>>>>>>>>in the current depth iteration, the move may not be singular anymore.
>>>>>>>>
>>>>>>>>Despite these drawbacks, the algorithm turned out to work quite well on some
>>>>>>>>test positions with my engine. Before doing more tests, however, I would
>>>>>>>>rather like to hear what you think about my idea.
>>>>>>>>
>>>>>>>>Axel
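
A minimal C sketch of the detection loop Axel describes, for readers who want it
concrete. It assumes a fail-soft negamax framework in which search() drops into
quiescence at depth <= 0; all helper names (first_move, next_move, make_move,
hash_store_singular, hash_move_is_singular, Position, Move, NO_MOVE, INF) and
the margin value are hypothetical, so this is an illustration of the idea, not
Astimate's actual code:

    #define SINGULAR_MARGIN 75     /* the margin S, in centipawns; illustrative */

    int search(Position *pos, int alpha, int beta, int depth)
    {
        int best = -INF, second = -INF;
        int verifying = 0;         /* set after a fail high we chose not to cut */
        Move m, best_move = NO_MOVE;

        for (m = first_move(pos); m != NO_MOVE; m = next_move(pos)) {
            int score, d = depth - 1;

            /* flagged singular by the PREVIOUS iteration: extend (innovation 2) */
            if (hash_move_is_singular(pos, m))
                d++;

            make_move(pos, m);
            if (!verifying) {
                score = -search(pos, -beta, -alpha, d);
            } else {
                /* reduced-depth (R=2) null-window probe at the margin: all we
                   need to know is whether this move comes within S of best */
                score = -search(pos, -(best - SINGULAR_MARGIN + 1),
                                     -(best - SINGULAR_MARGIN), d - 2);
            }
            unmake_move(pos, m);

            if (verifying) {
                if (score > best - SINGULAR_MARGIN)
                    return best;   /* within the margin, or another fail high:
                                      singularity refuted, take the cutoff now */
                continue;
            }

            if (score > best) {
                second = best; best = score; best_move = m;
            } else if (score > second) {
                second = score;
            }

            if (score > alpha)
                alpha = score;
            if (score >= beta) {
                if (second > best - SINGULAR_MARGIN)
                    return best;   /* ordinary cutoff: the move is not singular */
                verifying = 1;     /* could cut off already, but keep searching
                                      the rest at reduced depth (innovation 1) */
            }
        }

        /* innovation 2: no re-search; just flag the move in a hash entry that
           is protected from overwriting, so the NEXT iteration extends it */
        if (best_move != NO_MOVE && second <= best - SINGULAR_MARGIN)
            hash_store_singular(pos, best_move);

        return best;
    }

Note that the whole singularity test rides on searches the engine would mostly do
anyway, which is where the "practically no overhead" claim comes from.
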
>>>>>>>
>>>>>>>I doubt it makes any real difference. Basically you're doing R==3 instead of
>>>>>>>R==2, and skipping an initial R==2 search of the first move, which is useful
>>>>>>>anyway as an IID search (sketched below).
>>>>>>>
>>>>>>>Note also that all the fancy changes to the hash tables usually change your
>>>>>>>engine's level by at most 1 rating point; you could safely skip that part.
>>>>>>>
>>>>>>>My problem with SE is that I don't see a top engine from 2008 (let's say) using
>>>>>>>it. It's always nice to see some shot in X ply instead of X+3 ply, but you won't
>>>>>>>see those shots in the really important games.
>>>>>>>Vas
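
For reference, the IID idea Vas alludes to, as a hedged C fragment: when a node
has no hash table move to try first, a reduced-depth search is run purely to
seed the move ordering. Helper names are hypothetical and the depth condition is
illustrative; this is the generic technique, not any particular engine's code:

    /* Internal iterative deepening: no hash move available, so search at
       reduced depth first; the best move it finds lands in the hash table
       and is then tried first by the normal move ordering below. */
    if (depth >= 4 && hash_move(pos) == NO_MOVE)
        search(pos, alpha, beta, depth - 2);    /* the same R == 2 reduction */
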
>>>>>>
>>>>>>I'm very sure at least one top engine in 2008 will be using it. However, most
>>>>>>likely that won't be Diep. Please keep in mind that SE is an excellent form of
>>>>>>extension to compensate for dubious forward pruning near the leaves.
>>>>>>
>>>>>>A more interesting question is whether in 2008 people will be using multicut.
>>>>>>
>>>>>>The reason multicut is more interesting than SE is that multicut REDUCES the
>>>>>>branching factor. SE doesn't :)
>>>>>>
>>>>>>Multicut gets Diep a deeper search (0.5 ply or so); tactically it seems to
>>>>>>work, but positionally I have my doubts. And the deeper you search, the more
>>>>>>dubious it gets.
>>>>>>
>>>>>>Stefan Meyer-Kahlen, like a real pro, obviously doesn't want to discuss them
>>>>>>with me.
>>>>>>
>>>>>>So that's why it's good to ask this publicly now. When I analyze with Shredder
>>>>>>7.04 versus Shredder 8, I can't avoid getting the impression that somehow S8 is
>>>>>>using them. S8 misses so much more positionally than S7.04 that only an
>>>>>>algorithm like this can explain it.
>>>>>>
>>>>>>What are your thoughts there?
>>>>>>
>>>>>>Vincent
>>>>>
>>>>>Shredder 8 is definitely not using SE - in fact, it's doing something quite the
>>>>>opposite. Try setting up a position where there is a forcing piece sac at the
>>>>>root, then clear the hash table and set up the position one move later, right
>>>>>after the piece sac, and see how many fewer plies you need for the second
>>>>>search. It's usually >1 ply of difference, and I've never seen a 0-ply
>>>>>difference. Junior also shows this behavior.
>>>>>
>>>>>BTW I don't like the probcut idea for chess. Too often the eval just needs a
>>>>>certain amount of search - for example, a maneuver Nf3-g1-e2-c3-d5. It looks bad
>>>>
>>>>probcut != multicut
>>>>
>>>>Probcut-type ideas are IMHO nonsense (a sketch of the idea follows below).
>>>>
>>>>Multicut is more interesting, as it requires more than one fail high.
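
To make the contrast concrete: probcut, in its commonly simplified form, prunes
on a SINGLE reduced-depth null-window probe against a bound shifted above beta.
A hedged C sketch; the margin, depth condition, and reduction are illustrative
tuning parameters, and this is the generic idea, not Fruit's actual code:

    #define PROBCUT_MARGIN 100   /* illustrative safety margin, in centipawns */
    #define PROBCUT_R        3   /* illustrative depth reduction */

    /* Simplified probcut: if one shallow search already clears beta by a
       safety margin, assume the full-depth search would fail high too. */
    if (depth >= 5 && !is_pv_node) {
        int bound = beta + PROBCUT_MARGIN;
        int v = search(pos, bound - 1, bound, depth - PROBCUT_R);
        if (v >= bound)
            return v;            /* prune on one shallow fail high */
    }
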
>>>>
>>>
>>>So what is multicut?
>>>
>>>BTW Fabien is using the probcut idea in Fruit 2.0 - maybe it's worth a try. I
>>>also agree, though, that it doesn't sound right.
>>
>>If a program searches inefficiently, everything seems to work to make it more
>>efficient.
>>
>>For multicut, see the online paper by Björnsson/Marsland (there is an umlaut on
>>the first o in Björnsson, so it might be more clever to search for the name
>>Marsland at altavista.com). A sketch of the algorithm follows below.
>>
>>Note I have the paper on real paper, in an ICGA book.
>>
>>The advantage of being a member of the ICGA is that you never miss anything.
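
And, answering the question above: multicut, per the Björnsson/Marsland paper, in
a hedged C sketch. At an expected cut node the first M moves are searched to a
reduced depth, and only if at least C of them fail high is the node pruned - the
"more than 1 fail high" property Vincent points to. Parameter values and helper
names are illustrative, not the paper's code:

    /* Multicut: M = 10 moves examined, C = 3 fail highs required, R = 2
       (typical values; tune for your engine). Returns 1 if the node may
       be pruned, 0 if the normal full-depth search should proceed. */
    int multicut(Position *pos, int beta, int depth)
    {
        int cuts = 0, tried = 0;
        Move m;

        if (depth <= 2 || !expected_cut_node(pos))
            return 0;

        for (m = first_move(pos); m != NO_MOVE && tried < 10;
             m = next_move(pos), tried++) {
            int v;
            make_move(pos, m);
            v = -search(pos, -beta, -beta + 1, depth - 1 - 2);
            unmake_move(pos, m);
            if (v >= beta && ++cuts >= 3)
                return 1;          /* several shallow refutations: prune */
        }
        return 0;
    }

Because whole subtrees are pruned before they are ever searched to full depth,
multicut attacks the branching factor directly, whereas SE only adds depth to
moves that would be searched anyway.
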
>
>Thanks - check your email.
>
>Vas
>
>>
>>>Vas
>>>
>>>>>until the end. Of course it's a question of statistics - one thing is for
>>>>>sure: search is a really strange thing.
>>>>>
>>>>>Vas


