Computer Chess Club Archives


Subject: New Algorithm for "el cheapo Singular Extensions" :)

Author: Dr. Axel Steinhage

Date: 05:50:03 01/26/05


Hi all,

I registered on this forum just a week ago, but I have quite a lot of experience
in chess programming, although I have always done it just for myself. In the
late '80s I wrote an assembler program for the Z80 that played on the same level
as Colossus 4 at the time. Then I stopped programming for more than a decade.
One year ago I started again with a new engine in ANSI C. I named it "Astimate",
and considering the limited time I can invest in this hobby, I think I have
already come quite far. I am very proud of the fact that I have never looked at
anyone else's code but wanted to discover everything on my own. Being a
scientist by training, I did read the important publications, though! Doing
that, I learned a lot about Singular Extensions (SE), starting from the first
paper by the Deep Blue team up to the various comments by Bob and others here in
the forum.
It seems to me that, so far, SE has remained a "nice idea" only; the problem
lies in finding an efficient implementation. So I sat down for quite some time
and tried to come up with an algorithm that works well in practice. Now I think
I have found one. I have run some tests, and so far it looks very good: it finds
many combinations earlier without adding much overhead. Before doing more
testing, I would like to hear the programming gurus' opinion of the idea, so
please give your comments. The algorithm works as follows:

I do a normal search (fail-soft NegaScout/PVS, aspiration windows, verified
null move (R=3), hash tables, killer moves, ...) and keep track of the best and
the second-best move while trying out all possible moves. When the best and the
second-best scores differ by a given margin S, I define the best move as
singular. So far, this is well known. But now come two innovations:
1. In case of a fail high, the best move may be singular, but I don't know it,
because I cut off before searching all moves. I prevent this as follows: on a
fail high, I check whether the second-best move is within the window S of the
best. If so, I cut off, because the best move cannot be singular. If not, I keep
on searching (although I could cut off already!) at reduced depth (R=2). I do
this until I have searched all moves, or until I have a second-best move within
S (or another fail high, of course). If all other moves stay outside the S
window, I define the move as singular (see the code sketch after this list).
2. When I find a move to be singular, I do NOT do a re-search. Instead, I store
this information in the hash table and prevent that hash entry from being
overwritten in the future. In the next depth iteration, the hash entry then
tells me up front that this move might be singular, and I extend its maximum
depth. Of course, I don't repeat the singularity search on a move I have
already classified as singular.
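
To make the control flow concrete, here is a rough sketch in C of how the move
loop could look. This is NOT Astimate's actual code: the engine types and
helpers (Position, Move, next_move(), make_move(), hash_probe(), hash_store()),
the margin value, and the flag names are all just placeholders, and PVS
re-searches, null move, and so on are left out.

#include <stddef.h>

#define SE_MARGIN   100   /* the singularity margin S (value only assumed) */
#define SE_R        2     /* reduction for the post-cutoff singular search */
#define HF_SINGULAR 1     /* hash flag: entry is singular and locked */
#define NO_MOVE     0
#define INF         30000

typedef struct Position Position;  /* the engine's position type */
typedef int Move;

typedef struct {
    unsigned long long key;        /* Zobrist key of the position */
    Move move;
    short score, depth;
    unsigned char flags;           /* HF_SINGULAR marks a locked entry */
} HashEntry;

/* assumed engine primitives */
extern int        evaluate(Position *pos);
extern Move       next_move(Position *pos);
extern void       make_move(Position *pos, Move m);
extern void       unmake_move(Position *pos, Move m);
extern HashEntry *hash_probe(const Position *pos);
extern void       hash_store(Position *pos, Move m, int score,
                             int depth, unsigned char flags);

int search_node(Position *pos, int alpha, int beta, int depth)
{
    HashEntry *he = hash_probe(pos);   /* may be NULL */
    int singular_hint = (he != NULL && (he->flags & HF_SINGULAR) != 0);
    int best = -INF, second = -INF, failed_high = 0;
    Move best_move = NO_MOVE, move;

    if (depth <= 0)
        return evaluate(pos);          /* quiescence in the real engine */

    while ((move = next_move(pos)) != NO_MOVE) {
        int d = depth - 1, score;

        if (singular_hint && move == he->move)
            d += 1;          /* point 2: extend the stored singular move */
        else if (failed_high)
            d -= SE_R;       /* point 1: reduced-depth continuation */

        make_move(pos, move);
        score = -search_node(pos, -beta, -alpha, d);
        unmake_move(pos, move);

        if (score > best) { second = best; best = score; best_move = move; }
        else if (score > second) second = score;

        if (!failed_high) {
            if (best >= beta) {
                if (second > best - SE_MARGIN)
                    return best;   /* not singular: ordinary beta cutoff */
                failed_high = 1;   /* keep searching at reduced depth */
            } else if (score > alpha) {
                alpha = score;
            }
        } else if (second > best - SE_MARGIN) {
            return best;           /* singularity refuted: cut off now */
        }
    }

    /* all moves searched and none within S of the best: mark it singular
       (a move already classified singular is not re-tested) */
    if (best_move != NO_MOVE && second <= best - SE_MARGIN && !singular_hint)
        hash_store(pos, best_move, best, depth, HF_SINGULAR);
    else
        hash_store(pos, best_move, best, depth, 0);
    return best;
}

Note that after the fail high the sketch leaves alpha untouched, so the
continuation searches still run against the original window.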

Because the singularity search after a cutoff runs at reduced depth, and because
the re-search is omitted, there is practically no overhead other than the
extension itself.
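
The "locking" in point 2 needs only a small change in the hash replacement
policy. Again just a sketch, reusing the HashEntry layout and HF_SINGULAR flag
from above; table, TABLE_SIZE, and position_key() are assumed names, and a real
scheme would also have to release stale locks at some point.

#define TABLE_SIZE (1 << 20)         /* number of entries (power of two) */
extern HashEntry table[TABLE_SIZE];  /* the transposition table */
extern unsigned long long position_key(const Position *pos);

void hash_store(Position *pos, Move m, int score, int depth,
                unsigned char flags)
{
    HashEntry *slot = &table[position_key(pos) & (TABLE_SIZE - 1)];

    /* a singular entry of a *different* position is locked:
       refuse to evict it with an ordinary store */
    if ((slot->flags & HF_SINGULAR) && slot->key != position_key(pos))
        return;

    slot->key   = position_key(pos);
    slot->move  = m;
    slot->score = (short)score;
    slot->depth = (short)depth;
    slot->flags = flags;
}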
Of course, this algorithm is "cheapo SE", as it may miss quite a lot of singular
moves. First, the reduced-depth search might not discover a singularity. Second,
the "second best" value may be wrong: with PVS, most moves are searched with a
null window, so their scores may only be bounds rather than exact values (I
still have to analyse that). Finally, the information that a move is singular
stems from the previous depth iteration; in the current iteration, the move may
not be singular anymore.

Despite these drawbacks, the algorithm has turned out to work quite well on some
test positions with my engine. Before doing more testing, however, I would first
like to hear what you think about the idea.

Axel


