Computer Chess Club Archives



Subject: Re: New Algorithm for "el cheapo Singular Extensions" :)

Author: Anthony Cozzie

Date: 11:05:42 01/27/05



On January 27, 2005 at 12:23:15, Robert Hyatt wrote:

>On January 27, 2005 at 04:38:04, Dr. Axel Steinhage wrote:
>
>>On January 26, 2005 at 12:59:42, Robert Hyatt wrote:
>>
>>>On January 26, 2005 at 08:50:03, Dr. Axel Steinhage wrote:
>>
>>>Your case of "fail high" seems to be terribly expensive: it doubles the size of
>>>the tree at a minimum, because searching the second move after you have a
>>>fail high is pure overhead.  DB did this as well, but they did do a search to a
>>>significantly reduced depth rather than a normal search.  Hsu called this "FH
>>>singular" (fail-high singular).  I implemented it exactly as he spelled it out,
>>
>>Thanks for your comments! Just wanted to comment on this reply from you (above):
>>My "fail high" is not expensive, because I also go on searching with reduced
>>depth (I think I wrote that I use a reduction factor of R=2).
>
>OK, but even that is a huge cost: where you normally did a single-move search
>of size W^D (less alpha-beta reduction, of course), you now do W^D + N * W^(D-2),
>which blows the tree way up.  It generally cost a ply in the Cray Blitz testing
>I did in 93-94, but the program seemed to play better.  That was with
>null move R=1, not recursively applied.  In Crafty, it seemed to be highly
>inconsistent with null move R=2-3...
>
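[Editor's note: the tree-size comparison above can be made concrete with a small back-of-envelope calculation. The numbers below (branching factor W, depth D, N reduced-depth probes, reduction R=2) are assumed purely for illustration and are not taken from the post:]

```python
# Illustrative cost of the W^D + N * W^(D-2) formula quoted above.
# All values here are assumptions for the sake of the example.
W, D, N, R = 35, 8, 20, 2

normal = W ** D                         # plain search at this node
with_test = W ** D + N * W ** (D - R)   # plus N reduced-depth probes

overhead = with_test / normal - 1
print(f"extra work at one fail-high node: {overhead:.1%}")
```

The probes look cheap at any single node, but they are paid at every fail-high node in the tree, which is how the total can compound to roughly a ply of search, as described above.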
>>Maybe I did not make myself clear enough. I think the main difference between my
>>algorithm and the others I have seen so far is that I NEVER touch a move twice,
>>and most of the time I don't even have to generate all moves. What Hsu et al. did
>>was to FIRST find the best move and THEN test whether it is singular. What most
>>people do today is FIRST do a complete search (although at reduced depth) to
>>find a singular move and then search the whole branch again with the singular
>>move searched to a greater depth, right?
>
>Yes, but remember that Hsu's approach is _far_ more accurate.  At PV nodes he
>_really_ identifies singular moves.  The "el-cheapo" solution only identifies
>moves that appear to be singular to a reduced depth search.  But what if things
>only get difficult at the full search depth?  You don't notice the singularity
>and overlook that which you were hoping to find...
>
>Also, speaking for myself and Bruce only: for the "el-cheapo" test we didn't
>search _all_ moves to reduced depth.  We searched only a few, because at an
>"ALL" node singularity makes no sense, and searching every move to reduced depth
>there is just wasting time.  Our approach reduces ALL-node overhead, but at the
>same time it means we can fail to find "the" singular move at a CUT node if we
>don't somehow get it near the top of the move list, within the set of moves we
>are going to test...
>
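[Editor's note: as a rough illustration of the limited-probe idea described above, here is a minimal Python sketch in which only the first few moves are probed at reduced depth, and the first move counts as singular when every probed alternative falls well below it. The function names, margin, and probe count are all hypothetical, not Hyatt's or Moreland's actual code:]

```python
# Minimal sketch of an "el-cheapo" singularity probe; assumes a generic
# negamax-style search(position, depth) callback.  All constants are
# illustrative guesses, not values from any real engine.
SINGULAR_MARGIN = 50   # score gap that counts as "singular" (assumed)
PROBE_MOVES = 4        # only the first few moves are probed (assumed)
REDUCTION = 2          # reduced-depth probes, R = 2 as in the thread

def is_singular(position, moves, depth, search):
    """Probe the top few moves at reduced depth; the first move is
    called singular if every probed alternative scores far below it."""
    if depth <= REDUCTION or len(moves) < 2:
        return False
    best = -search(position.make(moves[0]), depth - 1 - REDUCTION)
    for m in moves[1:PROBE_MOVES]:
        score = -search(position.make(m), depth - 1 - REDUCTION)
        if score > best - SINGULAR_MARGIN:
            return False   # a viable alternative exists: not singular
    return True
```

Note that, exactly as the post warns, this only detects singularity visible at the reduced depth, and misses the singular move entirely if move ordering leaves it outside the first PROBE_MOVES entries.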
>
>> This also means touching this branch twice.
>>What I do, however, is only GO ON searching (with reduced depth!) after a
>>fail high. So it is no RE-SEARCH but a CONTINUATION of the search. This seems
>>costly, but it may not be! Most of the time the next move I search (after the
>>fail high) is already within the singular window (or a fail high again).
>
>Nothing wrong with that; that is exactly how Hsu implemented the FH-singular
>test, and it is the same way I did it in CB, except that we always re-searched
>to D+1 if the singular test passed. I think that is important: otherwise your
>last iteration (arguably the most important one) will note singular moves but
>won't extend them to see what effect they produce.
>
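[Editor's note: a hedged sketch of the exchange above: after the first move fails high, the remaining moves are continued at reduced depth against a singular window, and if none reaches it, the fail-high move is re-searched one ply deeper. Everything here, the names, the margin, and the null-window-style search signature, is assumed for illustration; this is not Hsu's, Hyatt's, or Axel's actual code:]

```python
# Sketch of a fail-high singular continuation with a D+1 re-search.
# search(position, depth, bound) is an assumed negamax-style callback.
R = 2            # depth reduction for the continuation probes
MARGIN = 50      # singular window offset below beta (assumed value)

def fail_high_singular(position, moves, depth, beta, search):
    """Return the score for this node, extending a singular fail-high move."""
    score = -search(position.make(moves[0]), depth - 1, -beta)
    if score < beta:
        return score                      # no fail high: nothing to test
    sing_beta = beta - MARGIN
    for m in moves[1:]:                   # continuation, not a re-search
        s = -search(position.make(m), depth - 1 - R, -sing_beta)
        if s >= sing_beta:
            return score                  # a rival move exists: not singular
    # every alternative stayed below the singular window: re-search the
    # fail-high move one ply deeper (D+1 relative to the normal depth-1)
    return -search(position.make(moves[0]), depth, -beta)
```

The key property Axel describes is visible here: each sibling move is visited once, as a continuation of the same move loop, rather than the branch being searched twice.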
>
>>Plus, I have many
>>conditions under which I don't have to go on searching anyway.
>>It seems that the test for singularity produces nearly no overhead; only the
>>actual extension does. But as you stated, even that may already be too much
>>overhead. I can believe that SE is somehow orthogonal to null move. I will go
>>on testing. I have not done complete testing so far; I only checked the
>>solution times for a set of test positions, but that does not mean anything
>>for the practical game later on. I will keep you up to date on my findings,
>>if you like.
>>
>>Thanks, Axel
>
>
>Certainly, I'm sure everyone is interested.  My observation to date has been
>that SE helps in tactical situations, but when you play a real game and lose a
>ply of search overall in non-tactical positions, you make mistakes of a
>different sort by not seeing positionally deep enough...  I.e., I could never
>get an SE implementation that would beat my non-SE version in heads-up
>competition on equal hardware.  The margin of victory for the non-SE version
>was approximately what I would get by playing the same version on hardware 1/3
>as fast, removing a ply of search that way instead.

I think engines are so strong tactically nowadays that it just doesn't matter.
I'm not worried about finding that X ply trick to beat Shredder, I'm worried
about getting that winning position on the board, which is a hell of a lot
harder.

anthony



