Computer Chess Club Archives




Subject: Re: Repeatability (questions for Omid)

Author: Vincent Diepeveen

Date: 15:59:41 12/20/02

On December 20, 2002 at 09:39:54, Gian-Carlo Pascutto wrote:

>On December 19, 2002 at 08:56:45, Dan Andersson wrote:
>>There is a paper about alpha-beta conspiracy search. The idea is simple. You use
>>fractional extensions. And have two depth variables one for each side. Then you
>>deduct less than a ply based on some criterion. The paper used a sigmoid
>>function based on the mobility for the side to move. And you continue to search
>>deeper as long as one of the depth measures is above zero. I really like this
>>idea. It is easy to implement.
>>MvH Dan Andersson
>It's not so easy to implement correctly IMHO, and it works like shit.
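For reference, the two-depth-counter scheme Dan describes can be sketched in a few lines. Everything below is illustrative: the toy tree, the leaf values, the sigmoid shape and its scale constant are my assumptions, not the paper's actual numbers.

```python
import math

# Toy game tree: interior nodes map to child names, leaves to scores
# (leaf scores are from the perspective of the side to move there).
TREE = {
    'root': ['a', 'b'],
    'a': ['a1', 'a2', 'a3'],
    'b': ['b1'],
}
LEAF = {'a1': 3, 'a2': -1, 'a3': 5, 'b1': 2}

def reduction(mobility, scale=8.0):
    # Deduct less than a full ply: the more mobile the side to move,
    # the smaller the deduction (illustrative sigmoid, not the paper's
    # exact function).  Range here is (0.5, 0.75].
    return 1.0 - 0.5 / (1.0 + math.exp(-mobility / scale))

def negamax(node, d_us, d_them, alpha=-1000, beta=1000):
    children = TREE.get(node)
    # Keep searching as long as EITHER side's depth counter is above zero.
    if children is None or (d_us <= 0 and d_them <= 0):
        return LEAF.get(node, 0)   # stand-in for a static evaluation
    best = -1000
    r = reduction(len(children))   # mobility = number of legal moves
    for child in children:
        # Only the mover's own counter shrinks, by a fractional ply.
        score = -negamax(child, d_them, d_us - r, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    return best
```

With a nominal depth of one ply per side, `negamax('root', 1.0, 1.0)` already searches two plies deep in this toy tree, because each move costs less than a full ply of the mover's budget.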

I implemented it in DIEP and experimented with it extensively. A big
issue is that it doesn't use nullmove and that you always have a
conspiracy number of 1 which 'proves' the mainline.

So what happens is that the mainline keeps flipping.

What really disappointed me is the theoretical problem with CNS.

In fact normal depth-limited alpha-beta goes like this:
  you have a line to a depth d. Then if it fails high, you cut off the
  tree because of that. So to 'disprove' the tree all you need is
  a single cutoff there.
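That single-cutoff behaviour is easy to see in a minimal alpha-beta sketch (the tree and values below are made up for illustration):

```python
INF = 10**9

# Tiny tree: 'y1' already refutes the move to 'y', so the search fails
# high there and 'y2' is never visited.
AB_TREE = {'r': ['x', 'y'], 'x': ['x1', 'x2'], 'y': ['y1', 'y2']}
AB_LEAF = {'x1': 5, 'x2': 9, 'y1': -7, 'y2': 0}

def alphabeta(node, depth, alpha, beta, visited):
    visited.append(node)                     # record the nodes we touch
    children = AB_TREE.get(node)
    if depth == 0 or children is None:
        return AB_LEAF.get(node, 0)
    for child in children:
        score = -alphabeta(child, depth - 1, -beta, -alpha, visited)
        if score >= beta:
            return beta          # fail high: one refutation cuts the rest
        if score > alpha:
            alpha = score
    return alpha
```

After `alphabeta('r', 2, -INF, INF, visited)` the list `visited` contains `'y1'` but not `'y2'`: one refutation was enough to disprove the whole line, which is exactly what CNS does not give you.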

Now with Conspiracy that isn't the case. In fact you need to search a
lot of shit before the algorithm concludes that a branch is not interesting.

Of course I kept everything in memory. That was no problem at all for
a, say, 5-minute search. DIEP is a slow program; it needs to search a
long time before it has filled the entire RAM.

Of course a major problem was already addressed by Ulf Lorenz. However,
Ulf's program searches at a speed even slower than DIEP (like 1 million
nodes a second or so on 190 processors or so, which makes it about
1 mln / 190 = 5263 nodes a second per processor). Even for DIEP,
maintaining that tree in RAM is pretty expensive when doing CNS.

In order to store a really big tree, all I tried to store was of course
the moves played and a single integer referring to the next movelist.

So a minimum number of integers was used to keep the tree. If I were to
expand that with a special hashtable to keep track of transpositions,
then the tree would simply explode in size. Completely *explode*.
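Such a layout could look roughly like this. This is a sketch of the idea only; the exact field layout in DIEP is an assumption on my part:

```python
class CompactTree:
    # Memory-lean tree: per node, only the generated move list and a
    # single integer giving where its contiguously stored children
    # begin.  No hash keys, no scores, no parent pointers.
    def __init__(self):
        self.movelists = []    # movelists[i]: moves generated at node i
        self.first_child = []  # first_child[i]: index of first child, -1 if unexpanded

    def new_node(self, moves):
        self.movelists.append(list(moves))
        self.first_child.append(-1)
        return len(self.movelists) - 1

    def expand(self, node, child_movelists):
        # Children are appended contiguously, so one integer per node
        # suffices: child j corresponds to movelists[node][j].
        assert len(child_movelists) == len(self.movelists[node])
        self.first_child[node] = len(self.movelists)
        return [self.new_node(ml) for ml in child_movelists]

    def child(self, node, j):
        return self.first_child[node] + j
```

A transposition structure on top of this would need a key per node plus collision handling, which is where the size explosion comes from.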

An alternative is a second hashtable which stores, in an 'overwrite'
(always-replace) way, the most important positions together with just a
reference to the position in the memory that holds the tree.
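Such an always-replace table could be sketched like this. It is illustrative only; `OverwriteTable` and its fields are hypothetical names, not anything from DIEP:

```python
class OverwriteTable:
    # Always-replace ('overwrite') table: each slot holds one position
    # key and an integer reference to that position's node in the
    # in-memory tree.  On a collision the newer entry simply wins.
    def __init__(self, size=4096):
        self.keys = [None] * size
        self.tree_ref = [-1] * size

    def store(self, key, node_index):
        slot = key % len(self.keys)
        self.keys[slot] = key            # overwrite unconditionally
        self.tree_ref[slot] = node_index

    def probe(self, key):
        slot = key % len(self.keys)
        return self.tree_ref[slot] if self.keys[slot] == key else None
```

Because old entries are silently evicted, the table stays a fixed size no matter how big the tree grows, at the cost of losing references to less recently stored positions.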

I didn't implement that. Just considering it was more than enough for
me to get completely sick of it. Yet a serious CNS implementation
should try implementing that.

My final conclusions about CNS (I basically implemented CNS1, not CNS2,
which is IMHO an unnecessary form of CNS, because it doesn't have the
advantages of CNS search but combines its disadvantages with the
disadvantages of depth-limited alpha-beta) were about this:
  - you are basically busy traversing a tree in memory each iteration,
    which slows down the program incredibly
  - taking into account the primitive implementation I had chosen and my
    way of fitting nullmove into the CNS algorithm, I could solve some
    tactical positions, but:
    CNS, both CNS1 and CNS2, is nowhere near the tactical strength of
    depth-limited alpha-beta with singular extensions and
    checks in qsearch (which, in combination with aggressive nullmove
    R=3, solves up to 4 ply more tactics than not doing checks in
    qsearch).

So main conclusion 1: if CNS doesn't improve my search tactically, then
it should at least give clear positional advantages over not using CNS.

However, there I concluded that
  - the mainline keeps changing continually
  - very artificial ways of dividing new searches over the entire tree
    still have to be invented in order to not just expand a single line
  - positionally you compare different depths with each other everywhere,
    which IMHO hurts the search

But the most important insight into CNS was this:
  - CNS basically expands the tree based upon conclusions from your own
    evaluation. This is a bad thing, because that is not my definition
    of what search is. Search IMHO is not there just to search
    what you already can evaluate. If I can already evaluate it, then
    why search it?

That doesn't take away that CNS is one of those very interesting
algorithms that deserve further work, but the level of chess programs,
because of commercial interests, is so high nowadays that it is nearly
impossible to compete with them.

That is a shame, because though I cannot verify the work of Ulf Lorenz
at all (he never produced testset outputs of P.ConNerS which I can
compare with my own software; only results from a very simple tactical
testset he posted years ago), I definitely find it a good thing
if people like him work on things like this.

So the main reason why I abandoned the testing of CNS within DIEP is
*not* a lack of potential, but that it is basically searching whatever
your evaluation finds a good idea to research. That evaluation
will shout 'hooray' when it sees a big win, and usually when it shouts
'hooray' it is right. Then you are just busy proving that you were right!

And exactly that kind of evaluation-guided search I find a bad idea
in computer chess and in the world of search in general.

Yet it is very good if there is a comparison between those two worlds.
Right now, with P.ConNerS not even joining IPCCC2003, the situation
is not a good one.


