Computer Chess Club Archives


Subject: Re: Searching 18-20 ply just using nullmove

Author: Vincent Diepeveen

Date: 20:16:56 08/17/00

On August 17, 2000 at 22:23:54, Uri Blass wrote:

>On August 17, 2000 at 21:21:46, Vincent Diepeveen wrote:
>
>>Hello,
>>
>>A few years ago (about 3) I claimed that searching 18-20 ply was possible
>>with huge hashtables, nullmove, a good evaluation function, and
>>several billion nodes.
>
>searching 18-20 plies with recursive null move pruning is possible and there is
>no doubt about it.
>
>You may miss things because of null move.

Are you referring here to the fact that I need a bunch of plies more for a few
very rare positions, where even more than 2 consecutive nullmoves are not enough to
see the truth? Just like my program won't find that huge mate
posted a bunch of messages below?

>>I was considered nuts by half of the RGCC population, because no
>>branching factor was capable of being that good; when you searched
>>deeper, your branching factor would NOT get under 4.0, and that
>>was even considered impossible by a lot of people.
>>
>>Obviously most people, following "I believe what I see in the ads
>>or in the outputs I see", challenged me. Some even went
>>so far as to call me nuts, a liar, a frog and many other terms.
>
>I did not read RGCC, but I believe that people called you a liar because you gave
>the impression that you can see everything in the next 18-20 plies, except some
>lines that you will only look at for 14-16 plies because of null moves, when the
>fact is that you can also miss 10-ply lines because you use recursive
>null move pruning (I remember that this was my impression when I read your posts
>here about 18-20 plies).
>
>I do not say that using recursive null move is wrong, but your 18-20 ply search
>can miss 10-ply lines (if your evaluation is good enough these lines are
>usually not important, but I believe that there are cases when they are
>important).

This is incredibly hard to believe. The number of positions where
3 free moves + qsearch will fail is tiny. If that fails, then there must
be something really wrong in the evaluation! It sure can miss things,
but the alternative is to search at most 12 ply fullwidth after
a full night. Then give me 18-20 plies WITH nullmove, please!

So a zugzwang or two is no problem to detect then. Apart from that, the
rare times we see a zugzwang arise in a game, it is usually already
seen by my double nullmove.
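
For those who wonder what I mean by double nullmove, here is a rough sketch
in C of recursive nullmove pruning with that trick: a nullmove may directly
follow a nullmove, but never 3 in a row, so the side to move must eventually
produce a real move and a zugzwang still shows up. The reduction R = 3 and
all names are just for illustration, this is not my DIEP code.

  /* rough sketch: recursive nullmove pruning with double nullmove allowed */

  #define R 3   /* nullmove depth reduction in plies (illustrative) */

  typedef struct Position Position;        /* engine plumbing, assumed */
  int  qsearch(Position *p, int alpha, int beta);
  int  in_check(const Position *p);
  void make_nullmove(Position *p);
  void unmake_nullmove(Position *p);

  int search(Position *p, int depth, int alpha, int beta, int nulls_in_a_row)
  {
      if (depth <= 0)
          return qsearch(p, alpha, beta);

      /* give the opponent a free move: allowed after 0 or 1 consecutive
         nullmoves, forbidden after 2, so never 3 nullmoves in a row */
      if (nulls_in_a_row < 2 && !in_check(p)) {
          make_nullmove(p);
          int score = -search(p, depth - 1 - R, -beta, -beta + 1,
                              nulls_in_a_row + 1);
          unmake_nullmove(p);
          if (score >= beta)
              return score;                /* fail high: prune this node */
      }

      /* ... normal move loop here, recursing with nulls_in_a_row = 0 ... */
      return alpha;
  }

The price is that a zugzwang line costs some extra plies of depth before the
search sees it, which is exactly the bunch of plies more for a few very rare
positions mentioned above.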

With a material-only program, proof of this isn't hard by the way,
as you can take the opening position, where my stupid experiment played
1.a3 searching 30 plies.

However, the whole discussion 3 years ago,
when I cannot remember any person called Uri Blass posting at that time,
was not about a claim of mine that chess could be solved.

On the contrary!
Where Jaap v/d Herik wrote at the start of the 80s in "computerchess"
the quote: "when software will search 10-11 ply then no human will
be able to ever beat it", I have said the complete opposite:
that after a ply or 12 only evaluation matters!

Nowadays I would already modify that to 10 ply, noting that in difficult
opening positions the understanding of today's chess programs is still
so bad that they need a few plies more to see some consequences there,
so the 12 found by De Groot to be the depth that the
majority of short-term plans are based upon is still valid!

I said this partly based also upon experiences with my draughts program,
which has already for years been able to easily search 25-30 ply after a night, and
the only way in which we (Marcel Monteba and me) could improve the
search was by adding knowledge. This draughts program searches fullwidth,
by the way, as doing nothing usually works out very well there. I don't need to
note that it sees all the tactics of world champions within flashes of
seconds, whereas even an average but smart national player can beat it
pathetically.

My claim was that the branching factor above 10 plies was much better than
I expected it to be, because of the better working of the hashtables, and
a more efficient search by means of a better evaluation, which would
basically re-search the same tree over and over again. Basically I
claimed that with the number of nodes that Deep Blue searched, one
could easily build a much better quality search in software.
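
To make concrete what I mean by branching factor: it is simply how much the
node count grows from one iteration of the iterative deepening to the next.
A small throwaway example in C (the node counts are made up, not DIEP output):

  #include <stdio.h>

  /* effective branching factor estimated as the growth of the node count
     between two consecutive iterative-deepening iterations */
  static double effective_bf(double nodes_prev_iter, double nodes_this_iter)
  {
      return nodes_this_iter / nodes_prev_iter;
  }

  int main(void)
  {
      /* made-up node counts for iterations 11 and 12 of a nullmove searcher */
      double nodes_11 = 4.0e8;
      double nodes_12 = 1.2e9;

      printf("effective branching factor: %.2f\n",
             effective_bf(nodes_11, nodes_12));   /* prints 3.00 */
      return 0;
  }

That ratio staying clearly under 4.0 at depths above 10 is exactly what was
called impossible on RGCC 3 years ago.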

So what you write here above, "searching 18-20 ply is without doubt
possible with nullmove", is exactly what no one dared to say 3 years
ago, and I'm happy you write it here! It proves how opinions have changed,
as 3 years ago NO ONE dared to say that, except me.

Now I don't want to sound like a prophet, but I wanted to raise this
discussion a bit to show how fullwidth search has been superseded by
nullmove-driven search, and how these programs have progressed in search over
the past years and now dominate the scene, whereas 10 years ago
Genius, with a fullwidth search, a few singular extensions, pruning and
a clever selective search, completely dominated the world of computerchess
with tactics, only losing now and then to faster machines. At that time
it was nearly impossible for a supercomputer to lose to a micro,
whereas nowadays a match between any pick from the top against a supercomputer
will surely show the opposite result!

>Uri

Vincent


