Computer Chess Club Archives


Subject: Missing a MODEL for the minimum branching factor using R=3, hash, a-b

Author: Vincent Diepeveen

Date: 20:13:53 08/10/99


Bob, this is the umpteenth time you shout something like:
"this is theoretically not possible, because bla bla"

What theoretical impossibility, and according to what
model?

a) you never tried this yourself
b) there is no model for alpha-beta search with hash tables and
     nullmove
c) even for Crafty a branching factor of 3 is awfully bad at big
    depths, and Crafty doesn't come near the stupidity of this prog,
    where everything gives a cutoff without the need for active lines.
d) the 'stupid' prog doesn't have extensions

So why would it be impossible?
Just take care that your generator selects silly moves first.
Advancing moves cause tactics which have to be solved by
means of search; that sucks. We prefer Ra1-b1-a1-b1 above anything else!

I remember a discussion some years ago about the same thing,
when I said that searching 12 ply was not much for
a program getting a billion nodes a second.

First we got 'suspicions' in the newsgroup that it was getting another
4 ply deeper, which turned out not to be true.

Then it was argued that even with nullmove it was impossible
to get to 18 to 20 ply deep with the same number of nodes this
fast prog got in 180 seconds (180 billion nodes, arguably).

Well, even Crafty nowadays gets near those depths after searching some
billion nodes (it may use up to 180 billion), or even
reaches them completely.

DIEP nowadays already gets 18-20 ply with 1 or 2 billion nodes;
I recently posted a search here in the CCC
getting to that depth,
but the only thing that was said at that time was: SHOW it.

I now would like to say: DISPROVE it using a sound model instead
of pulling a number for the smallest possible
branching factor out of a random function.

After some years I can now argue
WHY my claim from back then turned out to be
true: all denials that
18-20 ply could be reached with some billion nodes were just 'beliefs' of a few
dudes who had NO MODEL behind their beliefs,
whereas my claims are usually based upon experiments I've done at home.

This 17 ply at once is not a joke. It was done some years ago after
a few emails between me and Peter Gillgasch, who said that
searching deep was peanuts if just about every move made gives a cutoff.
So I first tried a search that always returned zero, then a search
that returned material, and to my big surprise at that time it got directly
to unimaginable depths.
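
To be clear how little 'evaluation' that experiment involved, the two versions
amount to nothing more than something like this (a sketch with placeholder
names, not the original code):

/* first try: every leaf scores zero */
int eval_zero(void)     { return 0; }

/* second try: incrementally counted material only;
   material[] and stm (side to move) are placeholder names */
int eval_material(void) { return material[stm] - material[stm ^ 1]; }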

THERE IS NO MODEL.

How can we 'conclude' that it is theoretically impossible,
with a branching factor of 3, for the most stupid prog the
world has ever seen to get to that depth?

What roulette table did we get this number 3 from anyway?

Even Crafty's branching factor at big depths is not that bad.
It's more like 2 for *smart* programs, compared to
this silly thing that gets a cutoff everywhere. And perhaps 2 is
still too much.

Note that 2^30 is about a billion nodes, which for my dumb prog is just
a few minutes on today's hardware.
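
To put numbers on this (a back-of-the-envelope calculation, not a figure from
anyone's program): with an effective branching factor $b$ a $d$-ply tree holds
roughly $b^d$ nodes, so

$$3^{30} \approx 2 \cdot 10^{14}, \qquad 2^{30} \approx 1.07 \cdot 10^{9}.$$

At a million nodes per second the first is several years of searching and the
second is under twenty minutes, which is why the assumed branching factor
decides every depth claim.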

So the approach to take:

  a) make this dumb prog yourself, sort the way
      I sorted, and search from the opening position.

If you don't want to do this, and don't see doing it yourself as convincing
evidence, then

  b) produce a theoretical model that incorporates nullmove,
     alpha-beta search and hash tables, in order to be able to
     calculate possible branching factors (see the sketch below).
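
Below is a minimal sketch of the kind of model b) asks for: alpha-beta with a
transposition table and recursive nullmove (R=3), run on a small synthetic
tree so that the nodes per iteration, and hence the effective branching factor,
can simply be measured. Everything concrete in it is my own assumption for
illustration (the XOR-key toy tree, which is deliberately transposition-heavy,
the constants BRANCH=8 and R=3, the always-replace table); it is NOT code from
DIEP, Crafty, or the old P100 experiment.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define BRANCH   8          /* "moves" per node in the toy tree          */
#define R        3          /* nullmove depth reduction                  */
#define TT_SIZE  (1u << 20) /* transposition-table entries               */
#define INF      1000000

typedef struct { uint64_t key; int depth, score, flag; } TTEntry;
enum { TT_EXACT, TT_LOWER, TT_UPPER };

static TTEntry   tt[TT_SIZE];
static uint64_t  move_key[BRANCH];
static long long nodes;

/* cheap deterministic mixer used for move keys and for the toy eval */
static uint64_t mix(uint64_t x) {
    x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;
    x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ULL;
    return x ^ (x >> 33);
}

/* static eval: a bounded pseudo-random score derived from the position key,
   returned from the side-to-move's point of view */
static int evaluate(uint64_t key, int ply) {
    int base = (int)(mix(key) % 201) - 100;
    return (ply & 1) ? -base : base;
}

static int search(uint64_t key, int depth, int alpha, int beta, int ply) {
    nodes++;
    if (depth <= 0) return evaluate(key, ply);

    /* probe the always-replace hash table (key includes the side to move) */
    uint64_t hkey = key ^ ((ply & 1) ? 0x9e3779b97f4a7c15ULL : 0);
    TTEntry *e = &tt[hkey & (TT_SIZE - 1)];
    if (e->key == hkey && e->depth >= depth) {
        if (e->flag == TT_EXACT)                      return e->score;
        if (e->flag == TT_LOWER && e->score >= beta)  return e->score;
        if (e->flag == TT_UPPER && e->score <= alpha) return e->score;
    }

    /* recursive nullmove: give the opponent a free move, reduced by R
       (real engines usually forbid two nulls in a row; omitted here)   */
    if (depth > R) {
        int v = -search(key, depth - 1 - R, -beta, -beta + 1, ply + 1);
        if (v >= beta) return v;
    }

    int best = -INF, flag = TT_UPPER;
    for (int m = 0; m < BRANCH; m++) {
        int v = -search(key ^ move_key[m], depth - 1, -beta, -alpha, ply + 1);
        if (v > best)  best = v;
        if (v > alpha) { alpha = v; flag = TT_EXACT; }
        if (alpha >= beta) { flag = TT_LOWER; break; }
    }

    e->key = hkey; e->depth = depth; e->score = best; e->flag = flag;
    return best;
}

int main(void) {
    for (int m = 0; m < BRANCH; m++) move_key[m] = mix(0x1234 + m);

    long long prev = 0;
    for (int d = 1; d <= 20; d++) {
        nodes = 0;
        memset(tt, 0, sizeof tt);  /* fresh table so iterations are comparable */
        int score = search(0, d, -INF, INF, 0);
        printf("depth %2d  score %4d  nodes %10lld  ebf %.2f\n",
               d, score, nodes, prev ? (double)nodes / (double)prev : 0.0);
        prev = nodes;
    }
    return 0;
}

The point is not the particular numbers this prints, but that the node ratio
between iteration n and n+1 is something you can measure for a given set of
search tricks instead of asserting a 'theoretical minimum' for it.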

We DO have a model for a program just using alpha-beta,
but that model is so damned outdated that I wonder why
no one has noticed that so far.

If I search full-width with DIEP then it uses an awful lot of nodes.
If I limit the number of extensions drastically (like the DIEP that will
play at ICC tomorrow), and don't look at how many nodes I need to
get to n ply, but at what the branching factor between n and n+1 is,
then it's way better than should theoretically be possible.

This obviously is not because of a bug but because of the hash tables.

So even in this SIMPLE case the square root of the number of
possibilities does not hold, as it doesn't take transpositions into account.
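
For reference, the outdated model in question (alpha-beta on a uniform tree
with branching factor $b$ to depth $d$, perfect move ordering, no hash table,
no nullmove) is the Knuth/Moore minimal tree with about

$$b^{\lceil d/2 \rceil} + b^{\lfloor d/2 \rfloor} - 1$$

leaf nodes, i.e. an effective branching factor of roughly $\sqrt{b}$.
Transpositions found in the hash table and cutoffs from the nullmove both fall
outside that model, which is exactly the complaint here.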

Now secondly we add the powerful nullmove. Then suddenly even an
approximation of how many nodes we would need to search is
completely gone.

I did a simple test (some years ago) and got directly within seconds
to 17 ply tactically with a simple program on a P100 (no check extensions
in that prog though), knowing even that its qsearch sucks big time and
that it has no hash tables; but thanks to a generation trick it is just lucky
enough to pick the slowest advancing move in the opening position, of which
it can make about 8 (which with R=3 is about 40 ply of cutoffs
before it can no longer prevent getting into tactics).
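
A rough way to see why R=3 buys so much nominal depth (my own
back-of-the-envelope, not a claim from the original experiment): a node with
remaining depth $d$ that gets its cutoff from the nullmove only pays for a
reduced search of depth

$$d' = d - 1 - R = d - 4 \quad (R = 3),$$

so along a line where the nullmove keeps failing high the remaining depth drops
by 4 at a time instead of 1, and a nominal 17-ply (or even 40-ply) search costs
only a tiny fraction of the corresponding full-width tree.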

Now if I can get to 17 ply with just a few nodes on hardware that for my
progs is around 8 times slower than my current PII450, what depth
could it get to if that prog were optimized?

Even with a bad branching factor of 2 that program gets to
20 ply JUST ON MY PII450. Give it another few minutes and it's
already at 22 ply. Nothing has been tuned yet; I just ran an old prog on new hardware.

End of proof already?

Well, for me it was at that time. I was convinced then that nullmove
was the way to go, and didn't care about the model. Now
I do demand either a model that disproves this, or an apology for the claim
that my math is sucking, as without such a model there is not a single proof
of what the minimum branching factor is.

On August 09, 1999 at 17:15:25, Robert Hyatt wrote:

>On August 09, 1999 at 15:59:29, Vincent Diepeveen wrote:
>
>>On August 09, 1999 at 10:13:29, Robert Hyatt wrote:
>>
>>>On August 09, 1999 at 06:46:21, Vincent Diepeveen wrote:
>>>
>>>>On August 09, 1999 at 05:55:49, Frank Schneider wrote:
>>>>
>>>>>On August 07, 1999 at 08:17:49, Vincent Diepeveen wrote:
>>>>>
>>>>>>On August 05, 1999 at 22:43:29, Robert Hyatt wrote:
>>>>>>
>>>>>>>On August 05, 1999 at 17:13:28, Tom King wrote:
>>>>>>>
>>>>>>>>On August 04, 1999 at 20:00:49, Robert Hyatt wrote:
>>>>>>>>
>>>>>>>>[snip]
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>I find the following:
>>>>>>>>>
>>>>>>>>>using SEE to order captures in the q-search, without eliminating any, will
>>>>>>>>>shrink the tree about 10% over using something simple like MVV/LVA.  But the
>>>>>>>>>SEE code will likely cost you more than 10% (unless you are a bitmap program
>>>>>>>>>where this can be done fairly efficiently).
>>>>>>>>>
>>>>>>>>>using SEE to eliminate losing captures can speed you up another 50%, or a factor
>>>>>>>>>of two, which is very significant.  And no matter how slow your SEE code is,
>>>>>>>>>that become a 'winner' of an idea.
>>>>>>>>
>>>>>>>>I'm seeing a big speedup - it's just the (possible) loss of accuracy which
>>>>>>>>concerns me. Having said that, my Q search is pretty "quick and nasty" anyway,
>>>>>>>>although I do still do things like probe the hash tables.
>>>>>>>
>>>>>>>
>>>>>>>This is only my opinion, but I spend my time working on the full-width part of
>>>>>>>the search (extensions, etc.).  The q-search already has _so many_ errors in it
>>>>>>>(it is highly selective since throwing out everything but captures is a drastic
>>>>>>>step, of course) that I don't trust it at all.  I just want it to handle simple
>>>>>>>hung pieces and not much else...  I'll trust my extensions to find the deep
>>>>>>>tactical tricks since then I won't be overlooking pins, forks, skewers, etc.
>>>>>>>
>>>>>>>When you think about it like that, shrink the q-search and use those nodes in
>>>>>>>places where they are more useful.
>>>>>>>
>>>>>>>Just an opinion, of course...
>>>>>>
>>>>>>Right, my opinion is different. A good qsearch will give more accurate
>>>>>>scores for leafs, so in a set of leafs X, for all leafs x in X we will have
>>>>>>a more reliable score.
>>>>>>
>>>>>>So whatever plydepth we get, we will get a positional more trustworthy score,
>>>>>>which with backtracking will result in a better and more reliable score.
>>>>>That's right, but without SEE it is very difficult to get as deep. And a depth
>>>>>n+1 score with SEE is probably better than a depth n score without SEE.
>>>>>That's the tradeoff, I think.
>>>>>
>>>>>Frank
>>>>
>>>>My god you gotta be joking. Of course you can search easily very deep
>>>>with a SEE.
>>>>
>>>>If you make your program another few clocks more stupid then you can
>>>>search with a million nodes a second at  PII450.
>>>>
>>>>As it's so damned stupid you can safely forward prune last few plies too.
>>>>
>>>>Then you can also remove all extensions from it including
>>>>most check extensions.
>>>>
>>>>Then you invent you can search even deeper by completely stripping
>>>>all knowledge including piece square tables. You just search
>>>>material based.
>>>>
>>>>Should get like 22 ply at tournament level easily.
>>>>
>>>
>>>
>>>Not a prayer of getting that deep.  Even if your eval is 90% of the
>>>total search, you would only go 10X faster.  That is _not_ 22 plies
>>>deeper.  Even without search extensions...
>>
>>What eval?
>>
>>I just incremental counted piecevalues.
>>
>>Actually 22 ply would be bad.
>>
>>30 ply should be no problem either.
>>
>
>Your math is sucking.  even with a branching factor of 3, tell me how big
>the tree is with alpha/beta going to 30 plies.  You are _never_ going to get
>to 30 plies, full-width, I don't care if you can do your eval, make and unmake
>in zero cycles.  Search alone will prevent you from getting that deep.
>
>A 30 ply search is absolutely staggering in total nodes searched...
>
>
>>I got at my P100 with such a proggie already without hashtables 17 ply
>>from openingsposition.
>>
>>You simply nullmove everywhere.
>
>this is an _exponential_ problem.  17 plies is _not_ 30 plies.
>
>
>
>
>>
>>But yes, it played 1.a3.
>>
>>It only had killermoves to prevent falling for tactical jokes,
>>after that the trick was it first generated pawnmoves, starting
>>with a-pawn to a3. etcetera. I forgot what i did with captures,
>>i bet i took a few, but not too many
>>as i hardly sorted the moves, cuz it took too much systemtime.
>>
>>Prog ran at a P100 at 250k nodes a second. When i added these
>>'sophisticated' ways of sorting, killermoves, and getting captures
>>the speed dropped to only 110k nodes a second, same speed also for
>>the one WITH piece square table, but that one just searched 9 ply
>>or so.
>>
>>What i did in qsearch for this dumb&stupid proggie were 2 captures
>>to any piece then only recaptures and captures to higher or equal
>>pieces.
>>
>>So it didn't use SEE yet, which would give at least another few plies,
>>and it didn't use forward pruning yet.
>>
>>After this experiment i realized how much tricks are found by just
>>the positional part of my prog, and how much qsearch (rebel qsearch
>>as example?) and extensions near leafs (genius?) pick up.
>>
>>It took a long time to see for example that it can't take the pawn with
>>Nxe5?? after 1.e4,e5 2.Nf3,Nc6 3.Bc4,Nd4
>>
>>The reason for that was of course because I didn't extend checks, as that
>>was way too expensive initially. Just checking whether the piece
>>that moves gives a check appeared to be quite cheap later.
>>
>>>>Don't ask me how poor it plays though.
>>>>
>>>>SEE is a virus which worked in the past a little, but will slowly
>>>>die next few years.
>>>>
>>>
>>>
>>>:)  This virus can be "deadly".
>>>
>>>
>>>
>>>
>>>>>>
>>>>>>Greetings,
>>>>>>Vincent


