Author: KarinsDad
Date: 11:13:02 08/12/99
Vincent,

You appear to be arguing from unstable ground. Even if your claim that you could get to a branching factor of 2 through transpositions, null move, and other tricks were true (which in itself sounds extremely suspicious, given a whole lot of work by others that seems to indicate otherwise), you would still, at best, have to search 2^30, or about 1.07 billion, positions. If your "dumb" program did the 4 million positions per second that you claim, it would take 268 seconds to alpha-beta search an entire graph to 30 ply. This assumes you could average EXACTLY a branching factor of 2, to say nothing of the enormous memory needed for a transposition table large enough to hold the billion unique positions that get you down to that factor of 2 (heavy transposition being one of the main reasons you claim you could get there).

What if, however, the best you could do is an average branching factor of 2.4 instead of 2? Then, instead of 1.07 billion positions, it would require almost 254.89 billion. That would take your theoretical program roughly 63,720 seconds, or 17.7 hours, per move to reach the same full-width 30-ply graph. (A small program checking these figures appears after the quoted post below.)

I think you are totally ignoring the realities of the world and fantasizing about what COULD happen. What if the best branching factor you could get was 3.2? Or 4.5? Or 5.2? You are making a lot of grandiose claims without being able to back them up. Simple mathematics, Vincent.

If you can get the branching factor of a simple "dumb" program very close to 2, people may take you seriously. Otherwise, you appear to be blowing smoke. Don't ask other people to write the program that does this. You are making the claim. You write the program, distribute it, and prove the rest of us wrong. In the meantime, I think a whole bunch of people take you no more seriously on this issue today than they did years ago. In the words of the immortal bard: "Put up or shut up." Nobody else has to disprove your claims; you have to prove them.

KarinsDad :)

PS. As for your question about a branching factor of 3 below: it would require almost 206 trillion positions, or about 1.6 years per move (and a whole boatload of memory to make your transposition table worthwhile), using your 4 MNPS engine.

On August 10, 1999 at 23:13:53, Vincent Diepeveen wrote:

>Bob, this is the umpteenth time you shout something like:
>"this is theoretically impossible, because blah blah"
>
>What theoretical impossibility, and according to what
>model?
>
>a) you never tried this yourself
>b) there is no model for alpha-beta search with hashtables and
>   null move
>c) even for Crafty a branching factor of 3 is awfully bad at big
>   depths, and Crafty doesn't come near the stupidity of this prog,
>   where everything gives a cutoff without needing active lines.
>d) 'stupid' doesn't have extensions
>
>So why would it be impossible?
>Just make sure your generator selects silly moves first.
>Advancing moves cause tactics which have to be solved by
>means of search. That sucks. We prefer Ra1-b1-a1-b1 above anything else!
>
>I remember a discussion about the same thing some years ago,
>when I said that searching 12 ply was not much for
>a program getting a billion nodes a second.
>
>First we got 'suspicions' in the newsgroup that it was getting another
>4 ply deeper, which turned out not to be true.
>
>Then it was argued that even with null move it was impossible
>to get 18 to 20 ply deep with the same number of nodes this
>fast prog got in 180 seconds (180 billion nodes, arguably).
>
>Well, even Crafty nowadays gets near those depths after searching some
>billion nodes (it may use up to 180 billion), or even gets there completely.
>
>Diep already gets 18-20 ply with 1 or 2 billion nodes nowadays;
>I recently posted a search here in the CCC
>getting to that depth,
>but the only thing that was said at the time was: SHOW it.
>
>I now like to say: DISPROVE it using a sound model, instead
>of taking a number for the smallest possible
>branching factor from the random function.
>
>After some years I can argue WHY my claim back then turned out to be
>true: all the denials about getting to
>18-20 ply with some billion nodes were just 'beliefs' of a few
>dudes who had NO MODEL behind their beliefs,
>whereas my claims are usually based on experiments I've done at home.
>
>This 17 ply at once is not a joke. It was done some years ago, after
>a few emails between me and Peter Gillgasch, who said that
>searching deep was peanuts if just about every move gives a cutoff.
>So I tried first with a search that always returned zero, then a search
>that returned material, and to my big surprise at the time it got directly
>to unimaginable depths.
>
>THERE IS NO MODEL.
>
>How can we 'conclude' that it's theoretically impossible,
>even with a branching factor of 3,
>for the most stupid prog the world has ever seen to get to that depth?
>
>What roulette table do we get this number 3 from, anyway?
>
>Even Crafty's branching factor at big depths is not that bad.
>It's more like 2 for *smart* programs, compared to
>this silly thing that gets a cutoff everywhere. And perhaps 2 is
>also too much.
>
>Note that 2^30 = 1 billion nodes, which for my dumb prog is just
>a few minutes on today's hardware.
>
>So the approach to take:
>
> a) make this dumb prog yourself, sort the way
>    I sorted, and search from the opening position.
>
>If you don't want to do this, or don't find doing it yourself convincing
>as evidence, then
>
> b) produce a theoretical model that incorporates null move,
>    alpha-beta search and hashtables, in order to be able to
>    calculate with possible branching factors.
>
>We DO have a model for a program using just alpha-beta,
>but that model is so damned outdated that I wonder why
>no one has noticed that so far.
>
>If I search full-width with DIEP, then it uses an awful lot of nodes.
>If I limit the number of extensions drastically (like the Diep that will
>play at ICC tomorrow), and don't look at how many nodes I need to
>get to n ply, but at what the branching factor between n and n+1 is,
>then it's way better than should theoretically be possible.
>
>This is obviously not because of a bug but because of hashtables.
>
>So in this SIMPLE case the square root of the number of
>possibilities does not hold, as it doesn't take transpositions into account.
>
>Now, secondly, we add the powerful null move. Then suddenly
>even an approximation of how many nodes we should search
>is completely gone.
>
>I did a simple test (some years ago) and got, within seconds,
>to 17 ply tactically with a simple program on a P100 (no check extensions
>in that prog, though), knowing even that its qsearch sucks big time
>and that it has no hashtables; but thanks to a generation trick it is just
>lucky enough to pick the slowest advancing move in the opening position,
>of which it can make about 8 (which with R=3 is about 40 ply of cutoffs
>before it can no longer prevent getting into tactics).
>
>Now, if I can get to 17 ply with just a few nodes on hardware that is,
>for my progs, around 8 times slower than my current PII-450, what depth
>could it get to if that prog were optimized?
>
>Even with a bad branching factor of 2, that program gets to
>20 ply JUST ON MY PII-450. Give it another few minutes and it's
>already at 22 ply. Nothing done yet; I just ran an old prog on new hardware.
>
>End of proof already?
>
>Well, for me it was, at that time. I was convinced then that null move
>was the way to go, and didn't care about the model back then. Now
>I do demand a model that disproves it, or an apology that my math sucks,
>because without such a model there is not a single proof of what the
>minimum branching factor is.
>
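For reference, the arithmetic in the top post is easy to check. Here is a minimal sketch in C (the depth of 30 ply and the 4 million positions per second are taken from the post; the particular list of branching factors to try is just illustrative):

    #include <stdio.h>
    #include <math.h>

    /* Positions in a full-width search to 30 ply at effective branching
     * factor b, and the wall-clock time at 4 million positions/second
     * (the speed claimed for the "dumb" program). Link with -lm. */
    int main(void) {
        const double depth = 30.0;
        const double nps = 4e6;
        const double factors[] = { 2.0, 2.4, 3.0, 3.2, 4.5, 5.2 };
        const int n = (int)(sizeof factors / sizeof factors[0]);

        for (int i = 0; i < n; i++) {
            double nodes = pow(factors[i], depth);   /* b^30 positions */
            double secs  = nodes / nps;
            printf("b = %.1f: %.3e positions, %.0f s (%.1f h, %.2f years)\n",
                   factors[i], nodes, secs, secs / 3600.0,
                   secs / (365.25 * 24.0 * 3600.0));
        }
        return 0;
    }

At b = 2.0 this prints the 268 seconds mentioned above, at b = 2.4 the 17.7 hours, and at b = 3.0 about 1.6 years per move.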
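Vincent's point (b) asks for a model that incorporates null move, alpha-beta search, and hashtables. No such complete model is offered in this thread, but a toy counting model at least shows why an "everything gives a cutoff" program could approach a branching factor of 2. The sketch below assumes a uniform tree of width 35 (a commonly quoted average move count for chess, not a number from the posts), perfect move ordering, and a null move that succeeds at every expected cut node; only R=3 comes from the post, and hashtables and quiescence search are left out entirely:

    #include <stdio.h>
    #include <math.h>

    /* Toy counting model, NOT a real engine: node counts for a uniform
     * tree of width W under perfectly ordered alpha-beta, with and
     * without a null move (reduction R) that is assumed to succeed at
     * EVERY expected cut node -- the "everything gives a cutoff"
     * situation described above. Hashtables and qsearch are omitted. */

    #define W 35.0   /* assumed uniform number of moves per position */
    #define R 3      /* null-move depth reduction, R=3 as in the post */

    static double all_nodes(int d, int use_null);

    /* Expected cut node: with null move, one reduced-depth search
     * refutes the position; without it, the first real move must be
     * searched to full depth. */
    static double cut_nodes(int d, int use_null) {
        if (d <= 0) return 1.0;
        if (use_null) return 1.0 + all_nodes(d - 1 - R, use_null);
        return 1.0 + all_nodes(d - 1, use_null);
    }

    /* All node: every one of the W moves is searched, each leading to
     * an expected cut node. */
    static double all_nodes(int d, int use_null) {
        if (d <= 0) return 1.0;
        return 1.0 + W * cut_nodes(d - 1, use_null);
    }

    int main(void) {
        for (int use_null = 0; use_null <= 1; use_null++) {
            printf("%s null move:\n", use_null ? "With" : "Without");
            double prev = 0.0;
            for (int d = 2; d <= 20; d += 2) {
                double n = all_nodes(d, use_null);
                if (prev > 0.0)  /* per-ply growth over the last 2 ply */
                    printf("  depth %2d: %.3e nodes, growth/ply ~ %.2f\n",
                           d, n, sqrt(n / prev));
                else
                    printf("  depth %2d: %.3e nodes\n", d, n);
                prev = n;
            }
        }
        return 0;
    }

Without the null move, growth per ply converges to sqrt(35), about 5.9: the classic minimal-tree result, i.e. the "square root of the number of possibilities" the quote refers to. With an always-succeeding null move at R=3 it drops to roughly 35^(1/5), about 2.0 (some 3 million nodes for 20 ply in this model). That makes a branching factor near 2 at least plausible for a program where every move cuts off; how far real positions deviate from these assumptions is exactly what is in dispute above.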