Computer Chess Club Archives



Subject: Re: Could someone please analyse this to 24 ply? [diagram]

Author: Robert Hyatt

Date: 08:28:54 10/02/03


On October 01, 2003 at 20:29:35, Ricardo Gibert wrote:

>On October 01, 2003 at 13:34:53, Robert Hyatt wrote:
>
>>On October 01, 2003 at 12:07:40, Vincent Diepeveen wrote:
>>
>>>On October 01, 2003 at 11:57:01, Robert Hyatt wrote:
>>>
>>>Remember 1997, when you said it would be impossible to search 17-19 ply
>>>with nullmove, a good working hashtable, and a few tens of billions of nodes.
>>
>>From the opening position.  Yes.  Using 1997 hardware...
>>
>>
>>>
>>>Your argument back then was that the minimum branching factor was the
>>>square root of the average number of moves. I have measured that average at 40
>>>when searching 21 ply from the opening position.
>>>
>>>So sqrt(40) = 6.32
>>>
>>>That's what you said back in 1997-1998 in RGCC.
>>
>>Not with null-move. I didn't say that.
>>
>>The evidence is too easy to find.  It is closer to 3.0....
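
To put the two figures being argued about side by side, here is a rough,
purely illustrative sketch in Python; the 6.32 and 3.0 values come from the
posts above and are not measurements of any particular program:

import math

# Effective branching factors quoted above: sqrt(40) for plain alpha-beta,
# roughly 3.0 claimed for a null-move search (illustrative numbers only).
b_plain = math.sqrt(40)   # ~6.32
b_null = 3.0

for depth in (12, 18, 24):
    plain_nodes = b_plain ** depth
    null_nodes = b_null ** depth
    print(f"depth {depth:2d}: {plain_nodes:.2e} vs {null_nodes:.2e} nodes "
          f"(~{plain_nodes / null_nodes:.0f}x more)")

At 24 ply the gap is tens of millions of times more nodes, which is why the
assumed branching factor dominates any argument about reachable depth.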
>>
>>
>>>
>>>You used 35 btw for the average number of moves, but checks get extended, so
>>>it's in reality 40.
>>>
>>>Now you say 'nonsense' again in 2003 against the statement that
>>>the real problem of a search on a PC, where the PC has a loading
>>>factor EXACTLY 500 times bigger than on a supercomputer, doesn't
>>>matter for the branching factor at all.
>>
>>What I say is nonsense is your statement that "the deeper I go, the more
>>efficient the parallel search gets."  You don't mention any limit to that,
>>which means that efficiency just continues to climb.  If so, when you go
>>deep enough you can get there in zero time, as efficiency will be infinite.
>>
>>
>>>
>>>I do not know what science you are performing, but it cannot have anything to do
>>>with computer chess and search algorithms, because you can't even do normal math
>>>there.
>>
>>I think my math holds up.  If efficiency continues to climb, it is unbounded.
>>If it is unbounded, then you will reach a point where you search more than
>>N times faster with N processors.  That is pure garbage.  It always has been.
>>It always will be.
>>
>>All you have to do is stop and think about what you are writing...
>
>
>Here is an example of a formula that always increases as x increases and yet
>remains bounded: y = 1 - 1/(2**x)
>

That isn't what we are talking about here.  That's a function with a
solvable limit.  Vincent has already posted his >N speedup claims for N
processors.  If the speedup goes beyond N, I don't see how you are
going to calculate any limit for such a function...

Simply saying "deeper goes faster" is inaccurate.  "Deeper goes faster, but
the curve is bounded" is much more accurate.
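
A small numeric sketch of that distinction, illustrative only; the
500-processor count and the factor-3-per-ply figure are taken from the claims
discussed in this thread:

N = 500  # processor count discussed in the thread

def bounded(x):
    # Gibert's example: strictly increasing in x, yet never exceeds 1.
    return 1 - 1 / (2 ** x)

def extrapolated_speedup(extra_plies, per_ply=3.0):
    # Naive extrapolation of "every extra ply multiplies the speedup by 3".
    return per_ply ** extra_plies

for k in range(1, 9):
    s = extrapolated_speedup(k)
    flag = "  <-- exceeds the processor count" if s > N else ""
    print(f"{k} extra plies: bounded = {bounded(k):.4f}, speedup = {s:7.0f}{flag}")

The bounded curve keeps rising but never reaches 1; the extrapolated speedup
passes 500 at six extra plies (3^6 = 729), which is exactly where "more than
N times faster with N processors" breaks down.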


>
>>
>>
>>>
>>>Vincent
>>>
>>>>On October 01, 2003 at 10:10:26, Vincent Diepeveen wrote:
>>>>
>>>>>On October 01, 2003 at 10:03:26, Vincent Diepeveen wrote:
>>>>>
>>>>>>On October 01, 2003 at 07:45:48, Joachim Rang wrote:
>>>>>>
>>>>>>>On October 01, 2003 at 07:31:53, Vincent Diepeveen wrote:
>>>>>>>
>>>>>>>>On October 01, 2003 at 07:24:24, Joachim Rang wrote:
>>>>>>>>
>>>>>>>>>On September 30, 2003 at 19:33:30, Vincent Diepeveen wrote:
>>>>>>>>>
>>>>>>>>>>On September 30, 2003 at 19:30:30, Dann Corbit wrote:
>>>>>>>>>>
>>>>>>>>>>>On September 30, 2003 at 19:27:38, Peter Collins wrote:
>>>>>>>>>>>
>>>>>>>>>>>>On September 30, 2003 at 19:22:26, Dann Corbit wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>>On September 30, 2003 at 19:14:02, Peter Collins wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>>My apologies for the format (from memory) of the position I'd like to analyse,
>>>>>>>>>>>>>>but I am at a terminal outside of where I can access the gamescore:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>Black to make his 29th move:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>3q2k1
>>>>>>>>>>>>>>5p1p
>>>>>>>>>>>>>>1n2nbpB
>>>>>>>>>>>>>>1P1p4
>>>>>>>>>>>>>>1Qp3P1
>>>>>>>>>>>>>>7P
>>>>>>>>>>>>>>1P3PK1
>>>>>>>>>>>>>>1B2R3
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>Sorry if this is the wrong forum, I'm new here.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>A friend of mine played black against Jeremy Silman... he only looked at about a
>>>>>>>>>>>>>>trillion nodes on slower machines...
>>>>>>>>>>>>>
>>>>>>>>>>>>>[D]3q2k1/5p1p/1n2nbpB/1P1p4/1Qp3P1/7P/1P3PK1/1B2R3 b - -
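
For anyone who wants to rerun the requested analysis, a minimal sketch using
the python-chess library and any UCI engine follows; the engine path is a
placeholder, the move counters in the FEN are filled in from the "Black to
make his 29th move" description, and the depth limit simply mirrors the
24-ply request:

import chess
import chess.engine

# Position from the diagram above; "0 29" counters added to complete the FEN.
FEN = "3q2k1/5p1p/1n2nbpB/1P1p4/1Qp3P1/7P/1P3PK1/1B2R3 b - - 0 29"

board = chess.Board(FEN)
# Any UCI engine works here; the path below is just an example.
engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")
info = engine.analyse(board, chess.engine.Limit(depth=24))
print("score:", info["score"])
print("pv   :", board.variation_san(info.get("pv", [])))
engine.quit()

Whether a depth of 24 is reachable in reasonable time is exactly what the
rest of the thread argues about.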
>>>>>>>>>>>>
>>>>>>>>>>>>Thanks Dan, indeed, this is the correct position.
>>>>>>>>>>>>
>>>>>>>>>>>>I think one of the best variations so far goes...
>>>>>>>>>>>>
>>>>>>>>>>>>29...d4 30.Be4 d3 31.Qd2 g5 32.h4 gxh4 and from here I am using a 2800Barton,
>>>>>>>>>>>>1.5MB Ram to analyse it.
>>>>>>>>>>>
>>>>>>>>>>>1.5 GB, I suppose.
>>>>>>>>>>>
>>>>>>>>>>>Do you have a preference for what program to analyze with?
>>>>>>>>>>>
>>>>>>>>>>>Many programs will never make it to 24 ply [*]
>>>>>>>>>>>
>>>>>>>>>>>[*] By 'never' I mean not even in several years of continuous time.
>>>>>>>>>>
>>>>>>>>>>at a 486 sure :)
>>>>>>>>>>
>>>>>>>>>>But how about 500 processors?
>>>>>>>>>>
>>>>>>>>>>2 hours or so?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>Diep will never make it to ply 24 on your famous 500 CPU-Box.
>>>>>>>>>
>>>>>>>>>regards Joachim
>>>>>>>>
>>>>>>>>Junior gets 21 ply on a dual; Shredder, on a dual P4 at ICT3, searched to
>>>>>>>>something like 19 ply in a more complex position than this, against Diep.
>>>>>>>>
>>>>>>>>I'm not really seeing the problem when I use a similar definition of a ply
>>>>>>>>to theirs.
>>>>>>>>
>>>>>>>>Best regards,
>>>>>>>>Vincent
>>>>>>>
>>>>>>>
>>>>>>>Didn't you write that you no longer do stupid pruning? Without extensive pruning
>>>>>>>no program will get 24 plies, even on a 500-CPU machine.
>>>>>>>
>>>>>>>I don't know your program, but do you actually believe that Diep would reach a
>>>>>>>depth of 24 in two or three hours?
>>>>>>>
>>>>>>>regards Joachim
>>>>>>
>>>>>>I'm using R=3 nullmove. There is no limit on the number of plies you can search
>>>>>>with nullmove. If I limit the extensions, sure, 24 ply is not a major problem
>>>>>>when all the search lines end in, for example, endgame positions with few possibilities.
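
For readers unfamiliar with the term, "R=3 nullmove" refers to null-move
pruning with a three-ply reduction. Below is a bare-bones alpha-beta sketch
of the idea, written against python-chess purely for illustration; the
material-only evaluation and the lack of move ordering are placeholders, not
a description of Diep's search:

import chess

R = 3  # the null-move reduction quoted above

def evaluate(board):
    # Placeholder evaluation: material only, from the side to move's view.
    values = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
              chess.ROOK: 5, chess.QUEEN: 9}
    score = 0
    for piece, value in values.items():
        score += value * len(board.pieces(piece, board.turn))
        score -= value * len(board.pieces(piece, not board.turn))
    return score

def search(board, depth, alpha, beta):
    if depth <= 0 or board.is_game_over():
        return evaluate(board)
    # Null-move pruning: give the opponent a free move and search reduced by R;
    # if that reduced-depth search still fails high, prune this node entirely.
    if depth > R and not board.is_check():
        board.push(chess.Move.null())
        score = -search(board, depth - 1 - R, -beta, -beta + 1)
        board.pop()
        if score >= beta:
            return beta
    for move in board.legal_moves:
        board.push(move)
        score = -search(board, depth - 1, -beta, -alpha)
        board.pop()
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha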
>>>>>>
>>>>>>The opening is really the worst-case position there.
>>>>>>
>>>>>>A well-tuned evaluation function for this position should have no problems
>>>>>>reaching 24 ply after a couple of hours on 500 CPUs.
>>>>>>
>>>>>>You have no idea what you're talking about. How many times in your life have you
>>>>>>run with a hashtable of up to 250GB, having 512GB of RAM?
>>>>>
>>>>>Compare it with this: how many thousands of years would it take your current PC
>>>>>to generate a 10 TB database?
>>>>>
>>>>>Well, on such machines, with 512GB of RAM and the total system having terabytes of
>>>>>striped hard-disk I/O and 1 TB of bandwidth over the entire machine (0.5 at the 512
>>>>>processor partition, obviously), you are talking about other dimensions.
>>>>>
>>>>>Even with a cleaned hashtable, after 10 minutes the speedup is already 37.3% at 130
>>>>>CPUs. That gets better and better with each ply, simply because the BRANCHING FACTOR
>>>>>is better on such machines at big depths.
>>>>>
>>>>>So it is, in short, exponentially better than a PC.
>>>>>
>>>>>If I search 5 ply deeper than normal Diep, that in theory would be:
>>>>>
>>>>>a 3.0^5 = 243 speedup out of 500 processors.
>>>>
>>>>That's nonsense.  If that were true, you could eventually search to the
>>>>end of the game in no time, because the limit of that function is +infinity.
>>>>
>>>>Please do a little math before you post such nonsense...
>>>>
>>>>>
>>>>>Impossible, some will say. Well, when you give 1 processor 200GB of hashtables in
>>>>>total, you sure won't find a 243 speedup.
>>>>>
>>>>>However, with so many million nodes a second you are talking about other dimensions
>>>>>than a PC can handle.
>>>>>
>>>>>I can store *every* node in the hashtable.
>>>>>
>>>>>Without that there is no speedup at all on the machine. But doing that, the speedup is
>>>>>magnificent!
>>>>>
>>>>>So the real power shows at long-level analysis!
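
As background for the hash-table claims above, a minimal transposition-table
sketch follows; the always-replace scheme and the fields stored are common
textbook choices, not Diep's actual implementation:

from dataclasses import dataclass
from typing import Optional

@dataclass
class TTEntry:
    key: int                   # full hash key, to detect index collisions
    depth: int                 # remaining depth the position was searched to
    score: int
    flag: str                  # 'exact', 'lower', or 'upper' bound
    best_move: Optional[str] = None

class TranspositionTable:
    def __init__(self, n_entries: int = 1 << 20):
        self.size = n_entries
        self.table: list[Optional[TTEntry]] = [None] * n_entries

    def store(self, key: int, depth: int, score: int, flag: str, best_move=None):
        # Always-replace scheme: the newest entry simply overwrites the slot.
        self.table[key % self.size] = TTEntry(key, depth, score, flag, best_move)

    def probe(self, key: int, depth: int) -> Optional[TTEntry]:
        entry = self.table[key % self.size]
        # Only trust entries searched at least as deep as the current request.
        if entry is not None and entry.key == key and entry.depth >= depth:
            return entry
        return None

A table of a million entries like this fits easily in RAM on a PC; the posts
above are about scaling the same idea to hundreds of gigabytes spread across
a distributed machine.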
>>>>>
>>>>>Best regards,
>>>>>Vincent


