Author: Robert Hyatt
Date: 19:46:04 09/04/02
On September 04, 2002 at 21:58:55, martin fierz wrote:

>On September 04, 2002 at 21:10:18, Robert Hyatt wrote:
>
>>On September 04, 2002 at 20:42:13, martin fierz wrote:
>>
>>>On September 04, 2002 at 20:16:00, Robert Hyatt wrote:
>>>
>>>>On September 04, 2002 at 19:22:25, martin fierz wrote:
>>>>
>>>>>On September 04, 2002 at 18:20:49, Terry Ripple wrote:
>>>>>
>>>>>>This is hardly the place to try and discredit our fellow CCC members, and you
>>>>>>know who I am referring to! This was very distasteful and uncalled for and
>>>>>>shouldn't have been allowed to continue at all.
>>>>>>
>>>>>>I just wanted to give my opinion on this matter!
>>>>>>
>>>>>>Regards,
>>>>>>  Terry
>>>>>
>>>>>i disagree with you... bob's DTS paper has 2 major flaws with numbers:
>>>>>1. numbers in a table are claimed to be measured, and they are not, and vincent
>>>>>is absolutely right to point this out.
>>>>>
>>>>>2. bob's rounding of 1.81 to 1.9, and rounding the average of these rounded
>>>>>results, can result in an average speedup of 1.82 being reported as 2.0. this is
>>>>>ridiculous and any undergraduate student should not get away with something like
>>>>>that.
>>>>
>>>>I don't follow that point. Each "speedup" is computed from two numbers, the
>>>>N-processor time divided into the one-processor time. There is a
>>>>roundoff/truncation issue there. I didn't say "I did round up". I said
>>>>"it is possible that the log eater might have done that."
>>>>
>>>>But that only happened on a single speedup. There is no "second" round-up,
>>>>because each speedup is only computed once. So for normal math, the error
>>>>is +/- .05. If I happened to have done it in integer math, then the error
>>>>_could_ have been +/- .09. Unfortunately, without the code, I can't say
>>>>which. I suspect it was pure floating point, using the %.1f type format,
>>>>which means .05 is the actual error. But that is just my best guess, not
>>>>a statement of fact...
>>>
>>>well, on the famous page in question, you give speedups for every position in
>>>table 5, and then an average speedup in table 6. as far as i can see, table 6 is
>>>the main result of this paper, not table 5, right?
>>
>>OK. I was only considering the data in the main position-by-position speedup
>>table. The other table is not a round-up, but you should be able to check
>>that from the first table. IE, IIRC the output, it produced individual speedup
>>values followed by a column average, all in the same program. IE I would hope
>>that if you sum up a column in the individual speedup table, and divide by 24,
>>you get the number in the summary table... hopefully using normal FP rounding
>>to a %.1f format. :)
>>
>>> i surely would not remember
>>>all 24 numbers, but i would remember that you claim to get a nice 2.0 speedup on
>>>a dual machine.
>>
>>On Cray Blitz, it was very close, yes. It started dropping, however. With
>>only one processor, and splitting at the root (which almost nobody else is
>>willing to do), the search is _very_ efficient. It gets worse as the "party"
>>gets bigger, of course... :)
>>
>>>IF you really rounded the way you might have done, you have two roundings. i can
>>>see that you computed 2.0 from the 24 single speedups, when the proper result
>>>would be 1.9583333... which you should give as 1.96, not as 2.0.
>>
>>As I said, everybody reported to 1 decimal place, and I simply stuck with
>>that. Even the 1 place is very "noisy" in reality and doesn't mean a thing
>>IMHO...
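To make the %.1f-versus-integer-math point concrete, here is a small C sketch. It is an illustration only: the real log eater was FORTRAN, and the two times below are made-up example values, not numbers from the paper.

#include <stdio.h>

/* Illustration only, not the original log eater.
 * A speedup is one number: the 1-cpu time divided by the N-cpu time.
 * Two plausible ways it gets reduced to one decimal place:
 *   1) floating-point division printed with a %.1f-style format,
 *      which rounds, so the printed value is within 0.05 of the truth;
 *   2) careless integer/fixed-point math, which truncates,
 *      so the printed value can be off by almost 0.1.
 */
int main(void) {
    double t1 = 61.0;   /* 1-cpu time in seconds (example value)   */
    double tn = 32.6;   /* 2-cpu time in seconds (example value)   */

    double speedup = t1 / tn;                    /* 1.8712...       */
    printf("full precision : %.4f\n", speedup);
    printf("rounded (%%.1f) : %.1f\n", speedup); /* prints 1.9      */

    int tenths = (int)(speedup * 10.0);          /* truncates to 18 */
    printf("truncated      : %d.%d\n", tenths / 10, tenths % 10);  /* prints 1.8 */
    return 0;
}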
>
>if you give the single speedup with 1 digit accuracy, you have to give the
>average of that with two digits accuracy, since you have 24 ~ 5^2 results, so
>the average speedup has a five times smaller error on it than the single
>speedup. and if you give that with one digit it's supposed to mean that it has
>an error of ~0.1 - so the average has an error of ~0.02 with this reasoning =>
>two digits are the thing to do.
>you could also just measure if the 0.1 is as noisy as you think - i'd be doing
>that if i had a 2 processor machine... all you have to do is let it think on the
>same position 10 times and write down the times (without rounding...) & compute
>the standard error. repeat for as many positions as you like, to eliminate
>variability due to some weird position.
>

How about this: the position is kopec 2, which has always been "interesting".

log.001:  time=1:01    cpu=100%  mat=0  n=24338689  fh=89%  nps=396k
log.002:  time=43.96   cpu=199%  mat=0  n=32358472  fh=88%  nps=736k
log.003:  time=1:25    cpu=199%  mat=0  n=62974448  fh=88%  nps=736k
log.004:  time=1:12    cpu=199%  mat=0  n=53702512  fh=88%  nps=740k
log.005:  time=44.00   cpu=199%  mat=0  n=32372099  fh=88%  nps=735k
log.006:  time=24.33   cpu=199%  mat=0  n=18295076  fh=89%  nps=751k
log.007:  time=44.08   cpu=199%  mat=0  n=32435811  fh=88%  nps=735k
log.008:  time=1:00    cpu=199%  mat=0  n=44237641  fh=88%  nps=735k
log.009:  time=52.34   cpu=199%  mat=0  n=38684097  fh=88%  nps=739k
log.010:  time=24.36   cpu=199%  mat=0  n=18322715  fh=89%  nps=752k

The first entry is 1 cpu. The rest are two cpus. :) Think that is variable
enough? As I said, that .1 is _noise_ in the worst degree. :) This is a
position where Crafty changes its mind several times in the last (12th)
iteration. That hurts... But it can also produce one of those pretty rare
super-linear numbers when it lucks into searching the right thing first...
IE log.006 and log.010 are examples, but the _average_ certainly isn't
"super-linear". :)

>this obviously has nothing any more to do with computer chess, but with
>statistics, and with reporting errors... it is just WRONG to give 2.0 as the
>average of the measurements for the speedup using two processors, even if that
>is the correct result if you round to one decimal place in the end. you should
>NOT round to one decimal place!
>just because "everybody" is doing something wrong doesn't make it right :-)
>
>the only reason i'm annoying you with this is that i'd be really interested
>exactly how close to 2.0 you get on average? and i'm a scientist, so answers
>like "very close" are not accepted :-)
>

You have the table of numbers rounded to 1 decimal place. What if you add 'em
up and divide by 24 and keep 2 or 3 decimal places? Of course, after you see the
data from above, you might not think it very informative any longer. :)
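To put numbers on the suggestion above, here is a small C sketch that computes the individual speedups, their mean, and the standard error of the mean for the ten runs just posted. It is an illustration only; the m:ss times are taken as the whole seconds shown (1:01 = 61, 1:25 = 85, 1:12 = 72, 1:00 = 60).

#include <stdio.h>
#include <math.h>

/* Illustration only: the 2-cpu speedups for the kopec 2 runs above,
 * with the m:ss times converted to whole seconds as displayed.
 * Prints each speedup, then the mean, the sample standard deviation,
 * and the standard error of the mean (s / sqrt(n)).
 */
int main(void) {
    const double t1 = 61.0;                             /* log.001, 1 cpu */
    const double t2[] = { 43.96, 85.0, 72.0, 44.00, 24.33,
                          44.08, 60.0, 52.34, 24.36 };  /* logs 2..10     */
    const int n = (int)(sizeof t2 / sizeof t2[0]);

    double sum = 0.0, sumsq = 0.0;
    for (int i = 0; i < n; i++) {
        double s = t1 / t2[i];                          /* one speedup    */
        printf("log.%03d  speedup = %.3f\n", i + 2, s);
        sum   += s;
        sumsq += s * s;
    }
    double mean = sum / n;
    double var  = (sumsq - n * mean * mean) / (n - 1);  /* sample variance */
    printf("mean = %.3f  std dev = %.3f  std error of mean = %.3f\n",
           mean, sqrt(var), sqrt(var / n));
    return 0;
}

The individual two-cpu speedups here range from well under 1.0 to about 2.5, so the run-to-run spread on a single position dwarfs the 0.1 being argued about.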
>aloha
>  martin
>
>>>obviously, IF you rounded the single speedups that way, then, on average you are
>>>giving a 0.05 too high speedup. if you subtract that, you get 1.91... which is
>>>kind of closer to 1.9 than 2.0...
>>
>>In looking at the output from the log eater, which is _all_ I have to look at
>>and draw conclusions from, the numbers are, to the best of my judgement, floats.
>>Or in FORTRAN, which it was actually written in, REALs... So I would really
>>suspect that the speedups are +/- .05 for rounding from FORTRAN, personally.
>>
>>>i am not saying this is a BIG MISTAKE which invalidates any conclusions or
>>>anything. it's just something you shouldn't do. it doesn't even matter if you
>>>did it or not - if you still think that that is a valid procedure, then you
>>>should think again :-)
>>>
>>>aloha
>>>  martin
>>

The only rounding I have ever done dealt with integer values, which have to be
handled carefully due to instant truncation. But in looking at the one page of
"log eater" output I happen to have, which is basically the table of the
speedups plus the average speedups across the bottom, it looks like normal
rounding was done by using formatted I/O. I earlier said %.1f, but in reality
Cray Blitz was written in FORTRAN, because the Crays didn't have C until the
late 1980s... and the log eater was written in the same language, which means
that integer math would not have been used...
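For completeness, the arithmetic behind the one-digit versus two-digit argument, again as a plain C illustration. The 1.9583333 is the example figure from the quoted text above, not a number taken from the paper, and the sqrt(24) factor is just the standard error-of-the-mean reasoning from the quote.

#include <stdio.h>
#include <math.h>

/* Illustration only. Two things from the discussion above:
 *   - an average whose true value is 1.9583333 prints as 2.0 with a
 *     %.1f-style format, but as 1.96 with two decimal places;
 *   - the "24 ~ 5^2" argument: if a single one-decimal value carries an
 *     error of about 0.1, the mean of 24 of them carries roughly
 *     0.1 / sqrt(24), about 0.02, so the average supports an extra digit.
 */
int main(void) {
    double avg = 1.9583333;                  /* example figure from the thread */
    printf("one decimal : %.1f\n", avg);     /* prints 2.0  */
    printf("two decimals: %.2f\n", avg);     /* prints 1.96 */

    double single_err = 0.1;                 /* assumed error of one value     */
    printf("error of the mean of 24 such values: ~%.2f\n",
           single_err / sqrt(24.0));         /* prints ~0.02 */
    return 0;
}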