Computer Chess Club Archives


Subject: Re: I can't believe this bashing is being allowed on here: "Bad Math Topic"

Author: Uri Blass

Date: 21:54:36 09/04/02



On September 04, 2002 at 21:58:55, martin fierz wrote:

>On September 04, 2002 at 21:10:18, Robert Hyatt wrote:
>
>>On September 04, 2002 at 20:42:13, martin fierz wrote:
>>
>>>On September 04, 2002 at 20:16:00, Robert Hyatt wrote:
>>>
>>>>On September 04, 2002 at 19:22:25, martin fierz wrote:
>>>>
>>>>>On September 04, 2002 at 18:20:49, Terry Ripple wrote:
>>>>>
>>>>>>This is hardly the place to try and discredit our fellow CCC members, and you
>>>>>>know who I am referring to! This was very distasteful and uncalled for, and
>>>>>>shouldn't have been allowed to continue at all.
>>>>>>
>>>>>>I just wanted to give my opinion on this matter!
>>>>>>
>>>>>>Regards,
>>>>>>      Terry
>>>>>
>>>>>i disagree with you... bob's DTS paper has two major flaws with its numbers:
>>>>>1. numbers in a table are claimed to be measured when they are not, and vincent
>>>>>is absolutely right to point this out.
>>>>>
>>>>>2. bob's rounding of 1.81 up to 1.9, and then rounding the average of these
>>>>>already-rounded results, can cause a true average speedup of 1.82 to be
>>>>>reported as 2.0. this is ridiculous, and no undergraduate student should get
>>>>>away with something like that.
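The double-rounding effect being described can be reproduced with made-up numbers (these are illustrative values, not data from the paper): round each individual speedup up to one decimal place, then round the average of those rounded values up again.

```python
import math

def ceil_1dp(x):
    """Round UP to one decimal place (the worst-case rounding at issue)."""
    return math.ceil(round(x * 10, 6)) / 10  # inner round() guards against FP fuzz

# Illustrative speedups (not from the paper): the true average is 1.82.
speedups = [1.8161] * 23 + [1.91]
true_avg = sum(speedups) / len(speedups)           # ~1.82

rounded = [ceil_1dp(s) for s in speedups]          # 23 x 1.9 and one 2.0
reported = ceil_1dp(sum(rounded) / len(rounded))   # 2.0

print(f"true average {true_avg:.2f}, reported after double rounding {reported}")
```

Rounding up twice can inflate the result by almost 0.2 in the worst case; round-to-nearest applied once to the raw average would report 1.8 here.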
>>>>
>>>>I don't follow that point.  Each "speedup" is computed from two numbers,
>>>>the N processor time divided into the one processor time.  There is a
>>>>roundoff/truncation issue there.  I didn't say "I did round up".  I said
>>>>"it is possible that the log eater might have done that."
>>>>
>>>>But that only happened on a single speedup.  There is no "second" roundup
>>>>because each speedup is only computed once.  So for normal math, the error
>>>>is +/- .05.  If I happened to have done it in integer math, then the error
>>>>_could_ have been +/- .09.  Unfortunately, without the code, I can't say
>>>>which.  I suspect it was pure floating point, using the %.1f type format,
>>>>which means .05 is the actual error.  But that is just my best guess.  Not
>>>>a statement of fact...
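The difference between the two error bounds can be seen with a hypothetical pair of timings (values invented for illustration, not taken from the paper): %.1f-style round-to-nearest is off by at most 0.05, while integer-math truncation can be off by almost 0.1.

```python
# Hypothetical timings: 1-CPU time and 2-CPU time (illustrative values only).
t1, t2 = 100.0, 51.0
speedup = t1 / t2                    # 1.9607...

nearest = float(f"{speedup:.1f}")    # %.1f round-to-nearest -> 2.0 (error <= 0.05)
truncated = int(speedup * 10) / 10   # integer-math truncation -> 1.9 (error < 0.1)

print(nearest, truncated)
```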
>>>
>>>well, on the famous page in question, you give speedups for every position in
>>>table 5, and then an average speedup in table 6. as far as i can see, table 6 is
>>>the main result of this paper, not table 5, right?
>>
>>OK.  I was only considering the data in the main position-by-position speedup
>>table.  The other table is not a round-up, but you should be able to check
>>that from the first table.  I.e., IIRC, the output gave individual speedup
>>values followed by a column average, all in the same program.  So I would hope
>>that if you sum up a column in the individual speedup table and divide by 24,
>>you get the number in the summary table...  hopefully using normal FP rounding
>>to a %.1f format.  :)
>>
>>> i surely would not remember
>>>all 24 numbers, but i would remember that you claim to get a nice 2.0 speedup on
>>>a dual machine.
>>
>>On cray blitz, it was very close, yes.  It started dropping however.  With
>>only one processor, and splitting at the root (which almost nobody else is
>>willing to do) the search is _very_ efficient.  It gets worse as the "party"
>>gets bigger of course... :)
>>
>>>IF you really rounded the way you might have done, you have two roundings. i can
>>>see that you computed 2.0 from the 24 single speedups, when the proper result
>>>would be 1.9583333... which you should report as 1.96, not as 2.0.
>>
>>As I said, everybody reported to 1 decimal place, and I simply stuck with
>>that.  Even the 1 place is very "noisy" in reality and doesn't mean a thing
>>IMHO...
>
>if you give the single speedup with 1-digit accuracy, you have to give the
>average with 2-digit accuracy: the error of a mean shrinks with the square root
>of the number of measurements, and you have 24 ~ 5^2 results, so the average
>speedup has an error about five times smaller than a single speedup. if one
>digit is supposed to mean an error of ~0.1, then the average has an error of
>~0.02 => two digits are the thing to do.
>you could also just measure whether the 0.1 is as noisy as you think - i'd be
>doing that if i had a 2-processor machine... all you have to do is let it think
>on the same position 10 times, write down the times (without rounding...) and
>compute the standard error. repeat for as many positions as you like, to
>eliminate variability due to some weird position.
>
>this obviously no longer has anything to do with computer chess, but with
>statistics and with reporting errors... it is just WRONG to give 2.0 as the
>average of the two-processor speedup measurements, even if that is the correct
>result when you round to one decimal place at the end. you should NOT round to
>one decimal place!
>just because "everybody" is doing something wrong doesn't make it right :-)
>
>the only reason i'm annoying you with this is that i'd be really interested in
>exactly how close to 2.0 you get on average. and i'm a scientist, so answers
>like "very close" are not accepted :-)

I think that we will never get an answer to this.

I suspect that people who get results that are only 1.9x faster, or even
1.8x faster, can still improve their one-processor search.

Uri




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.