Author: Rolf Tueschen
Date: 12:41:41 09/06/02
On September 06, 2002 at 15:28:09, Sune Fischer wrote:

>On September 06, 2002 at 14:38:15, Robert Hyatt wrote:
>
>>On September 06, 2002 at 14:17:59, Sune Fischer wrote:
>>
>>>On September 06, 2002 at 11:53:13, Robert Hyatt wrote:
>>>
>>>>I have posted the raw data logs, the "cooked data" that I extracted from the
>>>>logs, and the speedup tables (those for Martin last nite). It might be
>>>>interesting to take the cb.c program I also posted and change the speedup
>>>>format to show 3 decimal places (I used 2, as Martin had suggested that would
>>>>be better).
>>>>
>>>>It would be interesting to run the program with 1, 2 and 3 decimal place
>>>>accuracy, and let everyone look at the three tables and decide which one
>>>>_really_ provides the most useful information. I'll bet everyone likes
>>>>.1 better than .11, because is .01 really significant? Or is it just random
>>>>noise?
>>>
>>>To a numerical scientist (as I'm sure you know) the numbers 1.8 and 1.80 are not
>>>identical; 1.80 is ten times more accurate, and that is a powerful statement in
>>>itself.
>>>To produce such a number you need to (a) run a larger experiment and do some
>>>statistics to get an average, or (b) get some better and probably a lot more
>>>expensive equipment (higher-resolution mass spectrometers, or whatever the
>>>situation may call for), though in this case (a) seems like the only option.
>>
>>(a) was the course I took in my dissertation, but I had a 30-processor
>>Sequent that was basically "mine" for several months, so running thousands
>>of tests was not impossible.
>>
>>However, doesn't that leave the data open to the same criticism as the data
>>in my DTS JICCA article (that the data is not "raw")? Because it will
>>be an average, and that will make it look artificial...
>>
>>So back we go again?
>
>Sorry, I'm not fully up to speed here because I haven't read all of the threads,
>so my comment was more of a general nature :)
>
>But I'd say it depends on what you want to show. If you have a bunch of positions
>that you want to know the speedup for, and you know that every time you run it
>you get something slightly different, then you have no choice but to round off
>and lose a few of the inaccurate digits, or alternatively do additional work to
>make sure you get the digits right.
>
>There seems to be little point in using a number of 1.983432 for a speedup if
>the next run will produce 1.9348284 and the next 1.96347823 etc.; it looks
>rather silly, doesn't it :)
>
>Personally I would rather be presented with a clean average number of 1.94, or
>even 1.9 or 2.0.
>
>>I've always used "averages", but for the DTS paper it was simply impossible.
>>You might call someone up, like say "United Computing" in Texas, and ask what
>>they would have charged for a few months' time on a dedicated C90. :)
>
>That is a dilemma; of course, if you have no grasp whatsoever on how much the
>error is, you have a problem. So to be safe, it is better to use fewer digits ;)
>
>Anyway, this is all something that can be read in any introductory data analysis
>book; here is something I found on Google:
>
>"From the mathematical standpoint, the precision of a number resulting from
>measurement depends upon the number of decimal places; that is, a larger number
>of decimal places means a smaller probable error. In 2.3 inches the probable
>error is 0.05 inch, since 2.3 actually lies somewhere between 2.25 and 2.35. In
>1.426 inches there is a much smaller probable error of 0.0005 inch. If we add
>2.300 + 1.426 and get an answer in thousandths, the answer, 3.726 inches, would
>appear to be precise to thousandths; but this is not true, since there was a
>probable error of .05 in one of the addends. Also, 2.300 appears to be precise to
>thousandths, but in this example it is precise only to tenths. It is evident that
>the precision of a sum is no greater than the precision of the least precise
>addend. It can also be shown that the precision of a difference is no greater
>than the less precise number compared.
>
>To add or subtract numbers of different orders, all numbers should first be
>rounded off to the order of the least precise number. In the foregoing example,
>1.426 should be rounded to tenths, that is, 1.4."
>
>http://www.tpub.com/math1/7b.htm
>
>(some great semantics at the very bottom:)
>
>-S.

Chapter three: Bob, how could you say that speed-up was measured? Isn't it a
factor and therefore calculated? Come back to my first statement!

Rolf Tueschen
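P.S. To make the rounding argument concrete, here is a minimal C sketch in the
spirit of the discussion above. It is not Bob's cb.c, and the four speedup
values in it are invented purely for illustration: it averages repeated runs of
one position, estimates the run-to-run scatter, and prints the mean to 1, 2 and
3 decimal places so you can see which digits the data actually supports.

    /* Minimal sketch: average a few repeated speedup measurements for one
     * position and print the mean at 1, 2 and 3 decimal places.  The sample
     * values below are made up for illustration only. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* hypothetical speedups from repeated runs of the same position */
        double runs[] = { 1.983432, 1.9348284, 1.96347823, 1.95102 };
        int n = sizeof(runs) / sizeof(runs[0]);

        double sum = 0.0, sumsq = 0.0;
        for (int i = 0; i < n; i++) {
            sum   += runs[i];
            sumsq += runs[i] * runs[i];
        }
        double mean = sum / n;
        /* sample standard deviation: a rough handle on run-to-run noise */
        double sdev = sqrt((sumsq - n * mean * mean) / (n - 1));

        printf("mean speedup: %.1f  %.2f  %.3f\n", mean, mean, mean);
        printf("run-to-run sigma: ~%.3f\n", sdev);
        return 0;
    }

With a scatter of a couple of hundredths between runs, the third decimal place
of the mean is plainly noise, which is exactly the point of quoting 1.9 or 1.94
rather than 1.958.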