Author: Rolf Tueschen
Date: 03:26:30 09/07/02
On September 07, 2002 at 03:34:55, José Carlos wrote:

>On September 06, 2002 at 21:42:17, Robert Hyatt wrote:
>
>>On September 06, 2002 at 16:26:14, Rolf Tueschen wrote:
>>
>>>On September 06, 2002 at 15:55:09, Robert Hyatt wrote:
>>>
>>>>On September 06, 2002 at 15:41:41, Rolf Tueschen wrote:
>>>>
>>>>>On September 06, 2002 at 15:28:09, Sune Fischer wrote:
>>>>>
>>>>>>On September 06, 2002 at 14:38:15, Robert Hyatt wrote:
>>>>>>
>>>>>>>On September 06, 2002 at 14:17:59, Sune Fischer wrote:
>>>>>>>
>>>>>>>>On September 06, 2002 at 11:53:13, Robert Hyatt wrote:
>>>>>>>>
>>>>>>>>>I have posted the raw data logs, the "cooked data" that I extracted from
>>>>>>>>>the logs, and the speedup tables (those for Martin last night). It might
>>>>>>>>>be interesting to take the cb.c program I also posted and change the
>>>>>>>>>speedup format to show 3 decimal places (I used 2, as Martin had
>>>>>>>>>suggested that would be better).
>>>>>>>>>
>>>>>>>>>It would be interesting to run the program with 1, 2 and 3 decimal place
>>>>>>>>>accuracy, and let everyone look at the three tables and decide which one
>>>>>>>>>_really_ provides the most useful information. I'll bet everyone likes
>>>>>>>>>.1 better than .11, because is .01 really significant? Or is it just
>>>>>>>>>random noise?
>>>>>>>>
>>>>>>>>To a numerical scientist (as I'm sure you know) the numbers 1.8 and 1.80
>>>>>>>>are not identical; 1.80 is ten times more accurate, and that is a
>>>>>>>>powerful statement in itself.
>>>>>>>>To produce such a number you need to (a) run a larger experiment and do
>>>>>>>>some statistics to get an average, or (b) get some better and probably a
>>>>>>>>lot more expensive equipment (higher-resolution mass spectrometers, or
>>>>>>>>whatever the situation may call for), though in this case (a) seems like
>>>>>>>>the only option.
>>>>>>>
>>>>>>>(a) was the course I took in my dissertation, but I had a 30-processor
>>>>>>>Sequent that was basically "mine" for several months, so running
>>>>>>>thousands of tests was not impossible.
>>>>>>>
>>>>>>>However, doesn't that leave the data open to the same criticism as the
>>>>>>>data in my DTS JICCA article (that the data is not "raw")? Because it
>>>>>>>will be an average, and that will make it look artificial...
>>>>>>>
>>>>>>>So back we go again?
>>>>>>
>>>>>>Sorry, I'm not fully up to speed here because I haven't read all of the
>>>>>>threads, so my comment was more of a general nature :)
>>>>>>
>>>>>>But I'd say it depends on what you want to show. If you have a bunch of
>>>>>>positions that you want to know the speedup for, and you know that every
>>>>>>time you run it you get something slightly different, then you have no
>>>>>>choice but to round off to lose a few of the inaccurate digits, or
>>>>>>alternatively do additional work to make sure you get the digits right.
>>>>>>
>>>>>>There seems to be little point in quoting a speedup of 1.983432 if the
>>>>>>next run will produce 1.9348284 and the next 1.96347823, etc. It looks
>>>>>>rather silly, doesn't it :)
>>>>>>
>>>>>>Personally I would rather be presented with a clean average number of
>>>>>>1.94, or even 1.9 or 2.0.
>>>>>>
>>>>>>>I've always used "averages", but for the DTS paper it was simply
>>>>>>>impossible. You might call someone up, like, say, "United Computing" in
>>>>>>>Texas and ask what they would have charged for a few months' time on a
>>>>>>>dedicated C90. :)
>>>>>>
>>>>>>That is a dilemma; of course, if you have no grasp whatsoever on how much
>>>>>>the error is, you have a problem.
>>>>>>So to be safe, it is better to use fewer digits ;)
>>>>>>
>>>>>>Anyway, this is all something that can be read in any introductory data
>>>>>>analysis book. Here is something I found on Google:
>>>>>>
>>>>>>"From the mathematical standpoint, the precision of a number resulting
>>>>>>from measurement depends upon the number of decimal places; that is, a
>>>>>>larger number of decimal places means a smaller probable error. In 2.3
>>>>>>inches the probable error is 0.05 inch, since 2.3 actually lies somewhere
>>>>>>between 2.25 and 2.35. In 1.426 inches there is a much smaller probable
>>>>>>error of 0.0005 inch. If we add 2.300 + 1.426 and get an answer in
>>>>>>thousandths, the answer, 3.726 inches, would appear to be precise to
>>>>>>thousandths; but this is not true since there was a probable error of .05
>>>>>>in one of the addends. Also 2.300 appears to be precise to thousandths,
>>>>>>but in this example it is precise only to tenths. It is evident that the
>>>>>>precision of a sum is no greater than the precision of the least precise
>>>>>>addend. It can also be shown that the precision of a difference is no
>>>>>>greater than the less precise number compared.
>>>>>>
>>>>>>To add or subtract numbers of different orders, all numbers should first
>>>>>>be rounded off to the order of the least precise number. In the foregoing
>>>>>>example, 1.426 should be rounded to tenths - that is, 1.4."
>>>>>>
>>>>>>http://www.tpub.com/math1/7b.htm
>>>>>>
>>>>>>(some great semantics at the very bottom :)
>>>>>>
>>>>>>-S.
>>>>>
>>>>>Chapter three:
>>>>>
>>>>>Bob, how could you say that speed-up was measured? Isn't it a factor and
>>>>>therefore calculated? Come back to my first statement!
>>>>>
>>>>>Rolf Tueschen
>>>>
>>>>OK... a terminology issue. Board A is 2 feet long. Board B is 3 feet
>>>>long. How long are both?
>>>>
>>>>Measured: put 'em end to end and let a tape show 5'???
>>>>
>>>>Calculated: measure each one and add the two lengths, which shows 5'???
>>>>
>>>>The speedups were calculated, but there is an exact relationship between
>>>>the time taken to search with 1 processor vs. the time taken to search
>>>>with N processors. Speedup is defined to be that ratio. I.e., the speedup
>>>>was not extrapolated, or calculated by finagling with various things like
>>>>NPS, time, outside temp, CPU MHz, etc. It is just a direct result of
>>>>dividing measured number A into measured number B.
>>>>
>>>>Whether that quotient is "measured" or "calculated" seems to be moot,
>>>>since it will be the _same_ result...???
>>>
>>>I'm getting older each day...
>>>
>>>But speed-up is a factor and _not_ seconds. OK, this might be unimportant
>>>here. We're surely not searching for Newton's constants, since we are
>>>depending on chess positions, as you've said yourself. So we can't have
>>>'exact' relationships.
>>>
>>>Rolf Tueschen
>>
>>Here we do. I.e., the one-CPU run takes two minutes, the two-CPU run takes
>>one minute, and the speedup is 2.0, which is produced by dividing the
>>1-CPU time by the 2-CPU time. In fact, that is the only way to get a
>>speedup, since you really can't "observe" such a thing in raw form: it is
>>a comparison between two separate events...
>
>Another example, in case someone is still confused about this, is "raw NPS"
>(which most people accept without problem). You don't measure NPS directly;
>you measure total nodes and time, and then calculate a ratio. Exactly the
>same as the speedup ratio.

Interesting how some are trying to fish in no man's land. But Bob is too
smart to utilize such tricks. While some always fall into their self-made
traps. Always. The "most people accept without problem" will become a real
classic. Too much unintentional confession in five words. The opposite of
smartness...

Rolf Tueschen
>José C.
>
>>But it can't possibly change unless one of the times changes. And if one
>>of the times changes, then the speedup changes too.
>>
>>The exception occurs with rounding errors. And with the times vs. speedup,
>>there are an infinite number of pairs of times that will produce a
>>specific speedup.
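To make the terminology point concrete in C: a minimal sketch, not the
posted cb.c, using invented example numbers. It shows that both speedup and
NPS are ratios calculated from directly measured quantities (times and node
counts), never observed directly:

#include <stdio.h>

int main(void) {
    /* Directly measured quantities (invented example values). */
    double time_1cpu = 120.0;   /* seconds, 1-processor search     */
    double time_ncpu =  60.0;   /* seconds, N-processor search     */
    double nodes     = 3.0e8;   /* total nodes in the N-cpu search */

    /* Neither ratio is observed directly; each is calculated
       from two measurements, exactly as argued in the thread.    */
    double speedup = time_1cpu / time_ncpu;   /* 2.00      */
    double nps     = nodes / time_ncpu;       /* 5,000,000 */

    printf("speedup = %.2f\n", speedup);
    printf("nps     = %.0f\n", nps);
    return 0;
}

Change either measured time and the calculated speedup changes with it,
which is exactly the point in the quoted paragraph just above.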
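And a sketch of the experiment Bob proposes at the top of the thread,
assuming a handful of invented noisy runs (again not the posted cb.c):
average the runs, then print the result with 1, 2 and 3 decimal places and
judge which digits carry information and which are random noise:

#include <stdio.h>

int main(void) {
    /* Invented speedups from repeated runs of one position; real
       runs jitter like the 1.983432 / 1.9348284 / 1.96347823
       sequence quoted above.                                     */
    double runs[] = { 1.983432, 1.934828, 1.963478, 1.952101 };
    int n = (int)(sizeof(runs) / sizeof(runs[0]));
    double sum = 0.0;

    for (int i = 0; i < n; i++)
        sum += runs[i];

    /* One average, three claimed precisions: 2.0, 1.96, 1.958.
       Only the averaging justifies keeping the extra digits.    */
    double avg = sum / n;
    printf("%.1f  %.2f  %.3f\n", avg, avg, avg);
    return 0;
}

With run-to-run noise already in the second decimal, a table rounded to .1
or .01 is arguably more honest than one printed to .001.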