Computer Chess Club Archives



Subject: Re: The page in question - Give me a break

Author: Vincent Diepeveen

Date: 17:32:00 09/03/02

On September 03, 2002 at 18:49:08, martin fierz wrote:

The problem is not the speedups he rounded off;
the problem is the search TIMES. There is no way to
see his search times as merely rounded-off numbers.
Please see the table:

pos     1 cpu   2 cpu   4 cpu   8 cpu   16 cpu
1       2,830   1,415   832     435     311
2       2,849   1,424   791     438     274
3       3,274   1,637   884     467     239
4       2,308   1,154   591     349     208
5       1,584   792     440     243     178
6       4,294   2,147   1,160   670     452
7       1,888   993     524     273     187
8       7,275   3,637   1,966   1,039   680
9       3,940   1,970   1,094   635     398
10      2,431   1,215   639     333     187
11      3,062   1,531   827     425     247
12      2,518   1,325   662     364     219
13      2,131   1,121   560     313     192
14      1,871   935     534     296     191
15      2,648   1,324   715     378     243
16      2,347   1,235   601     321     182
17      4,884   2,872   1,878   1,085   814
18      646     358     222     124     84
19      2,983   1,491   785     426     226
20      7,473   3,736   1,916   1,083   530
21      3,626   1,813   906     489     237
22      2,560   1,347   691     412     264
23      2,039   1,019   536     323     206
24      2,563   1,281   657     337     178

Those are not rounded-off numbers at all.

Now from these numbers we can calculate the
speedup for each position at each processor count.
Simply divide the time for 1 processor by the time
needed for 2 processors, and likewise the time for
1 processor by the time for 4 processors, to get the
speedups relative to 1 processor. Then also compute
the difference between the actual time for n processors
and the time for 1 processor divided by the claimed speedup.
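
As an illustration, here is a minimal Python sketch of both checks. It
assumes the table entries are search times in one common unit (the commas
being thousands separators); only the first three rows are copied in, and
the claimed 2-processor speedup of 2.0 is the average mentioned further
down in this thread. Variable names are illustrative only.

  # Sketch of the two checks described above (illustrative only).
  # times[pos] = search times on 1, 2, 4, 8 and 16 processors,
  # copied from the first rows of the table; the rest are omitted.
  times = {
      1: [2830, 1415, 832, 435, 311],
      2: [2849, 1424, 791, 438, 274],
      3: [3274, 1637, 884, 467, 239],
  }

  for pos, row in times.items():
      t1 = row[0]
      # Speedups relative to 1 processor.
      speedups = [round(t1 / t, 2) for t in row]
      # Difference between the measured 2-cpu time and the time
      # predicted by the claimed speedup of 2.0, i.e. t1 / 2.0.
      diff = row[1] - t1 / 2.0
      print(pos, speedups, diff)

  # For these rows, the measured 2-cpu time equals t1/2 to within 0.5,
  # which is the pattern the post above is pointing at.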

Best regards,
Vincent





>On September 03, 2002 at 18:26:09, Robert Hyatt wrote:
>
>>On September 03, 2002 at 17:44:26, martin fierz wrote:
>>
>>>On September 03, 2002 at 16:26:33, Robert Hyatt wrote:
>>>
>>>>On September 03, 2002 at 15:50:45, Matthew Hull wrote:
>>>>
>>>>>On September 03, 2002 at 15:42:08, Gian-Carlo Pascutto wrote:
>>>>>
>>>>>>http://sjeng.sourceforge.net/ftp/hyatt1.png
>>>>>>
>>>>>>I will try to upload the full article as soon as I can.
>>>>>>
>>>>>>--
>>>>>>GCP
>>>>>
>>>>>You've got to be kidding.  When Vincent posts the numbers, he's got all these
>>>>>trailing zeros.  What's with that?
>>>>>
>>>>>It is Vincent that's faking numbers here, not Bob.  Bob's numbers are just
>>>>>rounded off.
>>>>>
>>>>>Vincent is the one emitting bogons here.
>>>>
>>>>
>>>>If you didn't see my response elsewhere, this output was produced by a program
>>>>I wrote to "eat" Cray Blitz log files.  I did this for my dissertation as I
>>>>produced tens of thousands of test position logs for that.
>>>>
>>>>I believe that it does something similar to what I do today, which means that
>>>>anything between 1.71 and 1.8 is treated as 1.8, and 1.81 to 1.9 as 1.9.
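
In code terms, the rule described above amounts to a ceiling to one decimal
place rather than round-to-nearest; a minimal Python sketch, assuming exact
tenths are left unchanged:

  import math

  def round_up_tenth(x):
      # 1.71..1.80 -> 1.8, 1.81..1.90 -> 1.9: ceil to one decimal.
      return math.ceil(x * 10) / 10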
>>>
>>>that sure sounds like a bad thing to do! you should retain 2 digits, since
>>>the interesting number is not the speedup (1.8, 1.9, whatever), but rather the
>>>difference from 2.
>>
>>I don't use a "log eater" today.  I generally run a set of positions, and
>>get a time for one processor, then a time for 2 processors, and divide the two
>>by hand.  I do round to one decimal place as the numbers are already very
>>unstable, and going to more accuracy is really pointless...
>
>well, that is not the point! in the end, you give an average of the speedup
>which turns out to be 2.0. now, even if your single numbers are unstable, the
>average of them is much more stable, and could do with more than 1 digit.
>
>if you do your wrong rounding twice, first for the single measurement, and then
>for the final result (you do, as i can see from the page GCP posted), then you
>can actually make a larger mistake. take measurements 1.81, 1.81, 1.81, 1.81
>and 1.91. you round to 1.9, 1.9, 1.9, 1.9 and 2.0. for the final result you
>round again, to 2.0, whereas the true average of these numbers was 1.83. of
>course, i have chosen my numbers carefully, but there is simply no reason to do
>the rounding the way you do. on average, with your double rounding, you are
>giving yourself a 0.1 speedup which is not there! if you thought it was a good
>idea, and that it did not really affect the result, you should have done it the
>other way round - down instead of up. then at least you would be conservative...
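
A minimal sketch of the double-rounding effect described above, reusing the
same always-round-up rule (helper name is hypothetical):

  import math

  def round_up_tenth(x):
      # Always round up to the next tenth, as described above.
      return math.ceil(x * 10) / 10

  measurements = [1.81, 1.81, 1.81, 1.81, 1.91]
  true_avg = sum(measurements) / len(measurements)        # ~1.83
  rounded = [round_up_tenth(m) for m in measurements]     # [1.9, 1.9, 1.9, 1.9, 2.0]
  reported = round_up_tenth(sum(rounded) / len(rounded))  # 1.92 rounds up to 2.0
  print(true_avg, reported)                               # ~1.83 vs 2.0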
>
>>> so if you measure 1.91 or 1.99, you really measure 0.09 or
>>>0.01, and obviously these two numbers are quite different.
>>>also, if you really feel like rounding, you should do it in the normal sense,
>>>like 1.750 - 1.849 -> 1.8. i don't believe a word of vincent's post, but always
>>>rounding upwards is definitely making your data look better than it is, and
>>>should be rejected by any reviewer or thesis advisor - if he knew you were doing
>>>that :-). not that it makes a big difference of course - but for the 2-processor
>>>case we are talking about a potential >10% error, which is where it starts
>>>getting significant.
>>>
>>>aloha
>>>  martin
>>>
>>
>>or really 5% max?  i.e., 1.9 to 1.99 is only a 5% difference...
>
>well, that's your interpretation: you say you are measuring x=1.9 or x=1.99.
>but your experiment is designed to produce numbers between 1 and 2. so if you
>look at it that way, your result is 1+x, with delta_x being the .9 to .99
>difference, which is 10%. this also makes sense from a practical point of view,
>since the x is what you get from putting a second processor on the task.
>even worse, i could argue that the interesting number is not 1.9 or 1.99 but
>rather 2-x, with x now being how much inefficiency you get from the SMP search
>overhead. and now we are talking about a factor of 10 by which the result can
>go wrong. this is not a completely useless number either - obviously, an SMP
>search
>which gets 1.95 on average is a MUCH better search than one which gets 1.8 on
>average - even though it will not affect the end speed that much. but it is
>obviously getting much closer to perfection than the 1.8 search.
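
In numbers, the three readings of the same pair of measurements, as a small
sketch:

  a, b = 1.9, 1.99
  print((b - a) / a)    # ~0.047: ~5% if the speedup itself is the quantity
  print(b - 1, a - 1)   # 0.99 vs 0.90: ~10% difference in gain from the 2nd cpu
  print(2 - a, 2 - b)   # ~0.10 vs ~0.01: a factor of 10 in SMP inefficiency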
>
>if you want to stay with your interpretation (which i think is a bad idea), then
>you can still get nearly 10% error by taking 1.01 and 1.10 instead of your
>numbers - not that that is very likely to happen :-)
>whatever interpretation of your data you want to use, there is 1. no reason to
>round the way you do, and 2. it is always better to keep a digit too much than
>one too little :-)
>
>aloha
>  martin


