Author: Ed Panek
Date: 10:32:42 09/09/02
Red Hat Linux release 6.0 (Hedwig)
Kernel 2.2.5-15 on an i686
Here is just one example of unpredictable floating point behavior. I ran the
following program on RH and on my Mac G4:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

int main(void)
{
    double a, b, c, d;
    double a_int, a_fract;
    double b_int, b_fract;
    double c_int, c_fract;
    int i, j;

    for (i = 8; i < 13; ++i)
    {
        a = (double)i;
        b = a / 1000.0;      /* i/1000 is not exactly representable in binary */
        d = 1000.0;
        c = b * d;           /* scale back up and store as a double */
        j = (int)(b * d);    /* the cast truncates toward zero */
        a_fract = modf(a, &a_int);   /* split each value into its integer */
        b_fract = modf(b, &b_int);   /* and fractional parts */
        c_fract = modf(c, &c_int);
        printf("i = %d, a = %35.30f, b = %35.30f, c = %35.30f, j = %d\n",
               i, a, b, c, j);
        printf("a = %35.30f + %35.30f\n", a_int, a_fract);
        printf("b = %35.30f + %35.30f\n", b_int, b_fract);
        printf("c = %35.30f + %35.30f\n", c_int, c_fract);
    }
    return 0;
}
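(For what it's worth, on RH this should build with something like the following; treat the exact compiler and flags as an assumption, but the -lm is needed because modf() lives in the math library:

    gcc -o testit testit.c -lm
)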
The output on RH is as follows:
testit
i = 8, a = 8.000000000000000000000000000000, b = 0.008000000000000000166533453694, c = 8.000000000000000000000000000000, j = 8
a = 8.000000000000000000000000000000 + 0.000000000000000000000000000000
b = 0.000000000000000000000000000000 + 0.008000000000000000166533453694
c = 8.000000000000000000000000000000 + 0.000000000000000000000000000000
i = 9, a = 9.000000000000000000000000000000, b = 0.008999999999999999319988397417, c = 9.000000000000000000000000000000, j = 8
a = 9.000000000000000000000000000000 + 0.000000000000000000000000000000
b = 0.000000000000000000000000000000 + 0.008999999999999999319988397417
c = 9.000000000000000000000000000000 + 0.000000000000000000000000000000
i = 10, a = 10.000000000000000000000000000000, b = 0.010000000000000000208166817117, c = 10.000000000000000000000000000000, j = 10
a = 10.000000000000000000000000000000 + 0.000000000000000000000000000000
b = 0.000000000000000000000000000000 + 0.010000000000000000208166817117
c = 10.000000000000000000000000000000 + 0.000000000000000000000000000000
i = 11, a = 11.000000000000000000000000000000, b = 0.010999999999999999361621760841, c = 11.000000000000000000000000000000, j = 10
a = 11.000000000000000000000000000000 + 0.000000000000000000000000000000
b = 0.000000000000000000000000000000 + 0.010999999999999999361621760841
c = 11.000000000000000000000000000000 + 0.000000000000000000000000000000
In the above case, the cast to int truncates and leaves the integer too small by 1 for i = 9 and i = 11. But, surprisingly, the modf() routine returns data that has been corrected for the error.
Where is the correction happening? In modf() itself? By the CPU when the result is stored back to memory?
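One way to probe that (just a sketch, under the assumption that the x87's 80-bit registers are what's biting here; the outcome will depend on compiler and optimization flags, and the variable names are mine) is to force the product through a 64-bit memory store and compare the two casts:

    #include <stdio.h>

    int main(void)
    {
        double b = 9.0 / 1000.0;         /* nearest double is slightly below 0.009 */
        volatile double c = b * 1000.0;  /* the volatile store rounds the product
                                            to a 64-bit double in memory */
        int j_reg = (int)(b * 1000.0);   /* may truncate the wider register value */
        int j_mem = (int)c;              /* truncates after the store, where the
                                            product rounds to exactly 9.0 */
        printf("j_reg = %d, j_mem = %d\n", j_reg, j_mem);
        return 0;
    }

If j_reg comes out 8 and j_mem comes out 9, there is no correction in modf() at all: storing c as a 64-bit double rounds the product to exactly 9.0, and modf() simply sees the already-rounded value.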
On my Mac, the CPU and/or libraries do some magic to correct this, so that the underlying data IS whole-number data before the cast, or else the cast is very smart. In this case, the cast and printf() agree. I will note that it is possible the compiler is very smart and factored the divide and multiply by 1000 out of the equation... but I doubt it.
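That possibility is easy to rule out: hide the constant behind a volatile so the compiler cannot fold the divide and multiply at compile time (a sketch, nothing G4-specific, and den is just my name for it):

    #include <stdio.h>

    int main(void)
    {
        volatile double den = 1000.0;  /* volatile blocks compile-time folding */
        int i;
        for (i = 8; i < 13; ++i)
        {
            double b = (double)i / den;
            printf("i = %d, j = %d\n", i, (int)(b * den));
        }
        return 0;
    }

If the G4 still prints j = 9 for i = 9 with the folding blocked, the smarts are in the hardware or the conversion, not the compiler.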
PowerPC G4 output:
testit
i = 8, a = 8.000000000000000000000000000000, b = 0.008000000000000000166533453694, c = 8.000000000000000000000000000000, j = 8
a = 8.000000000000000000000000000000 + 0.000000000000000000000000000000
b = 0.000000000000000000000000000000 + 0.008000000000000000166533453694
c = 8.000000000000000000000000000000 + 0.000000000000000000000000000000
i = 9, a = 9.000000000000000000000000000000, b = 0.008999999999999999319988397417, c = 9.000000000000000000000000000000, j = 9
a = 9.000000000000000000000000000000 + 0.000000000000000000000000000000
b = 0.000000000000000000000000000000 + 0.008999999999999999319988397417
c = 9.000000000000000000000000000000 + 0.000000000000000000000000000000
i = 10, a = 10.000000000000000000000000000000, b = 0.010000000000000000208166817117, c = 10.000000000000000000000000000000, j = 10
a = 10.000000000000000000000000000000 + 0.000000000000000000000000000000
b = 0.000000000000000000000000000000 + 0.010000000000000000208166817117
c = 10.000000000000000000000000000000 + 0.000000000000000000000000000000
i = 11, a = 11.000000000000000000000000000000, b = 0.010999999999999999361621760841, c = 11.000000000000000000000000000000, j = 11
a = 11.000000000000000000000000000000 + 0.000000000000000000000000000000
b = 0.000000000000000000000000000000 + 0.010999999999999999361621760841
c = 11.000000000000000000000000000000 + 0.000000000000000000000000000000
i = 12, a = 12.000000000000000000000000000000, b = 0.012000000000000000249800180541, c = 12.000000000000000000000000000000, j = 12
a = 12.000000000000000000000000000000 + 0.000000000000000000000000000000
b = 0.000000000000000000000000000000 + 0.012000000000000000249800180541
c = 12.000000000000000000000000000000 + 0.000000000000000000000000000000
All very interesting to me... But the bottom line is that floating point is
spooky.
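Either way, the usual defensive move when a value is supposed to be a whole number is to round explicitly instead of trusting truncation. A minimal sketch (the helper name nearest_int is just for illustration):

    #include <math.h>

    /* Sketch of a helper: round to the nearest integer instead of
       truncating toward zero, so a product that lands a hair below a
       whole number still converts to the intended value. */
    int nearest_int(double x)
    {
        return (int)floor(x + 0.5);
    }

With j = nearest_int(b * d); both machines should print j = 9 for i = 9.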
Comments?
Ed