Author: Dieter Buerssner
Date: 16:59:53 09/09/02
On September 09, 2002 at 13:32:42, Ed Panek wrote:

> a = (double)i;
> b = a / 1000.0;

Note that, to not add even more confusion by possible compiler (mis)optimizations, it may be better to use d instead of 1000.0 above. A clever compiler may otherwise actually do b = a*0.001 instead of a/1000.0 - and the two are *not* the same.

> d = 1000.0;
> c = b * d;
> j = (int)(b * d);

This is the crucial point, see below.

> In the above case, the cast to int truncates and leaves the integer too small by
> 1 in some cases.

The reason is the following: x86 hardware will by default calculate floating point expressions with higher precision than the 64 bits of double (it will use 80 bits). So what happens here is that the 80-bit result of b*d was converted to an integer (without rounding it to type double first). This is *not* the same as converting the double precision result to an integer. Note that in double precision, for the numbers you tested, c will always be exactly the same as a! (But for the other example: (2010.0 / 1000.0) * 1000.0 is not exactly 2010.)

> But, surprisingly, the modf() routine returns data that is
> corrected for the error.

No, the outcome of modf is right and expected in all cases (also on the Mac). The numbers printed are correct to the last digit (which is also correctly rounded).

> On my mac, the CPU and/or libraries do some magic to correct this so that the
> underlying data IS whole number data before the cast, or else the cast is very
> smart.

Your Mac will not calculate the temporary result with more precision. Actually, the outcome is perfect, and expected for a standard-conforming floating point environment. OTOH, typically the additional precision will not hurt - it can often even help a bit. But for some software, it indeed hurts. Anyway, if you don't intend to write such software, just assume that in floating point some things cannot be guaranteed - for example a == (a/d)*d. Also, (a/d)*d is not necessarily the same as (a*d)/d (even when no over/underflow problem occurs). Many more such things ...

BTW, the MSVC compiler sets the precision of the floating point hardware to 64-bit doubles (53-bit mantissa), and therefore gets the right integers in your case.

You can search the net for the following nice paper: David Goldberg: What Every Computer Scientist Should Know About Floating-Point Arithmetic.

Regards,
Dieter
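PS. Here is a minimal sketch (my own little test harness, not Ed's exact program) that shows both effects in one loop. On a compiler that keeps intermediates in the 80-bit x87 registers (e.g. an old 32-bit build, or gcc with -mfpmath=387), the direct cast may come out one too small; with SSE2 code generation, or under MSVC's default 53-bit precision control, the two casts should agree.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double d = 1000.0;

        for (int i = 1; i <= 5000; i++) {
            double a = (double)i;
            double b = a / d;

            /* Cast the product directly: on x87 this may convert the
               80-bit intermediate, truncating to i-1 in some cases. */
            int j_direct = (int)(b * d);

            /* Store through a volatile double first: this forces the
               product to be rounded to 53-bit double precision. */
            volatile double c = b * d;
            int j_stored = (int)c;

            /* modf receives a genuine double argument (the product is
               rounded to double when passed), so its result is the
               correctly rounded double-precision answer. */
            double ipart;
            double frac = modf(b * d, &ipart);

            if (j_direct != j_stored)
                printf("i=%d: direct cast %d, via double %d, modf %g + %g\n",
                       i, j_direct, j_stored, ipart, frac);

            /* Even in pure double precision, a == (a/d)*d is not
               guaranteed - 2010 is such a case. */
            if (c != a)
                printf("i=%d: (a/d)*d != a in double precision\n", i);
        }
        return 0;
    }

Whether the first printf ever fires depends entirely on how your compiler uses the FPU; the second one fires regardless, for the values (like 2010) where the rounding errors of the division and the multiplication do not cancel.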