Computer Chess Club Archives



Subject: Re: Java oddity

Author: Dieter Buerssner

Date: 12:02:07 09/09/02



On September 09, 2002 at 10:35:07, Daniel Clausen wrote:

>On September 09, 2002 at 10:04:12, Ed Panek wrote:
>
>>long x;
>>double y;
>>
>>y = 2.01;
>>x = (long)(y * 1000);
>>
>>
>>guess what x equals? ...
>>
>>No, not 2010, but 2009 !!!!!!!!!!!!!!!

[...]

>In your example, the number "2.01" doesn't have an exact representation in a
>computer. (at least I don't think so - correct me when I'm wrong :)

You are correct for all floating point representations on almost all systems
(these are binary based; there have also been systems with decimal-based
floating point representations in the distant past).

>In fact, the
>internal representation is just a tiny bit below 2.01.

Exactly. For the IEEE 754 floating point standard with a 53-bit mantissa (almost
all modern architectures use this for double - though not some Crays), the
closest floating point number is exactly
2.0099999999999997868371792719699442386627197265625000 (all trailing zeroes).
The next larger representable floating point number would be
2.010000000000000230926389122032560408115386962890625000
And as you can see, this one is a bit farther away from 2.01.
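These exact values can be checked from Java itself: the BigDecimal(double)
constructor prints the exact binary value a double actually holds (not the
decimal literal it was parsed from), and Math.nextUp gives the next
representable double, one ulp above.

```java
import java.math.BigDecimal;

public class ExactDouble {
    public static void main(String[] args) {
        double d = 2.01;
        // Exact value of the double nearest to 2.01 - the first
        // long decimal expansion quoted above:
        System.out.println(new BigDecimal(d));
        // The next representable double, one ulp higher - the second
        // expansion quoted above:
        System.out.println(new BigDecimal(Math.nextUp(d)));
    }
}
```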

>Now if you multiply that
>by 1000, it will be just a tad below 2010,

2.009999999999999772626324556767940521240234375000E3 (which is not exactly the
above number times 1000)
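That rounding step can be observed directly (a small check of my own, not from
the original post): one correctly-rounded multiplication by 1000 lands just
below 2010.

```java
import java.math.BigDecimal;

public class ProductDemo {
    public static void main(String[] args) {
        double y = 2.01;
        double p = y * 1000.0;                 // one correctly-rounded multiply
        System.out.println(new BigDecimal(p)); // exact value, just below 2010
        System.out.println(p < 2010.0);        // prints true
    }
}
```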

On MIPS, Alpha, Motorola chips, one would get exactly the same result.

>and if you convert that to an integer
>(which does not round, it cuts off the rest) it yields the result 2009.

At least in C. Are the rules the same in Java?
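For reference, Java's narrowing conversion from double to long (JLS §5.1.3)
does round toward zero, just like C's cast, so Java produces 2009 here as well.
Math.round gives round-to-nearest instead:

```java
public class CastDemo {
    public static void main(String[] args) {
        double y = 2.01;
        // The (long) cast truncates toward zero (JLS 5.1.3), as in C:
        System.out.println((long) (y * 1000));    // prints 2009
        // Math.round rounds to the nearest long instead:
        System.out.println(Math.round(y * 1000)); // prints 2010
    }
}
```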

A result of 2010 would actually show that the compiler/libs/hardware don't
conform to the IEEE standard (for example, it might make a very small error by
converting 2.01 to the second number mentioned above; correct conversion is not
easy to implement, and in general needs higher-precision math than the provided
internal types).

BTW, that cast to an integral type is rather costly on x86 (the x87 FPU),
because the control register of the FPU must be reprogrammed. AFAIK the Intel
compiler has some optimization switches which may yield faster code - and
different results. And, to become at least a bit on-topic again, this can make
chess engines depend on the compiler and optimization switches. For example, in
Yace I initialize some eval tables by linear interpolation, going through
intermediate floating point results. So, some values can differ by one
centipawn. Not that this would make the engine weaker/stronger - but it makes it
search different trees, and the cause may be non-obvious ...
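A minimal sketch of how such a table initialization can be sensitive -
hypothetical code, not Yace's actual source: the truncating cast at the end
sits right at a rounding boundary, so a one-ulp difference in the intermediate
product (e.g. from 80-bit x87 temporaries in a C build) can move a table entry
by one centipawn.

```java
public class EvalTable {
    // Hypothetical example: fill a table by linear interpolation
    // between two endpoint scores, going through doubles.
    static int[] interpolate(int from, int to, int steps) {
        int[] table = new int[steps + 1];
        for (int i = 0; i <= steps; i++) {
            double t = (double) i / steps;
            // The truncating cast is where a tiny difference in the
            // intermediate result can cost or gain one centipawn.
            table[i] = (int) (from + t * (to - from));
        }
        return table;
    }

    public static void main(String[] args) {
        for (int v : interpolate(0, 100, 7)) {
            System.out.print(v + " ");
        }
        System.out.println();
    }
}
```

In Java the result is reproducible by specification, but the same code in C
compiled with extended-precision intermediates (or different optimization
switches) need not give identical entries - which is exactly the
switch-dependence described above.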

Regards,
Dieter




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.