Author: Matt Taylor
Date: 14:42:44 12/09/02
On December 09, 2002 at 17:17:53, Robert Hyatt wrote:

>On December 09, 2002 at 17:08:18, Dieter Buerssner wrote:
>
>>On December 09, 2002 at 16:52:34, Robert Hyatt wrote:
>>
>>>Yep. However, even the ANSI C committee could not agree on what to do with
>>>things from int, short, long and how to specify 64-bit ints...
>>
>>The ISO C standard defines those types IMHO in a very sensible way, by giving
>>minimum allowed ranges for a conforming compiler. The C99 Standard defines
>>[unsigned] long long basically as a type that has at least 64 bits.
>
>Unfortunately, C99 is not particularly well-supported yet. The previous C
>standard left a _lot_ of holes. Is char signed or unsigned by default?
>Neither. Which is stupid. Is long 16 or 32 or 64 bits? Any of the above,
>depending on the machine. Do bit fields start right-justified or
>left-justified? Either.
>
>The list goes on...
>
>>Any further specification (especially of the shorter types) would probably
>>make it impossible to implement efficiently on some architectures (for
>>example, some embedded processors prefer int to be 16 bits and will produce
>>faster code that way; others use 32 bits even for char because they can't
>>address unaligned data, and generating code with masks etc. could make the
>>code very inefficient). For most programming tasks, I don't see much of a
>>problem with the Standard's definition of the types.
>>
>>If they were to define the exact ranges of the types, should they stop
>>there? One could go on and ask them to define an endianness.
>
>That would suit me. The Intel little-endian approach was OK for
>compatibility. It is now a complete bit of nonsense.
>
>>Regards,
>>Dieter

I am not familiar with C99, but the older ANSI C defines short, int, and long
only relative to each other:

sizeof(short) <= sizeof(int) <= sizeof(long)

Code that relies on the actual sizes of these types is therefore not portable
between architectures with different word lengths. For that matter, it's not
necessarily portable between compilers: Microsoft chose to make int 32 bits in
their 64-bit version of VC, while GNU C could (and should) make int 64 bits.

I would advocate a uniform convention of intxx, where xx is the number of
bits. In my code I usually prefix with 's' or 'u' for signed vs. unsigned. For
types where size is irrelevant (e.g. array indices), a general int type (like
size_t) could be used. This convention is compact, clear, unambiguous, and
extensible (a sketch of such typedefs appears below). (As a corollary, how
will the short/int/long convention simultaneously support 16-bit, 32-bit,
64-bit, and 128-bit types?)

However, many people (ANSI, Sun, and Intel, to name a few) are of the
persuasion that unsigned types are less useful than signed types. (Consider
that malloc takes a signed length. How dumb is that.) I would foresee them
omitting the prefix and possibly leaving the signedness ambiguous. In the case
of Java, Sun decided to omit unsigned numbers entirely.

Fortunately, it doesn't matter in most cases, because C and all of its
collective variants are incapable of doing range checking, and virtually all
operations on integers will work without modification for unsigned integers IF
you already handle signed integers. The difference is that signed integers are
more prone to silent errors, such as forgetting to check for < 0 when indexing
an array (see the second example below).

-Matt
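A minimal sketch of the intxx-style typedefs described above, assuming a
typical ILP32 compiler (8-bit char, 16-bit short, 32-bit int, 64-bit long
long); the names s8/u8 through s64/u64 and the underlying type choices are
illustrative, not taken from any particular codebase. C99 later standardized
the same idea in <stdint.h> as int8_t, uint32_t, and so on.

/* Hypothetical sxx/uxx typedefs in the style described above.  The
   underlying types assume an ILP32 compiler (8-bit char, 16-bit short,
   32-bit int, 64-bit long long); a different platform would need
   different choices, which is exactly the portability problem. */
typedef signed char        s8;
typedef unsigned char      u8;
typedef short              s16;
typedef unsigned short     u16;
typedef int                s32;
typedef unsigned int       u32;
typedef long long          s64;  /* C99, or a common compiler extension */
typedef unsigned long long u64;

/* C99 standardizes the same idea in <stdint.h>:
     #include <stdint.h>
     int16_t  a;
     uint64_t b; */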
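And a small, hypothetical example of the silent error mentioned in the last
paragraph: with a signed index, forgetting the < 0 check lets a negative value
slip past the upper-bound test and read outside the array, while an unsigned
index is caught by the same single comparison because a "negative" value wraps
to a huge positive one. The table and function names are made up for the
illustration.

#define TABLE_SIZE 64

static int table[TABLE_SIZE];

/* Buggy: only the upper bound is checked, so a negative index (say, from a
   subtraction that went below zero) silently reads before the array. */
int lookup_signed(int i)
{
    if (i >= TABLE_SIZE)
        return -1;
    return table[i];          /* out-of-bounds read when i < 0 */
}

/* With an unsigned index the single comparison is enough: a "negative"
   value converts to a huge unsigned one and fails the bound check. */
int lookup_unsigned(unsigned int i)
{
    if (i >= TABLE_SIZE)
        return -1;
    return table[i];
}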