Computer Chess Club Archives



Subject: Re: Seg fault problems on 64bit linux

Author: Robert Hyatt

Date: 15:01:27 09/05/05


On September 05, 2005 at 16:58:01, Carey wrote:

>On September 05, 2005 at 14:44:58, Robert Hyatt wrote:
>
>>
>>    void *p = (void *) (((int) malloc(size+63) + 63) & ~63);
>>
>>What I do is malloc 63 bytes more than I need, add 63 to the resulting pointer,
>>then AND with a constant (an int unfortunately) that has the rightmost 6 bits
>
>I always did that separately.  Allocate it, then cast to unsigned int, mask off
>how much I needed, and then add that to the original pointer.  That way the
>pointer never had a chance to be truncated.

Sorry.  "cast to unsigned int" casts a 64 bit value to a 32 bit value.  You just
lost the upper 32 bits...  Remember that _any_ int on these compilers is 32
bits, while pointers in 64 bit mode are 64 bits long.  Any such conversion will
completely wreck an address (pointer).
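
To make that concrete, here is a minimal sketch of the same align-to-64-bytes
trick done through a pointer-sized integer instead of an int.  This is not the
code from the original post; it assumes C99's <stdint.h> and its uintptr_t are
available:

    #include <stdlib.h>
    #include <stdint.h>

    /* Round a fresh allocation up to the next 64-byte boundary.
       uintptr_t is as wide as a pointer, so nothing gets truncated
       on a 64 bit target the way an (int) cast would truncate it. */
    void *malloc_aligned64(size_t size) {
        void *raw = malloc(size + 63);
        if (raw == NULL)
            return NULL;
        uintptr_t p = (uintptr_t) raw;
        p = (p + 63) & ~(uintptr_t) 63;
        return (void *) p;
    }

Note that only the pointer originally returned by malloc() can legally be
passed to free(), so a real version would have to remember "raw" somewhere.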



>
>That was back in the days of 16 bit near vs. far pointers in DOS, but the same
>thing works with 64 bit systems.
>
>A couple of macros can easily hide the nastiness of all the type casts etc. that
>keep the compiler happy.
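
Something along those lines would do it.  A sketch only; the macro names here
are made up, and it assumes <stdlib.h> and <stdint.h> are included:

    /* Hypothetical convenience macros hiding the alignment casts. */
    #define ALIGN_UP(ptr, align) \
        ((void *) (((uintptr_t) (ptr) + ((align) - 1)) & \
                   ~(uintptr_t) ((align) - 1)))

    #define MALLOC_ALIGNED(size, align) \
        ALIGN_UP(malloc((size) + (align) - 1), (align))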
>
>
>>32 bits, pointer (and long) = 64 bits...  Why an int is 32 bits on a 64 bit
>>machine is a good question.  We really needed some better int types, but the
>
>Up to the compiler designer.
>
>Realistically, it makes quite a bit of sense.  So much code today is hardwired
>for 32 bit ints that going fully 64 by default would cause a lot of code to
>fail.  By keeping the int at 32 bits, most semi-properly written code will still
>compile and work.

The problem is that ints are _not_ all 32 bits.  That was my point.  Declare an
int on a Cray, or on an Alpha...


>
>It'd be nice if all the code was properly written, but it's not.
>
>
>>ANSI C committee failed to deliver them.  int16, int32, int64 would have been a
>
>We have that.
>
>That's what the stdint.h header does.  It was added in C99.  It isn't in the C89
>standard because at the time, they were concentrating on "codifying existing
>practice".  In other words, just figuring out what all the existing k&r
>compilers were already doing, and how to clean up the murky areas.  That alone
>was a major effort.
>
>Unfortunately, it's not in C++, which can make a mess of portability when linking
>C and C++ code together.
>
>Not only does it define those exact-width types, it also provides 'at least that
>size' and 'fast' sized integer types, etc.
>
>So the machinery is there.
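
For instance (just an illustration, not from the original post):

    #include <stdint.h>

    int32_t       exact;   /* exactly 32 bits, where such a type exists */
    int_least16_t least;   /* smallest type with at least 16 bits       */
    int_fast32_t  fast;    /* "fastest" type with at least 32 bits      */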
>
>We just needed better tools to access them.
>
>For code that has to be portable, it's just a matter of the preprocessor
>checking what version you have, including stdint.h if it's C99, and if not, then
>just typedefing some reasonable defaults.
>
>And then never using the plain 'int' but using known sized integers.
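
A rough sketch of that kind of guard follows; the fallback typedefs are only
reasonable guesses and would have to be checked against each target compiler:

    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    #  include <stdint.h>      /* C99: the real thing */
    #else
       /* Pre-C99 fallback: assumed sizes, verify per compiler. */
       typedef signed char        int8_t;
       typedef short              int16_t;
       typedef int                int32_t;
       typedef long long          int64_t;
       typedef unsigned char      uint8_t;
       typedef unsigned short     uint16_t;
       typedef unsigned int       uint32_t;
       typedef unsigned long long uint64_t;
    #endif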
>
>
>Personally, I've always missed the somewhat stronger type checking of Pascal.
>(Not the full ISO typechecking, but a little better than what C has.)  A little
>bit better type checking can be a major help.


