Author: Carey
Date: 16:17:33 09/05/05
On September 05, 2005 at 18:01:27, Robert Hyatt wrote:

>On September 05, 2005 at 16:58:01, Carey wrote:
>
>>On September 05, 2005 at 14:44:58, Robert Hyatt wrote:
>>
>>>
>>> void *p = (void *) ((int) malloc(size+63) + 63) & ~63);
>>>
>>>What I do is malloc 63 bytes more than I need, add 63 to the resulting pointer,
>>>then and with a constant (an int unfortunately) that has the rightmost 6 bits
>>
>>I always did that seperately. Allocate it and then cast to unsigned int, and
>>then masked off how much I needed, then added that to the original pointer.
>>That way the pointer never had a chance to be truncated.
>
>Sorry. "cast to unsigned int" casts a 64 bit value to a 32 bit value. You just
>lost the upper 32 bits... Remember that _any_ int on these compilers is 32
>bits. While pointers in 64 bit mode are 64 bits long. Any conversions will
>completely wreck an address (pointer).

Right. But that's not what I said. Or at least not what I meant. Rereading it, I can see how you misread what I said.

You are doing the whole thing as a single statement. That's the problem. You are trying to do too much at once.

I said to do it separately. Leave the malloc return as a pointer. This part never gets chopped to any integer. It stays safe.

Then, separately, we cast it to an int. Or even a char. Just enough to do the alignment. The loss of precision there is irrelevant, because we are working with a small integer (a few bits) anyway.

Then we take that very small integer adjustment and add it to the original pointer. That kind of adjustment is perfectly safe. It's no different from doing any of the normal pointer math, such as ptr=ptr+1;

That way the pointer itself never gets cast to an int. It always stays the full pointer. Only the adjustment calculation ever gets chopped to an int, and in that case it's okay. All we need is a few bits anyway.

This is identical to what we used to have to do back in the days of 16 bit DOS, when mixing 16 bit 'near' vs.
20/32 bit 'huge' and 'far' vs. 32 bit flat pointers. (Although back in those days, we had to do a bit of extra work to make sure the pointers were normalized, etc.)

We used to do wrapper routines. (Sometimes macros, but usually functions.) Something like:

    void *AlignedMalloc(size_t Bytes)
    {
        unsigned int a;
        unsigned char *ptr; /* for proper official standard behavior, must be unsigned char */

        ptr = (unsigned char *)malloc(Bytes + 64);
        if (ptr == NULL)
            return NULL;
        a = (unsigned int)ptr;
        a = a & 63;
        a = 64 - a;
        ptr = ptr + a;
        return (void *)ptr;
    }

As you can see, the pointer is never at risk of being chopped. Back in the old days, we never really knew what size pointer we'd be working with. It might be compiled with 16 bit pointers, or 32 bit 'far'/'huge' pointers, or even on a brand new 32 bit 386 system.

(Actually, there are other, technically better, ways to write that function. This one assumes that a char is a byte is a cell, and so on. It won't work right on some exotic systems. For those, it might be safer to just increment the pointer up to 64 times until the lower bits show it's aligned. If it hasn't happened by then, you give a fatal error, because you are on a really weird system.)

Of course, we also did similar things for calloc, etc. And, of course, doing a 'free' was more than a little difficult, since the original pointer was lost. That could be dealt with by storing it, or by just not caring and letting the OS free the memory when we were done.

>>>32 bits, pointer (and long) = 64 bits... Why an int is 32 bits on a 64 bit
>>>machine is a good question. We really needed some better int types, but the
>>
>>Up to the compiler designer.
>>
>>Realistically, it makes quite a bit of sense. So much code today is hardwired
>>for 32 bit ints that going fully 64 by default would cause a lot of code to
>>fail. By keeping the int at 32 bits, most semi-properly written code will still
>>compile and work.
>
>Problem is all ints are _not_ 32 bits. That was my point. Declare an int on a
>Cray. Or on an alpha...
I know. The Cray etc. people complained about that back in the late 80's, when the original ANSI / ISO C standard was being done. The C standard people patiently explained the situation to them: that their charter was limited to "codifying existing practice" (their words), and that they only had limited authority to invent or drastically change things. That's why it waited until the C99 standard.

And there is absolutely nothing that can be done about plain 'int'. It is pretty much defined as the machine word, whatever that happens to be. But the standard does allow for some flexibility. Hence, it's possible for a version of C to have 32 bit ints even on a 64 bit system. And for portability of the large number of 32 bit programs that might not be 64 bit safe, it does make sense. Not necessarily the best choice, but it does make sense.

(In fact, I remember reading articles back in the days when people were moving from 16 to 32 bits. People were complaining about the difficulties of the move, and the Cray people spoke up and made similar comments about 32 bit unix programs being ported to the Cray.)

The only time it causes problems is when the program author does very stupid things, like assuming that pointers and integers are the same size. Never assuming that is such a cardinal rule that no programmer today should violate it. But some do. That was a painful lesson we learned the hard way back then; since then, most programmers have forgotten it and are having to relearn it when moving to 64 bits.

The reality is that you should never ever expect a pointer to be any particular size or value. Always use ptrdiff_t, and so on. If the C compiler author wants to, a pointer could be 96 bits or more. The compiler author may decide to throw in some extra boundary info, or the pointer might include extra info such as a page table entry. Or whatever. A properly written C program will never notice what size a pointer is.
And the size of an integer is only relevant to the amount of regular computational data it needs to hold. In that light, a 64 bit compiler with a 32 bit int can make some sense. (Again, not necessarily the best choice. But it can help make unsafe int code less likely to fail. A lot of programs today depend on 32 bit rollovers, etc.)
Last modified: Thu, 15 Apr 21 08:11:13 -0700
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.