Author: Dann Corbit
Date: 22:07:39 12/05/03
On December 05, 2003 at 07:37:15, F. Huber wrote:
>Hi,
>
>a few days ago I discovered a big problem with my compiled version of
>Chest ('WinChest.exe'), which has existed since the very first release:
>(I didn't realize it earlier, because until now I had only a notebook
>with 128MB RAM, and have now changed to a desktop P4-2.66 with 512MB)
>
>WinChest requests its hash memory from Windows via calloc(), but
>whenever the needed memory is larger than 256MB, the call fails -
>it gets no more memory from Windows than that value! :-(
>
>I'm working here with Windows 98 (SE) and 512MB, but this can't be the
>reason, because when compiling the same WinChest sources with the
>'Borland C 5' compiler there's absolutely no problem with larger
>hash sizes - even more than the physical 512MB is possible (of course
>with swapping).
>
>This problem only arises with the version compiled with MS-VC++ 6.0,
>and I have absolutely no idea what the reason could be -
>I've already searched through the help files and looked through all the
>compiler options, but I didn't find anything that could explain this
>strange behaviour, and there also seem to be no compiler options
>related to this problem.
>
>So my question to all who are more familiar with MS-VC++ 6.0 than I am:
>Is there a 256MB memory limit in calloc() or malloc() in _principle_
>in this compiler, or is there some compiler option (that maybe I've
>not discovered yet), or do you perhaps know _any_ other way to solve
>this problem with this compiler?
>
>Really hoping to get some solution -
>with my best regards,
Everybody else has already answered, but I will throw in my two cents' worth.
According to the C standard, the only allocation a compiler has to support in
order to conform is a single 64K object. Obviously, we need things that are
bigger than that. But we can't cry to the compiler vendors about it, because
there are no rules requiring more.
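So the very first thing a portable program has to do is check whether a big
calloc() actually succeeded, and cope when it does not. Here is a minimal
sketch; the table name and sizes are made up for illustration, not taken from
the WinChest sources:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Made-up hash table dimensions, just for illustration:
       64M entries of 8 bytes each = 512 MB. */
    size_t entries    = (size_t)64 * 1024 * 1024;
    size_t entry_size = 8;

    void *hash = calloc(entries, entry_size);
    if (hash == NULL) {
        /* The standard never promises a request this big will succeed,
           so failure has to be handled - e.g. retry with half the size. */
        fprintf(stderr, "calloc of %lu bytes failed\n",
                (unsigned long)(entries * entry_size));
        return EXIT_FAILURE;
    }
    puts("allocation succeeded");
    free(hash);
    return EXIT_SUCCESS;
}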
Now, as to allocating big blocks of memory -- why the problem?
Suppose I have a system with 512 megs of ram and I want to calloc() a block of
256 megs. Ought to be easy, right?
Not so fast. Suppose that I have 300 megs free -- that leaves 256 megs + 44
megs loose change. Ought to be simple. But there is a catch. The largest
contiguous block of memory might be much smaller than 256 megs. Indeed, if my
program has been running a long time, other programs may have fragmented memory
quite a bit. Maybe 300 megs are available, but the largest unbroken span not
owned by anyone is much smaller than 256 megs. Maybe only 100 megs or 64 megs.
In the case of really bad fragmentation, it could be only one byte, but that
never happens in real life. I do remember back in the OS/2 days, if you left a
program running doing a million malloc()/free() calls, eventually you would
fragment memory to the point where the system would crash, even though there
were no leaks.
Let's consider a super-simple machine that has 8 bytes of ram and five users.
Here is our free memory store:
[0][1][2][3][4][5][6][7]
Joe asks for one byte of memory. He gets byte 0.
[J][1][2][3][4][5][6][7]
Fred asks for two bytes of memory. He gets 1 and 2.
[J][F][F][3][4][5][6][7]
Joe releases his memory.
[0][F][F][3][4][5][6][7]
Sally asks for three bytes of memory. She gets 3, 4 and 5.
[0][F][F][S][S][S][6][7]
Wally asks for 2 bytes of memory. He gets 6 and 7.
[0][F][F][S][S][S][W][W]
Sally releases her memory.
[0][F][F][3][4][5][W][W]
Gerald asks for four bytes of memory. There are four bytes free {0, 3, 4, 5},
but the largest contiguous run is only three bytes {3, 4, 5}, so the request
fails.
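If you want to see this effect on a real machine, one crude way is to
binary-search for the largest single block malloc() will still hand out. This
is only a diagnostic sketch of mine, not anything from Chest, and the number it
prints changes as the heap changes:

#include <stdio.h>
#include <stdlib.h>

/* Crude probe: binary-search for the largest single block malloc()
   will currently grant.  The probing itself disturbs the heap a bit,
   so treat the answer as an estimate. */
static size_t largest_block(void)
{
    size_t lo = 0;
    size_t hi = (size_t)1 << 31;            /* start at 2 GB on Win32 */

    while (hi - lo > 1024 * 1024) {         /* stop at 1 MB resolution */
        size_t mid = lo + (hi - lo) / 2;
        void *p = malloc(mid);
        if (p != NULL) {
            free(p);
            lo = mid;                       /* mid bytes are obtainable */
        } else {
            hi = mid;                       /* mid bytes are not */
        }
    }
    return lo;
}

int main(void)
{
    printf("largest single malloc: about %lu MB\n",
           (unsigned long)(largest_block() / (1024 * 1024)));
    return 0;
}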
Now, how is it that an operating system can allocate tons and gobs of memory
to lots and lots of processes? In fact, if you look at your Windows Task
Manager, under the memory usage total on the bottom bar {assuming a Win32
system}, quite often you will see a lot more memory in use than you really
have.
So how is this possible?
The answer is demand paging. You get a memory mapped file {or a part of one}
allocated to your process when you do an allocation. When I call the allocator,
it returns almost immediately. I use the first page and everything is peachy.
But if I ask for a page that is far away (and lots of other tasks are running
and consuming memory), something bad will happen. It is called a "page fault"
and it means I asked for a page in my memory map that was not already held in
memory. So when I ask for element[10000000] it goes to the disk to find the
right spot and loads that page into memory. If you look at the page fault
totals in your system monitor, you will see these terrible events happening a
lot if you don't have enough memory on your system.
Why do I call them terrible? Because memory is thousands of times faster than
disk. When you actually hit the disk, it is an awful speed penalty. That is why
adding memory to an overloaded system can suddenly make it run dramatically
faster. This is especially so in memory-demanding applications like database
systems.
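Incidentally, the way the OS hands out pages like this also points at one
possible way around a runtime-library heap that refuses big requests: on Win32
you can ask the operating system for the memory directly with VirtualAlloc()
instead of going through calloc(). I have not verified this against the
MS-VC++ 6.0 problem above, so treat it as a sketch; committed pages do come
back zero-filled, so you keep the zeroing that calloc() gives you:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Example size only: larger than the 256 MB that the MS-VC++ 6.0
       build of WinChest reportedly refuses. */
    SIZE_T size = (SIZE_T)384 * 1024 * 1024;

    /* Reserve and commit the pages straight from the OS.  Committed
       pages come back zero-filled, like calloc(). */
    void *block = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                               PAGE_READWRITE);
    if (block == NULL) {
        fprintf(stderr, "VirtualAlloc failed, error %lu\n",
                (unsigned long)GetLastError());
        return 1;
    }

    /* ... use the block as the hash table ... */

    VirtualFree(block, 0, MEM_RELEASE);
    return 0;
}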
Now, with a Win32 system, you can really only address 4GB of ram total[*]. And
you cannot have all of that. The operating system needs gobs of ram too.
Windows divides it 50/50. So user processes get to allocate 2GB and the
operating system gets to run its kernel stuff, disk cache, etc. in the other
half. It is not as bad as it sounds, since each user process can have its own
2GB through the magic of demand paging. All fine and dandy.
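You can see this split from a program, by the way. On Win32,
GlobalMemoryStatus() reports both the physical RAM in the box and the address
space the process gets; the little sketch below is only an illustration, not
anything from Chest:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUS ms;
    ms.dwLength = sizeof(ms);
    GlobalMemoryStatus(&ms);

    /* dwTotalVirtual is the user-mode address space for this process:
       about 2 GB on an ordinary Win32 system, no matter how much
       physical RAM the machine has. */
    printf("physical RAM:       %lu MB\n",
           (unsigned long)(ms.dwTotalPhys / (1024 * 1024)));
    printf("user address space: %lu MB\n",
           (unsigned long)(ms.dwTotalVirtual / (1024 * 1024)));
    printf("still unreserved:   %lu MB\n",
           (unsigned long)(ms.dwAvailVirtual / (1024 * 1024)));
    return 0;
}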
But 2GB is a very small space when you are dealing with large problems (like a
database system or a 7-man tablebase file). There is good news around the
corner, though: 64-bit chips can address 18,446,744,073,709,551,616 bytes of
ram.
Which reminds me. I want a multiple CPU AMD 64 system. And gobs and gobs of
ram.
[*] Yes, I know there are exceptions, but they are not very important. And you
have to give something up to get the extra memory.