Author: Robert Hyatt
Date: 18:20:43 08/06/03
On August 06, 2003 at 21:11:38, K. Burcham wrote:

>After reading your post about the MHz being about the same, let's say for
>our example 2000 MHz vs 2000 MHz. Here we are comparing 32 bit vs 64 bit systems.
>Let's say the operating system is 64 bit compatible.
>Let's say the chess program is 64 bit.
>Let's say software and hardware are 64 bit.
>
>GCP, could you please make a few comments about this 64 bit system working in
>chess software? We are all used to 32 bit. What is actually going on in these
>systems when running a chess program with 32 bit vs 64 bit?
>
>There is a lot on the internet comparing the two, but not about chess software
>using 64 bits.
>
>Here are some examples from the internet:
>
>A lot of exciting capability is enabled by 64-bit processors. They are
>inherently twice as fast as their 32-bit counterparts,

That's the first misconception. A 32-bit processor fiddles with a 32-bit computation in one cycle. A 64-bit processor fiddles with a 64-bit computation in one cycle. If you really have integer values such that -2^31 <= N <= 2^31-1, then a 64-bit processor will gain you _nothing_. You just work with 64-bit values where the upper 32 bits are all zeros or all ones. No gain whatsoever.

64-bit processors become faster when you really need 64-bit values, such as in bitboard programs where all 64 bits are significant. Suddenly an AND takes one cycle rather than two. A shift takes one cycle rather than three. An add or subtract, the same. So for _most_ applications, a 64-bit processor offers nothing at all. Which leads to disappointment.

>but there are other ways
>of gaining performance improvements, making the performance issue not necessarily
>a critical reason to go to 64 bits. More important, however, is the address
>space improvement that 64 bits enables. A 32-bit computer can address, and work
>on, 2-to-the-32nd power bits of data at any one time. This translates to a file
>size limitation of 2 GB.
>A 64-bit computer, on the other hand, can address and
>work on 2-to-the-64th power bits of data at any one time. That translates to 4
>billion times more bits of data. The file size limit doesn't exist.

Windows has supported 64-bit file offsets for quite a while. Linux supports this as well. It doesn't require a 64-bit processor. The 4-gigabyte address space is also reasonable for today, and on Intel there is a kludge (PAE) to give you a 36-bit physical address space. Eventually, 32 bits becomes a problem, of course.

>Today, many servers can handle more than 4Gbytes of physical memory. High-end
>desktop machines are following the same trend. But no single 32-bit program can
>directly address more than 4Gbytes at a time. However, a 64-bit application can
>use the 64-bit virtual address space capability to allow up to 18Ebytes to be
>directly addressed; thus, larger problems can be handled directly in primary
>memory. If the application is multi-threaded and scalable, then more processors
>can be added to the system to speed up the application even further. Such
>applications become limited only by the amount of physical memory in the
>machine.

If you look carefully, _nobody_ is really producing a 64-bit address bus. MIPS, for example, chops it off at 46 bits if I remember correctly. There simply isn't that much memory around yet.

>It might seem obvious, but for a broad class of applications, the ability to
>handle larger problems directly in primary memory is the major performance
>benefit of 64-bit machines.
>
>In the 64-bit environment, a process can have up to 64 bits of virtual address
>space, that is, 18 exabytes. This is approximately 4 billion times the current
>maximum of a 32-bit process.
>
>kburcham

That's true. Unfortunately, you have to calculate the number of 300-gig drives you need for swap space. Then it looks a bit impractical. :)
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.