Author: martin fierz
Date: 11:24:47 12/11/01
On December 10, 2001 at 16:16:00, Scott Gasch wrote:

>On December 09, 2001 at 21:09:17, martin fierz wrote:
>
>....
>>i would have expected these two approaches to be equivalent, but the new one
>>drops a few % of speed overall. that's a lot, because the whole hashing stuff
>>only takes up a few % of my work, so it looks like that part is something like
>>half as fast as earlier.
>
>When you have a global array in your code it will live in the .data section of
>your binary image. When you allocate a big chunk of memory for your hashtable
>it will come from the process' heap. So the memory layout of your image before
>and after this change is dramatically different. The slowdown you are seeing
>is, I think, more likely a result of the changed memory layout than a penalty
>for accessing the hashtable memory the new way. The new way might result in one
>or two new instructions per hit, but that doesn't explain the kind of slowdown
>you are talking about.

that's what i thought too. i wasn't sure where the hashtable was located
before, thanks for explaining.

>I saw the same thing happen when I modified some data structs in my chess
>engine to support multithreaded search recently. I saw an instant 15%+
>decrease in engine speed. This was due to memory layout and was corrected,
>believe it or not, by rearranging the order of locals in a single function.
>My first bit of advice to you is to profile your code before and after the
>change. Be very suspicious of any routine that is taking longer... especially
>if it has nothing to do with the change.

another cool programming experience :-)
i have also seen large speed differences between using global arrays for
things and using them as local arrays in the functions that use them,
sometimes better for the globals, sometimes better for the locals. i would
just like to understand which is better and why, so that i can "do the right
thing" in general. but it seems like no general rules exist :-(

>The other thing that may be slowing you down is that you are somehow swapping.
>Are you sure that the size of the hashtable you have allocated will fit in
>physical memory? I'd run perfmon and watch the "number of page faults per
>second" counter for your process while it runs. If this is not zero, expect to
>pay a large speed penalty.

yes, the hashtable fits easily. my default hashtable size is 8MB, and my
machine has 320MB, so that shouldn't be the problem.

thanks for your answer
martin
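
[Editor's note: for readers following along, here is a minimal C sketch of the
two table layouts Scott describes. The entry struct, sizes, and names are
invented for illustration (the thread shows no actual engine code); the only
point is where the memory comes from in each case.]

    #include <stdlib.h>
    #include <string.h>

    /* hypothetical 8-byte hashtable entry -- not the real engine's struct */
    typedef struct {
        unsigned int lock;   /* verification key for the position */
        short value;         /* stored search score */
        char depth;          /* depth of the stored result */
        char flags;          /* bound type etc. */
    } hashentry;

    #define TABLE_ENTRIES (1024 * 1024)  /* 1M entries * 8 bytes = 8MB default */

    /* old approach: a global array; the linker places it in the
       .data/.bss section of the binary image */
    static hashentry hashtable_global[TABLE_ENTRIES];

    /* new approach: a pointer filled in at startup; the memory comes
       from the process' heap, so the memory layout of the running
       image changes even though lookups work the same way */
    static hashentry *hashtable_heap;

    int init_hashtable(void)
    {
        hashtable_heap = malloc(TABLE_ENTRIES * sizeof(hashentry));
        if (hashtable_heap == NULL)
            return 0;  /* allocation failed */
        memset(hashtable_heap, 0, TABLE_ENTRIES * sizeof(hashentry));
        return 1;
    }

    /* a probe looks identical either way; the heap version costs at most
       one extra pointer load per access, which is why Scott fingers the
       changed layout rather than the access itself */
    hashentry *probe(unsigned int key)
    {
        return &hashtable_heap[key & (TABLE_ENTRIES - 1)];
    }

One practical difference is that the global array is zeroed by the loader,
while the heap block must be cleared explicitly, as above.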
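[Editor's note: Scott's page-fault check can also be done from inside the
program instead of through perfmon. This is a hedged sketch using the Win32
psapi call GetProcessMemoryInfo (link with psapi.lib); the before/after
sampling scheme is an illustration, not something from the thread.]

    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    /* print the cumulative page-fault count for this process; call it
       before and after a search and compare -- a count that climbs
       steadily while the engine runs suggests paging */
    void report_page_faults(const char *label)
    {
        PROCESS_MEMORY_COUNTERS pmc;

        if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
            printf("%s: %lu page faults, working set %lu KB\n",
                   label,
                   (unsigned long)pmc.PageFaultCount,
                   (unsigned long)(pmc.WorkingSetSize / 1024));
        }
    }

Note that this counter includes soft faults (pages already resident), so the
perfmon hard-fault rate Scott mentions is the more telling number for the
swapping question; with an 8MB table in 320MB of RAM it should stay flat.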