Author: Eugene Nalimov
Date: 10:01:53 10/14/98
On October 14, 1998 at 00:22:51, Ren Wu wrote:

>On October 13, 1998 at 12:58:50, Eugene Nalimov wrote:
>
>>My program generated KQPKQ in ~120 hours on an Alpha/533 (roughly equivalent
>>to a PII/400). I think it's slower than Bruce's because of
>> (1) a very crude move generator - I just adopted the scheme used in my
>> z80/8080/8088 chess program,
>> (2) a slow - but memory-efficient - indexing scheme.
>
>Can you please let us know how much memory you used when you built this
>database? And it would be great if you could outline your algorithm as well.
>Thanks.

I keep the constructed tablebase in memory, so for a 5-man ending with a pawn the program will use ~600Mb of (virtual) memory. As I have no machine with that amount of RAM, I see disk activity - but not as much as with Edwards' generator. At an earlier stage of the project I had a disk-based version of the program, but then stopped its development.

About the algorithm - wait a week or two, and you'll be able to look at the sources yourself. Or wait longer, and probably there will be an article in the ICCA Journal.

>>2 days ago I was able to modify the generator so that Gnu C++ (the latest
>>Gnu-Win32 from Cygnus) was able to compile it. I replaced some enums
>>by ints, and (a==b) by (a==b)?true:false. The resulting code is ~1.4 times
>>slower than the code compiled by Visual C.
>>
>>I sent the modified sources to Bob and am now waiting - will his Linux Gnu
>>C++ be able to compile them? If so, I don't expect any problems in
>>including the probing code in Crafty. After that I plan to make the sources
>>public.
>>
>>I have all but one of the 5-man (3+2) pawnless tables generated; the last
>>one is generating right now. Also, I generated 5 tables with 1 pawn - KNPKN,
>>KNPKB, KBPKB, KRPKR, and KQPKQ; more to come in several days. So, when
>>Bob is ready, I'll FTP him 2Gb of gzipped tables.
>>
>>Next I plan to work on decompression-on-the-fly.
>>Current results look promising - the compression ratio is 10-15% better
>>than gzip's, and decompression speed is 29Mb/sec on a PII/400, which is
>>comparable with disk read speed (the program was compiled by Visual C,
>>of course).
>
>10-15% better or worse? If you can do 10-15% better than gzip and still be
>able to decompress the db at that speed, your work is really impressive!

Of course better :-). The algorithm used by the compression/decompression routines is specialized - it looks at the data as (very roughly speaking) an N-dimensional cube. It just so happens that the data in a tablebase is N-dimensional, and the indexing scheme does not hide that fact. For example, the endgame databases of the checkers program Chinook cannot be compressed by that algorithm at all.

Also please note that the CRC check is turned off - when it's on, decompression speed drops to 18Mb/sec.

The algorithm was developed not by me, but by a friend of mine...

Eugene

>>Eugene