Author: Robert Hyatt
Date: 17:55:30 05/09/04
On May 09, 2004 at 20:27:58, Uri Blass wrote:
>On May 09, 2004 at 19:40:19, Robert Hyatt wrote:
>
>>On May 09, 2004 at 15:06:30, Uri Blass wrote:
>>
>>>I see that crafty is using the following commands:
>>>
>>>if (buffered >= SORT_BLOCK) {
>>> BookSort(bbuffer,buffered,++files)
>>>
>>>If I understand correctly the book of crafty is not one file but some files,
>>>when every file is not more that SORT_BLOCK positions.
>>
>>No. One file broken up into 32768 "clusters". A cluster is all the hash
>>signatures with the same 16 left-most bits (same parent).
>>
>>>
>>>I wonder what is the reason for it.
>>>I think that it is more simple to call booksort only one time, after I read all
>>>the book positions into an array.
>>
>>
>>Which would you rather sort? one million things at a time, 100 times total, or
>>100 million things once? Hint" 100M at once takes _way_ longer... I produce
>>separate sort files, then merge them back into one file. It was done for speed
>>of production...
>
>I understand.
>
>>
>>
>>
>>
>>
>>>
>>>The only problem can be if the book is too big so there is not enough memory in
>>>RAM, but I think that it is not a practical problem with the hardware that you
>>>have.
>>
>>People have used game databases of 20+ million games, at 100 moves (50 moves per
>>side) that is a _huge_ number of book positions. Won't fit into RAM. So a form
>>of disk sort is needed.
>
>The question is if there is a good reason to use 20+ million games for getting a
>better book and if 1 million is not enough.
>
>hint:games of weak players or blitz games are probably not very productive for a
>better book.
>
>Uri
I won't argue that at all. I did this due to many requests for the ability to
create _huge_ books, for whatever reason...
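
If you want to see the idea in miniature, something like the sketch below
works. This is not Crafty's actual book code: SORT_BLOCK, bbuffer, buffered,
files and BookSort are the names from the snippet Uri quoted, and everything
else (the 64-bit signature type, the temporary file names, the merge routine)
is purely illustrative.

#include <stdio.h>
#include <stdlib.h>

#define SORT_BLOCK 1000000  /* positions per in-memory block; the real
                               value is in Crafty's sources */

typedef unsigned long long SIG;  /* 64-bit hash signature (illustrative) */

static int CompareSigs(const void *a, const void *b) {
  SIG x = *(const SIG *) a, y = *(const SIG *) b;
  return (x > y) - (x < y);
}

/* Sort one full block in memory and write it out as temporary run file
   number "files" (the quoted code passes ++files, so runs count from 1). */
void BookSort(SIG *bbuffer, int buffered, int files) {
  char name[64];
  FILE *f;

  qsort(bbuffer, buffered, sizeof(SIG), CompareSigs);
  sprintf(name, "book.tmp.%d", files);
  f = fopen(name, "wb");
  fwrite(bbuffer, sizeof(SIG), buffered, f);
  fclose(f);
}

/* Merge the sorted runs back into one sorted file.  With on the order of
   100 runs, a linear scan for the smallest head element is plenty fast. */
void BookMerge(int files, FILE *out) {
  FILE **f = malloc(files * sizeof(FILE *));
  SIG *head = malloc(files * sizeof(SIG));
  int *live = malloc(files * sizeof(int));
  int i, best;

  for (i = 0; i < files; i++) {
    char name[64];
    sprintf(name, "book.tmp.%d", i + 1);
    f[i] = fopen(name, "rb");
    live[i] = (fread(&head[i], sizeof(SIG), 1, f[i]) == 1);
  }
  for (;;) {
    best = -1;
    for (i = 0; i < files; i++)
      if (live[i] && (best < 0 || head[i] < head[best]))
        best = i;
    if (best < 0)
      break;
    fwrite(&head[best], sizeof(SIG), 1, out);
    live[best] = (fread(&head[best], sizeof(SIG), 1, f[best]) == 1);
  }
  for (i = 0; i < files; i++)
    fclose(f[i]);
  free(f);
  free(head);
  free(live);
}

/* Once the file is sorted, the clusters fall out for free: every
   signature with the same 16 left-most bits is contiguous, so a small
   index table mapping cluster number to file offset lets one seek and
   one read fetch a whole cluster. */
unsigned ClusterOf(SIG sig) {
  return (unsigned) (sig >> 48);
}

The sketch leaves out error checking and the per-position move/score data the
real book carries; the point is just the sort-runs-then-merge shape.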