Author: Vincent Diepeveen
Date: 14:16:51 07/20/99
On July 20, 1999 at 12:12:14, Dann Corbit wrote:

>I think it would be interesting to benchmark chess algorithms:
>0. Move generators -- all types
>1. Alpha-Beta vs MTD(f)
>2. Bitboards vs 0x88
>3. etc.
>
>Prepare a large crosstable and do a large number of runs with as many
>implementations as possible and under as many different conditions as possible.
>
>Change the search time from very short searches (10 sec or less) up to half an
>hour to find the big O(f(n)) properties of the algorithms.
>
>A systematic study might eliminate a lot of guesswork or even tell us *where*
>certain algorithms work better than others. For instance, we might use one
>algorithm at a certain time control and a different algorithm at a longer time
>control and yet another at correspondence chess time controls.

I think for measuring one should use a lot of time, not 20-second tests. I'm completely against that. I was thinking more of a table which lists which program is using which techniques.

In DIEP I'm using:
- alpha-beta pruning
- nullmove with R=3
- hashtables for a lot of different purposes; in the transposition table, PROBE=8
- checks in quiescence search
- extensions: check extensions and threat extensions (I'm aware that's quite a general term, as it can include everything from mating extensions to singular extensions, but I fear we can't get much more out of most dudes, including me)
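The first two items on the list above can be sketched together: a negamax alpha-beta search with null-move pruning at R=3. This is a toy illustration, not DIEP's code; the `Node` class with an explicit game tree and static scores is invented so the sketch runs standalone, and a real engine would add the usual null-move guards (not in check, enough material, etc.).

```python
R = 3  # null-move depth reduction, as stated in the post

class Node:
    """Toy game-tree node, standing in for a chess position."""
    def __init__(self, score=0, children=None):
        self.score = score              # static eval, from the side to move
        self.children = children or []  # positions after each legal move

def search(node, depth, alpha, beta, allow_null=True):
    """Negamax alpha-beta; returns a score from the mover's point of view."""
    if depth <= 0 or not node.children:
        return node.score

    # Null-move pruning: let the opponent move twice, searched to
    # depth - 1 - R with a minimal window; if even that fails high,
    # cut off. (On this toy tree "passing" just re-searches the same
    # node, which is only meant to show where the reduction fits.)
    if allow_null and depth > R:
        null_score = -search(node, depth - 1 - R, -beta, -beta + 1,
                             allow_null=False)
        if null_score >= beta:
            return beta

    best = alpha
    for child in node.children:
        score = -search(child, depth - 1, -beta, -best)
        best = max(best, score)
        if best >= beta:   # beta cutoff: alpha-beta pruning proper
            break
    return best
```

With R=3 a null-move search costs four plies less than the normal search, which is why the post's short-versus-long time-control question matters: the savings compound with depth.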
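The "PROBE=8" remark can also be sketched: a transposition table where store and probe scan up to 8 consecutive slots from the hashed index. The table size, entry layout, and depth-preferred replacement rule below are illustrative assumptions, not DIEP's actual scheme.

```python
PROBE = 8        # slots examined per probe, per the post
SIZE = 1 << 10   # slot count; a power of two so index = key & (SIZE - 1)

table = [None] * SIZE  # each slot holds (key, depth, score) or None

def tt_store(key, depth, score):
    """Reuse a slot with the same key or an empty one within PROBE slots;
    otherwise overwrite the shallowest entry seen (assumed policy)."""
    base = key & (SIZE - 1)
    shallowest = base
    for i in range(PROBE):
        idx = (base + i) & (SIZE - 1)
        entry = table[idx]
        if entry is None or entry[0] == key:
            table[idx] = (key, depth, score)
            return
        if entry[1] < table[shallowest][1]:
            shallowest = idx
    table[shallowest] = (key, depth, score)

def tt_probe(key, depth):
    """Return the stored score if an entry with this key and sufficient
    depth sits in one of the PROBE slots, else None."""
    base = key & (SIZE - 1)
    for i in range(PROBE):
        entry = table[(base + i) & (SIZE - 1)]
        if entry is not None and entry[0] == key and entry[1] >= depth:
            return entry[2]
    return None
```

Probing several slots per index trades a little lookup time for far fewer useful entries being overwritten, which is presumably why a table used "for a lot of different purposes" probes deep.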