Author: Robert Hyatt
Date: 09:54:48 05/20/02
On May 20, 2002 at 08:23:35, Eric Baum wrote:

>How much do modern programs benefit from
>developments beyond alpha-beta search + quiescence
>search? So, if you did the same depth search,
>same quiescence search, same opening book,
>same endgame tables, but replaced the evaluation
>function with something primitive -- say material
>and not much else -- how many rating points would you
>lose?
>
>My recollection is that one of the Deep Thought theses
>showed a minimal gain for Deep Thought from
>extensive training of the evaluation function --
>it gained some tens of rating points, but
>less than it would have gained
>from a ply of additional search. Has that changed?

You are mixing apples and oranges:

apples: which evaluation features does your program recognize?

oranges: what _weight_ do you assign to each feature you recognize?

Those are two different things. The Deep Thought paper addressed only the oranges issue. They had a reasonable set of features, and they set about trying to find the optimal value for each feature to produce the best play. Adding _new_ evaluation features would be a completely different thing, of course...
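To make the apples/oranges distinction concrete, here is a minimal sketch in C (hypothetical code, not taken from Crafty, Deep Thought, or any real engine): the set of feature terms the evaluation recognizes is one thing, the weight vector applied to them is another. Retuning the weights is what the Deep Thought work studied; adding a new struct member would be adding a feature, a different kind of change.

    #include <stdio.h>

    /* "apples": the features the program can recognize */
    struct features {
        int material;        /* material balance in centipawns         */
        int passed_pawns;    /* net count of passed pawns              */
        int king_safety;     /* crude king-shelter score               */
    };

    /* "oranges": the weight assigned to each recognized feature */
    struct weights {
        int material;
        int passed_pawns;
        int king_safety;
    };

    /* Evaluation = sum over features of (feature value * weight). */
    static int evaluate(const struct features *f, const struct weights *w)
    {
        return f->material     * w->material
             + f->passed_pawns * w->passed_pawns
             + f->king_safety  * w->king_safety;
    }

    int main(void)
    {
        struct features f = { 100, 1, -2 };        /* up a pawn, one passer, shaky king */
        struct weights  primitive = { 1, 0, 0 };   /* "material and not much else"      */
        struct weights  tuned     = { 1, 30, 15 }; /* same features, tuned weights      */

        printf("primitive eval: %d\n", evaluate(&f, &primitive));
        printf("tuned eval:     %d\n", evaluate(&f, &tuned));
        return 0;
    }

With this split, the "primitive" engine in the question is just the same feature set with most weights zeroed out, while weight tuning searches over the numbers in struct weights without ever touching struct features.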