Author: Russell Reagan
Date: 12:05:15 05/20/02
On May 20, 2002 at 14:47:24, Eric Baum wrote:

>OK then:
>(1) How much have computer programs benefitted from additional
>features? Remove all additional features from the top programs
>except material/piece-square table, and how many rating points would you lose?
>I'm guessing less than 100, but do you have another estimate?

I'm guessing more than 100, a lot more. It wouldn't be hard to test on your own. A lot of people seem to think that computer chess programs, no matter how simple, play like grandmasters. There are plenty of very weak chess programs, and they use mostly material and piece-square tables for evaluation. I would bet that a program with an evaluation function that primitive wouldn't break the 2000 Elo mark, or would reach maybe 2000-2100 at best. Of course I could be wrong, and someone like Bob or Dan Corbit could tell you more about the difference between a program with a simple evaluation function and one with a complex evaluation. But even if you could get a program with that kind of simple evaluation function playing at a 2500 level, that's at least 100 rating points right there. A program that simple would never be capable of consistently playing competitively against IMs and GMs, which is what a 2500 rating implies. On some lists, Crafty and Yace are at least 100 points behind the leading commercial programs, and their evaluation functions are far from simple.

>(2) Are there any programs with significant ability to discover new
>features, or are essentially all the features programmed in by hand?
>If you believe there are programs that discover useful new features,
>how many rating points do you think they have gained?
>And can you give me some idea of what type of algorithm was used?

There are programs like this, and they do learn new features. They generally use neural nets or something similar. They also generally stop improving at about a beginner level.
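For readers who haven't seen one, a material-plus-piece-square-table evaluation of the kind discussed above is only a few lines of code. This is a minimal illustrative sketch: the piece values and the centralization bonus are toy numbers invented here, not taken from any real engine.

```python
# Toy material + piece-square-table evaluation.
# Piece values (centipawns) and the bonus formula are illustrative only.

PIECE_VALUE = {'P': 100, 'N': 300, 'B': 310, 'R': 500, 'Q': 900, 'K': 0}

def pst_bonus(sq):
    """Toy piece-square bonus for square index 0..63 (a1..h8):
    bigger for central squares, zero in the corners."""
    f, r = sq % 8, sq // 8
    return int(14 - 2 * (abs(f - 3.5) + abs(r - 3.5)))

def evaluate(white_pieces, black_pieces):
    """Score from White's point of view, in centipawns.
    Each side is a list of (piece_letter, square_index) pairs."""
    score = 0
    for piece, sq in white_pieces:
        score += PIECE_VALUE[piece] + pst_bonus(sq)
    for piece, sq in black_pieces:
        # Mirror the square vertically so Black uses the same table.
        score -= PIECE_VALUE[piece] + pst_bonus(sq ^ 56)
    return score
```

A white knight on e4 (square 28) against a black knight on a8 (square 56) scores +12 for White here: same material, but the centralized knight collects a piece-square bonus. That is essentially all the positional knowledge such a program has.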
>Also, for comparison, does anybody have a recent estimate of rating
>point gain per additional ply of search?

I don't, but I'm sure someone does. I would guess, however, that at some point you stop gaining many rating points per ply; then once you reach a really deep depth, the rating jumps up again, then there's another diminishing-returns plateau, then another jump, and so on, until you reach a ply depth that solves the game. Think about it: if one program searches 14 ply and another searches 15 ply, the 15-ply program isn't going to see much more than the 14-ply program. There isn't enough depth difference for significant tactics to alter the evaluation of the position by any great margin. On the other hand, if one program searches 14 ply and another searches 20 ply, the 20-ply program can see extra three-move combinations that the 14-ply program can't, which would be significant.

>(3) Also, am I right in thinking that modern programs are still more or
>less doing alpha-beta with quiescence search, or has there been real
>progress on context dependent
>forward pruning, leading to substantial rating points gains?

None of us (except the authors of the top commercial programs) can say for sure whether there are any "secrets" that aren't well known by all of us amateur programmers. As far as the non-commercial programs go, it's safe to say it's pretty much alpha-beta (in some form) with quiescence search. We know what Crafty does since it's open source, and Yace is at around the same level as Crafty; since it's not significantly better, it's probably not doing anything "secret" that helps it a great deal. And since I don't even know what "context dependent forward pruning" is, maybe you could explain that :)

Russell
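For anyone following along who hasn't written an engine, the "alpha-beta with quiescence search" combination mentioned above can be sketched in negamax form over an abstract game tree. This is a schematic sketch, not a chess implementation: the `Pos` structure and its "quiet"/"noisy" child lists (noisy standing in for captures and checks) are invented here purely for illustration.

```python
from collections import namedtuple

# A position: a static score from the side-to-move's point of view,
# plus child positions reached by quiet moves and by "noisy" moves
# (the captures/checks that quiescence search keeps exploring).
Pos = namedtuple('Pos', ['score', 'quiet', 'noisy'])

def leaf(score):
    return Pos(score, (), ())

INF = 10**6

def quiescence(pos, alpha, beta):
    """Search only noisy moves, with the static score as a stand-pat bound."""
    stand_pat = pos.score
    if stand_pat >= beta:
        return beta          # fail-hard cutoff on the stand-pat score
    alpha = max(alpha, stand_pat)
    for child in pos.noisy:
        score = -quiescence(child, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

def alphabeta(pos, depth, alpha, beta):
    """Fixed-depth negamax alpha-beta; drops into quiescence at the frontier."""
    children = pos.quiet + pos.noisy
    if depth == 0 or not children:
        return quiescence(pos, alpha, beta)
    for child in children:
        score = -alphabeta(child, depth - 1, -beta, -alpha)
        if score >= beta:
            return beta      # beta cutoff: opponent won't allow this line
        alpha = max(alpha, score)
    return alpha
```

The point of the quiescence call is visible in a tiny example: a frontier position with static score +2 but a noisy reply losing 10 gets searched past the nominal depth, so the horizon doesn't cut off in the middle of an exchange.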