Author: Steve Maughan
Date: 00:46:02 04/06/02
Russell,

While creating my program Monarch I went through similar thought processes. I would, however, oscillate between "Wow, if you put all the standard algorithms into a program it plays quite well" and "the space of possible tweaks to the standard search, special search cases, standard evaluation and special evaluation is so *HUGE* that it's really tough to create a really strong program."

>Do the top programs make use of some methods that the general computer chess
>hobbyist does not know about? Any "secrets", or any significant improvements
>upon a well known technique?

Yes. For example, The King (aka ChessMaster) clearly has extension heuristics like those of no other program.

>Secondly, what is the margin of difference that actually makes a difference?
>For example, if program A is optimized to the hilt, gaining a few thousand NPS
>over program B (or even more), is that really worth anything in terms of
>playing strength? Maybe a +0.05 pawns for program A or some equally
>trivial "advantage", or is it? Basically, how much better/faster/deeper does
>the program have to search to gain a realistic advantage over another program?
>Small advantages in speed due to one program being more optimized than
>another, IMO, would not be enough to make one program significantly stronger
>than another.

I think you'd be surprised at the effect of small changes to the search and evaluation; they can be massive. A program like Crafty is *very* well tuned, but since Bob is generous enough to make it open source and is also willing to answer questions, there isn't much in Crafty that isn't in a decent amateur program; it's just incredibly well tuned. That's one reason why it has a strength of ~2550 Elo while the amateur programs are mostly below this.

>Last, and possibly the most obvious of my possible reasons for this, is to
>wonder if the strength differences in programs lie in the most mysterious part
>of any chess program, the evaluation function. This seems to be the area where
>there is little or no standardization of algorithms, and also the area allowing
>for the most creativity. So do we have a winner...the evaluation function?

Again, I'm sure the pros have well-tuned evaluation functions, but I also think that a good evaluation function is mainly the result of hard work, i.e. time, which is what the pros have dedicated to their programs.

>And of course, if anyone else has any other ideas I'd love to hear them.

The other area that Christophe and Bob emphasise is testing. The pros spend much more time testing each version.

Regards,

Steve Maughan
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.