Author: Dann Corbit
Date: 10:00:05 07/26/00
On July 26, 2000 at 12:42:41, blass uri wrote:

[snip]

>I do not see the point of comparing the evaluation with other top programs.
>It is not clear whether the evaluation of the top programs is superior relative to
>stobor, which is not a weak program, and even if they are slightly superior it may
>be better to do other things.
>
>You can also get an evaluation that is closer to the top programs' evaluation and
>not the same evaluation, and even if the evaluation of the top programs is superior,
>your estimate of the top programs' evaluation may be inferior.
>
>The evaluation of other programs is often wrong, and they often make the same
>typical computer evaluation mistakes, so I do not see the point in doing
>something similar.
>
>We do not need Fritz clones, and people are not going to buy a program that has
>an evaluation similar to Fritz and inferior search rules.

I think it would be useful for "ballpark" figures, to see whether your evaluation is way off. I also think that running the top ten chess engines on 1000 tough positions and taking a geometric average of their evaluations might be a good way to create a baseline.

However, I think the following approach would be far more useful: for each term that Yasser Seirawan describes in all of his chess books (pigs on the 7th, bad bishop, etc.), create a variable to track it. Then run an optimization (perhaps simulated annealing, or maybe just a weighted least squares fit) to find optimal values for each of these terms, and use those values in your evaluation function. You could also add the clever things that Colin Frayn, Robert Hyatt, Rémi Coulom, and Dusan Dobes did in their evaluation functions and provide weights for these, as well as any terms discussed in ICCA literature or online papers.
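
As a rough illustration of the two ideas above, here are two short Python sketches. The file names and term names are hypothetical, not something from the original discussion.

First, the baseline: a matrix of scores from the top engines over a suite of test positions, averaged per position. A plain geometric mean is undefined for zero or negative scores, so this sketch takes the geometric mean of the magnitudes and keeps the sign of the arithmetic mean.

  import numpy as np

  # engine_scores.csv (hypothetical): one row per position, one column per engine,
  # scores in centipawns from White's point of view.
  scores = np.loadtxt("engine_scores.csv", delimiter=",")
  magnitude = np.exp(np.mean(np.log(np.abs(scores) + 1e-9), axis=1))  # geometric mean of |score|
  sign = np.sign(scores.mean(axis=1))                                 # sign from the arithmetic mean
  baseline = sign * magnitude
  np.savetxt("baseline.csv", baseline, delimiter=",")

Second, the weighted least squares fit of the term weights, assuming each position has already been reduced to a vector of term counts (rook on the 7th, bad bishop, passed pawn, ...) and a target score such as the baseline above:

  import numpy as np

  # terms.csv (hypothetical):   X[i, j] = count of term j in position i
  # targets.csv (hypothetical): y[i] = target score for position i, in centipawns
  X = np.loadtxt("terms.csv", delimiter=",")
  y = np.loadtxt("targets.csv", delimiter=",")
  w = np.ones(len(y))                 # per-position weights; all equal here

  # Weighted least squares: minimize sum_i w[i] * (X[i] @ beta - y[i])**2
  sqrt_w = np.sqrt(w)
  beta, *_ = np.linalg.lstsq(X * sqrt_w[:, None], y * sqrt_w, rcond=None)

  # Example term names only; the real list would come from the books and papers mentioned above.
  for name, value in zip(["rook_on_7th", "bad_bishop", "passed_pawn"], beta):
      print(f"{name}: {value:.1f} centipawns")

The fitted beta values are candidate weights for the evaluation function; simulated annealing would instead perturb the weights directly and keep changes that improve the match against the target scores.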