Author: Vincent Diepeveen
Date: 08:42:34 04/14/02
On April 14, 2002 at 04:26:52, Alessandro Damiani wrote:

>Hi all,
>
>I am wondering if someone uses "alpha-beta-Evaluation Functions" by Alois P. Heinz and Christoph Hense. Below is the abstract of the text.
>
>Alessandro
>
>
>Bootstrap Learning of alpha-beta-Evaluation Functions
>Alois P. Heinz, Christoph Hense
>Institut für Informatik, Universität Freiburg, 79104 Freiburg, Germany
>heinz@informatik.uni-freiburg.de
>
>Abstract
>We propose alpha-beta-evaluation functions that can be used in game-playing programs as a substitute for the traditional static evaluation functions without loss of functionality. The main advantage of an alpha-beta-evaluation function is that it can be implemented with a much lower time complexity than the traditional counterpart and so provides a significant speedup for the evaluation of any game position, which eventually results in better play. We describe an implementation of the alpha-beta-evaluation function using a modification of the classical classification and regression trees and show that a typical call to this function involves the computation of only a small subset of all features that may be used to describe a game position. We show that an iterative bootstrap process can be used to learn alpha-beta-evaluation functions efficiently, and describe some of the experience we gained with this new approach applied to a game called malawi.

It seems to me that these guys never figured out what has already been tried in the computer chess world. Of course I'm not using their 'concept', which, by the way, already exists. These guys are beginners everywhere, of course. In Mawari every idiot who programs for that game can become world champion, or pay Levy to get a gold medal... if I may say so.

What works as an experiment for a 2000-rated chess program simply doesn't work for today's strong chess programs. Compared to chess programs, Mawari programs are of course at the 2000 level, relative to how much time and effort has been invested in them.

If I read their abstract correctly, they in fact define a 'partial' evaluation, already known under the name lazy evaluation using a quick evaluation. That's a complete nonsense approach. It's pretty much the same as lazy evaluation based upon a quick evaluation; most likely it is exactly the same.

If I described here how much time I invested in making a quick evaluation that produces rough scores, and which, with some tuning of when to use it and when not, scores within 3 pawns of the full evaluation in 99% of the positions where it is used, people would not be happy. I invested *loads* of time there in the past. More importantly, I ran big test comparisons to see when the quick eval worked and when it didn't. That's how I could conclude it didn't work.

I was even more unhappy when I actually tested with this concept. A disaster. Yes, it was a faster concept, but here are the amazing results:
- positionally weaker
- tactically weaker

The first didn't amaze me of course, but the second did. I was pretty amazed to find out that the 1% of evaluations where the quick evaluation gave a score but got it wrong matter so much; evaluating those positions fully gives a tactically way better engine. The simple fact is that the majority of tactical test-set positions get solved by the evaluation and NOT by seeing a bit more tactics.

In short, it simply does not work to use a lazy evaluation in a program with a good evaluation, one which also gives big scores for things like king safety.
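To be concrete about what "lazy evaluation based upon a quick evaluation" means, here is a minimal sketch in C. The names quick_eval and full_eval and the 3-pawn margin are illustrative assumptions only, not code from any particular program:

    #define LAZY_MARGIN 300   /* 3 pawns in centipawns; illustrative value */

    int quick_eval(void);     /* cheap rough score, e.g. material only; assumed */
    int full_eval(void);      /* complete, expensive evaluation; assumed */

    /* Lazy evaluation: if the cheap estimate lies so far outside the
       (alpha, beta) window that even an error of LAZY_MARGIN cannot
       bring it back inside, skip the expensive full evaluation. */
    int evaluate(int alpha, int beta)
    {
        int score = quick_eval();

        if (score + LAZY_MARGIN <= alpha || score - LAZY_MARGIN >= beta)
            return score;     /* safely outside the window: trust the rough score */
        return full_eval();   /* otherwise pay for the full evaluation */
    }

The danger is exactly those positions where quick_eval is off by more than the margin: the search then cuts off on a wrong score, and as described above that costs tactical strength.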