Computer Chess Club Archives


Subject: Re: static evaluation: alpha-beta-Evaluation Functions

Author: Vincent Diepeveen

Date: 05:02:40 04/15/02


On April 15, 2002 at 02:22:07, Tony Werten wrote:

>On April 14, 2002 at 16:24:44, Alessandro Damiani wrote:
>
>>On April 14, 2002 at 13:57:38, Vincent Diepeveen wrote:
>>
>>>On April 14, 2002 at 13:37:19, Alessandro Damiani wrote:
>>>
>>>>Hi Vincent,
>>>>
>>>>You concentrated too much on the game Mawari. I think they chose Mawari to have
>>>>a simple framework to experiment with. I guess a Mawari engine is far simpler
>>>>than a chess one. So, forget Mawari. :)
>>>>
>>>>You are right, alpha-beta evaluation functions are like lazy evaluation. But
>>>>the big difference is that an alpha-beta evaluation function is an algorithm
>>>>that traverses a classification tree. I have in mind the picture of an ordered
>>>>hierarchical structure of position features (a tree of features). At first sight
>>>>that is how it looked to me (right, I didn't take the time to read the whole
>>>>text :).
>>>>
>>>>We both agree on the bad effect of lazy evaluation on positional play, but an
>>>>alpha-beta evaluation function seems to be different: the bounds on a feature's
>>>>value range are not estimated.
>>>>
>>>>But maybe I am wrong.
>>>
>>>Yes you are wrong.
>>>
>>>Let them show pseudo code.
>>>
>>>Then you will see that what they describe is 100% the same and completely
>>>not working.
>>>
>>>What they do is kind of:
>>>
>>> "we have a great new feature and we call it X"
>>>
>>>In reality they reinvented old feature F, which was already proven
>>>not to work.
>>>
>>>So in fact they make two mistakes:
>>>  a) not checking the existing literature and the experiments already done
>>>  b) committing scientific fraud by describing something that
>>>     already exists.
>>>
>
>Impressive. Here is the pseudo code for the skip-search heuristic.

>function perfect_evaluation(p: position): int;
>begin
>   value := 0;
>   for i := 1 to all_possible_features(p) do
>   begin
>      add(value, score_of(i));
>   end;
>   return(value);
>end;
>
>(I have some pseudo-code for the meaning of life as well)
>
>Tony

Evaluating just one half of the evaluation, right or left (or in chess:
black or white), has already been known for half a century. It is not possible
in today's programs. If I evaluate half of the position I get,
for example, a score of +50 pawns for white and -50 pawns for black.

So you see the problem is simple.

Evaluating just a part of the evaluation has been tried in every form, from right
to left to just computing crude scores or any other part of the evaluation.

Giving it a new name and calling it 'features' instead of 'patterns'
already says something about how little computer game theory they know.
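
The technique being referred to is, roughly, the following kind of thing. This is
only a minimal C sketch of lazy evaluation driven by a quick evaluation;
quick_evaluate, full_evaluate, the Position type and the 3-pawn LAZY_MARGIN are
illustrative assumptions, not code from any program mentioned in this thread.

typedef struct Position Position;    /* placeholder for an engine's position type */

#define LAZY_MARGIN 300              /* assumed margin: 3 pawns, in centipawns */

extern int quick_evaluate(const Position *p);  /* crude, material-only score */
extern int full_evaluate(const Position *p);   /* slow, complete evaluation */

int lazy_evaluate(const Position *p, int alpha, int beta)
{
    int quick = quick_evaluate(p);

    /* If the crude score lies far outside the alpha-beta window, assume the
     * full evaluation cannot bring it back inside and return the bound. */
    if (quick + LAZY_MARGIN <= alpha)
        return alpha;
    if (quick - LAZY_MARGIN >= beta)
        return beta;

    /* Otherwise pay for the complete evaluation. */
    return full_evaluate(p);
}

The wrongly scored positions mentioned further down would be exactly the cases
where this shortcut returns a bound although the full evaluation would have landed
inside the window, for example when king safety or other large positional terms
outweigh the margin.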

>>
>>I understand what you mean, but it is better if you first get more
>>information before you judge. Here is the pseudocode taken from the text:
>>
>>function static_evaluation(p: position;
>>                           alpha, beta: real;
>>                           k: evaluation_node): real;
>>begin
>>  { no feature of position p has been computed yet }
>>  for i:= 1 to D do unknown[i]:= true;
>>
>>  while true do begin
>>    { cut off when this node's value bounds fall outside the search window }
>>    if k.beta <= alpha then return alpha;
>>    if k.alpha >= beta then return beta;
>>    { a leaf stores its value in k.alpha }
>>    if leaf(k) then return k.alpha;
>>
>>    { compute the feature tested at this node at most once per call }
>>    if unknown[k.feature] then begin
>>      vector[k.feature]:= get_feature(p, k.feature);
>>      unknown[k.feature]:= false
>>    end;
>>
>>    { descend the classification tree according to the feature value }
>>    if vector[k.feature] <= k.split_value then
>>      k:= k.left
>>    else
>>      k:= k.right
>>  end
>>end
>>
>>where D is the number of features in a position.
>>
>>Here is the link where I took the text from:
>>
>>   http://satirist.org/learn-game/lists/papers.html
>>
>>Look for "Bootstrap learning of alpha-beta-evaluation functions (1993, 5
>>pages)".
>>
>>Alessandro
>>
>>>>
>>>>
>>>>On April 14, 2002 at 11:42:34, Vincent Diepeveen wrote:
>>>>
>>>>>On April 14, 2002 at 04:26:52, Alessandro Damiani wrote:
>>>>>
>>>>>It seems to me that these idiots never figured out what has already
>>>>>been tried in the computer chess world.
>>>>>
>>>>>Of course I'm not using their 'concept', which already exists
>>>>>by the way. These guys are beginners everywhere, of course.
>>>>>Mawari: every idiot who programs for that game can become
>>>>>world champion there of course, or pay levy to get a gold medal...
>>>>>...if I may ask...
>>>>>
>>>>>What works to experiment with in a 2000-rated chess program
>>>>>simply doesn't work for today's strong chess programs.
>>>>>Mawari programs, compared to chess programs, are at the 2000
>>>>>level of course, relative to how much time and
>>>>>effort has been invested in Mawari programs.
>>>>>
>>>>>If I read their abstract correctly, then in fact they define a
>>>>>'partial' evaluation, already known under the name
>>>>>lazy evaluation using a quick evaluation.
>>>>>
>>>>>That's a complete nonsense approach. It's pretty much the same as
>>>>>lazy evaluation based upon a quick evaluation; it's most likely
>>>>>exactly the same, if not 100% identical.
>>>>>
>>>>>If I were to describe here how much time I invested in making
>>>>>a quick evaluation which evaluates some crude scores, and which,
>>>>>with some tuning of when to use it and when not to use it,
>>>>>scores within 3 pawns of the full evaluation in 99% of the positions,
>>>>>then people would not be happy.
>>>>>
>>>>>I invested *loads* of time there in the past.
>>>>>
>>>>>More importantly, I generated big test comparisons here to see
>>>>>when the quick eval worked and when it didn't. That's why I could
>>>>>conclude it didn't work.
>>>>>
>>>>>I was even more unhappy when I tested with this concept. Disaster.
>>>>>Yes, it was a faster concept, but here are the amazing results:
>>>>>  - positionally weaker
>>>>>  - tactically weaker
>>>>>
>>>>>The first I wasn't amazed about of course, but the second I was.
>>>>>I was pretty amazed to find out that it is this 1% of evaluations,
>>>>>where the quick evaluation gave a score but got it wrong,
>>>>>that makes the difference: getting these evaluations right gives
>>>>>a tactically way better engine.
>>>>>
>>>>>Simply put, the majority of tactical test-set positions get solved by the
>>>>>evaluation and NOT by seeing a bit more tactics.
>>>>>
>>>>>In short, it simply doesn't work to use a lazy evaluation in a program with
>>>>>a good evaluation which also has high scores for things like king
>>>>>safety.
>>>>>
>>>>>>Hi all,
>>>>>>
>>>>>>I am wondering if someone uses "alpha-beta-Evaluation Functions" by Alois P.
>>>>>>Heinz and Christoph Hense. Below is the abstract of the text.
>>>>>>
>>>>>>Alessandro
>>>>>>
>>>>>>
>>>>>>Bootstrap Learning of alpha-beta-Evaluation Functions
>>>>>>Alois P. Heinz, Christoph Hense
>>>>>>Institut für Informatik, Universität Freiburg, 79104 Freiburg, Germany
>>>>>>heinz@informatik.uni-freiburg.de
>>>>>>
>>>>>>Abstract
>>>>>>We propose alpha-beta-evaluation functions that can be used
>>>>>>in game-playing programs as a substitute for the traditional
>>>>>>static evaluation functions without loss of functionality.
>>>>>>The main advantage of an alpha-beta-evaluation function is that
>>>>>>it can be implemented with a much lower time complexity
>>>>>>than the traditional counterpart and so provides a significant
>>>>>>speedup for the evaluation of any game position, which
>>>>>>eventually results in better play. We describe an implementation
>>>>>>of the alpha-beta-evaluation function using a modification
>>>>>>of the classical classification and regression trees and show
>>>>>>that a typical call to this function involves the computation
>>>>>>of only a small subset of all features that may be used to
>>>>>>describe a game position. We show that an iterative bootstrap
>>>>>>process can be used to learn alpha-beta-evaluation functions
>>>>>>efficiently and describe some of the experience we gained
>>>>>>with this new approach applied to a game called malawi.
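
For reference, the pseudocode quoted earlier in the thread maps fairly directly
onto plain C. Below is only a minimal sketch of that tree traversal; the EvalNode
fields, NUM_FEATURES, the Position type and get_feature are assumptions chosen
here to mirror the pseudocode, not code from the paper.

typedef struct Position Position;    /* placeholder for an engine's position type */
typedef struct EvalNode EvalNode;

#define NUM_FEATURES 64              /* assumed: D, the number of position features */

struct EvalNode {
    double alpha, beta;              /* value bounds for positions reaching this node */
    int feature;                     /* index of the feature tested at this node */
    double split_value;              /* threshold selecting the left or right child */
    EvalNode *left, *right;          /* NULL for leaves, which store the value in alpha */
};

/* assumed helper: computes a single feature of the position on demand */
extern double get_feature(const Position *p, int feature);

double static_evaluation(const Position *p, double alpha, double beta,
                         const EvalNode *k)
{
    double vector[NUM_FEATURES];
    int    unknown[NUM_FEATURES];

    for (int i = 0; i < NUM_FEATURES; i++)
        unknown[i] = 1;              /* no feature computed yet */

    for (;;) {
        /* stop as soon as the node's bounds fall outside the search window */
        if (k->beta <= alpha)
            return alpha;
        if (k->alpha >= beta)
            return beta;
        if (k->left == NULL)
            return k->alpha;         /* leaf: stored value */

        /* compute the tested feature at most once per evaluation call */
        if (unknown[k->feature]) {
            vector[k->feature] = get_feature(p, k->feature);
            unknown[k->feature] = 0;
        }

        /* descend the classification tree according to the feature value */
        k = (vector[k->feature] <= k->split_value) ? k->left : k->right;
    }
}

Because the walk stops at the first bound cut, a typical call touches only the few
features along one root-to-cutoff path, which is where the speedup claimed in the
abstract over a full static evaluation would come from.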


