Computer Chess Club Archives



Subject: Re: new paradigm is not about solving cross-word-puzzles...

Author: Christophe Theron

Date: 13:43:26 11/06/00



On November 06, 2000 at 13:17:34, Bruce Moreland wrote:

>On November 06, 2000 at 11:59:49, Christophe Theron wrote:
>
>>Tiger uses several different pruning mechanisms (some are well-known, some are
>>more obscure) in combination with each other. Usually when you do that you
>>weaken your program. Part of my work has been to teach these pruning algorithms
>>how to live together without burning the house down.
>>
>>In the past 5 years I have worked 80% of the time on search, and I believe I
>>have "invented" several interesting pruning mechanisms. I'm pretty sure however
>>that other programmers have "invented" them as well, but maybe nobody else
>>managed to have all of them working together?
>
>I think you are probably on the right track with this.
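To give the flavor of what "living together" means, here is a toy sketch: two pruning ideas inside one negamax, with a guard so the cheap skip never fires on "noisy" children. Everything in it is invented for illustration (the tree structure, the margin, the guard), and none of it is Tiger's actual code.

```python
# Toy sketch only: alpha-beta cutoffs plus a futility-style skip,
# guarded so "noisy" children (think captures and checks) are always
# searched.  Invented structure and margin, not Tiger's code.

MARGIN = 2  # assumed slack for the cheap static estimate

def negamax(children, alpha, beta):
    """Each child is (static_estimate, noisy, subtree).  A subtree is
    an int (a leaf score from the side to move's point of view) or
    another child list, searched with the usual sign flip."""
    best = alpha
    for static, noisy, sub in children:
        # Pruning mechanism 1: futility-style skip of quiet children
        # whose cheap estimate cannot reach the current best score.
        if not noisy and static + MARGIN <= best:
            continue
        score = sub if isinstance(sub, int) else -negamax(sub, -beta, -best)
        # Pruning mechanism 2: the plain alpha-beta cutoff.
        if score >= beta:
            return beta
        best = max(best, score)
    return best
```

Each mechanism alone is well known; the real work is in guards like `not noisy` that keep one mechanism from pruning away the evidence another one relies on.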
>
>>I do a little of root processing for obvious things. But most of my evaluation
>>is done in the leaves. For example, king attack evaluation is done in every
>>node.
>>
>>My make/unmake routines update evaluation terms incrementally instead of
>>computing everything from scratch at each node. It is much more difficult to
>>work like this, because your make/unmake routines become pretty complex.
>>
>>Just one example: evaluation of "rooks on open files". The term is re-evaluated
>>if the program sees that the move:
>>* moves a rook
>>* moves a pawn on another file
>>* captures a rook
>>* captures a pawn
>>
>>So you can imagine what a mess it is when you add 20 or 30 additional terms...
>>But it works. This is also why bitboards would not help me much.
>
>I tried a simple form of this recently.  If you hash pawn structures you can get
>that information pretty quickly, so it is easy to write an incremental evaluator
>if you evaluate pieces only via their location and relationship with pawns.



I do indeed store specific information in my pawn hash table, designed to speed
up the evaluation update.

In the "rook on open file" example, evaluating the term just means testing a few
bit fields set up during the pawn structure evaluation, so it is not very
expensive.
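Concretely, the kind of bit fields involved can be sketched like this. It is a minimal illustration with invented names and bonus values, not Tiger's actual code: the pawn-structure evaluation caches, per side, one bit per file holding a pawn, and the rook term reduces to a couple of bit tests.

```python
# Minimal sketch (invented names and values): the pawn hash entry
# stores an 8-bit mask per side, one bit per file that has a pawn.

def pawn_file_mask(pawn_files):
    """Pack the set of files (0-7) holding pawns into an 8-bit mask."""
    mask = 0
    for f in pawn_files:
        mask |= 1 << f
    return mask

OPEN_FILE_BONUS = 20  # assumed centipawn values, purely illustrative
SEMI_OPEN_BONUS = 10

def rook_file_bonus(rook_file, own_pawn_mask, enemy_pawn_mask):
    """Bonus for one rook, given the bit fields cached in the pawn hash."""
    on_file = 1 << rook_file
    if not (own_pawn_mask | enemy_pawn_mask) & on_file:
        return OPEN_FILE_BONUS   # no pawns of either side: open file
    if not own_pawn_mask & on_file:
        return SEMI_OPEN_BONUS   # only enemy pawns: semi-open file
    return 0
```

With the masks already sitting in the pawn hash entry, the per-rook cost is two bit tests.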




>But
>that kind of eval is very simple and misses a lot of nuances.  For instance,
>rook on the 7th should be more potent based upon the location of the enemy king.
> It sounds like you are talking about something more complex.



Yes. The terms I update in make/unmake are second-order terms.




>I've thought about building some sort of dependency tree but I couldn't figure
>out a fast and manageable way to implement this.  That would be really cool if
>it could be done, because you could add terms, hook up a few wires, and the
>program could manage the whole thing itself.



I did not design the whole thing right from the start. It started as a very
simple evaluation to which I added terms little by little, trying to insert each
new bit of code as elegantly as possible into the existing structure.

At first I did not like the idea of having my eval code and make/unmake code
interleaved, but it turned out to be the only way to do it efficiently.
Make/unmake handles information that can be used directly to update the
evaluation, so it makes no sense to have separate eval-update code retrieve that
same information again once make/unmake has been executed.
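What that interleaving looks like can be sketched in a few lines. The piece codes, the class, and the trigger test below are all invented for illustration: make() inspects the move once and recomputes only the terms whose inputs it touched, using the four triggers already given for the rook-file term.

```python
# Hypothetical sketch: an eval term maintained inside make() itself.
# Piece codes, class, and trigger test are invented, not Tiger's code.

ROOK, PAWN, KNIGHT, NONE = "R", "P", "N", "-"

def rook_file_term_affected(moved, captured):
    """The four triggers: a rook moved, a pawn moved, a rook was
    captured, or a pawn was captured."""
    return moved in (ROOK, PAWN) or captured in (ROOK, PAWN)

class EvalState:
    def __init__(self):
        self.rook_file_score = 0
        self.recomputes = 0  # only to show how rarely the term is paid for

    def make(self, moved, captured, new_rook_file_score):
        # ...other incremental bookkeeping (material, etc.) goes here...
        if rook_file_term_affected(moved, captured):
            self.rook_file_score = new_rook_file_score()
            self.recomputes += 1
```

Most moves touch neither a rook nor a pawn, so most calls to make() never pay for the term at all; the mess comes from multiplying this trigger logic by 20 or 30 terms.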



>As of now mine is a tip evaluator, it just goes through the piece lists and does
>it, mostly.  Some stuff is kept incrementally but it's not important stuff.
>
>>I almost don't use lazy eval. Obviously, it would not work with Gambit
>>Tiger's huge positional evaluation of king safety.
>
>I did have working lazy eval until recently.  I had some big-term unavoidable
>stuff I did at the beginning, but some other stuff was pretty expensive but
>couldn't produce really big terms.  So if I knew that I was out of the window,
>and the rest of the eval couldn't get me back within it, I could prune out some
>pretty expensive stuff.
>
>As of now I'm doing king safety last, so that does sort of throw the lazy eval
>idea out.



Above a given level, lazy eval prevents you from improving your program. Lazy
eval is a poisoned gift.

It is maybe also the reason why most programmers do not want to, or even cannot,
give huge values to some positional terms.
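The trap is easy to see in a few lines (a hedged sketch with invented numbers, not any real engine's code): with a small margin the shortcut is sound, but once a single term like king safety can swing by hundreds of points, the margin must grow until the shortcut almost never fires, or it starts returning wrong scores.

```python
# Minimal illustration of the lazy-eval trade-off (invented values).

def lazy_eval(cheap_part, expensive_terms, alpha, beta, margin):
    """Skip the expensive terms when the cheap part of the score is
    hopelessly outside the (alpha, beta) window, with margin of slack."""
    if cheap_part + margin <= alpha or cheap_part - margin >= beta:
        return cheap_part                  # lazy shortcut
    return cheap_part + expensive_terms()  # full evaluation
```

With a margin of 50 but an expensive term that can add 500, the shortcut fires and returns a score of 0 where the true score of 500 would have failed high; a margin large enough to be safe (600 here) means the shortcut never fires at all.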




>>>  Are you winning games because you
>>>are ungodly fast or is it some other reason?  Everyone's eval terms look great
>>>when they outsearch the opponent by four plies.
>>
>>
>>
>>I'm not that fast, but it's true my trees are rather small due to massive
>>pruning.
>>
>>So I can go pretty deep. But it is not enough to go deep. Your evaluation has to
>>be reasonably accurate, or else you'll lose anyway. Of course you know that.
>>
>>A program that wins by outsearching its opponents is rather easy to "detect":
>>it is a program that is able to play senseless moves ("playing with no idea")
>>and still win in the end. Does that remind you of some famous program? Is that
>>the way Tiger plays, in your opinion?
>
>I have no negative impressions of Tiger, it seems to be very strong even on slow
>processors.  Finding this to be true is a major reason I have been working on my
>program recently.



I strongly believe that a program should not be optimized for a given processor
speed. Such a program is not balanced at all, and it will have pathological
weaknesses.

So I constantly try to make sure that my program's efficiency stays the same on
all kinds of computers.



    Christophe





Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.