Computer Chess Club Archives


Subject: Re: new paradigm is not about solving cross-word-puzzles...

Author: Robert Hyatt

Date: 12:49:25 11/07/00



On November 07, 2000 at 14:39:57, Christophe Theron wrote:

>On November 07, 2000 at 10:20:14, Robert Hyatt wrote:
>
>>On November 06, 2000 at 23:43:50, Peter McKenzie wrote:
>>
>>>On November 06, 2000 at 22:18:13, Robert Hyatt wrote:
>>>
>>>>On November 06, 2000 at 16:43:26, Christophe Theron wrote:
>>>>
>>>>>On November 06, 2000 at 13:17:34, Bruce Moreland wrote:
>>>>>
>>>>>>On November 06, 2000 at 11:59:49, Christophe Theron wrote:
>>>>>>
>>>>>>>Tiger uses several different pruning mechanisms (some are well-known, some are
>>>>>>>more obscure) in combination with each other. Usually when you do that you
>>>>>>>weaken your program. Part of my work has been to teach these pruning algorithms
>>>>>>>how to live together without burning the house down.
>>>>>>>
>>>>>>>In the past 5 years I have worked 80% of the time on search, and I believe I
>>>>>>>have "invented" several interesting pruning mechanisms. I'm pretty sure however
>>>>>>>that other programmers have "invented" them as well, but maybe nobody else
>>>>>>>managed to have all of them working together?
>>>>>>
>>>>>>I think you are probably on the right track with this.
>>>>>>
>>>>>>>I do a little root processing for obvious things. But most of my evaluation
>>>>>>>is done in the leaves. For example, king attack evaluation is done in every
>>>>>>>node.
>>>>>>>
>>>>>>>My make/unmake routines update evaluation terms incrementally instead of
>>>>>>>computing everything from scratch at each node. It is much more difficult to
>>>>>>>work like this, because your make/unmake routines become pretty complex.
>>>>>>>
>>>>>>>Just one example: evaluation of "rooks on open files". The term is re-evaluated
>>>>>>>if the program sees that the move:
>>>>>>>* moves a rook
>>>>>>>* moves a pawn to another file
>>>>>>>* captures a rook
>>>>>>>* captures a pawn
>>>>>>>
>>>>>>>So you can imagine what a mess it is when you add 20 or 30 additional terms...
>>>>>>>But it works. This is also why bitboards would not help me much.
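
Just to illustrate the idea, the trigger test in make() might look something like
this (plain C, invented names, only a sketch and certainly not Tiger's actual
code):

    enum { EMPTY, PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };

    #define FILE_OF(sq) ((sq) & 7)     /* assumes 0..63 square numbering */

    typedef struct {
        int from, to;                  /* source and destination squares        */
        int piece;                     /* type of the moving piece              */
        int captured;                  /* type of captured piece, EMPTY if none */
    } Move;

    int rook_open_file_needs_update(const Move *m)
    {
        return m->piece == ROOK                        /* moves a rook            */
            || m->captured == ROOK                     /* captures a rook         */
            || m->captured == PAWN                     /* captures a pawn         */
            || (m->piece == PAWN &&
                FILE_OF(m->from) != FILE_OF(m->to));   /* pawn ends on another file */
    }
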
>>>>>>
>>>>>>I tried a simple form of this recently.  If you hash pawn structures you can get
>>>>>>that information pretty quickly, so it is easy to write an incremental evaluator
>>>>>>if you evaluate pieces only via their location and relationship with pawns.
>>>>>
>>>>>
>>>>>
>>>>>I do indeed store specific information designed to speed up the evaluation
>>>>>update in my pawn hash table.
>>>>>
>>>>>With the "rook on open file" example, evaluating the term is just a matter of
>>>>>playing with some bit fields set up in the pawn structure evaluation, so it's
>>>>>not very expensive.
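
Again only a sketch with invented names, but the sort of pawn-hash bit field that
makes this cheap might look like:

    #include <stdint.h>

    typedef struct {
        uint64_t key;              /* pawn-structure signature               */
        int      score;            /* pawn-structure score                   */
        uint8_t  no_pawn[2];       /* bit f set: side has no pawn on file f  */
    } PawnHashEntry;

    /* Bonus for a rook on 'file', from two bit tests (illustrative values). */
    int rook_file_bonus(const PawnHashEntry *p, int side, int file)
    {
        int half_open = (p->no_pawn[side] >> file) & 1;
        int full_open = half_open && ((p->no_pawn[side ^ 1] >> file) & 1);
        return full_open ? 20 : half_open ? 10 : 0;
    }
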
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>But
>>>>>>that kind of eval is very simple and misses a lot of nuances.  For instance,
>>>>>>rook on the 7th should be more potent based upon the location of the enemy king.
>>>>>> It sounds like you are talking about something more complex.
>>>>>
>>>>>
>>>>>
>>>>>Yes. The terms I update in make/unmake are second order terms.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>I've thought about building some sort of dependency tree but I couldn't figure
>>>>>>out a fast and manageable way to implement this.  That would be really cool if
>>>>>>it could be done, because you could add terms, hook up a few wires, and the
>>>>>>program could manage the whole thing itself.
>>>>>
>>>>>
>>>>>
>>>>>I did not design the whole thing right from the start. It started as a very
>>>>>simple evaluation to which I added terms little by little, trying to insert new
>>>>>bits of code as elegantly as possible into the existing structure.
>>>>>
>>>>>At first I did not like the idea of having my eval code and make/unmake code
>>>>>interleaved, but it turned out to be the only way to do it efficiently.
>>>>>Make/unmake handles information that can be used directly to update the
>>>>>evaluation, so it does not make sense to have a separate eval-update routine
>>>>>retrieving that information again once make/unmake has been executed.
>>>>>
>>>>>
>>>>>
>>>>>>As of now mine is a tip evaluator; it just goes through the piece lists and does
>>>>>>it, mostly.  Some stuff is kept incrementally but it's not important stuff.
>>>>>>
>>>>>>>I almost never use lazy eval. Obviously, it would not work with Gambit
>>>>>>>Tiger's huge positional evaluation of king safety.
>>>>>>
>>>>>>I did have working lazy eval until recently.  I had some big-term unavoidable
>>>>>>stuff I did at the beginning, and some other stuff that was pretty expensive but
>>>>>>couldn't produce really big terms.  So if I knew that I was out of the window,
>>>>>>and the rest of the eval couldn't get me back within it, I could prune out some
>>>>>>pretty expensive stuff.
>>>>>>
>>>>>>As of now I'm doing king safety last, so that does sort of throw the lazy eval
>>>>>>idea out.
>>>>>
>>>>>
>>>>>
>>>>>Above a given level, lazy eval prevents you from improving your program. Lazy
>>>>>eval is a poisoned gift.
>>>>>
>>>>>And it is maybe the reason why most programmers don't want to, or even cannot,
>>>>>give huge values to some positional terms.
>>>>>
>>>>
>>>>
>>>>This is simply _wrong_.  Lazy eval can work no matter what you do with large
>>>>positional scores, _if_ you do it right.  I have positional scores that go
>>>>way over a queen.  I have no problems with lazy eval at all.  You just arrange
>>>>the scoring code to do the big items first.  Then, as you refine the eval, you
>>>>can lazy-out along the way...
>>>>
>>>>Everybody has used this forever.  It might take a bit of work, but I don't
>>>>see it being any different than using alpha/beta, _if_ it is done right.
>>>
>>>Bob, I agree with you in theory, but in practice I found that it just wasn't
>>>worth it.  In my eval, a lot of the big terms are the most expensive to
>>>calculate, so I wouldn't gain a lot anyway.
>>>
>>>I also found that if I was experimentally scattering big terms around my eval, I
>>>would keep having to re-arrange the eval function for purposes of lazy
>>>evaluation, which was a hassle and made it easy to introduce bugs.
>>>
>>>Another thing was that some types of pruning work better without lazy eval...
>>>
>>>So, from a practical perspective, I'm with Christophe on this one.  Have been
>>>for a couple of years or more.
>>>
>>>cheers,
>>>Peter
>>
>>
>>To each his own, of course.  I didn't find it much of a problem to do the big
>>terms first, then the smaller ones, and finally the really small ones, which are
>>actually (for me) the most expensive to compute because the small terms are
>>computed for all pieces.
>>
>>However, think about this if you want to _really_ do it right (I did this in
>>Cray Blitz, but not in Crafty):  You can _easily_ compute the min and max value
>>each piece can add to the positional score.  And as you make moves that capture
>>pieces, you can subtract that piece's min/max values from an error
>>estimate.  And now your lazy eval can be _perfect_, because you know _exactly_
>>how large or small a positional score you can possibly compute with
>>the present pieces on the board.
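
In code, the idea looks roughly like this (a bare sketch, invented names and
numbers, not the actual Cray Blitz code):

    enum { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING, NPIECES };

    /* illustrative per-piece bounds on positional contribution, centipawns */
    static const int max_pos[NPIECES] = {  60,  40,  40,  50,  80,  300 };
    static const int min_pos[NPIECES] = { -40, -30, -30, -20, -20, -300 };

    static int max_swing;   /* sum of max_pos[] over pieces still on the board */
    static int min_swing;   /* sum of min_pos[] over pieces still on the board */

    void capture_piece(int type)      /* called from make()   */
    {
        max_swing -= max_pos[type];
        min_swing -= min_pos[type];
    }

    void restore_piece(int type)      /* called from unmake() */
    {
        max_swing += max_pos[type];
        min_swing += min_pos[type];
    }

    /* Exact lazy exit: even the most optimistic/pessimistic positional
       score cannot bring the material score back inside the window. */
    int lazy_exit(int material, int alpha, int beta)
    {
        return material + max_swing <= alpha || material + min_swing >= beta;
    }
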
>
>
>
>I don't see how it could work unless all your evaluation is first order.
>
>As soon as you have terms which involve the coordination of several pieces
>together, it does not work anymore.

You can't figure out that min(x)*min(y) = min(x*y)?  If you have second-order
terms (I have a bunch, and I also have some third-order stuff to boot), you can
take the first-order error estimates and combine them to get a second-order
error estimate.


Remember, I only want the largest or smallest scores that a piece of the
eval can produce, to take an early exit.
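
If both factors are known to be non-negative, min*min and max*max are enough;
with signed factors you take the extreme corner products.  Purely as an
illustration (invented names, not Crafty's actual code):

    static int imin(int a, int b) { return a < b ? a : b; }
    static int imax(int a, int b) { return a > b ? a : b; }

    /* Bounds on a product-like second-order term x*y, given first-order
       bounds on x and y: the extremes lie among the four corner products. */
    void second_order_bounds(int xmin, int xmax, int ymin, int ymax,
                             int *lo, int *hi)
    {
        int a = xmin * ymin, b = xmin * ymax, c = xmax * ymin, d = xmax * ymax;
        *lo = imin(imin(a, b), imin(c, d));
        *hi = imax(imax(a, b), imax(c, d));
    }
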




>
>This is exactly the case in the king safety evaluation, which happens to have
>big values in some cases for me.
>

Mine too, +/- 3.00 roughly... max/min values of course.

However, I do have some first-order stuff for each and every piece, and that
is pretty expensive to compute, so the lazy eval cuts that off very nicely.
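
The staged idea, in a nutshell (a bare sketch with invented names and margin,
not Crafty's actual evaluation):

    #define SMALL_TERMS_BOUND 150          /* assumed max |sum| of the cheap terms  */

    int material_and_big_terms(void);      /* material, king safety, ... (elsewhere) */
    int small_per_piece_terms(void);       /* expensive per-piece scan   (elsewhere) */

    int evaluate(int alpha, int beta)
    {
        int score = material_and_big_terms();

        /* lazy exit: the remaining small terms cannot move us back in the window */
        if (score + SMALL_TERMS_BOUND <= alpha || score - SMALL_TERMS_BOUND >= beta)
            return score;

        return score + small_per_piece_terms();
    }
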



>
>
>    Christophe
>
>
>
>>Lots of ways to skin the same cat.  I like the staged approach I use now.  It
>>requires less work when I retune the evaluation.  The old approach required the
>>manual computation of all the individual error terms for each piece, which was
>>a miserable exercise.
>>
>>But if you don't like lazy eval, do you do futility pruning in the q-search?
>>This is the _exact_ same thing.
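
For reference, the futility test in the q-search is just this (sketch only,
illustrative margin):

    #define FUTILITY_MARGIN 100

    /* If stand-pat plus the victim's value plus a margin still cannot
       reach alpha, the capture is not worth searching at all. */
    int capture_is_futile(int stand_pat, int victim_value, int alpha)
    {
        return stand_pat + victim_value + FUTILITY_MARGIN <= alpha;
    }
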


