Computer Chess Club Archives



Subject: Re: Teaching computers to think

Author: Tord Romstad

Date: 15:46:50 03/15/04



On March 15, 2004 at 17:47:09, Vasik Rajlich wrote:

>On March 15, 2004 at 15:27:22, Tord Romstad wrote:
>
>>On March 15, 2004 at 14:21:19, Vasik Rajlich wrote:
>>
>>>Hi Tord,
>>>
>>>this is great stuff! Expect more questions soon - and hopefully I'll have some
>>>useful feedback as well.
>>>
>>>First one very general question: to what extent are you confident that what you
>>>outlined here improves your engine? Do you test each piece separately, do you
>>>test changes in groups, have you tried longer vs shorter time controls, etc? I
>>>suspect that you are getting some benefit, given what is known about your
>>>program, including NPS rates - just curious how you go about validating what you
>>>do.
>>
>>I am very confident that the sum of all the tricks improves my engine.  I
>>haven't performed any tests recently, but the last time I tried the
>>difference in strength compared to a brute-force nullmove version of my
>>engine was something like 150 Elo points.  My tests consisted of self-play
>>games, gauntlets against other amateur engines, and tactical test suites.
>>All tests pointed in the direction of the selective version being much
>>stronger.  I have not tried slow time controls, though.
>>
>>On the other hand, I have no reason to believe that all the countless
>>search parameters I use have anywhere near optimal values.  There is an
>>endless number of things to experiment with.  To a certain extent I have
>>tried each piece separately, but I am not even close to having explored
>>the huge set of possible combinations of settings.
>>
>>>>As promised, I send you a brief description of how my selective search
>>>>works.  Because my search looks rather unconventional (for instance,
>>>>as you might remember from CCC, I use the concept of the "cost" of a
>>>>move instead of extensions and reductions), I've tried to translate
>>>>what I do to a more normal language.
>>>
>>>Actually, I also use this concept, called "weight" in my case. The idea came
>>>from some descriptions of Junior's search in the CCC archives. Anyway, as you
>>>pointed out, it's just a cosmetic change from extensions & reductions.
>>
>>I wasn't aware that others used the same formulation, but it doesn't really
>>surprise me that others have come up with the same idea.
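
Just to make the "cost" formulation concrete: instead of computing depth - 1
and then adding extensions and subtracting reductions, each move is charged a
number of plies (or fractions of a ply) from the remaining depth.  Roughly
like this (only a sketch with invented helper names, not my real code):

    // Sketch of the "cost" idea (invented helper names, illustration only).
    // Interesting moves are charged less than one ply (an extension in
    // disguise), dull moves more than one ply (a reduction in disguise).
    int moveCost(const Position &pos, Move m) {
      if (givesCheck(pos, m))     return ONE_PLY / 2;  // cheap: search deeper
      if (looksPointless(pos, m)) return 2 * ONE_PLY;  // expensive: search shallower
      return ONE_PLY;                                  // normal move
    }

    // In the move loop the new depth is then simply
    //   newDepth = depth - moveCost(pos, move);
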
>>
>>>>First of all, I should mention that I use MTD(f), which means that all
>>>>my searches are done with a zero width window.  Therefore, the search
>>>>fails high or low at all nodes.  I call the static eval at all
>>>>interior nodes, and use the evaluation to make many different search
>>>>decisions, as explained below.
>>>
>>>I also use MTD(f); it's a bit more elegant IMO. PVS should behave the same way
>>>though, as far as selective search goes.
>>
>>There are some differences, because in MTD(f) you should be careful about
>>using alpha and beta in your pruning decisions in the same way you would do
>>in a PVS search.  Otherwise you will easily end up with terrible search
>>inconsistencies.  The same node will usually be searched several times with
>>different values of alpha and beta, and pruning decisions which are valid
>>in one case will not always be valid in another.  Note that even null move
>>pruning causes this kind of problem when combined with MTD(f), but other
>>pruning techniques tend to cause more noticeable problems.
>>
>
>Actually, I don't understand what is bad about a search inconsistency.

Well, I find it disturbing that tiny changes in my move ordering so often result
in widely different scores and best moves from the same position.
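
For reference, by MTD(f) I mean the usual driver: repeated zero-width searches
that home in on the score.  Stripped of every detail, it is roughly this (just
a sketch, not my actual code; search() is assumed to be a fail-soft zero-window
alpha-beta and INFINITE_SCORE a bound larger than any real score):

    // Standard MTD(f) driver, sketch only.
    int mtdf(Position &pos, int firstGuess, int depth) {
      int score = firstGuess;
      int lower = -INFINITE_SCORE, upper = INFINITE_SCORE;
      while (lower < upper) {
        int beta = (score == lower ? score + 1 : score);
        score = search(pos, beta - 1, beta, depth);  // zero-width window
        if (score < beta) upper = score;             // failed low
        else              lower = score;             // failed high
      }
      return score;
    }

Because the same nodes are visited again and again with different values of
beta, any pruning decision that looks at the bound can come out differently on
the next visit, and that is exactly where the inconsistencies creep in.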

>>>When I allow transitions from a null-move right into q-search, the program can
>>>overlook some pretty simple stuff at the leaves. It seems to me that after a
>>>null-move is tried, ideally there should be enough depth left to take care of
>>>certain things.
>>
>>This is one of the points behind my dynamic null move reduction factor.
>>I use my evaluation to try to guess whether there is a danger of a horizon
>>effect problem, and reduce the reduction factor when this happens.
>>
>
>Interesting. Actually, this point may be key.
>
>Consider this interesting data point: Junior, a very selective searcher, does
>not use null move at all. In some ways this is a bit surprising - for example,
>Rybka without null move gets totally crushed by the version with null move. (I
>found this out by accidentally disabling it once.) In some ways,
>though, it makes sense, at least the combination of being very selective and not
>using null move, since the two might encroach on each other.

Possibly.  As my program develops, I keep reducing my use of recursive null move
pruning, and eventually I hope to remove it and replace it with something
better.

>If, for example, as a selective searcher, you tag a move for a two-ply
>reduction, and then you attempt an R==3 null-move in response to the reduced
>move, you have just taken away five plies from the search. This shouldn't be
>done lightly.

If you are afraid of null moves interacting with selectivity in bad ways, there
is a trick you might want to try:  Disable the null move, or use a low
reduction factor, at nodes directly following a depth reduction.  I don't
currently do this myself, but perhaps it is a good idea in some cases.
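
In code the trick would amount to something like this (again just a sketch
with an invented parameter, since I don't actually do it):

    // Sketch: choose the null-move reduction factor while taking into
    // account how much the move leading to this node was itself reduced.
    // parentReduction is assumed to be handed down by the parent node.
    int nullMoveR(int baseR, int parentReduction) {
      if (parentReduction >= 2) return 0;          // skip the null move entirely
      if (parentReduction == 1) return baseR - 1;  // use a lower reduction factor
      return baseR;                                // normal case
    }

Here a return value of 0 simply means that no null move is tried at the node.
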

>If I understood correctly, you make this decision based on king safety, passed
>pawn eval, and static eval vs beta.

Two other factors, which I forgot to mention, are the remaining depth and the
remaining material.  It would also be interesting to include the path leading
to the position.
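
Put together, the decision could look roughly like this (the thresholds and
constants below are invented for the example, they are not my real values):

    // Sketch of a dynamic null-move reduction factor.  The inputs are the
    // factors mentioned above: king safety, passed pawns, static eval
    // versus beta, remaining depth and remaining material.
    int dynamicNullR(int depth, int staticEval, int beta,
                     bool unsafeKing, bool dangerousPassedPawns,
                     int nonPawnMaterial) {
      int R = 3;
      if (unsafeKing || dangerousPassedPawns)
        R = 2;                           // horizon-effect danger: reduce less
      else if (staticEval - beta > 200)
        R = 4;                           // far above beta: reduce more
      if (depth <= 4 || nonPawnMaterial <= 1000)
        R = (R < 2 ? R : 2);             // shallow or little material: be careful
      return R;
    }
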

>>>Also, it's very logical to extend moves which static eval indicates are better.
>>>It appears that this is what Junior does - although it's not 100% clear.
>>
>>Really interesting.  Is this based on your own observations of Junior's
>>search, or have you read about it somewhere?
>>
>
>This is based on discussions in the CCC archives; however, it was basically
>speculation, so I guess it's not worth much. The idea is that Junior appeared to
>be identifying pointless moves, such as moving rooks to closed files, etc, and
>giving them the maximum cost.

OK, now I have one more thing to look for in the archives when the search
engine works again.  :-)

>>>The risk of course is that you will reduce some good positional moves, not
>>>giving search the chance to show that they are good. Well, nothing is free.
>>
>>Part of the purpose of checking the history table is to prevent good
>>positional moves from being reduced too often.  I hope that good positional
>>moves are usually good many places in the search tree, and that their
>>history table entries therefore have high values.
>>
>
>True. It would be interesting to test the effect of this, since that's the only
>one of your safeguards which would prevent the reducing of good positional moves
>which require search to be understood. (ie moves which fail test #4)

Yes, unless the moves are good enough to fail high even with reduced depth,
of course.
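
In code that safety valve usually takes a shape like the following inside the
move loop (invented names, and only one possible way to write it, not a
literal copy of what I do):

    // Sketch: a quiet move is only reduced when its history score is low,
    // and if the reduced search fails high anyway, the move is searched
    // again at full depth, so a genuinely good move still gets its chance.
    int newDepth = depth - 1;
    bool reduced = false;
    if (isQuiet(move) && historyScore(move) < HISTORY_THRESHOLD) {
      newDepth--;                          // reduce by one ply
      reduced = true;
    }
    int value = -search(pos, -beta, -alpha, newDepth);
    if (reduced && value >= beta)          // the reduced search failed high:
      value = -search(pos, -beta, -alpha, depth - 1);  // re-search at full depth
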

An interesting way to attack this problem is the idea of trajectories.  I haven't
implemented them myself yet (I haven't even got a clear idea of how to do it),
but I will definitely give it a try sooner or later.

Tord


