Computer Chess Club Archives



Subject: Re: Extensions: everywhere or at frontier nodes?

Author: rasjid chan

Date: 11:42:15 01/24/05

On January 24, 2005 at 12:51:06, milix wrote:

Hello Anastasios,

I do not yet completely understand your point of view, especially regarding
search depth, the PV, and their relation to strength. I think that is normal,
as I remember there have been debates here before about search vs. evaluation
and many other things.


"quote" - What this means can be tested. We have two programs, A and B.
A - only evaluates, using the best evaluation.
B - searches with only material up to depth N (must avoid all checks as usual).
If B beats A (almost) all the time, then our hypothesis is true.
My suspicion is that N is about 4 to 6 plies.
What this means is that when we have found a PV of depth N, we have an
adjusted vision horizon for this PV of N + 4/6.

Also, there are some simple misunderstandings to clear up:
1) By "best evaluation" I mean, e.g., the evaluator of Shredder.
2) Program B does a pure fixed-depth search so that no lines are missed,
unlike in a real engine, where bad pruning can prune away the "best" moves.
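
The setup for program B can be sketched as a pure fixed-depth negamax with no pruning, so every line is searched to exactly depth N. This is only a minimal illustration, not SnailChess code: the game callbacks and the toy "game" below (positions are integers, a move adds 1, 2 or 3) are hypothetical stand-ins for a real move generator and material evaluator.

```python
def negamax(pos, depth, moves, make, evaluate):
    """Value of pos for the side to move, searching all moves to fixed depth."""
    if depth == 0:
        return evaluate(pos)                 # program B: material-only here
    best = -float("inf")
    for mv in moves(pos):
        # No pruning: every child is searched, so no line can be missed.
        best = max(best, -negamax(make(pos, mv), depth - 1,
                                  moves, make, evaluate))
    return best

# Hypothetical toy stand-in "game": a position is an integer, a move adds
# 1, 2 or 3, and the evaluation is the counter from the mover's point of view.
toy_moves = lambda pos: (1, 2, 3)
toy_make = lambda pos, mv: pos + mv
toy_eval = lambda pos: pos
```

Because nothing is pruned, the tree is full-width and the search visits exactly the positions a depth-N enumeration would, which is the property the experiment depends on.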

I just did the above experiment to test how many plies ahead my own SnailChess
evaluator can "see". My experiment is not yet very exhaustive, as it still has
some minor bugs, but the current result seems to be this:

I compiled programs A and B to do a pure fixed-depth search to depth N, without
any pruning, so no lines are missed. The PV length returned will thus always
be == N.

A - has quite a lot of evaluation: good king safety, basic mobility, the usual
pawn structure, etc. (SnailChess currently plays normally but is not strong).
B - evaluates using just material.

Result: A's search depth = N(A), B's search depth = N(B).

1) If N(B) == N(A), program A wins all the time and is clearly stronger.
   This is normal, as adding the usual evaluation terms is better.
2) If N(B) >= N(A) + 3, B wins almost all the time. I cannot yet do more tests,
   as I have to make my program play from loaded positions.
   The current test is N(A) == 1 or 4 and N(B) == 4 or 7.
3) If N(A) < N(B) < N(A) + 3, the wins/losses are mixed.

   This means that my evaluation is worth about 1 to 2 plies and certainly
   cannot see 3 plies ahead. I think even if we test with searches
   to much higher plies, the result may still be about the same.
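
One way to read these results is as a toy "adjusted vision horizon" model: each engine's effective depth is its search depth plus the plies its evaluator is worth. The figures below (2 plies for A's evaluation, 0 for B's material-only one) are assumptions taken from the estimate above; this is a crude sketch of the reasoning, not a real outcome predictor.

```python
EVAL_WORTH_A = 2   # assumed: A's full evaluation is worth about 2 plies
EVAL_WORTH_B = 0   # assumed: material-only evaluation adds nothing

def predicted_result(n_a, n_b):
    """Crude prediction of the A-vs-B outcome by comparing effective depths."""
    ea = n_a + EVAL_WORTH_A    # A's adjusted vision horizon
    eb = n_b + EVAL_WORTH_B    # B's adjusted vision horizon
    if ea > eb:
        return "A wins"
    if eb > ea:
        return "B wins"
    return "mixed"
```

Under these assumed worths the model reproduces the tested cases: equal depths favor A, and B at N(A) + 3 overtakes A.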

I am not too sure whether this type of experiment means anything or whether
meaningful conclusions may be drawn from it.

Best Regards
Rasjid

>Hi Rasjid.
>
>First of all, let me clarify what I mean by a perfect ("best", in your text?)
>evaluation: the evaluation that tells us the outcome of the game in a position.
>For example, EGTB positions are examples of perfect evaluation.
>
>>What this means can be tested. We have two programs, A and B.
>>A - only evaluates, using the best evaluation.
>>B - searches with only material up to depth N (must avoid all checks as usual).
>>If B beats A (almost) all the time, then our hypothesis is true.
>>My suspicion is that N is about 4 to 6 plies.
>>What this means is that when we have found a PV of depth N, we have an
>>adjusted vision horizon for this PV of N + 4/6.
>
>By using a perfect evaluation for A there is no such case. Engine A will, of
>course, score perfectly in all games. For example, playing 5-men endings when
>program A uses EGTBs and B just a material eval.
>
>My second argument is about your PV terminology: the PV for a program is the most
>likely continuation the program gives. How deep the PV goes has nothing to do
>with strength. If I do a lot of pruning I can go really deep, but I will miss many
>things. The PV is *correct* from the program's view but objectively does not stand.
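
The "most likely continuation the program gives" can be sketched as the chain of best moves collected during search. A minimal illustration, with the same kind of hypothetical toy-game callbacks as before (positions are integers, a move adds 1, 2 or 3), not any real engine's code:

```python
def negamax_pv(pos, depth, moves, make, evaluate):
    """Return (score, pv): the value of pos and the line that achieves it."""
    if depth == 0:
        return evaluate(pos), []
    best, best_pv = -float("inf"), []
    for mv in moves(pos):
        score, child_pv = negamax_pv(make(pos, mv), depth - 1,
                                     moves, make, evaluate)
        score = -score
        if score > best:
            # The PV at this node is the best move followed by the child's PV.
            best, best_pv = score, [mv] + child_pv
    return best, best_pv

# Hypothetical toy stand-in game.
toy_moves = lambda pos: (1, 2, 3)
toy_make = lambda pos, mv: pos + mv
toy_eval = lambda pos: pos
```

With a full-width search like this the PV is always exactly `depth` moves long; a heavily pruning engine can report a much longer nominal PV without it being any more trustworthy, which is the point being made here.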
>
>>So we see that the evaluation is just an addition that increases our vision
>>horizon by 4-6 plies above the search depth.
>
>I disagree with this. I believe the opposite: we search because we don't have a
>perfect evaluation. When we do, we don't search (EGTBs, for example, and to some
>extent the opening theory).
>
>Search plies, depth, extensions, pruning, etc. are all artificial terminology. We
>search up to the point where one move is clearly better than the others. And this
>is determined by the end-point evaluation. The more robust and correct our
>evaluation is, the less search effort we have to make. Alpha-beta search, for
>example, achieves good end-point evaluation by searching deeper and deeper. It
>tries to avoid traps (and evaluating the wrong nodes) by using extensions. It
>tries to save time (when time matters, for example in a game) by using
>pruning/reductions. But when the evaluation is perfect it stops, no matter how
>long the PV is at this point.
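
The "when the evaluation is perfect it stops" idea can be sketched as a probe-before-search rule: consult perfect knowledge first (as an EGTB probe provides for covered endgames) and only recurse where no exact score is known. `tablebase` here is a hypothetical dict standing in for a real EGTB probe, and the toy-game callbacks are stand-ins too.

```python
def search(pos, depth, tablebase, moves, make, evaluate):
    exact = tablebase.get(pos)
    if exact is not None:
        return exact                 # perfect evaluation: no search needed
    if depth == 0:
        return evaluate(pos)         # imperfect evaluation: heuristic guess
    # Otherwise fall back to searching, since the evaluation is not trusted.
    return max(-search(make(pos, mv), depth - 1,
                       tablebase, moves, make, evaluate)
               for mv in moves(pos))

# Hypothetical toy stand-in game.
toy_moves = lambda pos: (1, 2, 3)
toy_make = lambda pos, mv: pos + mv
toy_eval = lambda pos: pos
```

A tablebase hit short-circuits the whole subtree, no matter how much nominal depth remains.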
>
>>Depth extension/reduction and pruning is time management
>I totally agree with this.
>
>I hope our conversation will be productive for us and others and result in
>better tuned extensions or reductions and no pointless tree expansions!
>
>My best wishes,
>Anastasios Milikas
>
>On January 24, 2005 at 06:13:53, rasjid chan wrote:
>
>>On January 24, 2005 at 04:14:51, milix wrote:
>>>Hi rasjid
>>>
>>>The problem of chess we try to solve is to play the best move in a given
>>>position. If I had a perfect evaluation function, I would need no search at all.
>>>I could tell you the best move without searching. But since perfect evaluation
>>>is not possible, we need to search as deep as we can to be sure that our
>>>evaluation is reliable.
>>
>>
>>"Often deeper searches result in better evaluation."
>>Firstly, I am never too sure of my own analysis until others make things very
>>clear.
>>
>>The raw code of alpha-beta may be just 10 lines, but once we need to
>>improve on it in the search, it can get confusing. Its integral components are:
>>1) evaluation
>>2) search depth
>>3) the algorithm itself.
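
Those "10 lines" look roughly like this: a bare fail-hard alpha-beta negamax. The game callbacks are hypothetical stand-ins for a real move generator and evaluator (the toy game: positions are integers, a move adds 1, 2 or 3); this is a sketch of the textbook algorithm, not anyone's engine.

```python
def alphabeta(pos, depth, alpha, beta, moves, make, evaluate):
    if depth == 0:
        return evaluate(pos)                      # component 1: evaluation
    for mv in moves(pos):                         # component 3: the algorithm
        score = -alphabeta(make(pos, mv), depth - 1, -beta, -alpha,
                           moves, make, evaluate)
        if score >= beta:
            return beta       # cutoff: the opponent won't allow this line
        alpha = max(alpha, score)
    return alpha                                  # component 2 is `depth`

# Hypothetical toy stand-in game.
toy_moves = lambda pos: (1, 2, 3)
toy_make = lambda pos, mv: pos + mv
toy_eval = lambda pos: pos
```

Everything engines add on top (extensions, reductions, pruning, move ordering) is a modification of this small core, which is why the three components get entangled so easily.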
>>
>>We can even make and test (this can easily be done) a hypothesis concerning
>>evaluation. Say we hypothesise: "The best evaluation cannot see N plies ahead."
>>What this means can be tested. We have two programs, A and B.
>>A - only evaluates, using the best evaluation.
>>B - searches with only material up to depth N (must avoid all checks as usual).
>>If B beats A (almost) all the time, then our hypothesis is true.
>>My suspicion is that N is about 4 to 6 plies.
>>What this means is that when we have found a PV of depth N, we have an
>>adjusted vision horizon for this PV of N + 4/6.
>>
>>It is said a master can see about >= 8 moves ahead, and so we guess Kasparov
>>may be able to see about 12/14 moves ahead, or have a vision horizon of about
>>24-28 plies. It just happens that the hardware + software of the best
>>programs like Shredder seem to have a vision horizon PROBABLY also in
>>this range within tournament time controls. If we are given computers
>>of the future that routinely and consistently have a vision horizon of >= 40
>>plies, I think there will be no more contest in Man vs. Machine chess.
>>
>>So we see that the evaluation is just an addition that increases our vision
>>horizon by 4-6 plies above the search depth.
>>
>>Depth extension/reduction and pruning are time management. When they are
>>implemented well, the search should spend more of the allocated time
>>searching the PV (thus deeper and more reliable, provided the PV is not pruned)
>>than when extensions/reductions and pruning are not implemented.
>>
>>
>>>Extensions are a
>>>way to get better evaluation and avoid horizon problems - which cause bad
>>>evaluation. I think we disagree in terminology, not the concept!
>>>
>>>Checks take away plies by forcing a side to get out of check. So checks must be
>>>extended everywhere (well, I do some trick to not extend every check when the
>>>side that gives check is winning anyway).
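
A check-extension rule of that shape can be sketched as a depth adjustment: extend by one ply when the side to move is in check, but skip the extension when the checking side is already winning by a wide margin (the "trick" mentioned). `MARGIN` and all the names below are assumptions for illustration, not milix's actual code.

```python
MARGIN = 300  # assumed threshold in centipawns, roughly a minor piece

def child_depth(depth, in_check, checker_score):
    """Remaining depth for the child node after making the current move."""
    if in_check and checker_score < MARGIN:
        return depth           # extended: the check evasion costs no ply
    return depth - 1           # normal one-ply decrement
```

The saving comes from the second condition: forcing checks in already-won positions no longer inflate the tree.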
>>
>>
>>"But if we have a lot of depth to evaluate
>>a passed pawn correctly, then why extend the push to the 7th rank?"
>>
>>About this I cannot be too sure. But it seems that if we do not extend all the
>>time, everywhere, at every iteration, and IF THIS MOVE HAS A HIGHER PROBABILITY
>>THAN OTHER MOVES OF BEING IN THE PV, then we may end up with fewer lines/moves
>>being searched deeper than this good passer.
>>
>>Best Regards
>>Rasjid
>>
>>
>>
>>
>>>
>>>On January 23, 2005 at 18:29:42, rasjid chan wrote:
>>>
>>>>On January 23, 2005 at 13:41:20, milix wrote:
>>>>
>>>>>On January 23, 2005 at 11:37:51, rasjid chan wrote:
>>>>>
>>>>>Hi Alvaro and Rasjid!
>>>>>Why search potentially better moves deeper? I think extensions apply in those
>>>>>situations where the value of a line is not reliable enough to be trusted -
>>>>>like a line full of checks or recaptures, for example. There are some moves
>>>>>that are very bad, and we need to search really deep to see that they are bad.
>>>>>As for the initial question, I think that for some kinds of extensions it is
>>>>>beneficial to enable/disable them depending on whether we are near or far from
>>>>>the leaves. Why extend a push of a passer to the 7th when you are planning to
>>>>>do a 20-ply search from there?
>>>>>
>>>>
>>>>As for the need to extend to confirm that certain lines are bad or threatening,
>>>>I have not come to that yet; but I have read about the mate threat extension.
>>>>
>>>>My earlier statement about searching good moves deeper is inexact. The probably
>>>>correct criterion is to search deeper those moves that have a higher probability
>>>>of being a PV move.
>>>>
>>>>Chess programming is all about getting a PV that is deeper than what our
>>>>opponent can search. Even if we just do a material search, we can still beat
>>>>Shredder, provided at every turn we have a PV that is much deeper than what
>>>>Shredder can reach, plus, say, a factor for Shredder having a good evaluator
>>>>(even the best evaluation may not be able to "see" 3/4 plies ahead).
>>>>
>>>>This being the case(?), it is irrelevant how far we are from the root
>>>>or the leaves. If we have a passer on the 7th and we search to a depth of
>>>>20, we still extend. The question is whether this passer is in the PV.
>>>>The PV changes all the time depending on how deep we search. If we can search
>>>>to a depth of 20, the other side can probably match us in depth. So
>>>>it comes back to whether this passer is in the PV, and only by searching as
>>>>deep as we can can we know the answer.
>>>>
>>>>Caveat - Erroneous reasoning and fallacies are not intentional.
>>>>Rasjid
>>>>
>>>>>>On January 23, 2005 at 06:09:23, Alvaro Jose Povoa Cardoso wrote:
>>>>>>
>>>>>>>Hi everyone,
>>>>>>>an issue that has been on my mind for quite some time is whether extensions
>>>>>>>should be done everywhere in the tree or only at the nodes immediately
>>>>>>>before qsearch.
>>>>>>>Take for example the pawn-push-to-7th (2nd)-rank extension.
>>>>>>>If we are near the root (say ply 2, ply 3, ply 4, ...), why extend if we are
>>>>>>>sure the search will reach well past the promotion?
>>>>>>>Wouldn't it make more sense to apply this (and other extensions) at nodes
>>>>>>>immediately before qsearch?
>>>>>>>The same goes for check extensions, the one-reply extension, the recapture
>>>>>>>extension, etc.
>>>>>>>
>>>>>>>Best regards,
>>>>>>>Alvaro Cardoso
>>>>>>
>>>>>>Why not view all extensions this way?
>>>>>>
>>>>>>We have different lines of play (moves), and we extend lines that seem to
>>>>>>have a better chance of being good (PV), i.e. the likely better moves are
>>>>>>searched deeper RELATIVE to the average moves. So the question of your
>>>>>>concern may be irrelevant. At the next iteration at the root, your good lines
>>>>>>will again be searched N plies deeper... repeatedly. This may be the correct
>>>>>>way to view depth extension.
>>>>>>
>>>>>>Rasjid




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.