Computer Chess Club Archives


Subject: Re: A new selective heuristic?

Author: Don Dailey

Date: 07:23:10 06/22/98


On June 22, 1998 at 00:12:22, blass uri wrote:

>
>On June 21, 1998 at 21:23:22, Don Dailey wrote:
>
>>On June 21, 1998 at 18:42:50, Amir Ban wrote:
>>
>>>On June 21, 1998 at 09:02:51, Frank Schneider wrote:
>>>
>>>>Has anybody tried the idea described below?
>>>>
>>>>Almost all strong chess programs are based on brute-force alpha-beta search.
>>>>It has proven difficult to use chess knowledge to recognize important or
>>>>less important variations during the search. There are some extensions that
>>>>are known to improve strength (check extensions, recapture extensions, ...)
>>>>and some heuristics that try to prune away unimportant parts of the
>>>>search tree (null move, razoring). These heuristics are usually based on a
>>>>very 'basic' kind of chess knowledge. The null move, for example, exploits
>>>>the knowledge that it is usually a privilege to make a move.
>>>>I want to present another idea that I have experimented with. So far the
>>>>results are at least interesting.
>>>>
>>>>The idea is based on the observation that most lines a brute-force
>>>>program searches are illogical because there is no relationship between
>>>>the moves. There are many lines like 1. g2g4 xy 2. a2a4. 1. g2g4 could be
>>>>the beginning of a kingside attack, but 2. a2a4 is (usually) not part
>>>>of the plan and is therefore illogical.
>>>>In my experiment I define a move m as 'unrelated' to the previous moves m2
>>>>and m3 if the attack tables of m.to and m.from were not changed by
>>>>m2 and m3. I only test for illogical moves if enough material is on
>>>>the board, if there were no checks and if m is not a capture (see code
>>>>below). When the heuristic thinks that a move is 'illogical' it searches
>>>>the move with reduced search depth. If the returned score is above alpha
>>>>the move is searched again as usual, but if the score is below alpha
>>>>the next move is searched and some time has been saved.
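
As a rough illustration only (this is not the GromitChess code Frank refers
to), the reduce-and-re-search step could sit inside an alpha-beta move loop
along the following lines.  The Position/Move types and every helper named
below are assumptions:

/* A rough sketch of the reduced-depth idea inside an alpha-beta move loop.
 * Every type and helper declared here is an assumed engine internal. */

typedef struct Position Position;
typedef int Move;

#define NO_MOVE   0
#define REDUCTION 1        /* plies dropped for "unrelated" moves (assumed) */

extern Move next_move(Position *pos);
extern Move previous_move(const Position *pos, int plies_ago);
extern void make_move(Position *pos, Move m);
extern void unmake_move(Position *pos, Move m);
extern int  enough_material(const Position *pos);
extern int  in_check(const Position *pos);   /* "no checks" test, as assumed */
extern int  is_capture(const Position *pos, Move m);
/* true if the attack tables of m.from and m.to were not changed by m2, m3 */
extern int  is_unrelated(const Position *pos, Move m, Move m2, Move m3);
extern int  search(Position *pos, int depth, int alpha, int beta);

int search_moves(Position *pos, int depth, int alpha, int beta)
{
    Move m;

    while ((m = next_move(pos)) != NO_MOVE) {
        int reduce = enough_material(pos)
                  && !in_check(pos)
                  && !is_capture(pos, m)
                  && is_unrelated(pos, m, previous_move(pos, 1),
                                          previous_move(pos, 2));
        make_move(pos, m);

        /* "illogical" moves get a shallower search first ... */
        int score = -search(pos, depth - 1 - (reduce ? REDUCTION : 0),
                            -beta, -alpha);

        /* ... and are re-searched at full depth only if they come back
           above alpha */
        if (reduce && score > alpha)
            score = -search(pos, depth - 1, -beta, -alpha);

        unmake_move(pos, m);

        if (score >= beta)
            return beta;            /* fail-hard cutoff */
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}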
>>>>
>>>>The idea has been tested with a pre-alpha version of GromitChess 2.0
>>>>using the WAC test suite, a private 100-position suite and the 24
>>>>Bratko-Kopec positions.
>>>>Results for the BK test are given below (using a K6/200):
>>>>
>>>>Ply     correct     time/s    nodes
>>>>-----------------------------------
>>>>without heuristic:
>>>>  7          14      371    7508457
>>>>  8          16     1218   24616834
>>>>  9          16     4548   90870914
>>>>
>>>>with heuristic:
>>>>  7          15      305    6308228
>>>>  8          15     1008   20736540
>>>>  9          16     2993   62041817
>>>>
>>>>There seems to be a 15-30% decrease in nodes searched without affecting
>>>>the number of correct moves too much.
>>>>WAC and the 100-position suite were searched to ply 6 (10% fewer nodes)
>>>>and ply 7 (20% fewer nodes) without changing the number of correct moves
>>>>found.
>>>>The implementation is still 'experimental' and I'm sure it could be
>>>>improved.
>>>>
>>>>Has anybody tried this before? Any comments?
>>>>
>>>>
>>>>Frank Schneider
>>>>
>>>>
>>>
>>>
>>>How does your rule apply to the Spanish? 1. e4 e5 2. Nf3 Nc6 3. Bb5 a6
>>>4. Ba4 Nf6 5. O-O Be7.
>>>It seems that both White and Black play illogical sequences where no move
>>>is related to the others.
>>>
>>>I think the definition of "logical" is too simple. Possibly, with effort, it can
>>>be improved, but it may not work anyway, because of this: Extensions and forward
>>>pruning are not about following "logical" lines but about getting a reliable
>>>eval for a line. I don't see an obvious reason why a "logical" line should be
>>>searched more deeply while an "illogical" line less deeply. Conceivably, you can
>>>stop the search of an excellent move after two plies, because the score is
>>>reliable enough, while you would need to search a terrible line to 12 plies to
>>>reach the conclusion that it is terrible.
>>>
>>>You may have discovered that depth reduction is in many cases a useful thing to
>>>do, and the justification you found for it is perhaps irrelevant. Evaluating
>>>such a heuristic with a test suite is not a very sensitive or appropriate method.
>>>
>>>Amir
>>
>>
>>Hi Amir,
>>
>>One of us did not understand the heuristic.  I think he should
>>explain it a little better because I am unclear about the details.
>>(FRANK, if you are listening ...)
>>
>>But here is what I thought I understood at the time:
>>
>>1. e4       e5
>>2. Nf3            (attacks the square Black just moved to)
>>2. ...      Nc6   (defends the square the last move attacked)
>>3. Bb5            (attacks the last piece that moved)
>>3. ...      a6    (attacks the last piece that moved)
>>4. Ba4            (moves the piece just attacked)
>>4. ...      Nf6   (-- NO DIRECT RELATION but attacks d6, which the
>>                   bishop attacks through the knight)
>should be attack d7

Yep, I meant d7


>>... etc.
>I understand you mean 5. O-O (no direct relation, but it attacks d1, which
>the bishop on a4 attacks through the pawn on c2).

I didn't consider 5. O-O.  I don't really know exactly what Frank had in
mind, but here are the rules I might consider on a first attempt:

Look at the last 2 moves (one for each side) and compare them to the
candidate move we are considering.

Look at all squares attacked by each of these 2 moves.  Let sliders attack
through other non-pawn pieces too.  If the candidate move attacks any of
these squares, including the square of either piece that just moved (which
should be considered part of the attack vector), then consider the move
as having passed the "Frank screening test."
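
As a rough sketch only, that screening test could look something like this
on top of a hypothetical bitboard interface; attacks_from() is assumed to
return the squares attacked by the piece on a given square, with sliders
x-raying through non-pawn pieces as described above, and none of the names
below come from a real engine:

/* Sketch of the screening test, on a hypothetical bitboard interface. */

typedef unsigned long long Bitboard;
typedef struct Position Position;

/* squares attacked by the piece standing on sq, with sliders allowed to
 * x-ray through non-pawn pieces (pawns still block) */
extern Bitboard attacks_from(const Position *pos, int sq);
extern Bitboard square_bb(int sq);

/* cand_to: destination square of the candidate move (its attacks computed
 * as if the move had been made); own_to, opp_to: destination squares of
 * the last move by each side */
int passes_screening_test(const Position *pos,
                          int cand_to, int own_to, int opp_to)
{
    /* everything attacked by the two pieces that just moved, plus the
       squares those pieces now stand on (part of the attack vector) */
    Bitboard targets = attacks_from(pos, own_to)
                     | attacks_from(pos, opp_to)
                     | square_bb(own_to)
                     | square_bb(opp_to);

    /* the candidate passes the test if it attacks any of those squares */
    return (attacks_from(pos, cand_to) & targets) != 0ULL;
}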

Using this definition, 5. O-O does not quite make it.  But maybe
it should: in some positions it could be part of a plan to play c3
followed by Rd1 to oppose an enemy rook on the d-file, for instance.
This is why I like this stuff in principle: most reasonable plans
are very much like this, ganging up on squares, defending squares,
etc.

No one needs to post to explain the problems and plans this attempt
will miss; I see plenty of them, but I don't think that necessarily
makes the idea worthless.

- Don


>>You mentioned that the real goal was to get the most reliable
>>evaluation, which I agree with.  I believe (in the spirit of
>>what Frank is trying to do) this is quite consistent with that
>>goal.   Whether it works or can be made to work is an open
>>question (at least to me), but it may be that your intuition
>>on this is stronger.  I cannot dismiss it yet without taking
>>a closer look.
>>
>>This stuff is VERY similar to something one of our team members
>>is experimenting with.  We get a nice node reduction for free,
>>but the implementation reduces our nodes per second drastically.
>>Our experience with this gives me reason to believe something
>>like what Frank is experimenting with has a chance to pay off.
>>
>>I definitely agree with you on the test suites.  My experience
>>has been that they have no value at all for evaluating how good
>>a chess program is, but I use them quite extensively
>>because they have a lot of value for evaluating the behavior
>>of various algorithms.  When it comes time to find out whether
>>some change really improves the program, other methods must
>>be used.   I suspect that a few of the programmers use
>>problem sets to evaluate their programs since it is by far the
>>simplest thing to do, and I believe this may be a serious mistake.
>>
>>- Don


