Author: Vasik Rajlich
Date: 10:17:27 05/25/05
On May 25, 2005 at 09:37:24, Uri Blass wrote:

>On May 25, 2005 at 09:17:47, Vasik Rajlich wrote:
>
>>On May 25, 2005 at 08:52:20, Michael Yee wrote:
>>
>>>On May 25, 2005 at 06:07:00, Vasik Rajlich wrote:
>>>
>>>>On May 24, 2005 at 18:49:05, Michael Yee wrote:
>>>>
>>>>>ACM/CSC-ER,
>>>>>Proceedings of the 1983 annual conference on Computers : Extending the human
>>>>>resource
>>>>>
>>>>>I just read this interesting paper (20+ years later!)...
>>>>>
>>>>>Main idea:
>>>>>
>>>>>Schaeffer essentially computes piece square table "bonuses" that depend on the
>>>>>root position (e.g., plans like "king-side attack" or "try to occupy f5 with
>>>>>your knight") and get added to the score of a branch in a path dependent way.
>>>>>
>>>>>Some questions:
>>>>>
>>>>>(1) Is this what Vincent Diepeveen sometimes referred to as "root processing"?
>>>>>
>>>>>(2) Does anyone know how Schaeffer's Planner compared to Wilkin's PARADISE (in
>>>>>terms of playing strength)?
>>>>>
>>>>>(3) Is anyone experimenting with this or other types of path dependent eval?
>>>>>
>>>>>Steven is of course working on long-range planning. And I think Uri has some
>>>>>path dependent stuff. But have most people pretty much abandoned it?
>>>>>
>>>>>Thanks,
>>>>>Michael
>>>>
>>>>Just a terminological point - every engine does "long-range planning" by virtue
>>>>of having an evaluation function.
>>>>
>>>>You could easily make a 1-to-1 mapping between any type of work done at the root
>>>>and a pure static evaluation at the leaf.
>>>>
>>>>Vas
>>>
>>>I had a similar thought about folding everything into the static eval. (After
>>>all, each feature is really encoding a plan: get a lead in material, get a
>>>better pawn structure, attack opponent's king safety, etc.)
>>>
>>>But I think the mapping isn't exactly 1-to-1. Here's Schaeffer's bonus table for
>>>getting a N on c1 to f5:
>>>
>>>   +---+---+---+---+---+---+---+---+
>>> 8 |   |   | 4 |   | 4 |   | 4 |   |
>>>   +---+---+---+---+---+---+---+---+
>>> 7 |   | 4 |   |   | 8 | 4 | 8 |   |
>>>   +---+---+---+---+---+---+---+---+
>>> 6 |   |   | 4 | 8 | 4 |   | 4 | 8 |
>>>   +---+---+---+---+---+---+---+---+
>>> 5 |   | 4 |   | 4 |   | 16|   | 4 |
>>>   +---+---+---+---+---+---+---+---+
>>> 4 |   |   | 4 | 8 | 4 |   | 4 | 8 |
>>>   +---+---+---+---+---+---+---+---+
>>> 3 |   | 4 |   |   | 8 | 4 | 8 |   |
>>>   +---+---+---+---+---+---+---+---+
>>> 2 |   |   | 4 |   | 4 |   | 4 |   |
>>>   +---+---+---+---+---+---+---+---+
>>> 1 |   |   | N | 4 |   | 4 |   | 4 |
>>>   +---+---+---+---+---+---+---+---+
>>>     a   b   c   d   e   f   g   h
>>>
>>>For example, the path "N-e2-g3-f5" yields 4 + 8 + 16 = 28 bonus points (after
>>>which, the "get N to f5" goal might shut off, I think?). This can certainly be
>>>put in the static eval itself since you could have partial sums: each 4 would
>>>still be 4, each 8 would become 12, and the 16 would become 28.
>>>
>>>However, if you add some negative bonuses, say on a2 and c3, to penalize wasting
>>>time, I don't think you can model it exactly with just the static eval anymore.
>>>
>>>   +---+---+---+---+---+---+---+---+
>>> 8 |   |   | 4 |   | 4 |   | 4 |   |
>>>   +---+---+---+---+---+---+---+---+
>>> 7 |   | 4 |   |   | 8 | 4 | 8 |   |
>>>   +---+---+---+---+---+---+---+---+
>>> 6 |   |   | 4 | 8 | 4 |   | 4 | 8 |
>>>   +---+---+---+---+---+---+---+---+
>>> 5 |   | 4 |   | 4 |   | 16|   | 4 |
>>>   +---+---+---+---+---+---+---+---+
>>> 4 |   |   | 4 | 8 | 4 |   | 4 | 8 |
>>>   +---+---+---+---+---+---+---+---+
>>> 3 |   | 4 | -1|   | 8 | 4 | 8 |   |
>>>   +---+---+---+---+---+---+---+---+
>>> 2 | -1|   | 4 |   | 4 |   | 4 |   |
>>>   +---+---+---+---+---+---+---+---+
>>> 1 |   |   | N | 4 |   | 4 |   | 4 |
>>>   +---+---+---+---+---+---+---+---+
>>>     a   b   c   d   e   f   g   h
>>>
>>>N-a2-c3-e4-g3-f5 --> -1 + -1 + 4 + 8 + 16 = 26 bonus points
>>>N-e2-g3-f5 --> still 28 bonus points
>>>
>>>Both end positions could be the same (say, if the opponent moved a R and put it
>>>back), but they would have different path dependent evals.
>>>
>>>If the above example is correct, then maybe it shows that "path dependent"
>>>bonuses can help deal with time/tempo.
>>>
>>>Michael
>>
>>Yes, true - there's no way to make this 1-to-1.
>>
>>Actually, I really didn't mean to say that :) What I would really like to say is
>>that every "good idea" that you'd like to encode in a path-dependent manner can
>>be encoded at least as well (and usually better) at the leaf node.
>>
>>In fact, your example is a good example of this. The two positions _should_ get
>>the same score, showing the flaw of the path-dependent approach.
>>
>>As far as I can see, the only argument in favor of work at the root is to save
>>computation time. Given the current trends in computer chess, I doubt it's
>>useful for anybody whose engine isn't already incredibly strong.
>>
>>Vas
>
>I can only say that it is not the way that Movei is using path dependent
>evaluation but I cannot say that 2 positions that are the same should be
>evaluated the same regardless of the path.
>
>This is not the way that I think.
>I learn from the path about the evaluation.
>
>If I see in some path many checks for the same side then I suspect that it is
>perpetual check even without seeing repetition.
>If I see the same position without seeing the checks then I suspect less that it
>is a perpetual check and may evaluate it as bigger advantage for the side with
>more material.
>
>Uri

Ok, I have to agree. However, this seems to me to be an extremely unusual case
and I can't think of another similar one.

So let me try a third time:

Every "good idea" that you'd like to encode in a path-dependent manner can be
encoded at least as well (and usually better) at the leaf node, with the single
exception of evaluating the prospects for perpetual check.

Vas
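The arithmetic in Michael's example is easy to lose in the quoting, so here is a
minimal sketch of the path-dependent scoring being discussed. This is an
illustrative reconstruction, not Schaeffer's actual Planner code: the names
PLAN_BONUS, STATIC_PARTIAL_SUM, path_dependent_score, and static_leaf_score are
invented for the example. The table entries and the two knight routes are taken
from the second diagram above, and the partial-sum table encodes Michael's
"each 4 stays 4, each 8 becomes 12, the 16 becomes 28" suggestion for a
leaf-only equivalent.

```python
# Illustrative sketch (not Schaeffer's Planner code).  A bonus is collected
# when the knight arrives on a square while the root-level goal "get the N
# from c1 to f5" is active; values come from the second diagram above.
PLAN_BONUS = {
    "e2": 4, "e4": 4,      # ordinary stepping-stone squares
    "g3": 8,
    "f5": 16,              # goal square; the goal shuts off once it is reached
    "a2": -1, "c3": -1,    # the added penalties for wasting time
}

def path_dependent_score(route):
    """Sum the bonuses picked up along the route (the path-dependent scheme)."""
    score = 0
    for square in route:
        score += PLAN_BONUS.get(square, 0)
        if square == "f5":             # goal achieved: stop accumulating
            break
    return score

# Michael's leaf-only encoding for the all-positive table: replace each bonus
# with the running total along the intended route, so the final position alone
# carries the credit (4 stays 4, 8 becomes 12, 16 becomes 28).
STATIC_PARTIAL_SUM = {"e2": 4, "g3": 12, "f5": 28}

def static_leaf_score(final_square):
    """Score that looks only at the final position, ignoring the path."""
    return STATIC_PARTIAL_SUM.get(final_square, 0)

route_direct = ["e2", "g3", "f5"]                # N-e2-g3-f5
route_wasteful = ["a2", "c3", "e4", "g3", "f5"]  # N-a2-c3-e4-g3-f5

print(path_dependent_score(route_direct))    # 28 =  4 + 8 + 16
print(path_dependent_score(route_wasteful))  # 26 = -1 - 1 + 4 + 8 + 16
print(static_leaf_score("f5"))               # 28 for either route: the leaf
                                             # alone cannot see the lost tempi
```

Run as-is, this reproduces the 28 and 26 totals quoted above and shows
concretely why the mapping stops being 1-to-1 once the table contains penalties
for the path taken: both routes end on the same square, so no score of the
final position alone can distinguish them.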