Author: martin fierz
Date: 05:29:54 02/15/03
i was thinking of implementing position learning in my checkers program, and was trying to find out how this is typically done. if i understand it properly, here's what people do:

1) if the search returns a value much lower than the previous search (define "much lower" as you like), write this position with its value to the learn file on disk.

2) before every search, stuff positions from the learn file into the hashtable.

this all sounds sensible to me, and i can see that this helps. however, i have a stupid question about it. let's imagine that in the initial position A your program is worse and decides to sac some material for a last-chance attack. a few moves later, in position B, it may think that its compensation is not sufficient, drop its score, and learn this position. another 2 moves later it suddenly sees that it has a perpetual check, and that the move in position A was not to blame at all.

i think this scenario is not unrealistic, and it seems to me that position learning doesn't work here, because you only learn that position B was bad (when in fact it wasn't). even if you were writing ALL positions to disk after searching them (something i would like to do in checkers), you still haven't solved the problem: in your next search from position A you will get a hash hit at position B telling you that the correct move is bad, because you can't see further than your hash hit.

hmmm. is there any workaround for this?

cheers
martin
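
for reference, here is a minimal sketch in C of steps 1) and 2) as described above, assuming a hypothetical engine with a 64-bit zobrist key per position and some hash-table store routine. the names LearnEntry, learn_maybe_store, learn_preload, tt_store and LEARN_DROP_MARGIN are illustrative only, not taken from any particular program.

    #include <stdio.h>
    #include <stdint.h>

    #define LEARN_DROP_MARGIN 50   /* "much lower": tune to taste */
    #define TT_EXACT 0             /* exact-score bound flag for the hash table */

    typedef struct {
        uint64_t key;    /* zobrist hash of the learned position   */
        int16_t  value;  /* score the search returned              */
        int16_t  depth;  /* depth at which that score was obtained */
    } LearnEntry;

    /* stand-in for the engine's real transposition-table store function */
    static void tt_store(uint64_t key, int value, int depth, int flag)
    {
        (void)key; (void)value; (void)depth; (void)flag;
        /* a real engine would write an entry into its hash table here */
    }

    /* step 1: after a root search, if the score dropped sharply versus the
       previous search, append the position and its new value to the learn file */
    void learn_maybe_store(uint64_t key, int value, int prev_value, int depth,
                           const char *learn_file)
    {
        if (prev_value - value >= LEARN_DROP_MARGIN) {
            FILE *f = fopen(learn_file, "ab");
            if (f) {
                LearnEntry e = { key, (int16_t)value, (int16_t)depth };
                fwrite(&e, sizeof e, 1, f);
                fclose(f);
            }
        }
    }

    /* step 2: before every search, stuff the learned positions back into
       the hash table so the next search sees them immediately */
    void learn_preload(const char *learn_file)
    {
        FILE *f = fopen(learn_file, "rb");
        LearnEntry e;
        if (!f) return;
        while (fread(&e, sizeof e, 1, f) == 1)
            tt_store(e.key, e.value, e.depth, TT_EXACT);
        fclose(f);
    }

note that the learned score is seeded as an exact bound at the depth of the original search, which is exactly what causes the problem described above: a later search from position A will take the hash hit at position B at face value and never look past it.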