Author: Robert Hyatt
Date: 20:27:00 02/19/02
On February 19, 2002 at 17:36:46, Gian-Carlo Pascutto wrote:

>On February 19, 2002 at 17:00:18, Robert Hyatt wrote:
>
>>The discussion was about creating "perfect pruning rules" that do not
>>fail. I maintain there is no such thing, for obvious reasons... and even
>>if they only fail in .01% of the cases,
>
>...then they aren't perfect by definition ;)
>
>Ok. I agree here.
>
>>>Wow, wow, hold on a little here. Sure, it won't prove anything. Still,
>>>I'm pruning >80% of my moves, and my program still plays damned fine
>>>chess. Sure, I'm not proving anything, but it's still winning games
>>>nicely. And you know what? You are, too.
>>
>>Wrong kind of pruning. You are talking about _backward_ pruning using
>>alpha/beta. I am talking about forward pruning, as in selective search.
>>There is a huge difference. Backward pruning is provably correct and
>>doesn't cause errors. Forward pruning is a different story.
>
>I was thinking of nullmove.
>
>If I'd only been talking about a/b, my program would still be 'proving'
>something, if you get what I mean.

OK, if you are talking about null-move, then I assume _nobody_ says that it
doesn't add its own unique errors into the search. I certainly see enough
of them. :)

>>>Sure, sure, if you can't possibly accept that it'll never fail. But
>>>I'll be happy as soon as it's correct more than it fails, i.e. when
>>>it starts getting productive.
>>
>>In that case, play me (me being a human) using your program. Set it to
>>generate a random number between 1 and 100. If N is <= 51, then you play
>>the move alpha/beta says is best. If N > 51, then you play a random move
>>in the current position. I'll bet you whatever you want on the result of
>>such games. :)
>
>A failed pruning rule won't generate a random number. If I'd modify
>my program so that in 48% of the cases it adds a random amount with
>a maximum of, say, 1 pawn, and in 1% of the cases a rook, then I think
>you'd be in for a nasty surprise as far as strength is concerned.

No... but a failed pruning rule can and will eliminate a critical move that
skews the scores badly. The more errors you make, the more the score gets
skewed. An error at ply=2, for example, is simply critical and will result
in a very bad error.

>Note that your proposed experiment ignores the gains of that pruning
>rule. If it's only right in 51% of the cases, they should be rather
>large.

But as I said, you can make N _good_ moves and one bad one and lose the
game. That bad one is the killer. And it is what led to the demise of
selective search in the mid-1970s... we _all_ got tired of seeing 39
brilliant moves and one lemon that lost the game.
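A quick back-of-the-envelope shows why (the 1% error rate and 60-decision
game length below are assumptions for illustration, not measurements): if
a pruning rule throws away the saving move with probability p at each of n
decisions, you finish the game clean only with probability (1-p)^n.

  #include <math.h>
  #include <stdio.h>

  int main(void) {
    double p = 0.01; /* assumed chance one pruning decision discards the saving move */
    int    n = 60;   /* assumed number of such decisions in a game */
    /* P(at least one lemon) = 1 - P(every decision was safe) */
    printf("%.3f\n", 1.0 - pow(1.0 - p, n)); /* prints 0.453 */
    return 0;
  }

So a rule that is 99% safe per decision still produces a lemon in roughly
45% of your games.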
>>>>but in 1% it caused it to blunder. The more you prune, the more those
>>>>probabilities add up...
>>>
>>>The more the gains add up, too ...
>>
>>Yes, but if you play the best move 50% of the time, you aren't guaranteed
>>to win. If you play a bad move once, you most likely will lose...
>
>I think this pretty much depends on the badness of the bad moves
>you play :)
>
>>I once had a selective search program. The selective part of the program
>>went _way_ beyond 50,000 lines of code. And the special cases were still
>>being found on a daily basis. It became obvious that such was not the way
>>to write a chess program. In 1978 I switched to full-width again and
>>dropped the thousands of special cases into the trashcan.
>>
>>COKO is another 1970s-era program that ended up near 100K lines of code.
>>Most of it for special cases in forward pruning...
>
>Okay.
>
>1000 rules is still pretty much an out-of-the-blue number for
>searching 60 plies deep (that was the original point, wasn't it?)

Things got mixed up along the way. I didn't mention 1000 rules, Uri did. I
don't think that would even _touch_ what we were doing in the early
1970s...

>Moreover, it could just be that those programs were using bad
>pruning rules.

By definition, any forward pruning is bad in terms of introducing error.
Yes, you might gain more than you lose with some rules. But trying to make
the tree selective enough to reach that 60-ply repetition is certainly
going to push the error rate into the stratosphere...

>It could just be that we haven't discovered nullmove version 2 yet.
>
>If we do, perhaps 2 rules will be enough.
>
>--
>GCP

It has been around forever (versions 1, 2, and 3). Version 1 was R=1,
non-recursive. Version 2 was R=2, recursive. Version 3 is R=2~3, which I
currently use. Version 4 might be pure R=3, as some are trying... But they
are all very closely related... Whether there will be a revolutionary new
approach is anyone's guess... I hope so, but I doubt it...
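For concreteness, here is a minimal sketch of what "version 3" looks like.
The POSITION type and helper routines (Quiesce, InCheck,
HasNonPawnMaterial, the make/unmake pair) are assumed engine-specific
stubs, and the depth>6 threshold for the adaptive R is just one common
choice; this is an illustration, not any particular program's code:

  typedef struct position POSITION;

  extern int  Quiesce(POSITION *p, int alpha, int beta);
  extern int  InCheck(const POSITION *p);
  extern int  HasNonPawnMaterial(const POSITION *p);  /* zugzwang guard */
  extern void MakeNullMove(POSITION *p);
  extern void UnmakeNullMove(POSITION *p);

  int Search(POSITION *p, int alpha, int beta, int depth, int null_ok) {
    int score, R;

    if (depth <= 0)
      return Quiesce(p, alpha, beta);

    /* Forward pruning: give the opponent a free move.  If a search
       reduced by R still fails high, assume the full-width search
       would too, and cut this subtree off. */
    if (null_ok && !InCheck(p) && HasNonPawnMaterial(p)) {
      R = (depth > 6) ? 3 : 2;   /* adaptive R=2~3; earlier "versions"
                                    used R=1 non-recursive or a fixed
                                    recursive R=2 */
      MakeNullMove(p);
      score = -Search(p, -beta, -beta + 1, depth - 1 - R, 0);
      UnmakeNullMove(p);
      if (score >= beta)
        return score;
    }

    /* ... normal alpha/beta move loop goes here ... */
    return alpha;
  }

The in-check and bare-pawn guards fend off the worst zugzwang cases, but
the cutoff is still unsound in principle; those are the "unique errors"
mentioned above.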