Author: Robert Hyatt
Date: 17:08:37 06/01/04
On June 01, 2004 at 19:25:41, Sune Fischer wrote:

>On June 01, 2004 at 18:39:46, Robert Hyatt wrote:
>
>>>We wanted to know what the strength relations were with learning off, now we
>>>know.
>>
>>Why?
>
>Mainly because you want reproducibility; it's no good to have an engine that's
>not performing at a constant level.
>
>I.e., suppose I play a match against Crafty, then I change something in my engine
>and want to see if it got better.
>If Crafty learns, my engine will probably do worse even if it is an improvement.
>
>Surely you can see why that is a nonsense experiment to do; learning _must_ be
>switched off or I simply cannot test against Crafty.

There is a solution: clear the learning before you start a test. But even then, you have a _real_ problem, because there is some randomness built into my move selection logic to provide variety. If you play a 20-game match, make changes, and play another 20-game match, comparing the results is less than worthless (with an even score, the standard deviation of a 20-game result is on the order of two points, larger than most real improvements).

>>Do you want to know the strength relation with the evaluation turned off?
>>Selective search turned off? Etc???
>
>Actually, I would not mind that.
>I'd like to know exactly where Crafty's strengths are compared to Frenzee's.
>:)
>
>-S.

You can easily answer that. But you also wouldn't publish the result of such a match without clearly identifying that Crafty was badly handicapped. That was the point. If someone reads "book learning disabled" and they don't know the whys and wherefores of book learning, they might say "so what, no big deal" when it really is a big deal.

My philosophy has _always_ been "don't whine about a problem, fix it." I recall that at the 1978 ACM event, Slate/Atkin were playing Belle, and they were jumping up and down because (a) they only dumped the PV at the end of an iteration, so they didn't know whether their program had seen a coming tactical problem or not, and (b) they could only hope that if it did see the problem, it would have enough time to find a solution.
I thought about that, and the next year Blitz (and later Cray Blitz) dumped the PV each time it changed, to keep the operator (me) informed about what it had seen. It also used the original "using time wisely" idea of extending the time limit when the score dropped. Now I didn't have to worry once it saw a bad score, because it would use a lot more time to try to avoid the problem.

Several years ago Ed was complaining about "duplicate" games in the SSDF testing that was being done. I thought about that and decided, "rather than complaining about duplicate losses, I'm going to simply avoid them by having Crafty notice that it got into trouble in an opening and not play it again for a while." That is where my "book learning" idea was founded. It was a problem you could either complain about (does it make sense to let a program lose the same opening over and over, and count that against it and for its opponent?) or solve. I chose "solve" and have not had the problem happen to me at all... Of course, if you turn it off, the problem comes right back, bigger than life, and sticks around.
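The core of the book-learning idea described above can be sketched in a few lines. This is a minimal illustration, not Crafty's actual implementation: the class name, threshold, and window size are all assumptions made up for the example. After each game, the result is recorded against the opening line that was played; a line whose recent average score drops below a threshold is shelved until later results (or the expiry of old ones) redeem it.

```python
# Sketch of result-driven opening avoidance (illustrative only, not
# Crafty's real book-learning code; names and thresholds are invented).
from collections import defaultdict, deque

class OpeningBook:
    def __init__(self, window=5, min_score=0.5):
        # Keep only the most recent `window` results per line, so an old
        # string of losses eventually ages out and the line can return.
        self.results = defaultdict(lambda: deque(maxlen=window))
        self.min_score = min_score

    def record(self, line, result):
        """result from our side: 1.0 win, 0.5 draw, 0.0 loss."""
        self.results[line].append(result)

    def playable(self, line):
        """Shelve a line whose recent average score fell below threshold."""
        recent = self.results[line]
        if not recent:
            return True                     # no data yet: keep it in the book
        return sum(recent) / len(recent) >= self.min_score

book = OpeningBook()
for _ in range(3):
    book.record("1.e4 e5 2.Nf3 Nc6", 0.0)   # three straight losses
book.record("1.d4 d5 2.c4", 1.0)

print(book.playable("1.e4 e5 2.Nf3 Nc6"))   # False: shelved for a while
print(book.playable("1.d4 d5 2.c4"))        # True
```

The bounded deque is the "for a while" part: the program doesn't ban a losing line forever, it just stops walking into it repeatedly during a match series.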
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.