Author: Bruce Moreland
Date: 00:16:55 03/20/98
On March 19, 1998 at 03:41:15, Ed Schröder wrote:

>We are now in a state that "improving learning" overrules the
>importance of "improving the chess engine". All fine with me in
>HUMAN-COMP games but not for COMP-COMP games on SSDF, and (mis)using
>AUTO232 for that purpose.

Learning is not a bad thing. It is simply true that chess programs are expanding their capabilities.

If someone spends a lot of time on a learner -- without ever having heard of the SSDF list, but because they think it makes a better program, which it does -- they should be able to submit their program to the SSDF people and not have to turn the learner off. They had the ideas, they wrote the code, they deserve the benefit: it is part of *their* program's strength.

To turn it off would be like demanding that programs turn off their books. The book is part of the program, and those who make better books deserve to win games because their books are better, right? If it were purely an engine-vs-engine contest, they'd turn books off and make you play from a standard set of positions.

Heck, it's always been more than engine vs. engine in other ways, too. Your time control logic is not strictly an engine function, but that's a component of strength as well, right? The SSDF guys don't demand that you operate at precisely three minutes per move.

>If I take the 100% Rebel9 chess engine (so no improvements at all!),
>add the "learner" improvements as I have described in a previous
>posting, and release this as Rebel_SSDF, then Rebel_SSDF will end up
>30-40 Elo points higher on the SSDF list than Rebel9.
>
>Then I start "yelling" on the Rebel Home Page, "Rebel_SSDF is much
>stronger than Rebel9!!".
>
>That would be a cheat to the public IMO.

Actually it would be stronger, and more amusing. It's always something of a stretch to claim you're stronger based on any improvement in an SSDF result, so I would be worried about making such a claim anyway. Learners are totally cool. Your customers probably want them.
Just say that it has a new learner, make it a good learner, and try to fairly attribute the SSDF Elo gain (if any) between the learner and the engine as you see fit. It's no big deal.

>In fact the only thing I would have done (please read my previous
>posting) is that I have taken advantage of the fact that I know NOW
>*HOW* the SSDF testers do their testing.
>
>I don't want to be a part of such a development, but this kind of
>thing has been happening right before our eyes for a few years.

This is a different thing entirely.

Right now the SSDF guys might play one game or one hundred games, or play a game, exit your program, and not exit the other guy's program. I don't know how they do it, but if they are doing it inconsistently, perhaps they should think about standardizing.

Maybe it should be the programmer's responsibility to make sure that learning data gets saved at some point during or after a game, and the SSDF's responsibility to play games in match units of at least perhaps ten games, on the same machines, between the same programs. It would be fine to interrupt a match at any point; they wouldn't have to play all the games at once.

You don't need to resign anything. This stuff is good for everyone. It's just a matter of making sure that programs are robust enough to handle real-world situations, and that the SSDF people are consistent enough that the results are approximately what they should be.

I'm sure other stuff like this will come up. We're not going to be using the same approaches forever; we are all going to do more stuff better, and this learner business is just an example of that. The competitive domain just got more complex, which means more interesting and more fun for everyone, especially us.

bruce