Author: Mogens Larsen
Date: 08:41:38 05/01/00
On May 01, 2000 at 10:15:23, pavel wrote:

>yes and that's my point too: "Every possible parameter should be _exactly_ the
>same", whether it's learning or whatsoever, esp. when it's on two CPUs.
>pavel.

Well, here are my viewpoints. Let's see if you agree.

When you plan to do a test, establish what you want tested. Should it be ponder, number of computers, learning, blitz vs. standard, strength difference, or the significance of the Nunn positions? That's where Chessfun's test lacks clarity, IMO. I assume it's not a test meant to reproduce Jouni's test, since that would be close to impossible due to lack of information.

The tests suggest that ponder on and off is one of the parameters tested. This could have been done easily by using two computers, two or three timecontrols, and autoplay. And a lot more matches could have been played, i.e. 60-100 per timecontrol, since the number of timecontrols is cut down to two. If you then wanted to test blitz vs. standard, you would "only" have to autoplay standard games with or without ponder (or both). There's no apparent reason to exclude learning, but the tester would have to decide that (I wouldn't). This should ensure a sample fit for the purpose of comparison and save some time, I think. But I've already mailed my views on this to Chessfun (if I'm not banned, that is), so my objections are known.

All in all, testing is a tedious and unrewarding task. There are a lot of conflicting opinions, which is why I don't do it. I admire anyone with the guts to try it, so I hope that my current post won't upset the tester in question too much.

Sincerely,
Mogens Chr. Larsen
http://home1.stofanet.dk/Moq/

"If virtue can't be mine alone, at least my faults can be my own."
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.