Author: Rolf Tueschen
Date: 05:24:05 01/26/06
Heinz, it's pretty difficult to criticise you, given your remarkable openness and sophistication. My point is a different one. The more someone as reflective as you is busy with this sort of ranking list, the more dangerous it becomes. The question is: what could we do if all the different testing efforts rely on the same false assumptions? Of course, there is nothing wrong with it in the sense that some people want to have fun and just play around in a hobby sphere. Nothing wrong with such pastimes. But again, you can't close your eyes and justify the overall preference for a new entry like Rybka. What if we could find out that Rybka's advantage is based on two factors?

1) the ever-present advantage of a new entry, freshly tuned against all the old ones
2) something perhaps new in Rybka's code that will become known

If after a while the situation is then equal again on a slightly higher level, what is the sense of it? Isn't it a problem for you if you see Hiarcs being tuned successfully and, whoopie, the big advantage is already minimised?

I'm not a supporter of the SSDF, but one factor in their testing design is remarkable. They don't take any new entry, no matter how complete, Beta or not, and produce data. I wouldn't say that this is all well planned and justified. I remember the general criticism that the strong new entries were always tested on weaker hardware... But with such "slowness" you at least reduce the chance of becoming part of somebody's PR. Ok, fine, in the end that criticism was still a valid one somehow, but this is a different topic.

Q: Don't you fear that you are just producing the data other people need for their own interests? The question goes in all directions, to testers of course as well.