Author: Rolf Tueschen
Date: 06:16:18 09/16/02
On September 15, 2002 at 18:00:59, Ed Schröder wrote:

>About pre-arranging games, anybody can do it: play xxx games, delete yyy lost
>games from the PGN output and post the remaining. The SSDF can do it, all the
>super-chess-freaks playing their valuable tournaments can do it, and and and. In
>the end it all comes down to whether you trust the poster.
>
>Science is nice, but how does one scientifically prove someone is lying?

After all I have to comment here, because people here seem willing to digest everything the great experts serve up, as if anything could be possible except a proof by 'science'. The truth is that we need a little bit of logic, not astrophysics or the like, to know what is going on.

First I quote a retired colleague of Ed's, let's call the "Not-To-Be-Mentioned" simply AB. Well, AB wrote me that XY (a famous tester) always understood his testing as a world championship with XY as operator. So for him testing always means testing program & operator, not the program alone.

Then I have talked about the roughly 30 games in total. In 5 of them, and these were important wins of the new Macheide program version of Rebel, I could present data, food for thought. In three (!) of the examples the same "cooked" opening was played, against which Shredder was without defense. All that by chance, as the testing design would normally require? I seriously doubt it. But it is clear that with such a "chosen" bias the whole testing is worthless. So we need no science, only a bit of thinking, to come to that conclusion.

Because what Ed said is not true. Ed thinks that such cheating, leaving out the losses and posting only the wins, could not be discovered. This is wrong, simply because many people are testing and they know what results the usual procedure produces. Too many wins, or too few losses, simply make them suspicious. A given testing environment leads to comparable results, with the usual variance; extreme results are possible, but not probable (a small calculation further down illustrates just how improbable). If, however, a certain tester _always_ gets super-good results for whichever program is his favorite at the time, then it is clear that this must have its reasons in the bias introduced by the tester himself. I wouldn't call it cheating; it is better described as 'operator influence'. Of course it then has nothing to do with sound testing.

Jonas made a good argument. Jonas distinguishes between normal testing, with all the science involved, and the human approach of an individual (I called him operator, following the information from "AB") who does not want to do classical testing but who wants to prove that, with all his creativity, he can work a program version into a better program. The exact details of his method remain a secret. I would say that this is completely okay, if we forget about the announced claim of presenting data as a proof of the secret. Because the presented games have nothing to do with the secret. By the way, that was the information "XY" wrote here into CCC after he had recovered from his initial confusion, when he insulted me as the one who had outed the secret of the secret. In fact I did nothing but show that the games could never prove the greater strength of the Macheide Rebel, neither statistically nor in terms of their content.

But was the whole attempt worth nothing at all? With Jonas I think that "XY" did an interesting job. The problem is that we know he can't prove a thing, that the presented "proof" is a delusion full of logical errors. So the job of the experts was to analyse the conditions of the whole event.
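To put "possible but not probable" into concrete terms, here is a minimal sketch of the kind of plausibility check other testers can run. It is not from the original post; the 55% baseline win rate and the 28-out-of-30 score are purely illustrative assumptions:

# Plausibility check for an "only wins" report, under an assumed
# binomial model. Both the 55% baseline and the 28/30 score are
# illustrative assumptions, not data from this thread.
from math import comb

def prob_at_least(k_wins, n_games, p_win):
    """P(X >= k_wins) for X ~ Binomial(n_games, p_win)."""
    return sum(comb(n_games, k) * p_win**k * (1 - p_win)**(n_games - k)
               for k in range(k_wins, n_games + 1))

# If the program really scores about 55% against its rival, a posted
# result of 28 wins or better out of 30 games has probability ~5e-6:
print(prob_at_least(28, 30, 0.55))

Anyone who knows the usual spread of results would treat such a score as a red flag rather than as proof of a breakthrough, which is exactly why selective posting does not stay hidden for long.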
And although they came to the conclusion that nothing could be proven, the mere fact that a special style is said to exist, called Macheide after the famous world champion Lasker's book, is real hype. So, beyond all the necessary reservations of scientists, we have to realize that our modern times live on magic and pretentiousness. The American motion picture Wag the Dog is the artistic masterpiece in which our modern media society got its memorial. As we know from chess, the threat, as an idea, is stronger than the move actually played; the idea is stronger than what really happens. In truth we know that charisma is 99% of the performance. Likewise, we in computer chess have our own spin doctors and charismatic figures. Not only the programmers can create programs, but also we creative testers. What is really happening is not what matters; what matters is the creation of ideas of a potential happening. As in chess, the potential is richer than reality. Isn't that a good side effect of such a hobby? Isn't reality much brighter with such dreams?

So, in nuce, I'm proud that I could contribute something. The SSDF has published many rankings, so why should it not be allowed to create a Macheide version of Rebel? Wasn't it the secret of Lasker that he always won although he never (excuse me, but I want to present my case without unnecessary complexity) had the better positions? I think it would be a great event if Lasker's strength could be utilized in our modern chess players, the programs. We have many unsolved problems outside classical science. Morphy (!) fields could be the next creative improvement of chess programs. I know that Sheldrake is also a big outsider of science... ;)

Rolf Tueschen