Author: Vincent Diepeveen
Date: 20:14:12 08/17/02
On August 17, 2002 at 17:43:15, Mike S. wrote:

I hope you realize I put serious time into some positions of their test set, only to find out, when I started emailing them about it, that I had put more time into the positions than they had. If they use the word 'time', they mean 'computer time'.

Best regards,
Vincent

>On August 17, 2002 at 14:49:55, Vincent Diepeveen wrote:
>
>>On August 17, 2002 at 05:11:01, Uri Blass wrote:
>>(...)
>>>Did they check carefully that the test is correct?
>
>>No they didn't, they are too stubborn for that. (...)
>>Look, these guys I couldn't find on any rating list. (...)
>>They are too bad in chess to analyze themselves even! (...)
>
>That type of comment isn't helpful. I think it's in the nature of *very* difficult
>test positions that there will be some doubt now and then about which continuation
>really is the best, and that it's not easy to prove it by analysis, especially
>if two variants seem to be (nearly) equal. I'm also not 100% convinced by some
>of the positions/solutions - which may well be perfectly correct (though
>that still doesn't mean they are always suitable for computer tests), etc. - but
>I'm also not convinced of my own analytical skills (you also won't find me on
>any rating lists :o)). But it should be possible to avoid unfair criticism.
>
>Don't forget, even if you don't trust some of the positions: most probably you
>will trust the majority of these 100 (!) positions. They offer *thousands* of
>results for comparison, including the ply depth info, from 112 progs... You can
>remove the positions you don't like, and still have an *enormous* amount of
>quality testing data. All provided by one man with one computer only.
>
>>Of course i'm not going to do it.
>
>It would take a lot of time and endurance (I did it, but on a much smaller
>scale).
>
>>Of course they are not
>>going to test movei or any other 'amateur' engine.
>
>A *lot* of amateurs have been tested. I think even the majority of the
>engines tested are amateurs (I didn't count them).
>
>>Because just *suppose*
>>that one of the engines is very aggressively tuned and scores really high
>>on their testset.
>
>Actually some do, e.g. Gromit or Goliath are ahead of some commercial engines in
>the WM-Test results (I don't know if aggressiveness is the reason).
>
>>How's CSTII doing on this testset, speaking of an aggressive but very
>>weak engine?
>
>An excerpt from the WM-Test results (the last value is the number of solutions):
>
>1   Fritz 7d (7,0,0,8)         eng  19.05.02  256     2.698  70
>2   Fritz 7c (7,0,0,6)         eng  11.01.02  256     2.698  70
>3   Deep Fritz 7               eng  04.08.02  256     2.696  70
>(...)
>52  MChess Pro 8               exe  19.09.98  60      2.630  47
>53  Chessmaster 8000           exe  12.02.01  256     2.630  47
>54  >>> Chess System Tal 2.03  exe  24.05.99  128/64  2.629  46
>55  Aristarch 4.4 (UCI)        exe  02.08.02  256     2.628  47
>56  Shredder 5.32              dll  26.05.01  256     2.627  46
>57  Li.Goliath 2000 v3.6       exe  07.05.02  256     2.627  45
>58  WBNimzo 2000b              exe  05.11.99  256     2.624  47
>59  Nimzo 7.32                 dll  04.08.99  256     2.624  46
>(...)
>
>http://www.computerschach.de/test/index.htm
>
>My experience with CST in my own test suites was that (unlike its gameplay) it
>is quite a "solid solver", but a bit slow(er), depending of course on what you
>compare it to. In the Quicktest, which is much easier than the WM-Test, CSTal
>2.03 is rated very similar to Aristarch 4.0 and Genius 5, on Athlon@1.2 GHz. See
>XLS file:
>
>http://meineseite.i-one.at/PermanentBrain/quick/quicke.htm
>
>
>Regards,
>M.Scheidl
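[Editorial aside: the scoring method debated above is simple in principle - give the engine a fixed amount of computer time per EPD position and count the position as solved if the engine's choice matches the "bm" move(s). Below is a minimal sketch of such a suite runner, assuming the python-chess library and a UCI engine binary; the engine path, EPD file name, and time limit are placeholders, and this is not the tooling the WM-Test author actually used. Removing positions you don't trust, as suggested in the post, amounts to skipping those EPD records before the loop.]

    # Minimal EPD test-suite runner: score a UCI engine on "best move" positions.
    # Assumptions: python-chess is installed, the paths below are placeholders.
    import chess
    import chess.engine

    ENGINE_PATH = "/path/to/uci-engine"   # hypothetical engine binary
    EPD_FILE = "suite.epd"                # hypothetical file, one EPD record per line
    SECONDS_PER_POSITION = 60             # fixed "computer time" per position

    def run_suite(engine_path, epd_file, seconds):
        solved = 0
        total = 0
        engine = chess.engine.SimpleEngine.popen_uci(engine_path)
        try:
            with open(epd_file) as f:
                for line in f:
                    line = line.strip()
                    if not line:
                        continue
                    # from_epd returns the position plus its operations ("bm", "id", ...)
                    board, ops = chess.Board.from_epd(line)
                    if "bm" not in ops:
                        continue
                    info = engine.analyse(board, chess.engine.Limit(time=seconds))
                    pv = info.get("pv")
                    best = pv[0] if pv else None
                    total += 1
                    ok = best is not None and best in ops["bm"]
                    if ok:
                        solved += 1
                    print(ops.get("id", "?"), "depth", info.get("depth"),
                          "solved" if ok else "failed")
        finally:
            engine.quit()
        return solved, total

    if __name__ == "__main__":
        solved, total = run_suite(ENGINE_PATH, EPD_FILE, SECONDS_PER_POSITION)
        print(f"{solved}/{total} positions solved")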