Computer Chess Club Archives



Subject: Re: The Ruffian test after 43 games by each engine

Author: Chessfun

Date: 19:47:19 03/02/04



On March 02, 2004 at 19:37:19, Peter Skinner wrote:

>On March 02, 2004 at 16:13:03, George Tsavdaris wrote:
>
>>On March 01, 2004 at 20:37:00, Peter Skinner wrote:
>>
>>>On March 01, 2004 at 15:10:56, Dann Corbit wrote:
>>>
>>>>
>>>> # Name              1    2    3    4    5    6    7    8    9   10   11   12   13   14   Score   Buch  Sommb
>>>>-------------------------------------------------------------------------------------------------------------
>>>> 1 Ruffian_202    **** =1=  01=  =01  =10  0==1 1=0  11=1 111  111  11=  1110 0=1  1=11  30.0/43 882.5 595.50
>>>> 2 Ruffian_105    =0=  **** 1=0  ===  ==1  =11  1111 111  1010 001= =01  =10  1111 111   29.0/43 892.5 574.00
>>>> 3 Ruffian_210    10=  0=1  **** 1=0  =001 ==0  011= 110  1=11 0001 110  1=1  111  111   26.5/43 910.0 526.25
>>>> 4 Ruffian_101    =10  ===  0=1  **** ===  00=  1=1  101= 101  0101 1110 111  01=1 101   26.0/43 898.5 524.75
>>>
>>>This is almost bang on with the results that I have attained. Version 1.0.1 in
>>>my testing finished ahead of 2.1.0 by only a half point. So those two just
>>>flip-flopped in our testing.
>>>
>>>My games were G/15 and G/30. It seems that Ruffian 1.0.5 and 2.0.2 are just about
>>>equal in strength, and that 1.0.1 and 2.1.0 are equal in strength.
>>>
>>>I read a post on another forum the other day where someone did some more in-depth
>>>testing and came to the conclusion that it is possible that 1.0.1 has been
>>>optimized and renamed 2.1.0, and that the same goes for 2.0.2 and 1.0.5.
>>
>>Mmm... interesting. I will look into this a little.
>>
>>>
>>>If you take certain test positions and analyze them with the two similar
>>>versions, almost 99% of the time the same variation comes out. If there were a
>>>"huge" strength improvement, as some would have us believe, that would not be
>>>the case.
>>>
>>>Unfortunately the post was removed, as the administrator felt it was
>>>attacking the author or accusing him of fraud. While the poster probably was,
>>>there is a hint that this is exactly what could have happened.
>>>
>>>Statistics do not lie...
>>
>>Yes, but the statistics of the above tournament don't say that Ruffian 1.0.1 ~=
>>Ruffian 2.1.0 and Ruffian 1.0.5 ~= Ruffian 2.0.2.
>>
>>>
>>>Peter
>
>I have tested, and I have read all the testing others have done, and the same
>data always seems to come forward:
>
>1. Ruffian 1.0.1 finishing within a single point of 2.1.0. Usually it happens to
>be a 0.5-point difference.
>
>2. Ruffian 2.0.2 and 1.0.5 seem to finish within 1 to 1.5 points of each other.
>
>3. Very few test results have shown 2.1.0 or 1.0.1 to be stronger than 2.0.2
>and 1.0.5 respectively. I know in the Ridderk tournament 1.0.5 did finish lower
>than 1.0.1, but that was only by 4 points... luck could have been a contributing
>factor.
>
>4. When analyzing positions with those 4 versions, 2.0.2 and 1.0.5 come out to
>the same result; 2.0.2 just gets there quicker. The same goes when analyzing with
>1.0.1/2.1.0.


The results obtained by Manfred Meiler in the WM Test suggest otherwise.
Naturally you could say that is the result of optimization, or whatever.

But even if that were the case, the result is, as you say yourself, engines that
produce quicker results (your point 4). However, that does clash a little with
your points 1 and 2.
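
For what it's worth, here is a rough back-of-the-envelope check of how much those
point differences out of 43 games really mean. This is only a minimal sketch in
Python, assuming the usual logistic Elo model and treating every game as an
independent win/loss trial (which overstates the variance slightly, since draws
reduce it), so take the numbers as rough orders of magnitude:

import math

def elo_from_score(points, games):
    # Elo difference implied by a score fraction under the logistic model,
    # i.e. performance relative to the average of the opposition.
    p = points / games
    return 400 * math.log10(p / (1 - p))

def score_stderr(points, games):
    # Rough standard error of the score fraction (binomial approximation).
    p = points / games
    return math.sqrt(p * (1 - p) / games)

for name, pts in [("Ruffian_202", 30.0), ("Ruffian_105", 29.0),
                  ("Ruffian_210", 26.5), ("Ruffian_101", 26.0)]:
    games = 43
    err_points = score_stderr(pts, games) * games
    print(f"{name}: {pts}/{games}  ~{elo_from_score(pts, games):+.0f} Elo, "
          f"+/- about {err_points:.1f} points")

With error bars of roughly three points per engine, the half-point and 1.5-point
gaps in your points 1 and 2 sit well inside the noise, which is exactly where luck
comes in.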

This suggestion also assumes that the 2.0.0 and Leiden engines have been dropped
completely, despite the fact that the Leiden engine won a tournament and would
naturally also have been a candidate for optimization.
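
As a side note, the test-position comparison Peter mentions is easy for anyone to
reproduce. Here is a minimal sketch using the python-chess library; the executable
names, the time limit and the two positions are placeholders, not any particular
test suite:

import chess
import chess.engine

ENGINES = ["./ruffian_105.exe", "./ruffian_202.exe"]   # placeholder paths
POSITIONS = [
    # two arbitrary opening positions given as FEN strings
    "r1bqkbnr/pppp1ppp/2n5/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R w KQkq - 2 3",
    "rnbqkb1r/pp1p1ppp/4pn2/2p5/2P5/2N2N2/PP1PPPPP/R1BQKB1R w KQkq - 0 4",
]

def best_lines(path, fens, seconds=10):
    # Return the first five moves of the principal variation for each position.
    lines = []
    with chess.engine.SimpleEngine.popen_uci(path) as engine:
        for fen in fens:
            info = engine.analyse(chess.Board(fen), chess.engine.Limit(time=seconds))
            lines.append([move.uci() for move in info.get("pv", [])[:5]])
    return lines

a, b = (best_lines(path, POSITIONS) for path in ENGINES)
same = sum(x == y for x, y in zip(a, b))
print(f"Identical 5-move PVs in {same} of {len(POSITIONS)} positions")

Of course, how long you search and how many PV moves you compare will affect how
often the lines match, so those settings matter as much as the position set.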

Sarah.




