Computer Chess Club Archives



Subject: Re: The Ruffian test after 43 games by each engine

Author: Peter Skinner

Date: 16:37:19 03/02/04

On March 02, 2004 at 16:13:03, George Tsavdaris wrote:

>On March 01, 2004 at 20:37:00, Peter Skinner wrote:
>
>>On March 01, 2004 at 15:10:56, Dann Corbit wrote:
>>
>>>
>>> # Name            1    2    3    4    5    6    7    8    9   10   11   12   13   14   Score   Buch  Sommb
>>>---------------------------------------------------------------------------------------------------------
>>> 1 Ruffian_202    **** =1=  01=  =01  =10  0==1 1=0  11=1 111  111  11=  1110 0=1  1=11  30.0/43 882.5 595.50
>>> 2 Ruffian_105    =0=  **** 1=0  ===  ==1  =11  1111 111  1010 001= =01  =10  1111 111   29.0/43 892.5 574.00
>>> 3 Ruffian_210    10=  0=1  **** 1=0  =001 ==0  011= 110  1=11 0001 110  1=1  111  111   26.5/43 910.0 526.25
>>> 4 Ruffian_101    =10  ===  0=1  **** ===  00=  1=1  101= 101  0101 1110 111  01=1 101   26.0/43 898.5 524.75
>>
>>This is almost bang on with the results that I have attained. Version 1.0.1 in
>>my testing finished ahead of 2.1.0 by only a half point, so those two just
>>flip-flopped in our testing.
>>
>>My games were at G/15 and G/30. It seems that Ruffian 1.0.5 and 2.0.2 are just
>>about equal in strength, and 1.0.1 and 2.1.0 are equal in strength.
>>
>>I read a post on another forum the other day where someone did some more
>>in-depth testing and came to the conclusion that it is possible that 1.0.1 has
>>been optimized and renamed 2.1.0, and that the same goes for 2.0.2 and 1.0.5.
>
>Mmm... interesting. I will look into this a little.
>
>>
>>If you take certain test positions and analyze them with the two similar
>>versions, almost 99% of the time the same variation appears. If there were a
>>"huge" strength improvement, as some would have us believe, that would not be
>>the case.
>>
>>Unfortunately the post was removed, as the administrator felt it was attacking
>>the author or accusing him of fraud. While the poster probably was, there is a
>>hint that this is exactly what could have happened.
>>
>>Statistics do not lie...
>
>Yes, but the statistics of the above tournament don't say that Ruffian 1.0.1 ~=
>Ruffian 2.1.0 and Ruffian 1.0.5 ~= Ruffian 2.0.2.
>
>>
>>Peter

I have tested myself, and I have read all the testing others have done, and the
same pattern always seems to emerge:

1. Ruffian 1.0.1 finishing within a single point of 2.1.0. Usually it happens to
be a half-point difference.

2. Ruffian 2.0.2 and 1.0.5 seem to finish within 1 to 1.5 points of each other.

3. Very few test results have shown 2.1.0 or 1.0.1 to be stronger than 2.0.2 and
1.0.5 respectively. I know that in the Ridderk tournament 1.0.5 did finish lower
than 1.0.1, but that was only by 4 points; luck could have been a contributing
factor (see the margin-of-error sketch below this list).

4. When analyzing positions with those four versions, 2.0.2 and 1.0.5 come out
with the same result; 2.0.2 just gets there quicker. The same goes when
analyzing with 1.0.1/2.1.0 (a rough way to automate such a comparison is
sketched below this list).

5. Personally I don't believe Per-Ola would do something like this, but the data
does speak volumes. It is hard to just toss it aside.
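
To put the "luck" remark in point 3 in perspective, here is a rough
back-of-the-envelope sketch of the margin of error on a score out of 43 games.
This is my own illustration, not something from any of the quoted tests: it is
Python using only the standard library, the Elo figure is only relative to the
average of the whole field, and the binomial bound ignores draws, so treat it
as an indication rather than a verdict.

    import math

    GAMES = 43

    def elo_from_score(p):
        """Convert a score fraction (0 < p < 1) into an approximate Elo difference."""
        return -400 * math.log10(1 / p - 1)

    def score_margin(p, games, z=1.96):
        """Approximate 95% margin of error on the score fraction.
        Treats each game as an independent win/loss; with many draws the real
        variance is somewhat smaller, so this slightly overstates the noise."""
        return z * math.sqrt(p * (1 - p) / games)

    # Scores taken from the crosstable quoted above.
    for name, points in [("Ruffian_202", 30.0), ("Ruffian_105", 29.0),
                         ("Ruffian_210", 26.5), ("Ruffian_101", 26.0)]:
        p = points / GAMES
        m = score_margin(p, GAMES)
        print(f"{name}: {points}/{GAMES} = {p:.1%} "
              f"(~{elo_from_score(p):+.0f} Elo vs the field, "
              f"95% range {p - m:.1%}..{p + m:.1%})")

With only 43 games the 95% band works out to roughly +/- 14 percentage points,
so a half-point or even a 3-4 point gap between two versions sits comfortably
inside the noise.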
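
And for point 4, this is the sort of harness one could use to check how often
two versions pick the same move on a suite of test positions. It is only a
sketch under assumptions of my own: the engine paths and the EPD file name are
placeholders, it relies on the third-party python-chess package, and the quoted
posters may well have done their comparisons quite differently.

    import chess
    import chess.engine

    POSITIONS = "test_suite.epd"             # placeholder: any EPD test suite
    ENGINE_A = "./ruffian_105"               # placeholder paths to the two
    ENGINE_B = "./ruffian_202"               # versions being compared
    LIMIT = chess.engine.Limit(time=10.0)    # thinking time per position

    def best_moves(engine_path, boards):
        """Ask one UCI engine for its preferred move in every position."""
        moves = []
        with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
            for board in boards:
                moves.append(engine.play(board, LIMIT).move)
        return moves

    # Load the test positions from the EPD file.
    boards = []
    with open(POSITIONS) as f:
        for line in f:
            if line.strip():
                board, _ops = chess.Board.from_epd(line)
                boards.append(board)

    moves_a = best_moves(ENGINE_A, boards)
    moves_b = best_moves(ENGINE_B, boards)
    same = sum(1 for a, b in zip(moves_a, moves_b) if a == b)
    print(f"Identical move choice in {same}/{len(boards)} positions "
          f"({same / len(boards):.0%})")

A high agreement rate by itself only suggests the versions share the same search
and evaluation; it cannot prove a renaming, which is why the tournament results
above still matter.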

I do want to go on record and state that I don't believe this to be the case, or
rather that I am seriously hoping it is not the case. It would constitute major
fraud.

Peter.


