Computer Chess Club Archives


Subject: Re: rebel 10~!! super strong on amd k62 500

Author: Ricardo Gibert

Date: 03:44:48 07/29/00

On July 29, 2000 at 06:02:41, blass uri wrote:

>On July 29, 2000 at 05:33:54, Mogens Larsen wrote:
>
>>On July 29, 2000 at 04:29:28, Ricardo Gibert wrote:
>>
>>>I don't like any sport or contest that uses judges. Ice skating, high diving,
>>>beauty contests, etc. all use judges and the judges disagree all the time,
>>>because they all use different criteria for making judgements. They make
>>>personal assessments based on what they personally think is more important and
>>>according to their tastes and sensibilities (biases). I hate that, so I don't
>>>watch such events, except for the occasional beauty contest when I don't mind
>>>quite so much ;-)
>>
>>Finally a voice of reason. There are no arguments that support replacing
>>empirical data with pure subjectivity or that show it is measurably better
>>independent of the number of games. Either it's a question of misunderstanding
>>the Elo system
>
>I understand the Elo system.
>The Elo system does not use all of the available information to get the best
>estimate of the rating.
>
>It uses only the results and not the games themselves.
>
>I am sure it is possible to write a rating-calculation program that gives a
>better estimate of the rating by not only counting the results but also
>analyzing the games and the programs' evaluations.
>
>Writing such a program would not be simple, and I am not going to do it, but
>it is possible.
>
>
>Here is one example of something you can learn from analyzing the games that
>you cannot learn from watching the results alone:
>
>Suppose you see that in one game program A out-searched program B and gained
>an advantage according to the evaluations of both programs.
>
>The evaluations of both programs were wrong, and program A lost because the
>position that both programs assessed as a clear advantage for A was really a
>losing position for A.
>
>If you analyze the game, you can recognize this and adjust A's rating upward
>based on this game.

In theory, this is possible, but in practice you may find that the eval changes
fix one problem area but create another, and the program actually nets a loss
of playing strength. This happens more often than you might think. Chess is not
so easy to understand, even if you are a GM; otherwise, the game would have
been solved by now.
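
For concreteness, the results-only calculation Uri describes is the standard
Elo expected-score update. A minimal Python sketch (the K-factor and the
ratings in the example are my own illustrative choices):

    # Standard Elo update: ratings move based only on game results.
    # The moves of the game never enter the formula, only the final score.
    def expected_score(r_a, r_b):
        # Expected score for player A against player B (logistic model)
        return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

    def update(r_a, r_b, score_a, k=20.0):
        # score_a is 1.0 for a win, 0.5 for a draw, 0.0 for a loss
        e_a = expected_score(r_a, r_b)
        delta = k * (score_a - e_a)
        return r_a + delta, r_b - delta

    # Example: a 2500-rated program beats a 2600-rated one.
    print(update(2500.0, 2600.0, 1.0))  # A gains about 12.8 points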

Science has shown that objective methods work best in the long run. Methods
based on judgement have, time and again, demonstrated weaknesses that are
surprising and problematic. Why do you think science troubles itself with
performing double-blind tests? For the heck of it? Or is it that scientists
can't be trusted to simply judge scientific issues? It is the latter: _no_,
they can't be trusted. Thus we have double-blind tests using objective
measurement. It is the only way to go. Otherwise, it is not science, and we
return to the Dark Ages.
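
One common objective measurement in computer chess is to play a long match
between two versions and convert the score into an Elo difference. A small
Python sketch (the match numbers below are invented for illustration):

    import math

    def elo_diff(wins, losses, draws):
        # Elo difference implied by a match score (logistic model);
        # assumes the score is strictly between 0% and 100%.
        p = (wins + 0.5 * draws) / (wins + losses + draws)
        return 400.0 * math.log10(p / (1.0 - p))

    # Example: after an eval change, the new version scores
    # +45 -55 =100 against the old one over 200 games.
    print(round(elo_diff(45, 55, 100)))  # about -17 Elo: a net loss

This is exactly the situation described above: a change that looks like an
improvement by judgement can still measure as a net loss over enough games.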

>Uri


