Computer Chess Club Archives


Subject: Re: Chess Tiger - Is It Really 2696 ELO?

Author: Fernando Villegas

Date: 19:22:49 12/21/99

On December 21, 1999 at 21:23:57, Graham Laight wrote:

>On December 21, 1999 at 18:10:05, Fernando Villegas wrote:
>
>>On December 21, 1999 at 13:15:11, Graham Laight wrote:
>>
>>>I apologise for bringing up a subject which has undoubtedly already been
>>>discussed, but according to the SSDF ratings, Chess Tiger is 2696.
>>>
>>>According to the FIDE ratings, there are only 11 players in the world with a
>>>higher rating than this.
>>>
>>>Can this possibly be correct?
>>>
>>>Graham
>>
>>As has been said before, Elo ratings among computers are valid within the
>>community of computers and have an unclear, perhaps permanently obscure,
>>relation to the Elo of human players. In fact, there is no known method to
>>determine that relation so far. Only guesses. If monkeys played chess, they
>>too would have an Elo rating, but I am sure you would not equate the Elo of
>>Sheeta with that of Gary. Sorry for the monkeys
>>Fernando
>
>If monkeys could play chess, their Elo rating would be very low - so they would
>be comparable to Gary. Monkey Elo would probably be about 100, Gary's is over
>2800.


You did not catch my point: if monkeys played chess among themselves, they would
be ranked according to their own abilities, and one of them would be a 2800 among
monkeys, because Elo does not measure absolute chess skill; it only distributes
players according to their comparative strength.
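
To illustrate with a minimal sketch (plain Python, my own illustration, not
anything taken from FIDE or the SSDF): the standard Elo expectation uses only
the *difference* between two ratings, so the number by itself says nothing
absolute.

def expected_score(r_a, r_b):
    # Standard Elo expected score for player A against player B:
    # it depends only on the rating difference r_a - r_b.
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def update(r_a, score_a, expected_a, k=20.0):
    # After a game, a rating moves by K * (actual score - expected score).
    return r_a + k * (score_a - expected_a)

# A 2800 "monkey champion" facing a 2400 monkey gets the same prediction
# as a 2800 human facing a 2400 human: only the 400-point gap matters.
print(expected_score(2800, 2400))   # about 0.909
print(expected_score(1000, 600))    # same 0.909, identical difference

So within the monkey pool the 2800 monkey is simply the one that scores about
91% against a 2400 monkey; nothing in the formula ties that 2800 to Kasparov's
2800.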
>
>Following the link on Albert Silver's post to the previous discussion, it
>appears that Albert (and others) are saying the same thing - that because you're
>not comparing like with like, the computer Elo ratings are not valid.


Nope. What I am saying is that Elo is valid only inside a given pool. Computers
are one pool, humans are another.

>
>I have yet to be convinced, I'm afraid. Firstly, on their web site, SSDF say
>they have done some research to ensure that their rating ranges are reasonably
>accurate. In the past, for example, they have used the Aegon tournament to check
>the validity of their rating ranges.

Of course they are accurate, BUT only inside that pool. And anyway I am not
saying anything about computers' skills, whether they can do this but not that.
The point is that even if we assume they play almost the same chess we play, the
Elo they have is a measure of their relative strengths playing against all kinds
of computers, some very old, but still usable as a yardstick.
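A rough way to see why the absolute number carries no meaning outside the pool
(again only an illustrative sketch, not an SSDF procedure): add any constant to
every rating in an isolated pool and every predicted result is unchanged, so
games played inside the pool can never fix the offset against the human scale.

def expected_score(r_a, r_b):
    # Same standard Elo expectation as in the sketch above.
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

pool = {"Chess Tiger": 2696, "Older Program": 2300}      # illustrative numbers
shifted = {name: r - 500 for name, r in pool.items()}    # arbitrary offset

# Both pools make identical predictions for every pairing, so internal
# results alone cannot tell us where 2696 sits on the FIDE scale.
print(expected_score(pool["Chess Tiger"], pool["Older Program"]))
print(expected_score(shifted["Chess Tiger"], shifted["Older Program"]))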
Fernando

>
>Secondly, much of the argument revolved around the idea that computers are prone
>to making moves which are weak from the positional perspective - and that only 1
>such weak move is needed to lose a game with a grandmaster. However, I would
>question this for the following reasons:
>
>* Computers have a remarkably good ability to survive the resulting "crushing"
>attacks. Sometimes, when they find an escape, they are able to go on and win the
>game
>
>* IMs and above tend to divide themselves into "active" players (e.g. Maurice
>Ashley) and "positional" players (e.g. Yasser Seirawan, Anatoly Karpov).
>Certainly players like Yasser were, in the past, able to beat computers (Yasser
>is a previous winner of Aegon). But players like Kasparov (who tends to lose to
>computers) must have all (or most) of the positional players' knowledge, because
>his Elo rating is so much higher than theirs.
>
>To organise another Aegon style tournament would probably cost about $120,000
>and it's entirely possible that, because IBM have basically milked much of the
>publicity available for human v computer chess, that sponsorship would be very
>difficult to obtain. So, for the time being, we're stuck with jumping on every
>little scrap of information to try to create a (moving) picture of what the
>reality of the ratings is like.
>
>Graham


