Author: Robert Hyatt
Date: 18:15:20 01/03/98
Go up one level in this thread
On January 03, 1998 at 18:14:53, Don Dailey wrote:

>On January 02, 1998 at 21:59:51, Robert Hyatt wrote:
>
>>On January 02, 1998 at 13:53:17, Stuart Cracraft wrote:
>>
>>>Is there a formula for translating ELO to USCF rating?
>>>
>>>I've heard that at some levels it is a 100+ difference
>>>on the USCF side but that it varies, lower differences
>>>for higher ratings.
>>>
>>>Anyway, this is to convert the ELO 2040 rating of a Louguet II
>>>test result to USCF.
>>>
>>>Thanks,
>>>Stuart
>>
>>First, this premise is totally wrong. Ken Sloan posted to r.g.c.c
>>last year analyzing the difference between FIDE ratings and USCF
>>ratings. The "average" is less than 50 points, with USCF being higher
>>than FIDE, but for the upper end I seem to recall that 30 was the
>>right "fudge".
>>
>>Second, forget taking a test suite, running it, and getting an Elo
>>(FIDE)-type rating. It ain't going to happen. You won't get anything
>>anywhere close to the true rating of the program.
>
>Bob,
>
>I don't think this is a ridiculous idea; there are just a lot of
>problems that must be solved first before it can be done correctly.
>Just because it hasn't been done well yet doesn't mean it cannot be!
>
>I seem to remember you are right about the Elo points. I think at one
>time there was a much larger difference, but some gradual adjustments
>have taken place over the years.
>
>-- Don

I'd agree that it *can* be done, but it hasn't been, and likely won't be for a long while. The problem is this: humans and computers are different. Humans have a mixture of tactical and positional skills that blend together. You might solve a position one way one time and find something different the next. Computers, on the other hand, are *very* specific in their search strategies and their evaluations, and they apply them the same way every time. So a program either "gets it or it doesn't get it"...
And its knowledge can be very narrow (a tactical searcher/finder like Fritz), which is so unlike what humans do that comparing the two is quite difficult. A good test is to take a human and give him several well-known problem suites: he will do similarly on most or all of them, while a computer will be "all over the place", killing the tactical ones, failing miserably on the positional ones, and even failing on some tactical ones... Fitting a formula to make the results from a suite match several programs doesn't work either, for obvious reasons...
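The additive conversion Hyatt recalls from Sloan's analysis (USCF roughly 50 points above FIDE on average, closer to 30 at the top end) can be sketched as follows. This is only an illustration of the rough figures quoted above, not an official formula; the 2400 cutoff between the two offsets is a hypothetical choice, since the post gives no exact threshold.

```python
# Rough FIDE (Elo) -> USCF conversion using the offsets recalled in the
# post: ~50 points on average, ~30 for higher-rated players.
# The 2400 threshold is illustrative, not from the post.
def fide_to_uscf(fide_rating: int) -> int:
    """Approximate a USCF rating from a FIDE (Elo) rating."""
    offset = 30 if fide_rating >= 2400 else 50
    return fide_rating + offset

# The ELO 2040 figure from the Louguet II test mentioned above:
print(fide_to_uscf(2040))  # -> 2090
print(fide_to_uscf(2500))  # -> 2530
```

As the thread argues, even a correct additive fudge only relates two human rating pools; it says nothing about whether a test-suite score can be mapped onto either scale for a program.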