Author: Joe McCarro
Date: 14:58:34 05/31/02
I didn't think they were rating players 40 years ago; I thought it started in the seventies. I think that brings up problems for ratings, especially for Fischer, who just suddenly stopped playing. I think Bob Hyatt has a point. It seems Tal, Petrosian and Spassky were stronger relative to the other players when they were playing Fischer. At that time they had no ratings and were only rated later. Do we then figure Fischer's rating by tabulating his wins and losses and basing it off the ratings they were given later?

Anyway, I was kicking around this idea to see if we can objectively test the strength of a person's games, and I wanted some input from you all. What I like most about replaying great games with a computer analyzing is when someone makes a move that the computer thinks is no good, but then you, after thinking about it longer, find the move was great. When I see that, I think the player was playing better than the computer.

Take a game played by Fischer and have the computer analyze that game for, say, 3 minutes per move - test one. Then have the same program analyze the same game for 30 minutes per move - test two. After this is done, determine how many times the computer, after the longer analysis, went from a different recommendation to the one the great player actually played. You could also compare the evaluation given to the move in question with the evaluation of the move ultimately suggested by the computer and of the move the player actually played.

For example, let's take a 40-move game and say that after 3 minutes per move (3x40 = 120 minutes) the program agrees with 30 of Kasparov's moves and disagrees with 10. If we then let it run for 30 minutes per move and it agrees with all 40 moves, we could say Kasparov more likely than not would have "outplayed" that computer in a tournament game (not necessarily won, just played better chess at that time control). Now let's say it comes back agreeing with 35 and disagreeing with 5 (for simplicity's sake, say the 5 it disagrees with here are the same 5 it disagreed with earlier). Does this mean they played about equally well? Not necessarily. We would want to consider how drastic the differences were. If all the disagreements with Kasparov were on average 0.1 of a pawn, while the changes from test one to test two were on average 0.6 pawns, it would appear we could still say Kasparov played better than the test-one computer.

If we can build a baseline like that, i.e., how much better Kasparov plays than computer program X, we can then compare him to other players. Obviously I'm making the assumption that the longer a computer analyzes, the better its analysis. I guess we could use several different time controls and compare and/or average the results. (A rough sketch of how such a test could be scripted is at the bottom of this post, after the quoted thread.)

Note as well that this won't say who had more talent or who was the better player; it is just an attempt to see who played objectively better moves. Just as I wouldn't say the inventor of the halogen light bulb is a better inventor than Thomas Edison, even if halogen bulbs are objectively better.

On May 30, 2002 at 17:59:35, Amir Ban wrote:

>On May 30, 2002 at 13:34:25, Robert Hyatt wrote:
>
>>On May 30, 2002 at 13:19:45, Dann Corbit wrote:
>>
>>>On May 30, 2002 at 13:15:59, Jerry Jones wrote:
>>>
>>>>Does anybody know what the highest official ELO rating according to FIDE is that
>>>>was ever attained by a human, Kasparov that is.
>>>>Is it possible that a few years ago his rating was a few points higher ?
>>>>If Kasparov had declined to play Deep Blue, would this have influenced his
>>>>rating ?
>>>
>>>You can add one million points to his ELO rating if you like. Or subtract them.
>>> Just be sure to do it to everyone else and it is perfectly valid.
>>>
>>>ELO figures are only valuable as differences within a pool of players who have
>>>had many competitions against each other. The absolute numbers mean absolutely
>>>nothing.
>>
>>
>>This is a continual problem. :) 32 degrees F means one thing. 32 degrees C
>>means another thing. 32 degrees K means another thing. No way to compare
>>today's 2850 rating to the ratings of players 40 years ago.
>
>It is perfectly sensible to compare ratings of 40 years ago and even more to
>today's. That's because at no point in time did the pool of players change, with
>an old group completely replaced by another. The ratings are measured against
>the field, which changes continuously, and provides continuity of the ratings.
>
>So, even if Kasparov and Fischer never met (certainly Kasparov 2001 never met
>Fischer 1972), they had many common opponents, whose ratings where themselves
>determined by common opponents, etc. There's no more reason to assume that
>ratings in time are incomparable than to assume that ratings in the US and in
>Europe are incomparable, for, although most games are in one region, there are
>enough interregional games to give the ratings worldwide meaning.
>
>There are random fluctuations in the rating standard, because it's all
>statistics, but the numbers are large, and I'm not aware of anything that would
>cause ratings to systematically drift in any direction (actually this can be
>simulated effectively, by creating a random population of players and slowly
>change the pool over time and see if averages drift).
>
>Most strong players agree that the level of play is higher than 30 years ago,
>and that's a good enough reason why today top ratings are higher.
>
>Fischer, Alekhine, Capablanca are of course classics, but so are Johnnie
>Weissmuller and Jessie Owens, who would be today's also-rans. It is tempting to
>say that this is because today our clocks run slower than in their time, but
>they don't.
>
>Amir
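Below is a rough sketch of how the two-pass test described above might be scripted. It is not anyone's finished tool, just an illustration under some assumptions: it uses the python-chess library with any UCI engine, and the engine path, PGN file name, player name and per-move time limits are all placeholders to adjust. Measuring how "drastic" a disagreement is by re-scoring the played move with a search restricted to that move is only one way to do it.

# Rough sketch: compare a player's moves against a UCI engine at two time
# controls, count agreements, and record the evaluation gap on disagreements.
# Assumes python-chess and a UCI engine; paths, names and times are placeholders.

import chess
import chess.engine
import chess.pgn

ENGINE_PATH = "/usr/local/bin/stockfish"  # assumption: any UCI engine will do
SHORT_TIME = 3.0    # seconds per move for "test one" (stand-in for 3 minutes)
LONG_TIME = 30.0    # seconds per move for "test two" (stand-in for 30 minutes)


def analyse_game(pgn_path, player_name):
    """For every move by player_name, ask the engine for its choice at the short
    and long time controls; tally agreements and centipawn gaps on disagreements."""
    with open(pgn_path) as f:
        game = chess.pgn.read_game(f)

    engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    results = {"short": {"agree": 0, "disagree": 0, "gaps": []},
               "long":  {"agree": 0, "disagree": 0, "gaps": []}}
    player_is_white = game.headers.get("White", "") == player_name

    board = game.board()
    for move in game.mainline_moves():
        if (board.turn == chess.WHITE) == player_is_white:
            for label, seconds in (("short", SHORT_TIME), ("long", LONG_TIME)):
                info = engine.analyse(board, chess.engine.Limit(time=seconds))
                best_move = info["pv"][0]
                best_cp = info["score"].pov(board.turn).score(mate_score=10000)
                if best_move == move:
                    results[label]["agree"] += 1
                else:
                    # Also score the move actually played, so we know how
                    # drastic the disagreement is (in centipawns).
                    played = engine.analyse(board, chess.engine.Limit(time=seconds),
                                            root_moves=[move])
                    played_cp = played["score"].pov(board.turn).score(mate_score=10000)
                    results[label]["disagree"] += 1
                    results[label]["gaps"].append(best_cp - played_cp)
        board.push(move)

    engine.quit()
    return results


if __name__ == "__main__":
    # Hypothetical file and player name, just to show the intended use.
    r = analyse_game("fischer_game.pgn", "Fischer, Robert J.")
    for label in ("short", "long"):
        gaps = r[label]["gaps"]
        avg = sum(gaps) / len(gaps) if gaps else 0.0
        print("%s: %d agree, %d disagree, avg disagreement %.0f cp"
              % (label, r[label]["agree"], r[label]["disagree"], avg))

Run over a set of games, the reading would follow the post's logic: if the long-search pass agrees with the player much more often than the short-search pass, and the disagreements that remain are small (the 0.1-pawn versus 0.6-pawn situation above), that supports saying the player chose better moves than the short-search program would have, and collecting the same numbers for different players against the same program would give the baseline for comparing them.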