Author: Robert Henry Durrett
Date: 09:52:57 06/03/02
On May 31, 2002 at 17:58:34, Joe McCarro wrote:

>I didn't think they were rating players 40 years ago; I thought it started in the seventies. I think that brings up problems for ratings, especially for Fischer, who just suddenly stopped playing. I think Bob Hyatt has a point. It seems Tal, Petrosian, and Spassky were stronger relative to the other players when they were playing Fischer. At that time they had no ratings and were only rated later. Do we then figure Fischer's rating by tabulating his wins and losses and basing it on the ratings they were given later?

>Anyway, I was kicking around this idea to see if we can objectively test the strength of a person's games, and I wanted some input from you all.

>What I like most about replaying great games with a computer analyzing is when someone makes a move that the computer thinks is no good, but after thinking about it longer you find the move was great. When I see that, I think the player was playing better than the computer.

>Take a game played by Fischer and have the computer analyze that game for, say, 3 minutes per move (test one). Then have the same program analyze the same game for 30 minutes per move (test two). Once this is done, determine how many times the computer, after the longer analysis, switched from a different recommendation to the move the great player actually played. You could also compare the evaluation given to the move in question with the evaluations of the move ultimately suggested by the computer and of the move played by the player.

>For example, take a 40-move game and say that after 3 minutes per move (3x40=120 minutes) the program agrees with 30 of Kasparov's moves and disagrees with 10. If we then let it run for 30 minutes per move and it agrees with all 40 moves, we could say Kasparov more likely than not would have "outplayed" that computer in a tournament game (not necessarily won, but just played better chess in that time period). Now suppose it comes back agreeing with 35 and disagreeing with 5 (for simplicity's sake, say the 5 it disagrees with here were disagreed with earlier as well). Does this mean they played about equally? Not necessarily. We would want to consider how drastic the differences were. If all the disagreements with Kasparov averaged 0.1 of a pawn, while the differences from test one to test two averaged 0.6 pawns, it would appear we could still say Kasparov played better than the test-one computer.

>If we can build a baseline like that, i.e., how much better Kasparov is playing than computer program X, we can then compare him to other players.

>Obviously I'm making the assumption that the longer a computer analyzes, the better the analysis. I guess we could use different time controls and compare and/or average the results.

>Note as well that this won't say who had more talent or who was the better player; it is just an attempt to see who played objectively better moves. Just as I wouldn't say the inventor of the halogen light bulb is a better inventor than Thomas Edison even if halogen bulbs are objectively better.

<snip>

I really like this new and innovative idea, although I would prefer the ultimate purpose to be to provide new information which could be used by the chess program developers, rather than to improve on Arpad Elo's analysis. Extensive testing of modern chess programs, to see how their moves compare to those of the past and present "chess greats," might be VERY revealing. This is not to imply that I think none of this has been done already, to some extent.
Surely it must have been. But perhaps not to the extent that Joe McCarro seems to be suggesting. What do other people think about this idea?

Bob D.
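For anyone who wants to try Joe's test in practice, here is a rough sketch of how the agreement count and the evaluation gaps could be computed. It assumes the python-chess library and a UCI engine; the engine path, the PGN file name, and the exact API calls are placeholders for illustration, and only the 3-minute and 30-minute limits come from Joe's example.

import chess
import chess.engine
import chess.pgn

ENGINE_PATH = "/usr/local/bin/stockfish"    # hypothetical engine path
PGN_PATH = "fischer_game.pgn"               # hypothetical game file
SHORT_LIMIT = chess.engine.Limit(time=180)  # "test one": 3 minutes per move
LONG_LIMIT = chess.engine.Limit(time=1800)  # "test two": 30 minutes per move

def agreement_test(pgn_path, player_color, limit):
    """Return (agreements, disagreements, eval_gaps) for one time limit."""
    with open(pgn_path) as f:
        game = chess.pgn.read_game(f)

    agreements, disagreements, eval_gaps = 0, 0, []
    with chess.engine.SimpleEngine.popen_uci(ENGINE_PATH) as engine:
        board = game.board()
        for played in game.mainline_moves():
            if board.turn == player_color:
                # Engine's preferred move and score at this time limit.
                info = engine.analyse(board, limit)
                if info["pv"][0] == played:
                    agreements += 1
                else:
                    disagreements += 1
                    # Size of the disagreement in centipawns: the engine's
                    # score for its own choice minus its score for the move
                    # the player actually made.
                    best_cp = info["score"].pov(player_color).score(mate_score=10000)
                    played_info = engine.analyse(board, limit, root_moves=[played])
                    played_cp = played_info["score"].pov(player_color).score(mate_score=10000)
                    eval_gaps.append(best_cp - played_cp)
            board.push(played)
    return agreements, disagreements, eval_gaps

if __name__ == "__main__":
    short = agreement_test(PGN_PATH, chess.WHITE, SHORT_LIMIT)
    long = agreement_test(PGN_PATH, chess.WHITE, LONG_LIMIT)
    print("3 min/move:  agreed", short[0], "disagreed", short[1])
    print("30 min/move: agreed", long[0], "disagreed", long[1])

Comparing the two runs would then give the two numbers Joe is after: how many of the player's moves the program converts to agreement when it thinks ten times longer, and how large the remaining disagreements are in evaluation terms.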