Author: Andrew Dados
Date: 11:52:40 01/28/00
On January 28, 2000 at 13:56:38, Dann Corbit wrote:

>On January 28, 2000 at 12:07:17, David Paulowich wrote:
>
>>On January 28, 2000 at 07:27:54, Enrique Irazoqui wrote:
>>
>>>There is a degree of uncertainty, but I don't think you need 1000 matches of
>>>200 games each to have an idea of who is best.
>>>
>>>Fischer became a chess legend for the games he played between his comeback in
>>>1970 and the Spassky match of 1972. In that period he played 157 games that
>>>proved to all of us, without a hint of doubt, that he was the very best chess
>>>player of those times.
>>>
>>>Kasparov has been the undisputed best for many years. From 1984 until now he
>>>has played a total of 772 rated games. He needed fewer than half of those
>>>games to convince everyone about who is the best chess player.
>>>
>>>This makes more sense to me than the probability stuff of your QBasic program.
>>>Otherwise we would reach the absurdity of believing that all the rankings in
>>>the history of chess are meaningless, and that Capablanca, Fischer and
>>>Kasparov just had long streaks of luck.
>>>
>>>You must have thought along these lines too when you proposed the matches
>>>Tiger-Diep and Tiger-Crafty as being meaningful, in spite of their not being
>>>200,000 games long.
>>>
>>>Enrique
>>
>>I think we need to treat men and machines differently here. I can accept a
>>20-game match between two human players as conclusive, for the year it was
>>played, and a 400-game match between two computers would convince me. As long
>>as the computers have a completely different way of playing, looking at
>>thousands of times more positions than human players do, they may have to play
>>much longer matches to produce truly convincing results.
>
>I think both positions are incorrect. We see an experiment and assume it is
>repeatable because it repeated. I flip a penny twenty times and it comes up
>heads 18 out of 20. What are the odds it will be heads on the next flip? It is
>0.5, the same as if it had been 18 tails out of 20. We watch a brilliant game
>and think we can conclude from it that player x is much stronger than player y.
>The truth of the matter is that we probably understand the play of neither x
>nor y, since they are hundreds of times better than we are anyway.
>
>The ability of a player, whether man or machine, can be judged rationally only
>on a purely mathematical basis. Observing a few games and drawing a conclusion
>is the same sort of science as burning witches and eating mercury to live
>forever. It seemed like a good thing to do at the time, but it did not have the
>scientific basis it purported to possess.

Hello Dann!

While I don't want to argue the math here, your argument is devilish in nature.
It invalidates any sporting competition, since the outcome of a single game
(chess, soccer, basketball, you name it) is next to meaningless and gives us
little or no data about who is better, faster, stronger, or whatever. Any
tournament is run under certain rules, and obeying them and winning the
tournament gives us a 'champion'. Most people have no problem recognizing a
winner; some will always question the outcome with 'not enough data', 'luck',
'wrong book', bugs, flu, etc. You either agree on the initial definition of
'winner' or you just don't participate.

So the 'absolute mathematical ability' (what an ugly term) may never be known,
or only be known with huge errors, but we need those charts, lists and SSDF
ratings. And for that purpose any tournament produces valid results, imo. (So I
am allowed to draw my conclusions...)

-Andrew-
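
A minimal sketch of the kind of uncertainty being argued about, assuming a
simple binomial model with no draws (this is not Dann's QBasic program, and the
0.60 observed score is an illustrative assumption; the game counts 20, 157, 400
and 772 are the figures quoted above):

```python
# Rough 95% margin of error on an observed match score under a binomial
# model: the uncertainty shrinks only with the square root of the game count.
from math import sqrt

def score_margin(n_games, score_rate, z=1.96):
    """Approximate 95% margin of error on the observed score rate."""
    return z * sqrt(score_rate * (1.0 - score_rate) / n_games)

observed = 0.60  # assumed score of the apparently stronger side
for n in (20, 157, 400, 772):
    print(f"{n:4d} games: score {observed:.2f} +/- {score_margin(n, observed):.2f}")
```

With these assumptions, a 20-game match leaves a margin of roughly +/- 0.21, so
a 0.60 score is not clearly distinguishable from 0.50, while a 400-game match
narrows it to about +/- 0.05, which is what the "more games for computers"
argument is pointing at.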