Author: Andrei P
Date: 00:32:58 12/02/04
On December 01, 2004 at 23:25:29, Ricardo Gibert wrote:

>On December 01, 2004 at 17:37:39, Andrei P wrote:
>
>>In Livshitz's book "Test Your Chess IQ: First Challenge", which tests human
>>tactical skills, he gives a table showing a correlation between the percentage
>>of puzzles solved and Elo strength. To create the table, the author tested the
>>tactics in the book on humans with known Elo and then fitted the data into the
>>table (see below).
>>
>>% solved    Elo
>>  100%     2200
>>   90%     2000
>>   80%     1800
>>   70%     1600
>>   60%     1400
>>   50%     1200
>>
>>I thought that, in general, one should be able to treat puzzles like players of
>>a given strength: the stronger the puzzle, the higher its "Elo", so one earns a
>>higher Elo performance by solving higher-rated puzzles, and so on. But according
>>to this table, the puzzles do not behave like human players. For example, one
>>could surmise that the average Elo of the puzzles in this "tournament" is 1200
>>(humans rated 1200 solve 50%), so according to FIDE expectancy a 1600 player
>>should score 92% against 1200-rated opposition, yet he scores only 70% in the
>>table.
>>
>>Why is the relation between percentage solved and Elo different from that for
>>human-human matches? If anybody has references on how puzzles are rated, that
>>would be great.
>>
>>Thank you, Andrei
>
>You've assumed that the rating that corresponds to a player's ability to win
>games corresponds exactly to his ability to solve positions. Bad assumption.
>
>To see why this is significant, replace the solving of positions with the
>winning of a chess match. The outcome of a 4-game match is pretty random even
>when the participants are rated 100 points apart, but if the match is 100 games
>long, then the player with the rating advantage has a virtual lock. Evidently a
>player's ability to win an individual game does not correspond directly to his
>ability to win a long match.
>
>Similarly, a player's ability to solve an individual position does not
>correspond directly to his ability to win an individual game.
>
>There is some relationship, of course. The favorite is still the favorite, but
>the percentages will not remain the same; they have to be scaled by an
>appropriate function.

Solving a chess puzzle is a "random" event in the sense you describe: for a
given player there is an expectancy that he will solve it, and that expectancy
depends on his rating, just as in a chess match there is a certain expectancy
that a player beats given opposition. The two situations look the same from a
statistical point of view, so Elo should scale up the same way. The only
difference I see is that with a chess puzzle one of the "players" is extremely
stable (the puzzle, whose rating does not fluctuate) while the other is a
"regular", unstable human. But if that affected how performance scales with
rating strength compared to human-human matches, then we should see the same
problem when rating computer-human matches, where the machine is the far more
stable player.

Actually, I think I understand what you are saying: "tactical skill" does not
necessarily scale the same way as overall performance. If this is the case,
then Livshitz's table contains an interesting observation: OTB performance in
the 1200-2200 range involves a much smaller spread in tactical skill. Players
400 Elo apart in standard games actually differ by only about 150 "tactical"
Elo points! (A rough check of both numbers follows below.)
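A quick numerical check of the two figures above, as a minimal sketch using the
standard logistic Elo expectancy formula (the formula itself is standard; the
helper names here are just illustrative). The 92% in the original question is
consistent with FIDE's normal-curve tables for a 400-point gap; the logistic
form used by most rating code gives about 91%, close enough for this argument:

    import math

    def elo_expectancy(diff):
        """Expected score for a player rated `diff` points above the opponent."""
        return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

    def elo_diff(expectancy):
        """Inverse: rating advantage implied by a given expected score."""
        return -400.0 * math.log10(1.0 / expectancy - 1.0)

    print(elo_expectancy(400))  # ~0.91 (FIDE's table gives 0.92 here)
    print(elo_diff(0.70))       # ~147, i.e. roughly the "150 tactical Elo points" above

So a 70% solve rate against nominally 1200-rated puzzles corresponds to a
"tactical" performance of only about 1350, even for a 1600 OTB player.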
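Ricardo's match-length point can be checked the same way. A sketch under two
simplifying assumptions of mine (independent games, draws ignored): with a
100-point rating edge the favorite wins a single game about 64% of the time,
yet that same per-game edge behaves very differently over 4 games than over
100 games:

    from math import comb

    def p_win_match(p_game, n_games):
        """Probability the favorite takes strictly more than half the games,
        treating each game as an independent win/loss (draws ignored)."""
        need = n_games // 2 + 1
        return sum(comb(n_games, k) * p_game ** k * (1.0 - p_game) ** (n_games - k)
                   for k in range(need, n_games + 1))

    p = 1.0 / (1.0 + 10.0 ** (-100 / 400.0))  # ~0.64 per game for a +100 Elo edge
    print(p_win_match(p, 4))    # ~0.55: a 4-game match is close to a toss-up
    print(p_win_match(p, 100))  # ~0.997: a 100-game match is a virtual lock

The per-game expectancy stays fixed while the match expectancy stretches toward
0 or 1 as the match grows; the same kind of rescaling separates per-puzzle
expectancy from per-game expectancy.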
So, at least at this level of human play, other factors are more critical in
real games (opening preparation, positional knowledge, etc.). Or, to turn it
around: one may significantly improve performance by beefing up on tactics, but
in practice that progress is very slow.

Andrei