Author: KarinsDad
Date: 09:34:05 07/12/99
On July 12, 1999 at 11:45:27, Christophe Theron wrote:

[snip]

>>BTW, I was being a little facetious here, but the point is valid. EVERY
>>superGM game should be analyzed move by move by the programs in order to
>>determine whether or not the programs have any chance of coming up with
>>the same move. It is only by understanding why superior human players
>>make a given move that one can start to understand how to improve the
>>programs beyond their 1700-level chess knowledge of today. The reason I
>>say 1700 (probably a high estimate) is that programs do not have REAL
>>sophisticated chess knowledge in them. If I could calculate 100 kNPS,
>>then I would be about a 2600 or 2700 player as well.
>>
>>The only way to get programs better is to understand the best moves of
>>the best players on the planet. To do that, you should be talking about
>>the games and tournaments of those players. To draw the line between the
>>superGMs and the programs / hardware / algorithms / cctournaments is to
>>limit the breadth of where you can actually go with computer chess. Why
>>should we limit our minds?
>>
>>KarinsDad :)
>>
>>PS. An interesting experiment may be to limit various programs to 6 ply
>>(the average distance an average player may search) and see how well
>>they perform. From this, a rough estimate of a given program's chess
>>knowledge level could be made. This experiment has probably been done
>>before at various ply. Does anyone know whether it has been done and
>>what the results were?

>I can give some data. Chess Tiger 11.8 played in April in a human
>tournament. I was using a 386 SX 20MHz notebook and 2MB hash tables.
>
>On this computer, and given the time control (game in 30 minutes), Tiger
>was only able to search between 5 and 7 plies deep, say 6 plies on
>average.
>
>Tiger won this tournament with 6.5 points out of 7. The FIDE Elo
>performance was above 2000.
>
>So I don't think my program has "1700-level chess playing knowledge".
>And I think this applies as well to many other good chess programs. I
>think Rebel or Genius could easily have won the tournament too.
>
>   Christophe

Christophe,

Thanks for the information. It just goes to show how far an estimate will
get you. Hopefully, we will get more results from other people.

One area we cannot easily measure is how much strength the programs gain
from their opening books. So, when assessing the knowledge of the engine
itself (as opposed to its opening book and tablebases), things are a
little more difficult. For example, in the low-rated tournament you
mentioned, Chess Tiger probably did not get into much trouble out of the
opening, whereas its opponents may have.

Another area is tactics. Of course, the program will not make any serious
tactical mistakes within the ply it is searching, whereas humans may. So,
overall, we do have to take your results (and anyone else's) with a grain
of salt, since we cannot separate the playing strength of humans (or
programs, for that matter) into tactics (the program's strength) and
knowledge (the human's strength).

But maybe if we compile some results like these, we will get an
approximate idea (this is a very abstract and theoretical area) of how
much chess "strength" comes from powerful computers and how much comes
from understanding the mechanics of the game.

KarinsDad :)
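
A minimal sketch of the two experiments discussed above (running an engine
at a fixed ply limit, and checking move by move whether it finds the
superGM's choice), assuming a modern setup with the python-chess library
and any UCI engine; the engine path, PGN file name, and helper function
are hypothetical placeholders, and none of this tooling existed in 1999:

    import chess
    import chess.engine
    import chess.pgn

    ENGINE_PATH = "./engine"        # placeholder: path to any UCI engine
    PGN_PATH = "supergm_games.pgn"  # placeholder: a file of superGM games
    DEPTH = 6                       # the fixed ply limit proposed in the PS

    def match_rate(engine_path, pgn_path, depth):
        """Fraction of positions where the depth-limited engine picks the GM's move."""
        engine = chess.engine.SimpleEngine.popen_uci(engine_path)
        matches = total = 0
        try:
            with open(pgn_path) as pgn:
                while True:
                    game = chess.pgn.read_game(pgn)
                    if game is None:
                        break
                    board = game.board()
                    for played in game.mainline_moves():
                        # Cap the search at a fixed depth rather than a time
                        # limit, so raw hardware speed drops out of the test.
                        result = engine.play(board, chess.engine.Limit(depth=depth))
                        total += 1
                        if result.move == played:
                            matches += 1
                        board.push(played)
        finally:
            engine.quit()
        return matches / total if total else 0.0

    print("Depth-%d match rate: %.1f%%"
          % (DEPTH, 100 * match_rate(ENGINE_PATH, PGN_PATH, DEPTH)))

Fixing the depth rather than the time control is what makes this a probe of
the engine's knowledge: two programs searching the same 6 plies differ only
in what they know, not in how fast they calculate.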
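As for how 6.5/7 becomes a 2000+ performance: one common linear convention
(FIDE's own conversion table gives broadly similar numbers, and the 1600
opponent average below is purely hypothetical, since the actual field is
not given) is

$$R_{\mathrm{perf}} \approx \bar{R}_{\mathrm{opp}} + 400\,\log_{10}\frac{p}{1-p}, \qquad p = \frac{6.5}{7} \approx 0.93$$

With $p/(1-p) = 13$ and $400\,\log_{10}(13) \approx 446$, even a
hypothetical 1600-rated field would put the performance near 2050,
consistent with "above 2000".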