Author: Uri Blass
Date: 01:20:23 12/27/01
On December 27, 2001 at 02:13:34, Tom Kerrigan wrote:

>On December 26, 2001 at 17:47:12, Christophe Theron wrote:
>
>>I don't think so, but I think at some point the only way to improve will be to
>>incorporate a way for the program to learn without the programmer, to remember
>>its experience and improve on it, and to adapt its play to its opponent.
>
>I don't see this as likely because of the numbers involved.
>
>For a computer to recognize features, it has to loop over them. And there might
>as well be an infinite number of features possible on a chess board, so that
>loop is going to take a while.
>
>The human brain is sloppy and bad at this task, so maybe there's some way to do
>sloppy and bad learning that does better than what we have now, but I wouldn't
>know how to go about that.
>
>-Tom

I think there is already a possible way to build a program that learns without the programmer.

Even a program that uses only a piece-square table relies on a lot of numbers in its evaluation. I am sure these numbers are not optimal, and it is possible to get a small improvement by changing only one number. The program could test automatically whether changing one number is a good idea or a bad idea.

The problem is how to test whether a change is positive or negative, and I believe it can be done with the right test suite. The right test suite should not be one of the known tactical test suites; its target should simply be avoiding tactical mistakes. By a tactical mistake I mean a move that programs consider a mistake after a long search, and humans may well see part of these mistakes as positional mistakes.

The first step in generating the right test suite is simply to analyze a big database of games in order to find tactical mistakes. The second step is to find all the possible tactical mistakes in the relevant positions, including tactical mistakes that were not actually played in the games. The program's target in the test should simply be to avoid the tactical mistake, and a change in the evaluation is probably a positive one if it helps the program avoid more mistakes.

The main problem is generating the test suite, and I believe a suite of 100,000 positions could be built if we find a lot of volunteers to donate computer time for the job.

Uri
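
A rough sketch of the automatic test described above, in Python rather than any particular engine's code. All names are hypothetical: pst stands in for the evaluation parameters being tuned, search_best_move for the engine's fixed-depth search that reads them, and each suite entry pairs a position with the known tactical mistake the engine must avoid.

    # Hedged sketch: tune one piece-square-table entry against an
    # "avoid the tactical mistake" suite. search_best_move() is a stand-in
    # that must be replaced by the real engine's search.

    pst = [0] * 64                      # hypothetical evaluation parameters

    def search_best_move(position, depth):
        """Placeholder for the engine's fixed-depth search (it reads pst)."""
        raise NotImplementedError("plug the real engine in here")

    def mistakes_avoided(suite, depth):
        """Count positions where the engine does NOT choose the known mistake."""
        return sum(1 for position, mistake in suite
                   if search_best_move(position, depth) != mistake)

    def tune_one_entry(suite, square, delta, depth=8):
        """Try +delta and -delta on one table entry and keep a change only if
        the engine then avoids strictly more mistakes than before."""
        base = mistakes_avoided(suite, depth)
        original = pst[square]
        for change in (+delta, -delta):
            pst[square] = original + change
            if mistakes_avoided(suite, depth) > base:
                return True             # improvement: keep the new value
        pst[square] = original          # no improvement: restore the old value
        return False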
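A similarly hedged sketch of the first step, scanning a database of games for tactical mistakes. The 150-centipawn margin and the search depth are arbitrary choices, and game_positions(), search_score(), and position.make_move() are stand-ins for a PGN reader and the engine's own search and move maker.

    MISTAKE_MARGIN = 150    # centipawns lost before a move counts as a tactical mistake
    DEEP_DEPTH = 14         # the "long search" used to judge mistakes

    def game_positions(game):
        """Placeholder: yield (position, played_move) pairs from one game."""
        raise NotImplementedError("plug a PGN reader in here")

    def search_score(position, depth):
        """Placeholder: deep-search score for the side to move."""
        raise NotImplementedError("plug the real engine in here")

    def find_tactical_mistakes(games):
        """Yield (position, played_move) pairs where the played move loses at
        least MISTAKE_MARGIN compared with the best move in that position."""
        for game in games:
            for position, played_move in game_positions(game):
                best = search_score(position, DEEP_DEPTH)
                # Score after the played move, from the mover's point of view.
                after_played = -search_score(position.make_move(played_move), DEEP_DEPTH)
                if best - after_played >= MISTAKE_MARGIN:
                    yield position, played_move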