Author: Sune Fischer
Date: 03:34:12 12/29/01
I think if you took 1 million GM games and ran a history "heuristic" over them, we would get an "idea" of how often a GM places the knights on c3 or e5 or.... Some basic analysis would then give you a good table score.

Neural nets are just not suitable for chess IMO. NNs tend to ignore noise (which is really their strength), and the slightest difference on the board is often the difference between winning and losing.

But I still say table scores should only be a small factor in the overall evaluation. Tactics and other kinds of positional stuff, like mobility, pawn structure, king safety...., are more important because they adapt to the individual game.

On December 29, 2001 at 06:15:47, Steve Maughan wrote:

>Tom,
>
>I had a similar idea but haven't got round to testing it. I think it possibly
>will work. In many respects it is analogous to training Neural Networks via
>Backpropagation. It's probably worth having a search on the web for info on
>backpropagation training as it may trigger some thought processes.
>
>Another idea may be to run this for a large number of positions (~10,000)
>with a shallow search (4 ply) and collect all of the coefficient deltas,
>applying the changes at the end. This may give a more stable situation. I
>don't think you need a particularly deep search for this to work, since all
>you're trying to do is improve a static evaluation with a search + static
>evaluation - i.e. any amount of searching will be better than none at all.
>It's also probably better to use positions where the best move is not a
>capture. Some people may criticise this as it will not find the optimal
>evaluation, but I think it may at least improve it - and that's a start.
>
>Just my random thoughts! Let us know how it works out!
>
>Regards,
>
>Steve
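The frequency-counting idea above can be sketched in a few lines. This is only an illustration, not anyone's actual implementation: the game format (each position as a dict mapping squares like "e5" to piece letters like "N") and the `scale` factor are my assumptions, and a real tool would read PGN instead.

```python
from collections import Counter

def knight_square_table(games, scale=50):
    """Build a crude knight piece-square bonus table from observed
    placements in a game collection.

    `games` is a hypothetical format: an iterable of games, each a
    list of positions, each position a dict mapping square names
    ("e5") to piece letters ("N" for a white knight).
    """
    counts = Counter()
    total = 0
    for game in games:
        for position in game:
            for square, piece in position.items():
                if piece == "N":
                    counts[square] += 1
                    total += 1
    # Normalize frequencies into a small centipawn-style bonus, so
    # the table stays a minor term next to tactics and mobility.
    return {sq: round(scale * counts[sq] / total) for sq in counts}

# Tiny illustrative sample: two "games" of one position each.
games = [
    [{"c3": "N", "e5": "N"}],
    [{"c3": "N", "f3": "N"}],
]
table = knight_square_table(games)
```

With this sample, c3 appears in half the observed knight placements, so it gets the largest bonus, matching the intuition that frequently chosen GM squares score well.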
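Steve's quoted procedure — shallow-search many positions, accumulate coefficient deltas, and apply them only at the end — can be sketched as a batch delta-rule update. This is a hedged sketch, not his code: it assumes a linear evaluation `sum(w[i] * f[i])`, and `search`, `static_eval`, `features`, and `lr` are all illustrative names I introduced.

```python
def tune_weights(weights, positions, features, search, static_eval, lr=0.5):
    """One batch pass of the quoted tuning idea: accumulate
    coefficient deltas over many positions, apply them at the end.

    Assumes (hypothetically) a linear evaluation
        static_eval(p, w) = sum_i w[i] * features(p)[i],
    a search(p) returning a shallow-search score, and positions
    whose best move is quiet (not a capture).
    """
    deltas = [0.0] * len(weights)
    for p in positions:
        target = search(p)                  # e.g. 4-ply search score
        current = static_eval(p, weights)   # static evaluation
        error = target - current
        f = features(p)
        # Delta rule: for a linear eval the gradient of the score
        # with respect to w[i] is just the feature value f[i].
        for i in range(len(weights)):
            deltas[i] += lr * error * f[i]
    # Apply the accumulated changes only after the whole batch,
    # which is what gives the "more stable situation".
    return [w + d for w, d in zip(weights, deltas)]

# Toy check: the search "truth" is 2*p, the static eval is w[0]*p,
# so one batch pass should move the weight from 1.0 toward 2.0.
new_w = tune_weights([1.0], [1.0],
                     features=lambda p: [p],
                     search=lambda p: 2 * p,
                     static_eval=lambda p, w: w[0] * p)
```

Deferring the update to the end of the batch is the key design choice: per-position updates with a 4-ply oracle would be noisy, while summing ~10,000 deltas averages that noise out before the coefficients move.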