Author: Michael Yee
Date: 15:37:50 04/27/05
On April 27, 2005 at 16:10:13, Jay Scott wrote:

>On April 27, 2005 at 10:35:28, Michael Yee wrote:
>
>>- learn rules/functions for when to extend or reduce search along a given line
>
>This would be a good area. Most of the attempts so far are in the line of general algorithms, rather than attempts to automatically find features of the situation which indicate that search reduction is likely to be a good idea. Also, good search reduction can be a huge win, and I believe there's some potential to come up with a breakthrough idea.

Good point. The paper Remi referred to (Learning Extension Parameters in Game-Tree Search) learned parameters for an existing search strategy (fractional extensions). Maybe a more general approach could be tried.

>>- learn when it's safe to prune a given line (related to previous idea)
>
>That means "reduce search to nothing", so it's a special case of the last one.
>
>>- learn parameters for static (leaf node) evaluation function (although a lot has already been done here, I think)
>
>This is the most popular thing to try, because it's easy to play around with. There have been moderate successes. A success would be an automatically-tuned evaluator, or even evaluator component, which was competitive with a hand-tuned one and takes less work to retune when adding new parameters. A bigger success would be to convince conservative chess programmers that this was true, so that they used the method!

Yes, this seems like a good place to start. And as you mention below, it would be a necessary component of some other learning tasks (feature construction).

>>- learn/construct/discover features for a static evaluation function (some not-so-successful work may have been done here with neural networks?)
>
>This is wide open. There's no previous work that I consider successful. I believe it's easy in principle; it's only hard in practice.
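The "automatically-tuned evaluator" idea above can be sketched as fitting linear evaluation weights to game outcomes. This is only an illustrative toy, not anything from the thread: the features, data, and training loop are all made up, using logistic regression with simple stochastic gradient descent.

```python
# Illustrative sketch: tune static-evaluation weights from game outcomes.
# The features (e.g. material balance, mobility) and data are invented
# for the example; real tuning would use positions from played games.
import math

def predict(weights, features):
    """Map a linear evaluation score to an expected result in [0, 1]."""
    score = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))

def tune(positions, results, weights, lr=0.1, epochs=200):
    """Stochastic gradient descent on squared prediction error."""
    for _ in range(epochs):
        for feats, result in zip(positions, results):
            p = predict(weights, feats)
            grad = (p - result) * p * (1.0 - p)  # d(error)/d(score)
            for i, f in enumerate(feats):
                weights[i] -= lr * grad * f
    return weights

# Toy data: two-feature positions with outcomes from White's point of
# view (1 = win, 0 = loss).
positions = [[1.0, 0.5], [-1.0, -0.2], [2.0, 0.1], [-2.0, -0.4]]
results = [1.0, 0.0, 1.0, 0.0]
weights = tune(positions, results, [0.0, 0.0])
```

After tuning, positions that favored the winning side score above 0.5. Retuning after adding a feature is just appending a column and rerunning, which is the "less work to retune" property Jay describes.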
>:-) But it does depend on getting the previous step to work, tuning, so it's not the thing to try first. You have to be able to tune your constructed features so you can know if they're any good--deserving of a high weight in the evaluator, rather than a near-zero weight.
>
>>- learn rules for move ordering (i.e., that try to search best moves from a given node first to achieve more efficient cut-offs)
>
>Unlikely to be a big win, because existing heuristics are highly successful.

I guess it'd still be interesting (for me personally) to see if the automatic method's performance could match the existing heuristics. In any case, move ordering could be handled alongside the extension/reduction task.

>>Specifically, I'm curious which areas are already "mature", which seem promising/new, or even if you have any other ideas/references.
>
>In my opinion, no area of machine learning in chess is mature. Some machine learning techniques are used in production chess programs, but they are very limited. The most common use is learning opening books, which commonly work by rote learning and ad hoc score adjustments.
>
>I'd also note that search control and position scoring interact. Each has to be tuned in the context of the other. If you're just starting out, it'll be easier to work on one or the other and not both.
>
> Jay

Thanks for your feedback. I've also checked out your "machine learning in games" site periodically, and found it very helpful.

Michael
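For context on the "highly successful" hand-tuned move-ordering heuristics Jay mentions, one standard example is the history heuristic: credit moves that caused beta cutoffs and try them first later. This is a minimal sketch; the (from-square, to-square) move encoding and the depth-squared bonus are illustrative choices, not a specific engine's implementation.

```python
# Minimal sketch of the history heuristic for move ordering; moves are
# (from_square, to_square) pairs on a 0-63 board, an assumption made
# here for illustration.
history = [[0] * 64 for _ in range(64)]

def record_cutoff(move, depth):
    """Credit a move that caused a beta cutoff; deeper cutoffs count more."""
    frm, to = move
    history[frm][to] += depth * depth

def order_moves(moves):
    """Try previously successful moves first to get earlier cutoffs."""
    return sorted(moves, key=lambda m: history[m[0]][m[1]], reverse=True)

record_cutoff((12, 28), depth=5)  # e2-e4 caused a cutoff at depth 5
record_cutoff((6, 21), depth=2)   # g1-f3 caused a shallower cutoff
moves = [(6, 21), (1, 18), (12, 28)]
print(order_moves(moves))  # (12, 28) first, then (6, 21), then (1, 18)
```

An automatic method would have to beat this kind of cheap, well-understood baseline, which is why Jay rates it an unlikely win.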