Author: Michael Yee
Date: 13:37:46 03/09/04
On March 09, 2004 at 16:05:15, Gian-Carlo Pascutto wrote:

>On March 09, 2004 at 15:37:30, Michael Yee wrote:
>
>You are fitting an evaluation function with tactical training data.
>You're not only tuning parameters, you're influencing everything,
>since speed and tree size also vary.
>
>The analogy is completely flawed.

[snip]

Wow. I was partly being facetious with my initial comments, but I was mostly serious. I certainly don't think the analogy is "completely" flawed, since that would invalidate a lot of reasonable ideas in machine learning.

If you parameterized your whole program, I don't see why a global search technique couldn't find the same weights that you hand-coded, or even better ones, given a nice large training set. For example, let f(x) = DS's performance in a tournament given parameter vector x. Then a search technique (e.g., tabu search) could be used to optimize f(x) over x. I admit that it could take a long time, but I don't think it's impossible. (I also think it would still work if f(x) were based on the ability to match GM moves from a large set of training positions.)

Michael
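As a sketch of what I mean, here is a minimal tabu search over a weight vector. The objective f here is a toy stand-in: in practice it would run DS in a tournament (or score GM-move agreement over training positions) with weights x, which is exactly why evaluating it is expensive. The IDEAL vector, step size, and tabu-list length are all made-up illustration values, not anything from DS.

```python
import random

# Hypothetical "true" weights the search should recover. A real f(x)
# would be tournament performance or GM-move-matching rate; this toy
# version just rewards closeness to IDEAL.
IDEAL = (1.0, 3.0, 3.25, 5.0, 9.0)

def f(x):
    return -sum((a - b) ** 2 for a, b in zip(x, IDEAL))

def neighbors(x, step=0.25):
    """Perturb one parameter at a time by +/- step."""
    for i in range(len(x)):
        for d in (-step, step):
            y = list(x)
            y[i] += d
            yield tuple(y)

def tabu_search(x0, iters=200, tabu_len=20):
    x = tuple(x0)
    best, best_val = x, f(x)
    tabu = []                      # recently visited points
    for _ in range(iters):
        # Move to the best non-tabu neighbor, even if it is worse
        # than x -- that is how tabu search escapes local optima.
        cands = [y for y in neighbors(x) if y not in tabu]
        if not cands:
            break
        x = max(cands, key=f)
        tabu.append(x)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if f(x) > best_val:
            best, best_val = x, f(x)
    return best, best_val

best, val = tabu_search((1.0, 1.0, 1.0, 1.0, 1.0))
print(best, val)
```

With a cheap f this converges quickly; the real cost in the chess setting is that each evaluation of f(x) means playing games, which is why I concede it could take a long time.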