Author: Steven Edwards
Date: 10:30:18 05/12/05
On May 12, 2005 at 12:58:00, Matthew Hull wrote:

>I see this process as similar to NN technology in terms of the utilization of
>processing resources. The bulk of the compute resources are consumed ahead of
>time, and the generalized "understanding" stored for later reference. The
>application of these learned things will require further effort in a game, but
>most of the effort will have already been computed and stored in generalized
>form. A traditional AB searcher remembers almost nothing and must re-compute
>its "knowledge" frequently, and afterward remembers nothing except what may be
>in the transient hash table. The exception is the EGTB, whose compute resources
>were already consumed and don't need recalculation. The EGTB is the only thing
>"remembered" between games, except some basic position/book learning. But even
>the hash table and EGTB are not really "understanding", just rote memory.
>
>The conservation of computing effort is what I like most about your approach,
>and other approaches like NN.

I like it because it's a well-tested technique borrowed from the AI literature, and because I can take a nap while it's doing its work. The alternative of fiddling with weighting factors manually is merely low-level tinkering; at least GA is high-level tinkering. Deep Blue used an automated technique for optimizing positional evaluation coefficients, but those weights are applied after the fact, outside of move selection. My idea is to use the GA output to assist with the control of the search by focusing move selection.
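A minimal sketch of the kind of GA-based tuning being contrasted with manual weight fiddling. Everything here is illustrative: the weight vector, the hidden `TARGET`, and the fitness function (in a real engine, fitness would be a match score over test games, not a distance to a known answer) are all assumptions, not anything from the posts above.

```python
import random

random.seed(42)

# Hypothetical "true" evaluation weights the GA should rediscover.
# In practice there is no known target; fitness comes from game results.
TARGET = [1.0, 3.0, 3.25, 5.0, 9.0]

def fitness(weights):
    # Stand-in for "how well does an engine with these weights play":
    # negative squared distance to the hidden target (higher is better).
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def crossover(a, b):
    # One-point crossover: splice a prefix of one parent onto the other.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(w, rate=0.2, sigma=0.5):
    # Perturb each gene with small Gaussian noise at a fixed rate.
    return [x + random.gauss(0, sigma) if random.random() < rate else x
            for x in w]

def evolve(pop_size=40, generations=60):
    pop = [[random.uniform(0, 10) for _ in range(len(TARGET))]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                # parents survive (elitism)
    return max(pop, key=fitness)

best = evolve()
```

The appeal described above is visible even in this toy: once `fitness` is defined, the loop runs unattended, which is exactly the "take a nap while it works" property. Redirecting the evolved output from post-hoc evaluation weights into search control (ordering or pruning candidate moves) would only change what `fitness` measures, not the loop itself.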