Computer Chess Club Archives



Subject: Re: Interview with Christophe Theron (from ChessBase) about GT2 and CT14

Author: David Rasmussen

Date: 04:40:21 04/18/01



But everyone today (including you) is writing programs that _do_ take into
account the realistic time controls that chess will be played at.

As we all know, there are basically two types of optimization. One is algorithmic:
it lowers the time complexity of a program, either by moving it to a lower
complexity class altogether (say from exponential to polynomial) or by improving
the constants involved in its asymptotic complexity, for example by lowering the
(average) value of a, b or c in a cost of the form a*b^c. The other is the plain
constant-factor speedup of the implementation, which leaves the complexity
untouched. Of course we are typically much more interested in lowering c than in
lowering a. But if we could reduce a to 0.00000000000001 of its original value
while doubling c, that would often be a good idea in practice, given the time
controls that chess programs work under. It would not be a good idea if the time
control were 1 billion years per move, given today's hardware.
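
To put rough numbers on that trade-off, here is a toy calculation. All the figures
are invented purely for illustration: an assumed effective branching factor of 6,
the constant cut to 1e-14 while the exponent doubles.

    // Toy comparison of two a*b^c cost models. Every parameter here is made up
    // only to illustrate the trade-off discussed above.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double b = 6.0;  // assumed effective branching factor
        for (int d = 2; d <= 20; d += 2) {
            double original = 1.0   * std::pow(b, d);      // a = 1, exponent d
            double modified = 1e-14 * std::pow(b, 2 * d);  // a = 1e-14, exponent doubled
            std::printf("d=%2d  original=%9.3g  modified=%9.3g  ->  %s is cheaper\n",
                        d, original, modified,
                        modified < original ? "modified" : "original");
        }
        return 0;
    }

With these made-up parameters the "worse complexity" version stays cheaper up to
roughly depth 18, which covers the depths actually reached under tournament time
controls on today's hardware, and it only loses beyond that.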

Chess programming is in an odd position right now, given today's hardware and the
current state of chess programs themselves, because we are in a situation where the
constants _are_ important. Very important. Often more important than the algorithmic
complexity. It is just like using bubblesort instead of quicksort on a small dataset:
even though bubblesort has worse complexity, its constants are better, so on the
dataset at hand it performs better. The nature of chess, of modern chess programs and
of modern computers fits together in a way that makes the time controls used in
practice very short, in the sense that constant-time optimizations make much of the
difference.
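
The sorting analogy is easy to make concrete. Here is a small sketch (not from the
interview or the post, just an illustration) of a quicksort that hands small
partitions over to bubblesort, because below some size the simpler sort's smaller
constants win. The cutoff of 16 is an arbitrary value you would tune by measuring,
which is exactly the point about constants.

    #include <cstdio>
    #include <utility>
    #include <vector>

    // Plain bubblesort on v[lo..hi]: O(n^2), but with very small constants.
    static void bubble_sort(std::vector<int>& v, int lo, int hi) {
        for (int end = hi; end > lo; --end)
            for (int i = lo; i < end; ++i)
                if (v[i] > v[i + 1]) std::swap(v[i], v[i + 1]);
    }

    // Quicksort that falls back to bubblesort once a partition is small enough
    // for the lower constants to beat the better asymptotic complexity.
    static void hybrid_quicksort(std::vector<int>& v, int lo, int hi) {
        if (hi - lo < 16) {                     // small range: constants dominate
            bubble_sort(v, lo, hi);
            return;
        }
        int pivot = v[lo + (hi - lo) / 2];
        int i = lo, j = hi;
        while (i <= j) {                        // partition around the pivot
            while (v[i] < pivot) ++i;
            while (v[j] > pivot) --j;
            if (i <= j) std::swap(v[i++], v[j--]);
        }
        if (lo < j) hybrid_quicksort(v, lo, j);
        if (i < hi) hybrid_quicksort(v, i, hi);
    }

    int main() {
        std::vector<int> v = {5, 3, 9, 1, 7, 2, 8, 6, 4, 0};
        hybrid_quicksort(v, 0, (int)v.size() - 1);
        for (int x : v) std::printf("%d ", x);  // prints 0 1 2 ... 9
        std::printf("\n");
        return 0;
    }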

I think that is what Ernst Heinz means by scalable search: making changes that
improve the complexity of the search itself, so that the program will eventually
perform better even if the scale of the time available changes dramatically.

If we had a computer that ran 1000000000000000000 times faster than today's
computers, we wouldn't write chess programs the way we do today. We wouldn't spend
our effort on constant-factor speedups. On the other hand, we would welcome _any_
complexity-decreasing improvement whatsoever. For example, we would use Enhanced
Transposition Cutoff at all nodes.
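
For readers who haven't met it: Enhanced Transposition Cutoff means probing the
transposition table for every successor of a node before searching any of them, and
cutting off immediately if one of the stored bounds already proves the node good
enough. A rough sketch of the idea inside a fail-hard negamax search follows; the
surrounding engine pieces (Position, Move, TTEntry, tt_probe, make/unmake,
generate_moves, the limit ETC_MIN_DEPTH) are hypothetical placeholders, not anything
from the post.

    // Sketch of Enhanced Transposition Cutoff (ETC) in a fail-hard negamax search.
    // All types and helpers are placeholders standing in for a real engine; only
    // the ETC logic itself is the point.
    int search(Position& pos, int depth, int alpha, int beta) {
        if (depth == 0)
            return evaluate(pos);

        std::vector<Move> moves = generate_moves(pos);

        // ETC: probe the transposition table for every successor before searching
        // any of them. Probing all successors costs real time, so engines today
        // only do it well away from the leaves (depth >= ETC_MIN_DEPTH); with
        // unimaginably fast hardware that restriction could simply be dropped.
        if (depth >= ETC_MIN_DEPTH) {
            for (const Move& m : moves) {
                pos.make(m);
                const TTEntry* e = tt_probe(pos.hash_key());
                pos.unmake(m);
                // The successor is scored from the opponent's point of view. An
                // upper bound e->score on the successor with -e->score >= beta
                // proves this node is already >= beta, so cut off without searching.
                if (e && e->depth >= depth - 1 &&
                    e->bound == UPPER_BOUND && -e->score >= beta)
                    return beta;
            }
        }

        // Normal alpha-beta search of the successors.
        for (const Move& m : moves) {
            pos.make(m);
            int score = -search(pos, depth - 1, -beta, -alpha);
            pos.unmake(m);
            if (score >= beta)
                return beta;
            if (score > alpha)
                alpha = score;
        }
        return alpha;
    }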

Maybe someone should set out to write this kind of scalable chess program, and only
when it is "finished" optimize it for constant-factor speed improvements.


