Computer Chess Club Archives


Subject: Re: Interview with Christophe Theron (from ChessBase) about GT2 and CT14

Author: Uri Blass

Date: 13:01:06 04/18/01



On April 18, 2001 at 14:33:46, David Rasmussen wrote:

>On April 18, 2001 at 13:36:18, Uri Blass wrote:
>
>>On April 18, 2001 at 07:40:21, David Rasmussen wrote:
>>
>>>But everyone today (including you) is writing programs that _do_ take into
>>>account the realistic time controls that chess will be played at.
>>>
>>>As all of us know, there are basically two types of optimization: an algorithmic
>>>one that lowers the time complexity of a program, either by taking it to a
>>>lower complexity class altogether (say from exponential to polynomial), or one
>>>that changes the constants involved in the asymptotic complexity of the program,
>>>for example in the case of a*b^c, lowering the (average) value of a, b or c. Of
>>>course we are typically much more interested in lowering c than lowering a. But
>>>if we could reduce a to 0.00000000000001 of its original value, while
>>>increasing c to twice its value, it would often be a good idea in practice,
>>>given the time controls that chess programs work under.
>>
>>Yes, but it is impossible to do that in chess, because it would mean that you
>>could calculate the first plies millions of times faster, and that a simple
>>computation like a+b would be slower than calculating the first plies.
>>
>>It may be possible to reduce a and increase c, but it seems that this would
>>make the program weaker, because computers are very fast today.
>>
>>Uri
>
>I don't agree. It is like quicksort versus bubblesort. One is a*n*log(n) on
>average, the other is b*n^2 on average, but b is so much lower than a that for
>small datasets (say fewer than 30 elements), bubblesort will be quicker, for
>some fixed type of data, say integers.

I know that, but the point is that programs are very fast, and every algorithm
in chess that is better for deep search is probably also better on today's
hardware.
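
The quicksort-versus-bubblesort crossover being discussed can be sketched
numerically. The constants a and b below are illustrative assumptions, not
measurements of any real implementation:

```python
import math

def quicksort_cost(n, a=1.0):
    # Average-case cost model a*n*log2(n); a is an illustrative constant.
    return a * n * math.log2(n) if n > 1 else a

def bubblesort_cost(n, b=0.25):
    # Average-case cost model b*n^2; b chosen smaller than a for illustration.
    return b * n * n

def crossover(a=1.0, b=0.25, n_max=10_000):
    # First n at which the n*log(n) algorithm becomes cheaper than n^2.
    for n in range(2, n_max):
        if quicksort_cost(n, a) < bubblesort_cost(n, b):
            return n
    return None

print(crossover())  # -> 17: below this n, the lower constant b wins
```

With these assumed constants the curves cross at n = 17; David's argument is
that chess search, under tournament time controls, sits to the left of that
crossover, while Uri's is that modern hardware has pushed it well to the right.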
>
>If you draw the two functions with a larger a value, you will see that at some
>point they cross, and then n*log(n) wins. You know this, of course. But my point
>is that the domain of chess, modern chess program implementation and modern
>hardware, is in a state like bubblesort at the very beginning of the graph (1-5
>data elements). That means the constants are often far more important than the
>algorithmic complexities, because the datasets are _very_ small (the time
>controls are far too short).

I guess that was the case for the hardware of 1970, but I guess it is not true
for the hardware of today, unless you play games at 0.001 seconds per move.
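
This objection can be made concrete with the a*b^c cost model quoted above.
The branching factor and depths below are assumed values for illustration, not
measurements: shrinking a to 0.00000000000001 of its value while doubling c
only pays off while b^c stays below 10^14.

```python
def cost(a, b, c):
    # Simple search-cost model: constant factor a times branching b to depth c.
    return a * b ** c

def tradeoff_pays_off(b, c, shrink=1e-14):
    # Shrinking a by `shrink` while doubling the exponent c gives a new cost of
    # (shrink*a) * b**(2*c); it beats a * b**c exactly when shrink * b**c < 1.
    return shrink * b ** c < 1

# With an assumed effective branching factor of 3, the trade helps only at
# shallow depths:
print(tradeoff_pays_off(3, 10))  # -> True  (depths typical of old hardware)
print(tradeoff_pays_off(3, 30))  # -> False (depths reachable today)
```

The deeper the search the hardware allows, the worse any trade that inflates
the exponent becomes, which is the substance of the disagreement.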

Uri



