Computer Chess Club Archives


Subject: Re: Never Say "Impossible"

Author: Dann Corbit

Date: 08:49:50 05/09/01



On May 09, 2001 at 03:39:42, Uri Blass wrote:

>On May 09, 2001 at 01:11:50, Dann Corbit wrote:
>
>>On May 09, 2001 at 00:41:38, Uri Blass wrote:
>>[snip]
>>>No
>>>Chess is clearly polynomial and even O(1), like every solvable problem that is a
>>>practical problem.
>>>
>>>You need constant time to solve chess; the only problem is that the constant
>>>is large.
>>>
>>>The only problems that are exponential are problems where the size of the input
>>>has no finite upper bound.
>>>You can see the input of every practical problem in the world as something that
>>>has an upper bound, so saying that an algorithm is not O(1) has only mathematical
>>>meaning and not practical meaning.
>>
>>Unbelievable that you also do not understand Big-O notation.
>
>I responded to the sentence of Hyatt:
>"In the case of chess it is clearly exponential, which is _not_ any form of a
>polynomial."
>
>I did not say that O(1) is not also O(exp(n)).
>I understand from the sentence of Hyatt that chess is not O(1) and not even
>O(n^500000).
>
><snipped>
>>I suggest that you read my other posts on the topic.
>>
>>Anyone who says that an O(1) process is not O(exp(n)) does not know what the
>>notation means!!!
>>
>>Anyway, you can say what O(f(n)) is by looking only at the algorithms not the
>>data.  Do you really understand this!?
>
>I understand what you mean, but every algorithm is practically O(1) when there
>is a bound on the size of the data, because exp(constant) is just another
>constant.

OK, I see where you are coming from now.  But I must point out something:
that is the ENTIRE reason for algorithm analysis.
If I have an algorithm, it will have some sort of O(f(n)) behavior.  The nature
of the smallest bounding function determines exactly what sort of set is
computable.  For instance, if f(n) = c, some constant, we can effectively
compute the answers for very, very large n.  If f(n) is a polynomial, the
computation can be difficult, but moderate input sizes are at the edge of what
is computable for polynomials with low powers (even with n^3, as in matrix
operations, there are real limits on what is practical, but with enough compute
power almost all such problems are doable.  At n^4 or n^5 it becomes very
painful to process large inputs, even on very powerful systems).
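
To put rough numbers on that (my own back-of-the-envelope figures, not anything
measured), here is a tiny C sketch that prints operation counts for n^3, n^5,
and exp(n) side by side:

#include <math.h>
#include <stdio.h>

/* Back-of-the-envelope growth comparison: operation counts for n^3, n^5,
 * and exp(n).  At roughly 1e9 operations per second, 1e9 ops is about a
 * second, 1e14 ops is about a day, and 1e17 ops is several years. */
int main(void)
{
    static const int sizes[] = {10, 20, 50, 100};
    int i;
    printf("%6s %12s %12s %12s\n", "n", "n^3", "n^5", "exp(n)");
    for (i = 0; i < 4; i++) {
        double n = sizes[i];
        printf("%6.0f %12.2g %12.2g %12.2g\n", n, pow(n, 3), pow(n, 5), exp(n));
    }
    return 0;
}

At n = 100, n^3 is a million operations and n^5 is ten billion, both doable;
exp(100) is already around 2.7e+43, which no amount of hardware will cover.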

So, I have some algorithm and an O(f(n)) behavior defined.  What does this tell
me?  It tells me how large of an input is practical.  If, for instance, f(n) is
n! or exp(n), then only very tiny inputs are computable.

Consider chess.  Current algorithms are O(exp(n)) and so let's make a table:
n | time
--+------------------------
0 | 1
1 | 2.7
2 | 7.4
3 | 20
4 | 55
5 | 1.5e+002
6 | 4e+002
7 | 1.1e+003
8 | 3e+003
9 | 8.1e+003
10 | 2.2e+004
11 | 6e+004
12 | 1.6e+005
13 | 4.4e+005
14 | 1.2e+006
15 | 3.3e+006
16 | 8.9e+006
17 | 2.4e+007
18 | 6.6e+007
19 | 1.8e+008
20 | 4.9e+008
21 | 1.3e+009
22 | 3.6e+009
23 | 9.7e+009
24 | 2.6e+010
25 | 7.2e+010
26 | 2e+011
27 | 5.3e+011
28 | 1.4e+012
29 | 3.9e+012
30 | 1.1e+013

By n=30, we are talking 10 trillion time units.  Hence, it is plain that for
difficult branches of the tree, even tiny inputs like 30 are simply impractical.
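
For what it is worth, those entries look like exp(n) printed to two significant
figures, so a minimal C sketch like this (assuming that is all the table is)
will regenerate it:

#include <math.h>
#include <stdio.h>

/* Regenerate the exp(n) "time units" table above, assuming each entry is
 * simply exp(n) rounded to two significant figures. */
int main(void)
{
    int n;
    printf("n | time\n");
    printf("--+------------------------\n");
    for (n = 0; n <= 30; n++)
        printf("%d | %.2g\n", n, exp((double)n));
    return 0;
}

(Depending on the C library, the exponent may print as e+02 rather than e+002.)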

This knowledge is just what we need to attack the problem.  We will *never* get a
computation all the way to the end (at least with a standard tree construction
and search).  Hence, we will have to "guess" based on heuristics.  That is how
all functional chess programs work.
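
To make the "guess based on heuristics" idea concrete, here is a toy sketch in
C.  The game (add 1, 2, or 3 to a running total; first to reach 21 wins) and
the evaluate() heuristic are invented for illustration and have nothing to do
with any real chess engine; the point is only the shape of the search: go down
to a fixed depth, then return a heuristic estimate instead of searching to the
end of the game.

#include <stdio.h>

/* Toy illustration of depth-limited search with a heuristic "guess" at the
 * horizon.  Game: players alternately add 1, 2, or 3 to a running total;
 * whoever reaches 21 (or more) first wins.  The game and the evaluate()
 * heuristic are invented stand-ins, not taken from any real program. */

#define TARGET    21
#define MAX_DEPTH  4   /* horizon: beyond this depth we guess, not search */

/* Hypothetical heuristic, from the point of view of the side to move:
 * being a multiple of 4 away from the target is bad. */
static int evaluate(int total)
{
    return ((TARGET - total) % 4 == 0) ? -1 : +1;
}

/* Negamax: score of the position for the side to move. */
static int negamax(int total, int depth)
{
    int move, best = -1000;
    if (total >= TARGET)
        return -100;              /* previous player already won */
    if (depth == 0)
        return evaluate(total);   /* horizon reached: heuristic guess */
    for (move = 1; move <= 3; move++) {
        int score = -negamax(total + move, depth - 1);
        if (score > best)
            best = score;
    }
    return best;
}

int main(void)
{
    int move;
    for (move = 1; move <= 3; move++)
        printf("first move +%d scores %d\n",
               move, -negamax(move, MAX_DEPTH - 1));
    return 0;
}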

>Algorithms to solve problems like sorting can be described theoretically
>as not O(1) because, by definition, there is no finite upper bound on the number
>of elements.

"All algorithms have a finite bound on the number of elements."  The next person
who cites this absurdity should show the textbook it came from so that the
author can be hanged.  There is (in fact) an entirely different school of
calculation where inputs are not assumed to have upper bounds.  In this mode
of calculation, multiplies and divides do NOT take constant time, because it
takes more time to calculate the product of two trillion-digit numbers than
of two 3-digit numbers.
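
A throwaway sketch of that second school of calculation (my own example, not
from any particular textbook): schoolbook multiplication of digit arrays does
on the order of d1*d2 digit operations, so the cost grows with the operand
size instead of being constant.

#include <stdio.h>
#include <string.h>

/* Schoolbook multiplication of arbitrary-length numbers held as digit arrays
 * (least significant digit first).  The nested loops do d1*d2 digit steps,
 * so multiplying trillion-digit numbers costs vastly more than 3-digit ones:
 * nowhere near constant time once operands are unbounded. */
static void multiply(const int *a, int d1, const int *b, int d2, int *result)
{
    int i, j;
    memset(result, 0, (size_t)(d1 + d2) * sizeof result[0]);
    for (i = 0; i < d1; i++) {
        int carry = 0;
        for (j = 0; j < d2; j++) {            /* d1 * d2 inner steps */
            int t = result[i + j] + a[i] * b[j] + carry;
            result[i + j] = t % 10;
            carry = t / 10;
        }
        result[i + d2] += carry;
    }
}

int main(void)
{
    int a[] = {3, 2, 1};    /* 123, least significant digit first */
    int b[] = {6, 5, 4};    /* 456 */
    int r[6], k = 5;
    multiply(a, 3, b, 3, r);
    while (k > 0 && r[k] == 0)  /* skip leading zeros */
        k--;
    for (; k >= 0; k--)
        printf("%d", r[k]);
    printf("\n");               /* prints 56088 */
    return 0;
}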

>For practical purposes you can say that there is a finite upper bound, so you can
>say that solving the practical problems can be done in O(1); the only problem is
>the constant.

This is sheer nonsense.  Then *all* problems are computable.  All problems are
O(1).  This is wrong, wrong, wrong, wrong, wrong!  And did I mention that it is
wrong?

The entire notion and essence of big-O estimation is to get a grip on the
running time of algorithms.  To say that the algorithm is O(1) because we have
a fixed input is simply infuriating.

>Algorithms to solve problems like chess, go, or other games can be described as
>O(1).

Not by someone who knows what they are doing.


