Computer Chess Club Archives



Subject: Re: Big-O notation

Author: Andrew Dados

Date: 10:34:26 05/09/01



On May 09, 2001 at 13:20:32, Dann Corbit wrote:

>On May 09, 2001 at 13:07:20, Ricardo Gibert wrote:
>
>>On May 09, 2001 at 11:27:46, Dann Corbit wrote:
>>
>>>On May 09, 2001 at 10:12:30, Ricardo Gibert wrote:
>>>
>>>>On May 09, 2001 at 02:00:25, Dann Corbit wrote:
>>>>
>>>>>For those of you who don't want to perform your own web search, just choose one
>>>>>of these:
>>>>>
>>>>>http://hissa.nist.gov/dads/HTML/bigOnotation.html
>"or the algorithm will always be too slow on a big enough input"
>
>>>>>http://bio5495.wustl.edu/textbook-html/node15.html
>>>>>http://umastr.math.umass.edu/~holden/Math136-99_projects/Amstutz-OBoyle-Petravage/big-o.html
>"whether this particular problem of size  is a hard case or an easy case for our
>algorithm to solve"
>
>>>>>http://www.eecs.harvard.edu/~ellard/Q-97/HTML/root/node8.html
>"The Size of a Problem
>Big-O notation is used to express an ordering property among functions. In the
>context of our study of algorithms, the functions are the amount of resources
>consumed by an algorithm when the algorithm is executed. This function is
>usually denoted T(N). T(N) is the amount of the resource (usually time or the
>count of some specific operation) consumed when the input to the algorithm is of
>size N."
>
>>>>>http://classes.monterey.edu/CST/CST338-01/world/BigO.html
>"In determining the efficiency of an algorithm, we are interested how the
>problem scales as the size of this characteristic number grows."
>
>>>>>http://shalim.csustan.edu/~john/Classes/CS3100_DataStructures/Previous_Semesters/1999_04_Fall/Examples/big-O
>
>"It may seem odd, but two positive real valued functions on N can each be big-O
>of the other, and such pairs of functions are considered roughly EQUIVALENT --
>in mathematical parlance, "ASYMPTOTICALLY PROPORTIONATE". f is big-O of g if and
>only if
>
>f(n) < [C*g(n)] whenever n > m,
>
>for some positive constants m and C. This is true if and only if
>
>f(n)/g(n) < C whenever n > m,
>
>and if and only if
>
>g(n)/f(n) > 1/C when n > m.
>
>On the other hand, if g is big-O of f, then for some positive K and r,
>
>g(a) < [K*f(a)] whenever a > r,
>
>and this is the same as
>
>g(a)/f(a) < K, and f(a)/g(a) > 1/K.
>
>So
>
>C > f(s)/g(s) > 1/K
>
>whenever s is greater than both m and r. If f(s)/g(s) was a constant, not
>depending at all on s, then we would say that f and g are proportionate. The
>last inequality above shows that the ratio of f(s) to g(s), while quite possibly
>not a constant, does become "trapped" between C and 1/K for large enough values
>of s. This phenomenon is called "asymptotic proportionality". We think of the
>two functions as being "roughly proportionate"."
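
A toy numeric check of that last quoted passage, with functions of my own
choosing rather than anything from that page: take f(n) = 2n + 3 and g(n) = n.
Each is big-O of the other (f(n) < 3*g(n) whenever n > 3, and g(n) < 1*f(n)
always), so their ratio is not constant but stays trapped between 2 and 3 for
large enough n:

    f = lambda n: 2 * n + 3
    g = lambda n: n
    for n in (10, 100, 1000, 10**6):
        print(n, f(n) / g(n))    # 2.3, 2.03, 2.003, 2.000003 -> trapped in (2, 3)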
>
>
>>>>>
>>>>>CS:201, FCOL!
>>>>
>>>>Big-O notation is used to describe asymptotic behavior. It is commonly used to
>>>>describe the "running time" of an algorithm. If an algorithm is O(f(n)), n is
>>>>understood to be finite, but *unbounded*. (For some reason, "unbounded" gets
>>>>confused with infinity. This is an error, but let's not get into that. It isn't
>>>>relevant here.)
>>>>
>>>>In chess, n is bounded. This is a critical distinction that means chess is
>>>>*not* NP.
>>>
>>>GREAT!  Then it's computable.  What's the answer, win-loss-draw?
>>>;-)
>>>
>>>>If we were to allow the size of the chessboard to expand, then the
>>>>number of steps in its solution would be NP (extremely likely, but not certain!)
>>>
>>>{Not you too!}
>>>
>>>WHAT UTTER POPPYCOCK.  Please provide a citation from a reputable source that
>>>says if we have a bound on the data our algorithm becomes O(1).
>>
>>In all the definitions you refer to in the links you gave, n is clearly given as
>>unbounded.
>>
>>The notation O(f(n)) is used to describe the asymptotic behavior for
>>sufficiently large n as n increases. This is crystal clear from the citations
>>you give above.
>>
>>It makes no sense to speak of the asymptotic behavior of an algorithm as n
>>increases when n is in fact constant as it is in chess.
>>
>>>That's so
>>>nonsensical that it renders an analysis of algorithms as completely pointless.
>>>ALL INPUTS ARE BOUNDED!!!!!
>>
>>Of course they are bounded in a specific instantiation of a problem where n is
>>unbounded, but they are not bounded (at least in theory) before the problem is
>>instantiated.
>>
>>>
>>>There is no such requirement anywhere in the literature.  The running time of
>>>the algorithm can be discerned by examination of the algorithm alone.  In fact,
>>>that is how it is always done.
>>>
>>
>>Read and understand the very citations you give above more carefully.
>>
>>[snip]
>>
>>If I want to find the maximum of 2 integers, I can give an algorithm that is
>>O(1). This means I can furnish a finite program that does not contain any loops
>>that will accomplish the task.
>>
>>If I generalize this problem to n integers, where n is unbounded, then I must
>>resort to using a loop. The algorithm is then O(n). If n is bounded then I can
>>write the program without any loops, since I know n <= some constant. The
>>algorithm is then O(1).
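
To make the max-of-integers point above concrete, here is a rough sketch of the
two cases in Python (the code and names are mine, not Ricardo's):

    def max2(a, b):
        # no loop; a fixed amount of work whatever the values: O(1)
        return a if a > b else b

    def max_n(values):
        # must examine all n elements, so the work grows with n: O(n);
        # if n were known to be <= some constant, the loop could in principle
        # be unrolled into straight-line code and the bound would be O(1) again
        best = values[0]
        for v in values[1:]:
            if v > best:
                best = v
        return best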
>>
>>In chess, n is bounded. I can write a finite program (in theory I can) that does
>>not contain any loops. The running time is O(1). If n is unbounded, no such
>>program without loops exists, so if instead I use minimax, the running time is
>>O(exp(n)).
>>
>>This is all very clear from the citations you give. Please read carefully.
>
>You are right that one of us has a serious comprehension problem.

Dann, would you consider the tic-tac-toe problem to be O(1)? Sure you would.
So what is your problem with the similar chess problem?
(Just trying to find a simple analogy for why the chess problem, as in 'solving
chess', is O(1).)

If you are talking about minimax, with n being the depth of the chess tree search,
then for small n the cost of minimax is indeed O(exp(n)). It will not stay
O(exp(n)) for huge n, because chess is a finite problem and has a finite number of
states that need to be visited in order to solve the game.
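
To make the tic-tac-toe analogy concrete, here is a rough sketch (my own code,
not anything from the posts above) of a memoized minimax for tic-tac-toe. The
game has at most 3^9 = 19683 board encodings, so the search visits each
position at most once and the total work is bounded by a constant that does
not depend on any "input size"; that constant bound is all O(1) means here.
The same argument applies to chess, only with an astronomically larger (but
still fixed) constant.

    from functools import lru_cache

    LINES = [(0,1,2), (3,4,5), (6,7,8),      # rows
             (0,3,6), (1,4,7), (2,5,8),      # columns
             (0,4,8), (2,4,6)]               # diagonals

    def winner(board):                       # board: tuple of 9 cells, '.', 'X' or 'O'
        for a, b, c in LINES:
            if board[a] != '.' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    @lru_cache(maxsize=None)                 # memo table: at most 3**9 = 19683 entries
    def value(board, to_move):               # +1 = X wins, -1 = O wins, 0 = draw
        w = winner(board)
        if w is not None:
            return 1 if w == 'X' else -1
        if '.' not in board:
            return 0
        other = 'O' if to_move == 'X' else 'X'
        scores = [value(board[:i] + (to_move,) + board[i+1:], other)
                  for i, cell in enumerate(board) if cell == '.']
        return max(scores) if to_move == 'X' else min(scores)

    print(value(('.',) * 9, 'X'))            # prints 0: perfect play is a draw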

So Ricardo is right. And you are right too: practical chess trees have O(exp(n))
cost for small n, but that will not hold for huge n.


-Andrew-


