Computer Chess Club Archives



Subject: Re: O(1) garbage

Author: David Rasmussen

Date: 15:16:04 05/15/01



On May 15, 2001 at 17:52:07, Jesper Antonsson wrote:

>On May 14, 2001 at 13:58:50, Robert Hyatt wrote:
>
>>On May 13, 2001 at 20:42:29, Jeremiah Penery wrote:
>>
>>>On May 13, 2001 at 19:17:36, Jesper Antonsson wrote:
>>>
>>>>On May 11, 2001 at 21:42:53, Jeremiah Penery wrote:
>>>>
>>>>>On May 11, 2001 at 16:43:38, Jesper Antonsson wrote:
>>>>>
>>>>>>My understanding of this topic is without a doubt as good or better than yours,
>>>>>>thank you very much. I agree that it is not meaningful to talk about complexity
>>>>>>for sorting 1000 items, while I don't agree that's comparable to chess. You,
>>>>>>however, don't seem to understand that *depth* was the input considered at the
>>>>>>start of this argument, and that *chess* is the topic.
>>>>>
>>>>>It seems you don't understand the point of the O-notation.  It's not supposed to
>>>>>tell you that algorithm FOO takes time X to run on input BAR.  It's supposed to
>>>>>let you _estimate_ the running time of algorithm FOO on input BAR+1 (BAR+2,
>>>>>BAR+3, etc...) when you know the running time on input BAR.  ***You know that
>>>>>the algorithm will eventually complete, no matter what the input!***  But, for
>>>>>example, naming a sorting algorithm O(1) says that it will sort 1000000 numbers
>>>>>just as fast as it sorts 5 numbers.  When you call a tree-searching algorithm
>>>>>O(1), when depth is the input, you're saying that it takes the same amount of
>>>>>time to search depth 500 as it does to search depth 3.  Obviously, this is wrong
>>>>>for any chess program.
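[A minimal sketch of the growth being described; the names and the branching factor of 35 (a common rough figure for chess) are illustrative assumptions, not anything from the thread:]

```python
# Model the node count of a uniform game tree: each extra ply multiplies
# the work by roughly the branching factor, so search time cannot be
# constant in the depth.

def nodes_searched(branching_factor: int, depth: int) -> int:
    """Total nodes in a uniform tree of the given depth, root included."""
    return sum(branching_factor ** d for d in range(depth + 1))

counts = [nodes_searched(35, d) for d in (1, 2, 3, 4)]
# Successive depths differ by a factor of about 35, i.e. the cost grows
# like O(b^d) in the depth d -- nothing like equal time for depth 3 and 500.
```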
>>>>
>>>>This is all wrong, my friend. The "O-notation" doesn't have a "point" (or at
>>>>least not that one).
>>
>>
>>That is totally incorrect.  From one of many books, "Introduction to Algorithms"
>>by Udi Manber:
>>
>>"The purpose of algorithm analysis is to predict the behavior, especially
>>the running time, of an algorithm."
>>
>>As far as the "mythical" requirement that N be unbounded, I give this explicit
>>definition of Big-oh:
>
>Bob, you are dreaming. You keep repeating that "unbounded" nonsense, but at
>least I have never given that requirement.
>
>>"We say a function g(n) is O(f(n)) for another function f(n) if there exist
>>constants
>>c and N such that for all n>=N, we have g(n) <= cf(n).
>
>Exactly! Now when you have the definition in front of you, you should be able to
>easily see that chess is O(1). Why? Because when n>=5000 or whatever,
>g(n) <= c*1. (n is target depth, g(n) running time, c is a big constant) It's
>no harder than that.
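[The bounded-domain argument above can be sketched directly from the definition; the cap of 5000 is the poster's illustrative figure, and the exponential cost model is an assumption for illustration:]

```python
# If the input never exceeds a cap, pick c as the largest value g ever
# attains; then g(n) <= c * 1 holds for every n, satisfying the O(1)
# definition with f(n) = 1.

MAX_DEPTH = 5000  # the poster's illustrative cap on the input

def g(n: int) -> int:
    """Stand-in running time: exponential in n, but the input is capped."""
    return 2 ** min(n, MAX_DEPTH)

c = 2 ** MAX_DEPTH  # the "big constant"
assert all(g(n) <= c * 1 for n in range(6000))  # g(n) <= c*f(n), f(n) = 1
```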
>
>>I don't see any _requirement_ that n or N be unbounded.  The "for all n" only
>>says that the expression is valid for each n that is >= N.  IE for large enough
>>n, the function g(n) is no more than a constant times the function f(n).  The
>>function g(n) may be less than cf(n), even substantially less;  the O notation
>>bounds it only from above."
>
>Yup. So chess is also O(2^n), but you repeatedly claimed that it *isn't* O(1),
>which is quite wrong, of course.
>
>>So I don't see how anyone can say "The O notation does not have a point."  Every
>>book I have gives this "point".
>
>Of course, even definitions are made for a reason, but the O notation is old and
>its definition is carved in stone. You can't go around claiming things that
>oppose the definition because they don't fit your intuitive notion of whether
>the consequences of that definition are useful or not in a particular case.
>
>>  Nor do I see any requirement (yet) that says
>>that unless the size of the input is essentially without limit, O notation does
>>not apply.
>>
>>
>>
>>>
>>>Then why take the time to find the O(f(n)) of any algorithm?  The point of doing
>>>it is so that you can see whether algorithm A will run faster than algorithm B
>>>(quicksort runs faster than bogosort, for example) with arbitrary input.  If you
>>>have two sorting algorithms, one with O(n + 5000) and one with O(n!), you can
>>>say that for very small N, the O(n!) program will be faster, but otherwise it
>>>will be much slower.  This is the "point" of O(f(n)).  If you don't correctly find
>>>O(f(n)) of an algorithm (or algorithms), how will you ever know which one to
>>>choose for a particular job?
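[The crossover argument above can be made concrete with the two cost models mentioned, n + 5000 and n!, treated here as abstract operation counts rather than measured times:]

```python
from math import factorial

def cost_linear(n: int) -> int:
    """Cost model n + 5000 from the text."""
    return n + 5000

def cost_factorial(n: int) -> int:
    """Cost model n! from the text."""
    return factorial(n)

# First input size where the factorial algorithm becomes the slower one.
crossover = next(n for n in range(1, 50) if cost_factorial(n) > cost_linear(n))
# Below the crossover the n! algorithm is cheaper; beyond it, it is
# hopeless -- exactly what the asymptotic comparison predicts.
```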
>>>
>>>>and O(1) does not guarantee same running time for different
>>>>inputs. The O-notation is clearly defined, and my use adheres to the definition.
>>>
>>>O(1) is _constant_ running time, regardless of input length.  You can make a
>>>graph and see this very clearly.  Any algorithm with different running times for
>>>different N must use an O(f(n)) that actually has an "n" in it - O(n),
>>>O(log(n)), O(n*log(n)), O(n!), O(exp(n)), etc.
>>>
>>>>It's quite simple math, look it up.
>>>
>>>I agree, the math is quite simple.
>>
>>
>>Nobody is looking this up.  Everyone is using their own personal interpretation
>>of something they were exposed to years ago.  I am _still_ waiting on someone
>>to quote a book that _requires_ the number of inputs to be unlimited.  And I
>>would hope that everyone knocks off the "O is not used to estimate run-time of
>>an algorithm" without quoting their source.  I just quoted one of mine.  I will
>>be happy to quote others if requested...
>
>You are fighting a straw man, Bob. We don't require the depth to be unlimited.
>It doesn't matter. Whether the depth is unlimited or not, chess is O(1). Just
>look at the definition you gave.


Nonsense. "Chess is O(1)" is a statement void of meaning.
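[A minimal sketch of why the statement carries no information under the bounded-input reading; the cap and the sample cost functions are invented for illustration:]

```python
# On a finite domain every function has a finite maximum, so by the
# bounded-input reading *every* terminating algorithm is "O(1)" -- the
# label no longer distinguishes fast algorithms from slow ones.

CAP = 100  # assumed input bound

def satisfies_O1(cost, cap: int) -> bool:
    """Check g(n) <= c * 1 on the capped domain, with c = max of g."""
    c = max(cost(n) for n in range(cap + 1))
    return all(cost(n) <= c * 1 for n in range(cap + 1))

assert satisfies_O1(lambda n: 1, CAP)       # constant cost: "O(1)"
assert satisfies_O1(lambda n: 2 ** n, CAP)  # exponential cost: also "O(1)"
```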




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.