Author: Dann Corbit
Date: 15:44:44 05/09/01
On May 09, 2001 at 18:32:59, Uri Blass wrote:

>On May 09, 2001 at 18:13:15, Dann Corbit wrote:
>
><snipped>
>>>You can look at sorting (not from the computer's point of view) as an infinite
>>>number of problems (sorting 1 element, sorting 2 elements, ...) where every one
>>>of them can be solved in finite time but the input is not bounded.
>>>
>>>You have no constant C such that every one of the problems can be solved in
>>>less than C units of time.
>
>>Not only is that not correct, but I detailed examples. For instance, in the
>>ANSI C99 language, the highest precision counting type that can be relied upon
>>is unsigned long long. Hence, the maximum possible time to sort a vector via
>>comparison is O(n*log((2^64-1)!)), which is a constant.
>
>I explained that I am talking about sorting not from a computer's point of view.
>I am talking about sorting from the point of view of a non-existing machine where
>the number of elements is not bounded and the precision is not bounded.
>
>Another point is that O(n*log((2^64-1)!)) = O(n).
>It is clear that from a practical point of view it is worse than O(n) because
>the constant is too big, but from the computer's point of view you can also look
>at it as O(1), because n is bounded by the memory of the computer; the only
>problem is that the constant you get is too big. n is 2^64-1.
>
>I am not going to continue to argue about it.
>I am using a definition that I studied at university.
>
>I can agree that the fact that searching n plies is O(1) by the definition is
>not important for practical purposes.

Maybe we are arguing about angels dancing on the heads of pins, and I do regret
the harsh tone that I have taken in some of the messages.

My point is that a definition of big-O that considers chess to be an O(1)
computation is a -- let's call it *pragmatically* -- bad definition. It gives
you no idea about the running time of the algorithm (which is infinite). Why do
we calculate algorithm complexity? To know what we can and cannot solve. Your
calculation has put chess in the EASIEST POSSIBLE COMPUTATIONAL CATEGORY.
Doesn't that seem strange to you?

Hence your, Gilbert's, and Jasper's definition is one that has no value. At
least not the value projected by performing the computational complexity study
in the first place. Why would anyone bother?

I also insist that arbitrary inputs change the nature of the computation so that
an entirely different system must be used. Consider the cost of a simple
multiplication. For small numbers, hardware multiplication can be used. For
larger numbers, Karatsuba multiplication must be used. For even larger numbers,
FFT multiplication must be used. For even larger numbers still, the hardware
base for the FFT values will be insufficient, so the FFTs will have to be
performed on array elements that are themselves extended precision numbers.
(A toy sketch of the Karatsuba idea follows at the end of this post.)

In actual calculations we never (or very rarely) perform such mental gymnastics,
because there are implicit assumptions about the range of the inputs into the
function. In fact, you are likely to see assert() calls documenting this at the
top of the function (size assumptions); a small example of what I mean also
follows below.

In short, I use the useful definition. The alternative definition does not
convey value to the end-user.
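Here is a toy sketch of the Karatsuba idea, illustrative only (the name
karatsuba32 is made up): two 32-bit numbers are multiplied with three 16x16-bit
products instead of four. Real bignum code applies the same recursion to arrays
of machine words once the operands pass some length threshold.

    #include <assert.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy Karatsuba: multiply two 32-bit values with three 16x16-bit
       multiplications instead of four. */
    static uint64_t karatsuba32(uint32_t a, uint32_t b)
    {
        uint32_t a1 = a >> 16, a0 = a & 0xFFFFu;   /* a = a1*2^16 + a0 */
        uint32_t b1 = b >> 16, b0 = b & 0xFFFFu;   /* b = b1*2^16 + b0 */

        uint64_t z0 = (uint64_t)a0 * b0;                          /* low  part   */
        uint64_t z2 = (uint64_t)a1 * b1;                          /* high part   */
        uint64_t z1 = (uint64_t)(a0 + a1) * (b0 + b1) - z0 - z2;  /* middle term */

        return (z2 << 32) + (z1 << 16) + z0;
    }

    int main(void)
    {
        uint32_t a = 3000000019u, b = 2147483647u;
        /* Cross-check against the hardware 64-bit multiply. */
        assert(karatsuba32(a, b) == (uint64_t)a * b);
        printf("%" PRIu32 " * %" PRIu32 " = %" PRIu64 "\n", a, b, karatsuba32(a, b));
        return 0;
    }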
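And here is a minimal example of the kind of assert() I mean (the names
MAX_MOVES and sort_scores are invented for illustration). The assertion
documents the implicit bound on the input size, and with that bound in place
the worst-case running time really is a constant.

    #include <assert.h>
    #include <stddef.h>

    #define MAX_MOVES 256  /* assumed upper bound on the input size */

    /* Sort a small vector of move scores in place.  The asserts document the
       size assumptions the surrounding code relies on. */
    static void sort_scores(int *score, size_t n)
    {
        assert(score != NULL);
        assert(n <= MAX_MOVES);            /* documented input-size assumption */

        for (size_t i = 1; i < n; i++) {   /* simple insertion sort */
            int key = score[i];
            size_t j = i;
            while (j > 0 && score[j - 1] > key) {
                score[j] = score[j - 1];
                j--;
            }
            score[j] = key;
        }
    }

    int main(void)
    {
        int s[] = { 30, -5, 120, 7 };
        sort_scores(s, sizeof s / sizeof s[0]);
        return 0;
    }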