Author: Robert Hyatt
Date: 19:22:48 05/17/01
On May 17, 2001 at 16:25:37, Jesper Antonsson wrote:

>Huh? Of course I can. It's not *real* cities we input to the
>algorithm/computer, you know, it's just a series of symbols.

OK... Now I begin to see the problem. I _do_ happen to use real cities. Real
proteins. Real compounds. Real game trees. And _real_ computers. That makes a
_huge_ difference in the context. I'm trying to route a guy through all the
major cities in the southeast. Not including any on the moons of Jupiter or
the planets around Alpha Centauri.

Complexity is about real algorithms to solve real problems. As the problem
gets bigger, I need to know how that affects run-time. That is _all_ that
complexity is interesting for, to a computer scientist. Because _I_ am going
to use a real computer, one with a (today) cycle time of around .5-1
nanosecond. Not one with a cycle time of .5 picoseconds (or faster). _Real_
problems. Using real solutions that _always_ terminate, or else they are
useless.

>Hey, don't kill the messenger. I didn't invent the big O notation, I just
>tell you that by definition, a "good" chess algorithm is O(1). I think it's
>an amusing fact, and it may be useful, if not for anything else, for opening
>up people's eyes to the fact that big O notation doesn't say everything about
>an algorithm or which you should choose. You, Hyatt and others seem to have
>too much intuitive garbage attached to it. The notation is really useful only
>if you have a precise understanding of what it tells you, else you risk going
>wrong at one point or another...

I won't go wrong by saying that alpha/beta minimax is _not_ O(1). I would
_definitely_ be in the soup if I said it was. Because I can demonstrate (and
prove) that the exponential function N = 2*sqrt(W^D) correctly predicts the
effort to search a tree of depth D, for all D that can be used in the game of
chess.
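The growth that formula predicts is easy to tabulate (a minimal sketch; the
branching factor W = 38 is an assumed typical value for chess, not a figure
taken from this post):

```python
import math

# Minimum node count for an optimally ordered alpha/beta search of a tree
# with branching factor W to depth D: N = 2 * sqrt(W^D) = 2 * W^(D/2).
def alphabeta_nodes(W, D):
    return 2 * math.sqrt(W ** D)

W = 38  # assumed average branching factor for chess
for D in range(2, 21, 2):
    print(f"depth {D:2d}: ~{alphabeta_nodes(W, D):.3e} nodes")
```

Even at a hardware-friendly billion nodes per second, the deeper entries in
that table run far beyond any practical search time, which is the point being
argued.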
And since the math can be used to predict how long it would take, considering
the first computer all the way to today's fastest, for computers to get fast
enough to finally solve the game (never), then I consider "D" to be bounded,
but only in theory, since no search will ever reach the maximum value of D.
Again, all _real-world_ assumptions. Ones I would feel comfortable defending
against anyone from Knuth down to a traveling salesman.

>>Chess is finite. TSP is finite. Matrix multiplication is finite. All
>>algorithms are finite.
>
>Depends on your definition of "finite" and what you mean by "TSP is finite".

The traveling salesman problem is well-known: route a salesman through N
cities with the shortest total travel time possible. Since the number of
cities on Earth is limited, and since the number of cities that a traveling
salesman would actually need to visit is even more limited, it is a finite
problem. As I said before, I don't care about visiting the moons of Jupiter or
Mars, nor planets around Alpha Centauri, etc. To the traveling salesman, the
problem is finite. Yet the complexity makes it intractable even for reasonable
numbers of cities. Just try the county seats of government for every county or
parish in the US. Not with today's computers. Not with next year's either. Or
next century's. Yet that is smaller than chess.

>>Otherwise, they AREN'T algorithms.
>
>The programs will terminate on every finite input? Yes, of course. That
>doesn't make them O(1).
>
>>When the time it takes to complete the task is so long that it cannot
>>possibly be useful to anyone, this is something of value that people want to
>>know. The calculations you produce seem to say something, but it does not
>>have any real value. For any algorithm, we can choose any input and it will
>>terminate. For instance, I can put a quintillion by quintillion matrix into
>>gaussian elimination and eventually it would stop running.
>>I could consider every elementary particle in the universe a 'city' and feed
>>it to TSP. It would eventually stop running (in the ideal mathematical
>>sense, of course). But these answers have no value because:
>>1. We already knew the algorithms terminate eventually. Otherwise, they are
>>not algorithms.
>>2. The answer of O(1) is entirely useless.
>
>They are not O(1) just because they terminate on every input. It's the
>asymptotic growth on increasing input that counts.
>
>>>>>>So I don't see how anyone can say "The O notation does not have a
>>>>>>point." Every book I have gives this "point".
>>>>>
>>>>>Of course, even definitions are made for a reason, but the O notation is
>>>>>old and its definition carved in stone. You can't go around claiming
>>>>>things that oppose the definition because it doesn't fit your intuitive
>>>>>notion of whether the consequences of that definition are useful or not
>>>>>in a particular case.
>>>>>
>>>>>>Nor do I see any requirement (yet) that says that unless the size of
>>>>>>the input is essentially without limit, O notation does not apply.
>>>>>>
>>>>>>>Then why take the time to find the O(f(n)) of any algorithm? The point
>>>>>>>of doing it is so that you can see whether algorithm A will run faster
>>>>>>>than algorithm B (quicksort runs faster than bogosort, for example)
>>>>>>>with arbitrary input. If you have two sorting algorithms, one with
>>>>>>>O(n + 5000) and one with O(n!), you can say that for very small N, the
>>>>>>>O(n!) program will be faster, but otherwise it will be much slower.
>>>>>>>This is the "point" of O(f(n)). If you don't correctly find O(f(n)) of
>>>>>>>an algorithm (or algorithms), how will you ever know which one to
>>>>>>>choose for a particular job?
>>>>>>>
>>>>>>>>and O(1) does not guarantee same running time for different inputs.
>>>>>>>>The O-notation is clearly defined, and my use adheres to the
>>>>>>>>definition.
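The crossover described in that quoted comparison is easy to locate
numerically (a sketch using two hypothetical cost functions chosen only to
illustrate the point; they are not from the quoted text beyond the n + 5000
and n! shapes):

```python
import math

# Hypothetical cost models: one linear-plus-constant, one factorial.
def cost_linear(n):
    return n + 5000

def cost_factorial(n):
    return math.factorial(n)

# Find the first n at which the factorial algorithm becomes slower.
n = 1
while cost_factorial(n) <= cost_linear(n):
    n += 1
print(f"factorial cost overtakes linear cost at n = {n}")  # n = 7
```

So the "factorial is faster" region really exists, but it ends almost
immediately; past that, the asymptotic term dominates everything else.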
>>>>>>>O(1) is _constant_ running time, regardless of input length. You can
>>>>>>>make a graph and see this very clearly. Any algorithm with different
>>>>>>>running times for different N must use an O(f(n)) that actually has an
>>>>>>>"n" in it - O(n), O(log(n)), O(n*log(n)), O(n!), O(exp(n)), etc.
>>>>>>>
>>>>>>>>It's quite simple math, look it up.
>>>>>>>
>>>>>>>I agree, the math is quite simple.
>>>>>>
>>>>>>Nobody is looking this up. Everyone is using their own personal
>>>>>>interpretation of something they were exposed to years ago. I am
>>>>>>_still_ waiting on someone to quote a book that _requires_ the number
>>>>>>of inputs to be unlimited. And I would hope that everyone knocks off
>>>>>>the "O is not used to estimate run-time of an algorithm" without
>>>>>>quoting their source. I just quoted one of mine. I will be happy to
>>>>>>quote others if requested...
>>>>>
>>>>>You are fighting a straw man, Bob. We don't require the depth to be
>>>>>unlimited. It doesn't matter. Whether the depth is unlimited or not,
>>>>>chess is O(1). Just look at the definition you gave.
>>>>
>>>>I'm waiting on your O(1) algorithm. I want to be world champion again.
>>>
>>>You won't become world champ with it, because it takes too long to
>>>complete. Perhaps the remaining time of the universe isn't enough, but
>>>that's irrelevant. (If you want to continue claiming it is, perhaps you
>>>could quote a book where the big O definition includes something about
>>>the remaining time of the universe.)
>>
>>The only real function of complexity analysis is to decide what is
>>computable.
>>
>>From:
>>http://www.csc.liv.ac.uk/~ped/teachadmin/algor/intcom.html
>>
>>We have:
>>"Comparing Algorithms
>>
>>Computability and Complexity
>>The final part of the course deals with the issue of assessing how
>>difficult specific computational problems are to solve.
>>Given a description of a particular problem a number of questions arise:
>>
>>Is it possible to design an algorithm which solves the problem?
>>Is it possible to design an efficient algorithm which solves the problem?
>>How can one tell if a given algorithm is the `best possible'?
>>
>>These questions are imprecisely formulated:
>>What exactly is meant by the term `problem'?
>>What distinguishes algorithms which can be implemented from those which
>>cannot?
>>What is meant by terms such as: Difficulty, Efficient, Best?
>>
>>COMPUTABILITY THEORY and COMPUTATIONAL COMPLEXITY THEORY are the fields of
>>Computer Science concerned with the questions raised earlier.
>>
>>Computability Theory is concerned with identifying one particular class of
>>INTRACTABLE PROBLEMS (those for which no `effective' algorithm exists).
>>One aim of Computational Complexity Theory is to identify solvable problems
>>that are intractable in the sense that No Efficient Algorithm Exists which
>>solves them.
>>
>>Both areas require a formal definition of the concepts: Problem, Algorithm.
>>Complexity Theory also proposes definitions for the ideas of Problem
>>Difficulty, Algorithm Efficiency, Best Algorithm.
>>
>>Computational Problems
>>At the lowest level of abstraction any computer program may be viewed as a
>>means of producing Specific Output Values from Given Input Data, i.e.
>>programs compute functions whose domain is the set of valid inputs and
>>whose range is the set of possible outputs. In practice the input and
>>output are (ultimately) represented in binary. Hence any program may be
>>viewed as (computing) some function
>>
>>f : {0,1}^* -> {0,1}^*
>>
>>({0,1}^* is the set of all finite strings (or words) containing only 0s
>>and 1s.)
>>
>>It is only necessary to consider such functions as have range {0,1}.
>>These are known as Decision Problems.
>>
>>Summary
>>All computer programs may be viewed as computing functions.
>>Since all computers employ binary notation, such functions are defined over
>>sets of binary strings.
>>In considering the question `what problems can be solved by computers' it
>>is sufficient to concentrate on decision problems.
>>Hence the question
>>
>>What problems can be solved by computers?
>>
>>is equivalent to
>>
>>What decision problems can be solved?
>>
>>Decision Problems
>>Any binary string can be viewed as the representation of some natural
>>number. Thus for decision problems on binary strings we can concentrate on
>>the set of functions of the form
>>
>>f : N -> {0,1}
>>
>>That is, `problems' of the form
>>
>>Input: n, a natural number
>>Output: 1 if n satisfies some property; 0 if n does not satisfy it.
>>
>>Examples:
>>
>>EVEN: Returns 1 if n is an even number; 0 if n is an odd number.
>>PRIME: Returns 1 if n is a prime number; 0 if n is a composite number.
>>ADA_PROGRAM: Returns 1 if n written in binary corresponds to the ASCII
>>representation of a syntactically correct ADA program; 0 otherwise.
>>
>>Notice that the last example shows that we are not restricting attention
>>to merely `arithmetic' decision problems.
>>
>>What is a `reasonable' algorithm?
>>In all programming systems:
>>Programs have a finite number of instructions.
>>Programs make use of instructions and operations from a finite set of
>>basic operations.
>>Programs should come to a halt for any valid input data.
>>An effective algorithm is one which meets these criteria.
>>
>>Suppose we can show that an ADA program cannot be written which solves
>>some problem. Why should one take this as evidence that NO PROGRAM
>>(ALGORITHM) AT ALL exists for the problem? In other words, how can it be
>>argued that results on computability apply to all types of computer?
>>(Past, Present, and Future)
>>
>>The answer is that such results are proved with respect to abstract models
>>of computation that capture all the essential elements of a typical
>>computer. A `typical computer' has:
>>Memory
>>Input and Output Mechanisms
>>A processor handling a finite instruction set.
>>
>>In an abstract model we can permit an infinite supply of memory. The only
>>restraint on an algorithm is that it can only use a finite amount of
>>memory when dealing with any single input (which, if the algorithm
>>terminates, is all that could be accessed). This is not an unrealistic
>>assumption: we do not want to classify a problem as unsolvable just
>>because on some real machines there may not be enough memory available.
>>
>>The first abstract model of computation was defined by Alan Turing in
>>1936. This had:
>>An infinite memory of single-bit cells.
>>Its `processor' dealt with only two types of `instruction':
>>A stop instruction which would return a result (0 or 1).
>>A test instruction which would read and write to a single bit of memory,
>>examine an adjacent memory location, and jump to another instruction.
>>
>>Although the instruction set of the Turing Machine is extremely limited,
>>Turing machine programs can be written to solve any decision problem that
>>can be solved on a `normal' computer.
>>
>>The Church-Turing Hypothesis
>>The class of decision problems that can be solved by any reasonable model
>>of computation is exactly the same as the class of decision problems that
>>can be solved by Turing machine programs.
>>
>>Since 1936 several thousand different models of computation have been
>>proposed, e.g.
>>Post Systems
>>Markov Algorithms
>>lambda-calculus
>>Gödel-Herbrand-Kleene Equational Calculus
>>Horn Clause Logic
>>Unlimited Register Machines
>>ADA programs on unlimited memory machines
>>
>>Every single one of these satisfies the Church-Turing hypothesis, i.e.
>>anything that can be solved by a Turing machine program can be programmed
>>in any one of the systems above, and vice-versa.
>>
>>The importance of the Church-Turing hypothesis is that it allows us,
>>without any loss of generality, to consider computability results solely
>>with respect to ADA programs on unlimited memory computers.
>>
>>Suppose P is some decision problem. The Church-Turing hypothesis asserts:
>>
>>There is no Turing machine program that computes P
>>IF AND ONLY IF
>>There is no ADA program which solves P
>>
>>because we can (in principle) write a Turing machine program which can
>>exactly simulate the behaviour of any given ADA program. Similarly, we can
>>write an ADA program which can exactly simulate the behaviour of any given
>>Turing machine program."
>
>So?
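As an aside, the EVEN and PRIME decision problems quoted above map directly
onto functions f : N -> {0,1} (a minimal sketch; the trial-division primality
test is one obvious illustration, not taken from the quoted page):

```python
def EVEN(n):
    # 1 if n is an even number, 0 if n is odd
    return 1 if n % 2 == 0 else 0

def PRIME(n):
    # 1 if n is prime, 0 otherwise (simple trial division up to sqrt(n))
    if n < 2:
        return 0
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return 0
    return 1
```

Both are trivially computable and terminate for every natural number n; the
argument in this thread is about the cost of that termination, not its
existence.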
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.