Computer Chess Club Archives



Subject: Re: O(1) garbage

Author: Dann Corbit

Date: 16:12:23 05/16/01


On May 16, 2001 at 17:44:13, Jesper Antonsson wrote:

>On May 15, 2001 at 23:14:44, Robert Hyatt wrote:
>
>>On May 15, 2001 at 17:52:07, Jesper Antonsson wrote:
>>
>>>On May 14, 2001 at 13:58:50, Robert Hyatt wrote:
>>>
>>>>On May 13, 2001 at 20:42:29, Jeremiah Penery wrote:
>>>>
>>>>>On May 13, 2001 at 19:17:36, Jesper Antonsson wrote:
>>>>>
>>>>>>On May 11, 2001 at 21:42:53, Jeremiah Penery wrote:
>>>>>>
>>>>>>>On May 11, 2001 at 16:43:38, Jesper Antonsson wrote:
>>>>>>>
>>>>>>>>My understanding of this topic is without a doubt as good or better than yours,
>>>>>>>>thank you very much. I agree that it is not meaningful to talk about complexity
>>>>>>>>for sorting 1000 items, while I don't agree that's comparable to chess. You,
>>>>>>>>however, don't seem to understand that *depth* was the input considered in the
>>>>>>>>start of this argument, and that *chess* is the topic.
>>>>>>>
>>>>>>>It seems you don't understand the point of the O-notation.  It's not supposed to
>>>>>>>tell you that algorithm FOO takes time X to run on input BAR.  It's supposed to
>>>>>>>let you _estimate_ the running time of algorithm FOO on input BAR+1 (BAR+2,
>>>>>>>BAR+3, etc...) when you know the running time on input BAR.  ***You know that
>>>>>>>the algorithm will eventually complete, no matter what the input!***  But, for
>>>>>>>example, naming a sorting algorithm O(1) says that it will sort 1000000 numbers
>>>>>>>just as fast as it sorts 5 numbers.  When you call a tree-searching algorithm
>>>>>>>O(1), when depth is the input, you're saying that it takes the same amount of
>>>>>>>time to search depth 500 as it does to search depth 3.  Obviously, this is wrong
>>>>>>>for any chess program.
>>>>>>
>>>>>>This is all wrong, my friend. The "O-notation" doesn't have a "point" (or at
>>>>>>least not that one)
>>>>
>>>>
>>>>That is totally incorrect.  From one of many books, "Introduction to Algorithms"
>>>>by Udi Manber:
>>>>
>>>>"The purpose of algorithm analysis is to predict the behavior, especially
>>>>the running time, of an algorithm."
>>>>
>>>>As far as the "mythical" requirement that N be unbounded, I give this explicit
>>>>definition of Big-oh:
>>>
>>>Bob, you are dreaming. You keep repeating that "unbounded" nonsense, but at
>>>least I have never given that requirement.
>>
>>
>>I'm not dreaming.  It has been stated _multiple_ times.  If N is "bounded"
>>then the search is O(1), rather than O(W^(D/2)).  I happen to not buy that
>>since my books don't mention it, but some insist on it...
>
>Well, I have never stated such a thing. I consider N to be unbounded. The search
>is O(1) for other reasons, namely that the *tree* is bounded. If I input 1e20 as
>target search depth to a "good" chess algorithm, the search won't take longer
>than when I input 1e5. The same goes for 1e21, 1e22, ...

The algorithm will never finish.  Completing it would take trillions of
transistor lifetimes.
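To put a number on "never finishes", here is a minimal back-of-the-envelope sketch.  It assumes a plain minimax tree with an average branching factor of about 38 (my assumption, a commonly quoted figure for chess; only the order of magnitude matters):

```python
import math

# Rough sketch: a full-width search to depth d visits on the order of
# BRANCHING_FACTOR**d nodes.  The branching factor of 38 is an assumed
# average for chess positions, not a measured value.
BRANCHING_FACTOR = 38

def log10_nodes(depth):
    """Base-10 logarithm of the node count BRANCHING_FACTOR**depth."""
    return depth * math.log10(BRANCHING_FACTOR)

# Even the "small" input of 1e5 is already hopeless, never mind 1e20:
print(f"depth 1e5  -> about 10^{log10_nodes(1e5):.0f} nodes")
print(f"depth 1e20 -> about 10^{log10_nodes(1e20):.3g} nodes")
```

For comparison, fewer than 10^43 nanoseconds have elapsed since the big bang, so even the smaller depth is unreachable by a factor that itself has thousands of digits.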

>>>>"We say a function g(n) is O(f(n)) for another f(n) if there exist constants
>>>>c and N such that for all n>=N, we have g(n) <= cf(n).
>>>
>>>Exactly! Now when you have the definition in front of you, you should be able to
>>>easily see that chess is O(1). Why? Because when n>=5000 or whatever,
>>>g(n) <= c*1. (n is target depth, g(n) running time, c is a big constant) It
>>>isn't harder than that.
>>>
>>
>>And I maintain that definition of O(1) is flawed and nonsensical.  IE I redefine
>>the TSP to include every city currently on the surface of planet earth and no
>>more.  You still claim that when I go from 1000 cities to 2000 cities to solve
>>the problem the solution is O(1)?  I don't.  Even though the number of cities
>>has some upper bound.
>
>No, the number of cities has no upper bound.

You can't input more cities than all of them.

>Neither has the target search depth
>as input to the chess algorithm. They are both unbounded.

Now, this is no different from the chess problem.  With TSP, we are ALWAYS
dealing with a strict subset of the set of all cities.

>> Because I can do the math to see that the upper bound
>>isn't going to be reached for a long time.  In the case of chess, your constant
>>C has to be on the order of 38^10000.  I claim that is close enough to infinity
>>that the two can't be distinguished.
>
>Close enough is not enough.

Tell me.  What are you doing the complexity analysis *FOR*?  If the purpose is
to figure out how long it takes, then your approach is completely useless.  If
not, then why would you bother to come up with such an estimate?

>>in another thread I noted that one trillion years is 10^23 nanoseconds.  I
>>posed the question "if we double speed every year, how long until we can count
>>to 38^10000, given that today we can only count to 10^23 at one operation per
>>nanosecond, assuming we have one trillion years to count.  I also claim that
>>the universe will collapse and big-bang again _long_ before we are even a
>>fraction of the way done with that "count".
>
>Irrelevant.

Not from the practical standpoint.  Your answer is mathematically correct, but
in a very, very silly sense.
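Bob's thought experiment above can be made concrete with a few lines of arithmetic.  All the figures (10^23 counts per trillion years today, speed doubling every year, target 38^10000) come from the posts; the sketch just asks how many doublings are needed before one trillion years suffices:

```python
import math

# How many annual speed doublings until a trillion years is enough
# to count to 38^10000, starting from 10^23 counts per trillion years?
target_log10 = 10000 * math.log10(38)   # 38^10000 is about 10^15798
start_log10 = 23                        # 10^23 counts per trillion years today
doublings = (target_log10 - start_log10) / math.log10(2)
print(f"roughly {doublings:.0f} years of doubling needed")
```

So even with speeds doubling every single year, the count would not finish until tens of thousands of years of doubling had passed first, each attempt itself taking a trillion years.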

>>>>I don't see any _requirement_ that n or N be unbounded.  The "for all n" only
>>>>says that the expression is valid for each n that is >= N.  IE for large enough
>>>>n,
>>>>the function g(n) is no more than a constant times the function f(n).  The
>>>>function g(n) may be less than cf(n), even substantially less;  the O notation
>>>>bounds it only from above."
>>>
>>>Yup. So chess is also O(2^n), but you repeatedly claimed that it *isn't* O(1),
>>>which is quite wrong, of course.
>>
>>Just give me that algorithm that will search from depth D to depth D+1 in
>>constant time.  Then I'll believe.
>
>You know it can be done. You are not that old and you know that chess is finite.
>The parameter D above would be the maximum number of moves in a chess game
>(counting 50-move and 3-fold as draws). Then D+1 won't take longer time.

Chess is finite.  TSP is finite.  Matrix multiplication is finite.  All
algorithms are finite.  Otherwise, they AREN'T algorithms.

When the time it takes to complete a task is so long that the result cannot
possibly be useful to anyone, that fact itself is something of value that people
want to know.  The calculations you produce seem to say something, but they have
no real value.  For any algorithm, we can choose any input and it will
terminate.  For instance, I can put a quintillion-by-quintillion matrix into
Gaussian elimination and eventually it will stop running.  I could consider
every elementary particle in the universe a 'city' and feed it to TSP.  It would
eventually stop running (in the ideal mathematical sense, of course).  But these
answers have no value because:
1.  We already knew the algorithms terminate eventually.  Otherwise, they are
not algorithms.
2.  The answer of O(1) is entirely useless.
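A quick sketch of the Gaussian elimination example makes the point numerically.  The machine speed below (one petaflop) is an assumption purely for illustration:

```python
# Back-of-the-envelope time for Gaussian elimination on a
# quintillion-by-quintillion matrix, on an assumed petaflop machine.
n = 10**18                       # matrix dimension (a quintillion)
ops = (2 / 3) * n**3             # rough flop count for Gaussian elimination
flops_per_second = 10**15        # assumed machine speed (one petaflop)
seconds = ops / flops_per_second
years = seconds / 3.156e7        # about 3.156e7 seconds per year
print(f"roughly {years:.1e} years to terminate")
```

The algorithm terminates, in the ideal mathematical sense, after on the order of 10^31 years; knowing that it terminates tells us nothing we can use.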

>>>>So I don't see how anyone can say "The O notation does not have a point."  Every
>>>>book I have gives this "point".
>>>
>>>Of course, even definitions are made for a reason, but the O notation is old and
>>>its definition is carved in stone. You can't go around claiming things that
>>>oppose the definition because they don't fit your intuitive notion of whether
>>>the consequences of that definition are useful or not in a particular case.
>>>
>>>>  Nor do I see any requirement (yet) that says
>>>>that unless the size of the input is essentially without limit, O notation does
>>>>not apply.
>>>>
>>>>
>>>>
>>>>>
>>>>>Then why take the time to find the O(f(n)) of any algorithm?  The point of doing
>>>>>it is so that you can see whether algorithm A will run faster than algorithm B
>>>>>(quicksort runs faster than bogosort, for example) with arbitrary input.  If you
>>>>>have two sorting algorithms, one with O(n + 5000) and one with O(n!), you can
>>>>>say that for very small N, the O(n!) program will be faster, but otherwise it
>>>>>will be much slower.  This is the "point" of O(f(n)).  If you don't correctly find
>>>>>O(f(n)) of an algorithm (or algorithms), how will you ever know which one to
>>>>>choose for a particular job?
>>>>>
>>>>>>and O(1) does not guarantee same running time for different
>>>>>>inputs. The O-notation is clearly defined, and my use adheres to the definition.
>>>>>
>>>>>O(1) is _constant_ running time, regardless of input length.  You can make a
>>>>>graph and see this very clearly.  Any algorithm with different running times for
>>>>>different N must use an O(f(n)) that actually has an "n" in it - O(n),
>>>>>O(log(n)), O(n*log(n)), O(n!), O(exp(n)), etc.
>>>>>
>>>>>>It's quite simple math, look it up.
>>>>>
>>>>>I agree, the math is quite simple.
>>>>
>>>>
>>>>Nobody is looking this up.  Everyone is using their own personal interpretation
>>>>of something they were exposed to years ago.  I am _still_ waiting on someone
>>>>to quote a book that _requires_ the number of inputs to be unlimited.  And I
>>>>would hope that everyone knocks off the "O is not used to estimate run-time of
>>>>an algorithm" without quoting their source.  I just quoted one of mine.  I will
>>>>be happy to quote others if requested...
>>>
>>>You are fighting a straw man, Bob. We don't require the depth to be unlimited.
>>>It doesn't matter. Whether the depth is unlimited or not, chess is O(1). Just
>>>look at the definition you gave.
>>
>>I'm waiting on your O(1) algorithm.  I want to be world champion again.
>
>You won't become world champ with it, because it takes too long to complete.
>Perhaps the remaining time of the universe isn't enough, but that's irrelevant.
>(If you want to continue claiming it is, perhaps you could quote a book where
>the big O definition includes something about the remaining time of the
>universe.)

The only real function of complexity analysis is to decide what is computable.

From:
http://www.csc.liv.ac.uk/~ped/teachadmin/algor/intcom.html

We have:
"Comparing Algorithms

Computability and Complexity
The final part of the course deals with the issue of assessing how difficult
specific computational problems are to solve. Given a description of a
particular problem a number of questions arise:

Is it possible to design an algorithm which solves the problem?
Is it possible to design an efficient algorithm which solves the problem?
How can one tell if a given algorithm is the `best possible'?
These questions are imprecisely formulated:
What exactly is meant by the term `problem'?
What distinguishes algorithms which can be implemented from those which cannot?
What is meant by terms such as:
Difficulty
Efficient
Best
COMPUTABILITY THEORY
and
COMPUTATIONAL COMPLEXITY THEORY
are the fields of Computer Science concerned with the questions raised earlier.


Computability Theory
is concerned with identifying one particular class of

INTRACTABLE PROBLEMS
(Those for which no `effective' algorithm exists) One aim of

Computational Complexity Theory
is to identify solvable problems that are intractable in the sense that

No Efficient Algorithm Exists
which solves them.
Both areas require a formal definition of the concepts:


Problem

Algorithm
Complexity Theory also proposes definitions for the ideas of

Problem Difficulty

Algorithm Efficiency

Best Algorithm

Computational Problems
At the lowest level of abstraction any computer program may be viewed as a means
of producing

Specific Output Values

from

Given Input Data
i.e. Programs compute functions whose domain is the set of valid inputs and
whose range is the set of possible outputs. In practice the input and output are
(ultimately) represented in binary. Hence any program may be viewed as
(computing) some function


f : {0,1}^* -> {0,1}^*

({0,1}^* is the set of all finite strings (or words) containing only 0s and 1s.)

It is only necessary to consider such functions as have range {0,1}. These are
known as


Decision Problems

Summary
All computer programs may be viewed as computing functions
Since all computers employ binary notation, such functions are defined over sets
of binary strings.
In considering the question `what problems can be solved by computers' it is
sufficient to concentrate on decision problems
Hence the question

What problems can be solved by computers?
is equivalent to


What decision problems can be solved?

Decision Problems
Any binary string can be viewed as the representation of some natural number.
Thus for decision problems on binary strings we can concentrate on the set of
functions of the form



f : N  -> {0,1}

That is `problems' of the form

Input: n a natural number

Output: 1 if n satisfies some property; 0 if n does not satisfy it.

Examples:

EVEN: Returns 1 if n is an even number; 0 if n is an odd number
PRIME: Returns 1 if n is a prime number; 0 if n is a composite number.
ADA_PROGRAM: Returns 1 if n written in binary corresponds to the ASCII
representation of a syntactically correct ADA program; 0 otherwise.
Notice that the last example shows that we are not restricting attention to
merely `arithmetic' decision problems.

What is a `reasonable' algorithm?
In all programming systems

Programs have a finite number of instructions.
Programs make use of instructions and operations from a finite set of basic
operations
Programs should come to a halt for any valid input data.
An effective algorithm is one which meets these criteria.
Suppose we can show that an ADA program cannot be written which solves some
problem.

Why should one take this as evidence that


NO PROGRAM (ALGORITHM) AT ALL
exists for the problem?

In other words

How can it be argued that results on computability apply to all types of
computer? (Past, Present, and Future)

The answer is that such results are proved with respect to abstract models of
computation that capture all the essential elements of a typical computer. A
`typical computer' has

Memory
Input and Output Mechanisms
A processor handling a finite instruction set.
In an abstract model we can permit an infinite supply of memory. The only
restraint on an algorithm is that it can only use a finite amount of memory when
dealing with any single input (which, if the algorithm terminates, is all that
could be accessed).
This is not an unrealistic assumption: we do not want to classify a problem as
unsolvable just because on some real machines there may not be enough memory
available.

The first abstract model of computation was defined by Alan Turing in 1936. This
had:

An infinite memory of single bit cells.
Its `processor' dealt with only two types of `instruction'
A stop instruction which would return a result (0 or 1).
A test instruction which would read and write to a single bit of memory, examine
an adjacent memory location, and jump to another instruction.
Although the instruction set of the Turing Machine is extremely limited, Turing
machine programs can be written to solve any decision problem that can be solved
on a `normal' computer.
The Church-Turing Hypothesis


The class of decision problems that can be solved by any reasonable model of
computation is exactly the same as the class of decision problems that can be
solved by Turing machine programs.
Since 1936 several thousand different models of computation have been proposed,
e.g.
Post Systems
Markov Algorithms
lambda-calculus
Gödel-Herbrand-Kleene Equational Calculus
Horn Clause Logic
Unlimited Register Machines
ADA programs on unlimited memory machines
Every single one of these satisfies the Church-Turing hypothesis, i.e. anything
that can be solved by a Turing machine program can be programmed in any one of
the systems above, and vice versa.
The importance of the Church-Turing hypothesis is that it allows us, without any
loss of generality, to consider computability results solely with respect to


ADA programs on unlimited memory computers
Suppose P is some decision problem.

The Church-Turing hypothesis asserts


There is no Turing machine program that computes P

IF AND ONLY IF

There is no ADA program which solves P
Because we can (in principle) write a Turing machine program which can exactly
simulate the behaviour of any given ADA program.

Similarly, we can write an ADA program which can exactly simulate the behaviour
of any given Turing machine program."
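The notes quoted above reduce "what can computers solve?" to decision problems of the form f : N -> {0,1}.  Two of their own examples, written out directly (the trial-division test for PRIME is my naive choice of implementation, not the notes'):

```python
def even(n: int) -> int:
    """Decision problem EVEN: returns 1 if n is even, 0 if n is odd."""
    return 1 if n % 2 == 0 else 0

def prime(n: int) -> int:
    """Decision problem PRIME: returns 1 if n is prime, 0 if composite.

    Naive trial division up to sqrt(n), for illustration only.
    """
    if n < 2:
        return 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            return 0
        d += 1
    return 1

print(even(10), prime(10))   # 1 0
print(even(7), prime(7))     # 0 1
```

Both functions map a natural number to {0,1}, exactly the shape the notes use to frame computability questions.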


