Author: Ricardo Gibert
Date: 16:23:45 05/09/01
On May 09, 2001 at 18:16:48, Dann Corbit wrote:

>Some people are insisting that finite inputs mean O(1).
>
>My insistence is that the big-O behavior should model the difference from n to
>n+1 and from n to n-1 (past some point n0).

Definition: f(n) = O(g(n)) (read as "f of n equals big oh of g of n") iff there exist two positive constants c and n_0 such that |f(n)| <= c*|g(n)| for all n >= n_0.

I see nothing in the above definition that talks about "n to n+1 and from n to n-1". I do see that n is unbounded. I also note that in chess, n *is* bounded, unless you generalize 8*8 chess to n*n chess, in which case n is unbounded. If you make such a generalization, you can conclude that n*n chess is O(exp(n)). You can then instantiate that to the 8*8 case to obtain a usable result, but please do not say that 8*8 chess is O(exp(n)); rather, say that n*n chess is O(exp(n)). I don't see how this can be made any clearer. I'm really at a loss as to why you cannot interpret formal definitions correctly.

>There are no real algorithms with infinite inputs. These so-called algorithms
>can never terminate.

I never said anything about infinite inputs. "Unbounded" is not the same as "infinite". Get the terminology straight or you will get nowhere.

>Because we cannot agree on definitions we can never come to an agreement. My
>definitions are the classical definitions used in all the literature. The
>alternative definitions make for interesting debate, but since they have no
>value at prediction, they have no practical value.
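To make the definition concrete (my own illustration, not from the thread): take f(n) = 3n + 5 and g(n) = n. The witnesses c = 4 and n_0 = 5 satisfy the definition, since 3n + 5 <= 4n exactly when n >= 5. A quick sketch that spot-checks the inequality:

```python
# Hypothetical example: witness the definition f(n) = O(g(n)), i.e.
# there exist c > 0 and n_0 such that |f(n)| <= c*|g(n)| for all n >= n_0.

def f(n):
    return 3 * n + 5

def g(n):
    return n

c, n0 = 4, 5  # chosen witnesses: 3n + 5 <= 4n holds whenever n >= 5

# A finite spot-check (no program can test every n, but the algebra
# 3n + 5 <= 4n  <=>  n >= 5 guarantees the bound for all n >= n0).
assert all(abs(f(n)) <= c * abs(g(n)) for n in range(n0, 10_000))
print("f(n) = O(g(n)) witnessed by c = 4, n_0 = 5")
```

Note the check runs over an unbounded variable n restricted to a finite test range; the definition itself quantifies over all n >= n_0, which is exactly the "unbounded, not infinite" distinction above.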