Author: Jeremiah Penery
Date: 19:02:48 05/09/01
On May 09, 2001 at 17:58:08, Uri Blass wrote:

>The definition that I remember from university for O(1) is:
>
>f(n) is O(1) if there is a constant C such that |f(n)| < C for every finite n.
>(|f(n)| is the absolute value of f(n).)
>
>We learn from this definition that if n is bounded then f(n) is O(1), because you
>can choose C = 1 + max(|f(1)|, |f(2)|, ..., |f(n)|).
>
>An algorithm that helps to solve only one problem that can be solved in a finite
>time is always O(1) by this definition.
>An algorithm that helps to solve a finite set of problems, each of which can be
>solved in a finite time, is also O(1) by this definition.
>
>1) If you look at the problems "search to depth n" in chess, there is only a
>finite number of them, because searching to depth 9^(9^9) gives the same result
>as searching to depth 9^(9^9)+1, even without the 50-move rule.
>
>Conclusion: Searching to depth n in chess is O(1).

No. The idea behind O (for tree-searching algorithms) is to find out how much longer it will take to search to depth N+1 than to depth N. Both depths will be completed in a finite time - that doesn't matter. You want to be able to estimate _how much_ longer N+1 will take than N. If you know the behavior of N, N+1, N+2, etc., you can estimate (using O(f(n))) how long it will take for any arbitrary N. If you use O(1), you can never get an estimate, and the definition is useless.
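As a rough illustration of that point (this sketch is not from the original post; the branching factor b = 6 and the per-node cost are made-up numbers): if a search to depth d visits roughly b^d nodes, then an O(b^d) bound tells you the depth d+1 search should take about b times as long as the depth d search, whereas an O(1) bound tells you nothing about the ratio.

    # Hypothetical model, assuming nodes ~ b^depth with effective branching
    # factor b and a fixed cost per node; values chosen only for illustration.
    def estimated_time(depth, b=6.0, time_per_node=1e-6):
        return (b ** depth) * time_per_node

    for d in range(5, 9):
        ratio = estimated_time(d + 1) / estimated_time(d)
        print(f"depth {d} -> {d + 1}: about {ratio:.1f}x longer")

    # An O(1) bound only says the time stays below *some* constant; it gives
    # no way to predict the depth-(d+1) cost from the depth-d cost.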