Author: Eugene Nalimov
Date: 12:11:41 05/09/01
Ok, here is a simple example: let's consider sorting, but limit the input by (a) limiting the # of digits in each number (e.g. 1024 bits), and (b) limiting the # of elements in the sequence we want to sort (e.g. max 10**40 elements). Yes, the theoretical # of operations necessary to sort such a sequence is limited. But, nevertheless, you can still say that

* insertion sort is O(N*N),
* quick sort is O(N*ln(N)) on average, but O(N*N) in the worst case,
* heap sort is O(N*ln(N)) in the worst case.

So I don't see why the fact that something is finite changes anything in the complexity of tree search. (A sketch illustrating the list above follows the quoted thread below.)

Eugene

On May 09, 2001 at 14:53:19, Ricardo Gibert wrote:

>On May 09, 2001 at 14:30:36, Eugene Nalimov wrote:
>
>>Input in our case is (position, depth), and the question is "how much longer will it take to evaluate (position, depth+1)?"
>>
>>Eugene
>
>The time it takes to return an evaluation in both cases is bounded by a constant, since the time it takes to solve chess is bounded by a constant (quite huge, of course).
>
>If the size of the chessboard were a finite but unbounded value, I would agree with you. In that case, no such constant exists. In actuality, chess is restricted to an 8x8 board.
>
>In all the definitions of big-O notation I have seen, it is assumed that n is unbounded for O(f(n)). This is not the case for chess.
>
>Accessing your Nalimov TBs is a good example of an algorithm that is O(1). In theory, there is no reason why you can't construct 32-piece EGTBs. In practice, it is a different story.
>
>>On May 09, 2001 at 14:11:25, Ricardo Gibert wrote:
>>
>>>On May 09, 2001 at 14:08:10, Ricardo Gibert wrote:
>>>
>>>>On May 09, 2001 at 14:04:55, Eugene Nalimov wrote:
>>>>
>>>>>Uri, there is a branch of mathematics (not even computer science, just ordinary mathematics) that studies the complexity of algorithms. Algorithms were used in mathematics long before computers appeared; for example, the GCD algorithm was known to the classical Greeks.
>>>>>
>>>>>A *very* rough explanation of big-O notation is: you have an algorithm that requires M operations (or steps, or machine instructions, or clock cycles, etc.) when the input is N elements long. You increase the length of the input; how many operations will be necessary now? That has *nothing* to do with the fact that the majority of practically used algorithms will terminate in a finite number of steps when the input is finite.
>>>>>
>>>>>Eugene
>>>>
>>>But [in chess] the length of the input does not increase, and that's the whole point.
>>>>
>>>>>On May 09, 2001 at 13:37:58, Uri Blass wrote:
>>>>>
>>>>>>On May 09, 2001 at 13:23:25, Dann Corbit wrote:
>>>>>>
>>>>>>>On May 09, 2001 at 13:18:52, Uri Blass wrote:
>>>>>>>
>>>>>>>>On May 09, 2001 at 11:27:46, Dann Corbit wrote:
>>>>>>>>
>>>>>>>>>On May 09, 2001 at 10:12:30, Ricardo Gibert wrote:
>>>>>>>>>
>>>>>>>>>>On May 09, 2001 at 02:00:25, Dann Corbit wrote:
>>>>>>>>>>
>>>>>>>>>>>For those of you who don't want to perform your own web search, just choose one of these:
>>>>>>>>>>>
>>>>>>>>>>>http://hissa.nist.gov/dads/HTML/bigOnotation.html
>>>>>>>>>>>http://bio5495.wustl.edu/textbook-html/node15.html
>>>>>>>>>>>http://umastr.math.umass.edu/~holden/Math136-99_projects/Amstutz-OBoyle-Petravage/big-o.html
>>>>>>>>>>>http://www.eecs.harvard.edu/~ellard/Q-97/HTML/root/node8.html
>>>>>>>>>>>http://classes.monterey.edu/CST/CST338-01/world/BigO.html
>>>>>>>>>>>http://shalim.csustan.edu/~john/Classes/CS3100_DataStructures/Previous_Semesters/1999_04_Fall/Examples/big-O
>>>>>>>>>>>
>>>>>>>>>>>CS:201, FCOL!
>>>>>>>>>>
>>>>>>>>>>Big-O notation is used to describe asymptotic behavior. It is commonly used to describe the "running time" of an algorithm. If an algorithm is O(f(n)), n is understood to be finite but *unbounded*. (For some reason, "unbounded" gets confused with infinity. This is an error, but let's not get into that; it isn't relevant here.)
>>>>>>>>>>
>>>>>>>>>>In chess, n is bounded. This is a critical distinction, which means chess is *not* NP.
>>>>>>>>>
>>>>>>>>>GREAT! Then it's computable. What's the answer: win, loss, or draw?
>>>>>>>>>;-)
>>>>>>>>
>>>>>>>>I see no point in continuing to argue. The question is simply a question of definition. I did not say that it is easy to solve.
>>>>>>>>
>>>>>>>>I use the definition of NP only for problems where n is not bounded; otherwise the mathematical definition says that it is O(1) (there is a constant, and the only problem is that it is too large).
>>>>>>>>
>>>>>>>>I can agree that chess is practically O(exp(n)) and not polynomial for practical purposes, but that does not change the fact that by the mathematical definition it is O(1).
>>>>>>>
>>>>>>>This is simply wrong. I guess we are at an impasse.
>>>>>>>
>>>>>>>>You can say that sorting is also O(1) from a theoretical point of view if you look at sorting that is done by a computer.
>>>>>>>
>>>>>>>Show me any algorithms book that says any sorting algorithm is O(1).
>>>>>>
>>>>>>Sorting from a theoretical point of view is not O(1) because the size of the input is not bounded. Sorting done by a computer has bounded size, and every problem of bounded size is O(1) by the definition that I know.
>>>>>>
>>>>>>A problem can be O(n) only if n is not bounded by a finite bound, by the definition that I use.
>>>>>>
>>>>>>I look at sorting from a mathematical point of view and not from a computer point of view, and this is the reason that I said that it is not O(1).
>>>>>>
>>>>>>Uri
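The sketch referenced above: a minimal C program, written as an illustration rather than taken from anyone in this thread, that counts insertion-sort comparisons on inputs of bounded size. Even though N never comes anywhere near a cap like 10**40, the comparison count grows roughly fourfold each time N doubles, which is exactly the O(N*N) behavior in the list; the particular sizes and the comparison counter are arbitrary choices for the demonstration.

/* Illustrative sketch: insertion sort with a comparison counter.
 * The input sizes below are bounded, yet the counter still grows
 * roughly fourfold each time N doubles, i.e. O(N*N) behavior. */
#include <stdio.h>
#include <stdlib.h>

static long insertion_sort(int *a, int n)
{
    long comparisons = 0;
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0) {
            comparisons++;              /* one key comparison */
            if (a[j] <= key)
                break;
            a[j + 1] = a[j];            /* shift larger element right */
            j--;
        }
        a[j + 1] = key;
    }
    return comparisons;
}

int main(void)
{
    for (int n = 1000; n <= 8000; n *= 2) {
        int *a = malloc(n * sizeof *a);
        if (a == NULL)
            return 1;
        for (int i = 0; i < n; i++)
            a[i] = rand();              /* random input; a reversed array would give the worst case */
        printf("N = %5d  comparisons = %9ld\n", n, insertion_sort(a, n));
        free(a);
    }
    return 0;
}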