Author: David Rasmussen
Date: 00:33:07 05/11/01
On May 10, 2001 at 10:18:21, Vincent Diepeveen wrote:

>On May 10, 2001 at 04:26:27, David Rasmussen wrote:
>
>>I can't believe how many people here make comments about complexity classes,
>>without really understanding the basics of it.
>>
>>That a problem is finite in size, doesn't make it O(1). For example, if I have a
>>fixed dataset of 10000 integers, it isn't O(1) to sort them, regardless of what
>
>Post office is sorting each element O(1) though.
>
>If all those integers are for example between 1 and 10, then
>all you need are 10 buckets and you can sort each element in O(1).
>
>So all of them are sorted in O(n) then!
>

I know. Radix Sort is the same.
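For anyone who hasn't seen the "post office" idea before, here is a minimal sketch in C of a counting/bucket sort, assuming the keys lie in a small fixed range (the 1..10 range, the array contents and the function name are illustrative, not taken from the original posts). Each element goes into its bucket in O(1), so one pass over n elements costs O(n + range), which is O(n) when the range is a constant.

    #include <stdio.h>

    #define RANGE 10    /* assumption: every key lies in 1..RANGE */

    /* Counting ("post office") sort: O(n + RANGE) time, i.e. O(n) for constant RANGE. */
    static void bucket_sort(int *a, int n)
    {
        int count[RANGE + 1] = {0};
        int out = 0;

        for (int i = 0; i < n; i++)        /* drop each key into its bucket: O(1) each */
            count[a[i]]++;

        for (int k = 1; k <= RANGE; k++)   /* read the buckets back in increasing order */
            while (count[k]-- > 0)
                a[out++] = k;
    }

    int main(void)
    {
        int a[] = {3, 1, 10, 7, 3, 2};
        int n = (int)(sizeof a / sizeof a[0]);

        bucket_sort(a, n);
        for (int i = 0; i < n; i++)
            printf("%d ", a[i]);           /* prints: 1 2 3 3 7 10 */
        printf("\n");
        return 0;
    }

Note that this does not make sorting a fixed set of 10000 integers "O(1)": the O(n) bound is still a statement about how the work grows with n, which is exactly the point below.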
>>I do. I can sort them in n*log(n) time or n*n time or n! time or time
>>asymptotical to ackermanns function for that matter. In fact, it doesn't even
>>have any meaning to say that it takes O of anything to sort a fixed dataset. O
>>notation is only about asymptotic behavior. So even if my actual dataset right
>>now is fixed, any mentioning of time complexity talks about how the algorithm
>>used, would perform asymptotically as the dataset grows. It's not irrelevant
>>whether I use an n*log(n) algorithm or an n*n algorithm, just because my dataset
>>is fixed.
>
>It does matter. If i sort an automatic generated openingsbook
>in O(n*n), then it would be still busy sorting, as the book is
>10 million moves. so that would be 100 billion operations to sort (don't
>confuse 'instruction' here with what an instruction is on the machine).
>
>However i use Heapsort so i do it in n log n.
>
>Order DOES matter in chess.
>

What are you talking about? I haven't said that it didn't matter. I said that it _did_ matter. I said it is "not irrelevant". That means that it is relevant, i.e. that it matters.

>Suppose getting 10 ply i have a branching factor of 10.
>
>That's 10^10 then to get 10 ply. Pretty hard to get 10 ply then
>as i need
>
>10 000 000 000 nodes to get there.
>
>If my b.f. is 4 then the exponent is 4 so it's 4^10 = 2^20 = 1024^2 = 1 million
>
>Saves factor 1000. Orders do matter in chess!
>
>>There is a wellknown "trick" from concurrent programming, to use n agents to
>>sort a dataset of size n. This would reportedly lower the complexity to O(1),
>>when sorting. Such a statement is faulty, however, because complexity talks
>
>Such a statement is not fault if the range of the elements is small,
>see the post office trick.
>

Yes it is, because, as I said (and this is a very basic notion in complexity analysis), you can't talk about complexity on a fixed dataset. By definition, you have to look at what happens as the dataset grows. So you can't say "this is O(1) because my dataset is small"; by definition, that doesn't make sense.

>>about what happens when the dataset grows. In this case, when the dataset grows,
>>the complexity can _still_ not get below O(n), and probably not below
>>O(n*log(n)).
>
>There is a huge difference between O(n) and O(n log n).
>

I know. Have I said otherwise?

>>In the same sense, even if chess is finite (which it is), it is still not O(1)
>>to solve it, because you will have to look at what happens when the dataset
>>grows. In this case, what happens when e.g. the chessboard grows? With any known
>>algorithm, the complexity grows as O(a^n). If anyone have an algorithm that is
>>in a complexity class lower than this, I would really like to know.
>
>Basically what you say is that if we can search 10^40 positions that
>we can't solve chess still because 10^40 positions is not enough.
>

No, that's not what I'm saying. I'm saying that you can't talk about complexity on a fixed set. But that doesn't mean that you can't take an exponential algorithm, run it on a "fast enough" computer, and be done "quickly". The point of complexity analysis is exactly that it abstracts away the notions of fast and slow.

>However if you don't care to solve it in O(n*n) then you would need
>10^40 whereas i need less!
>

What are you talking about?

>>Tablebases are not a solution, since it takes exponential time to generate them.
>
>Aha here is the mistake. Suppose we have a machine with 10^40 RAM.
>How many operations do we need to make a tablebase?
>

I don't know how many operations you need, and I don't care. That's not what we're talking about. We're talking about complexity. But if you're asking how the number of operations grows as the dataset grows (i.e. complexity), then it is exponential. In our case the dataset doesn't grow, but that is irrelevant: if you want to talk about complexity, you have to look at the data size growing. Otherwise you're just talking about runtime.

>No it isn't exponential time what we need to generate them!
>

Yes, it is.

>What we need is 10^40 x overhead.
>
>The overhead are basically a pass or 100 if we care for 50 move rule.
>
>So that would make generating them like 10^42 operations or so.
>Each operation is is very small. It involves generating index and
>looking up in memory.
>
>So it definitely is *not* 10^(40.i) where i > 1 is not a broken number!
>

You seem to be analyzing runtime and not to understand the basics of complexity analysis. Take a look at

http://www.icdchess.com/forums/1/message.shtml?169133
http://www.icdchess.com/forums/1/message.shtml?169134
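To make the runtime-versus-complexity distinction concrete, here is a toy calculation in C (the growth base and the number of retrograde passes are illustrative assumptions, not real chess figures). Retrograde tablebase generation does roughly a constant number of passes over the whole position set, so the work is linear in the number of positions; but the number of positions itself grows exponentially with the board size n, so as a function of n the work is still exponential.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double a = 10.0;       /* assumed growth base per unit of board size */
        const double passes = 100.0; /* assumed constant number of retrograde passes */

        for (int n = 8; n <= 16; n += 2) {
            double positions = pow(a, n);     /* ~ a^n positions                     */
            double work = passes * positions; /* linear in the number of positions,  */
                                              /* but exponential in the board size n */
            printf("n = %2d  positions ~ %.0e  work ~ %.0e\n", n, positions, work);
        }
        return 0;
    }

For the fixed 8x8 board that work is just one (astronomically large) constant, which is why it can look like "10^40 times some overhead"; the exponential only shows up when you let the board, i.e. the input size, grow.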