Computer Chess Club Archives



Subject: O(1) garbage

Author: David Rasmussen

Date: 01:26:27 05/10/01


I can't believe how many people here make comments about complexity classes
without really understanding the basics.

That a problem is finite in size doesn't make it O(1). For example, if I have a
fixed dataset of 10000 integers, sorting them isn't O(1), regardless of what I
do. I can sort them in n*log(n) time or n*n time or n! time, or in time
asymptotically proportional to Ackermann's function for that matter. In fact, it
isn't even meaningful to say that sorting a fixed dataset takes O of anything. O
notation is only about asymptotic behavior. So even if my actual dataset right
now is fixed, any mention of time complexity refers to how the algorithm used
would perform asymptotically as the dataset grows. It is not irrelevant whether
I use an n*log(n) algorithm or an n*n algorithm just because my dataset is
fixed.
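
To make this concrete, here is a minimal sketch in C (my own illustration, not
anything from the original discussion) that pits an O(n*n) insertion sort
against the C library's qsort, which is typically O(n*log(n)), on the same
random data. Double n a few times and the two growth rates separate quickly;
that divergence is exactly what the O labels describe.

    /* Minimal sketch: an O(n^2) sort vs. the library's O(n log n) qsort.
       The dataset being fixed does not make either one O(1); the labels
       describe how the running time scales as n grows. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static void insertion_sort(int *a, int n)      /* O(n^2) comparisons */
    {
        for (int i = 1; i < n; i++) {
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; j--; }
            a[j + 1] = key;
        }
    }

    static int cmp_int(const void *p, const void *q)
    {
        int x = *(const int *)p, y = *(const int *)q;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        for (int n = 10000; n <= 80000; n *= 2) {  /* double n, watch the gap */
            int *a = malloc(n * sizeof *a);
            int *b = malloc(n * sizeof *b);
            for (int i = 0; i < n; i++)
                a[i] = b[i] = rand();

            clock_t t0 = clock();
            insertion_sort(a, n);
            clock_t t1 = clock();
            qsort(b, n, sizeof *b, cmp_int);
            clock_t t2 = clock();

            printf("n=%6d  insertion: %.3fs  qsort: %.3fs\n", n,
                   (double)(t1 - t0) / CLOCKS_PER_SEC,
                   (double)(t2 - t1) / CLOCKS_PER_SEC);
            free(a);
            free(b);
        }
        return 0;
    }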

There is a well-known "trick" from concurrent programming: use n agents to sort
a dataset of size n, which reportedly lowers the complexity of sorting to O(1).
Such a statement is faulty, however, because complexity is about what happens
when the dataset grows. In this case, when the dataset grows, the time with the
straightforward agent-per-element scheme can _still_ not get below O(n), and the
total work of any comparison-based sort cannot get below O(n*log(n)).
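
One concrete instance of that trick is odd-even transposition sort: one
comparator "agent" per adjacent pair. Even if every comparison within a phase
really does happen simultaneously, n phases are still needed in the worst case,
so the parallel running time stays O(n). A minimal sketch (mine, simulating the
parallel phases serially) shows the structure:

    /* Odd-even transposition sort: with one agent per element, all the
       comparisons in a phase could run at once, yet n phases are still
       required in the worst case, so parallel time is O(n), not O(1). */
    #include <stdio.h>

    static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

    static void odd_even_sort(int *a, int n)
    {
        for (int phase = 0; phase < n; phase++)         /* O(n) parallel steps */
            for (int i = phase % 2; i + 1 < n; i += 2)  /* concurrent in principle */
                if (a[i] > a[i + 1])
                    swap(&a[i], &a[i + 1]);
    }

    int main(void)
    {
        int a[] = {5, 2, 9, 1, 7, 3, 8, 4, 6, 0};
        int n = sizeof a / sizeof a[0];
        odd_even_sort(a, n);
        for (int i = 0; i < n; i++)
            printf("%d ", a[i]);
        printf("\n");
        return 0;
    }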

In the same sense, even though chess is finite (which it is), solving it is
still not O(1), because you have to look at what happens when the problem grows.
In this case, what happens when, e.g., the chessboard grows? With any known
algorithm, the complexity grows as O(a^n). If anyone has an algorithm in a lower
complexity class than this, I would really like to know.
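
As a back-of-the-envelope check on that growth, a full-width search to depth d
with average branching factor b visits on the order of b^d nodes; b = 35 is the
rough figure usually quoted for 8x8 chess, and both b and d only go up as the
board grows. A throwaway sketch:

    /* Rough node counts for full-width search: about b^d nodes at depth d.
       b = 35 is the conventional estimate for 8x8 chess; a larger board
       would push both b and d higher. Compile with -lm. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double b = 35.0;                  /* assumed average branching factor */
        for (int d = 10; d <= 80; d += 10)
            printf("depth %2d: ~10^%.0f nodes\n", d, d * log10(b));
        return 0;
    }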

Tablebases are not a solution either, since generating them takes exponential
time: they must store a result for every position, and the number of positions
grows exponentially with the size of the board.
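
A crude bound already shows why: every square is either empty or holds one of 6
piece types in 2 colors, so an n x n board has at most 13^(n*n) positions, and
retrograde generation must touch each position at least once. A throwaway
sketch of that bound (the 13-states-per-square count is my simplification and
ignores legality):

    /* Upper bound on positions for an n x n board: 13 states per square
       (empty, or 6 piece types in 2 colors) gives 13^(n*n) positions,
       exponential in the board area. Compile with -lm. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        for (int n = 4; n <= 12; n += 2)
            printf("%2dx%-2d board: <= 10^%.0f positions\n",
                   n, n, n * n * log10(13.0));
        return 0;
    }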


