Computer Chess Club Archives



Subject: Re: O(1) garbage

Author: Vincent Diepeveen

Date: 07:18:21 05/10/01



On May 10, 2001 at 04:26:27, David Rasmussen wrote:

>I can't believe how many people here make comments about complexity classes,
>without really understanding the basics of it.
>
>That a problem is finite in size, doesn't make it O(1). For example, if I have a
>fixed dataset of 10000 integers, it isn't O(1) to sort them, regardless of what

The post office sorts each element in O(1), though.

If all those integers are, for example, between 1 and 10, then
all you need is 10 buckets and you can place each element in O(1).

So all of them get sorted in O(n)!
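
To make the "post office" trick concrete, here is a minimal counting-sort
sketch in C. It assumes the keys all lie in the small fixed range 1..10,
as in the example above; the function name and element type are just for
illustration:

    /* counting sort for keys known to lie in 1..10 */
    void counting_sort(int *a, int n) {
        int count[11] = {0};           /* one bucket per possible key */
        for (int i = 0; i < n; i++)
            count[a[i]]++;             /* O(1) per element */
        int k = 0;
        for (int key = 1; key <= 10; key++)
            while (count[key]-- > 0)
                a[k++] = key;          /* write the keys back in order */
    }

Total work is O(n + range), which is O(n) when the range is a fixed constant.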

>I do. I can sort them in n*log(n) time or n*n time or n! time or time
>asymptotic to Ackermann's function, for that matter. In fact, it doesn't even
>have any meaning to say that it takes O of anything to sort a fixed dataset. O
>notation is only about asymptotic behavior. So even if my actual dataset right
>now is fixed, any mentioning of time complexity talks about how the algorithm
>used, would perform asymptotically as the dataset grows. It's not irrelevant
>whether I use an n*log(n) algorithm or an n*n algorithm, just because my dataset
>is fixed.

It does matter. If I sort an automatically generated opening book
in O(n*n), it would still be busy sorting, as the book is
10 million moves. That would be about 10^14 operations to sort (don't
confuse an 'operation' here with a machine instruction).

However, I use heapsort, so I do it in O(n log n): for n = 10^7 that is
about 2.3 * 10^8 comparisons instead of 10^14.
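
For reference, a minimal heapsort sketch in C (not the actual book code
from Diep; the element type is simplified to plain ints):

    #include <stddef.h>

    static void sift_down(int *a, size_t root, size_t end) {
        while (2 * root + 1 <= end) {
            size_t child = 2 * root + 1;          /* left child */
            if (child < end && a[child] < a[child + 1])
                child++;                          /* right child is bigger */
            if (a[root] >= a[child])
                return;                           /* heap property holds */
            int tmp = a[root]; a[root] = a[child]; a[child] = tmp;
            root = child;
        }
    }

    void heapsort_ints(int *a, size_t n) {
        if (n < 2) return;
        for (size_t i = (n - 2) / 2 + 1; i-- > 0; )
            sift_down(a, i, n - 1);               /* build max-heap, O(n) */
        for (size_t end = n - 1; end > 0; end--) {
            int tmp = a[0]; a[0] = a[end]; a[end] = tmp;
            sift_down(a, 0, end - 1);             /* n-1 sifts of O(log n) */
        }
    }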

Order DOES matter in chess.

Suppose that to reach 10 ply I have a branching factor of 10.

That's 10^10 nodes to reach 10 ply. Pretty hard to get there, as I need

10,000,000,000 nodes.

If my b.f. is 4, then it's 4^10 = 2^20 = 1024^2, about 1 million.

That saves a factor of roughly 10,000. Order does matter in chess!
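
A quick sanity check of those numbers, assuming a hypothetical uniform
tree of the given branching factor:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        printf("bf 10, 10 ply: %.3g nodes\n", pow(10, 10));  /* 1e10    */
        printf("bf  4, 10 ply: %.3g nodes\n", pow(4, 10));   /* ~1.05e6 */
        printf("ratio: %.0f\n", pow(10, 10) / pow(4, 10));   /* ~9537   */
        return 0;
    }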

>There is a well-known "trick" from concurrent programming, to use n agents to
>sort a dataset of size n. This would reportedly lower the complexity to O(1),
>when sorting. Such a statement is faulty, however, because complexity talks

Such a statement is not faulty if the range of the elements is small;
see the post office trick above.

>about what happens when the dataset grows. In this case, when the dataset grows,
>the complexity can _still_ not get below O(n), and probably not below
>O(n*log(n)).

There is a huge difference between O(n) and O(n log n): for n = 10^7, the
log n factor alone is already about 23.

>In the same sense, even if chess is finite (which it is), it is still not O(1)
>to solve it, because you will have to look at what happens when the dataset
>grows. In this case, what happens when e.g. the chessboard grows? With any known
>algorithm, the complexity grows as O(a^n). If anyone has an algorithm that is
>in a complexity class lower than this, I would really like to know.

Basically what you are saying is that even if we could search 10^40
positions, we still couldn't solve chess, because 10^40 positions is not
enough.

However, the order still matters: if you were to solve it with an O(n*n)
method, you would need the full 10^40, whereas with a better order I need
less!

>Tablebases are not a solution, since it takes exponential time to generate them.

Aha, here is the mistake. Suppose we have a machine with 10^40 words of RAM.
How many operations do we need to build a tablebase?

No, it is *not* exponential time that we need to generate them!

What we need is 10^40 x overhead.

The overhead is basically a pass, or a hundred passes if we care about the
50-move rule.

So that would make generating them something like 10^42 operations.
Each operation is very small: it involves generating an index and
looking it up in memory.

So it definitely is *not* 10^(40*i) for some whole number i > 1; the
exponent does not grow!
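
To see why it is a bounded number of passes rather than an exponential
search, here is a heavily simplified fixed-point sketch of tablebase
generation in C. Everything here is hypothetical scaffolding (the
successors() helper, the indexing, draws, and the 50-move rule are all
omitted); the point is only the cost shape: a modest number of passes,
each touching every position once:

    enum { UNKNOWN, WIN, LOSS };       /* value for the side to move    */

    extern unsigned long long NPOS;    /* number of indexed positions   */
    extern unsigned char *tb;          /* one entry per position        */
    /* hypothetical helper: successor positions, re-indexed so that the
       stored value is from the opponent's point of view */
    extern int successors(unsigned long long p,
                          unsigned long long *s, int max);

    void generate(void) {              /* mates are seeded as LOSS first */
        int changed = 1;
        while (changed) {              /* ~one pass per ply-to-mate      */
            changed = 0;
            for (unsigned long long p = 0; p < NPOS; p++) {
                if (tb[p] != UNKNOWN) continue;
                unsigned long long s[128];
                int n = successors(p, s, 128), all_win = (n > 0);
                for (int i = 0; i < n; i++) {
                    if (tb[s[i]] == LOSS) { tb[p] = WIN; break; }
                    if (tb[s[i]] != WIN) all_win = 0;
                }
                if (tb[p] == UNKNOWN && all_win) tb[p] = LOSS;
                if (tb[p] != UNKNOWN) changed = 1;
            }
        }
    }

The number of passes is bounded by the longest distance to mate, so the
total work is (number of positions) x (a modest constant), not (number of
positions) raised to a higher power.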

Best regards,
Vincent



