Computer Chess Club Archives


Subject: Re: Never Say "Impossible"

Author: Robert Hyatt

Date: 10:16:42 05/09/01

On May 09, 2001 at 12:25:50, Dann Corbit wrote:

>On May 09, 2001 at 09:19:37, Sven Reichard wrote:
>
>>On May 09, 2001 at 03:39:42, Uri Blass wrote:
>>
>>>Algorithms to solve problems like chess, Go, or other games can be described as
>>>O(1).
>>>
>>>Uri
>>
>>At the risk of repetition and adding to the confusion, here are my two bits:
>>
>>The complexity of an algorithm is usually given as a function of the input size
>>n; moreover we usually consider its behavior as n goes to infinity.  Since the
>>complete information about a Chess position can be described by input that is
>>bounded in size (just give the moves that lead to the position; I think we all
>>agree that no game can last more than, let's be generous, 10,000 moves), it
>>makes no real sense to consider the theoretical complexity of the Chess problem.
>
>This isn't too far from it.  However, what we consider is what happens to the
>algorithm as n "gets large."  What "gets large" means is purely a function of:
>1.  Our available computer power
>2.  The type of algorithm
>3.  How desperately we want an answer.
>
>For f(n) = a constant, n can be very, very large.
>For f(n) = n, n can be in the billions or perhaps even trillions (consider the
>RSA factoring efforts, the GIMPS project, SETI, etc.)
>For f(n) = n*log(n) we can process very large sets
>For f(n) = n^3, we can process large sets (e.g. solving a one-million-by-one-
>million matrix requires about 10^18 time steps, so a petaflop computer would
>need on the order of 10^3 seconds.  Calculating nuclear explosions in core,
>etc. springs to mind)
>For f(n) = n! or exp(n), only very, very tiny problems are workable.  Even
>problems that are polynomial in time can be extremely unwieldy if the
>highest-order term has a large exponent.
>
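
To put rough numbers on the list above, here is a minimal C sketch; the
budget of 10^15 operations (about a petaflop machine running for one second)
is an assumption chosen for illustration, not a figure from the post:

    /* Largest workable n for several growth rates under a fixed budget
     * of raw operations.  Compile with: cc growth.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double budget = 1e15;    /* assumed petaflop-second budget */

        printf("f(n) = n       : n ~ %.3g\n", budget);

        /* invert n * log2(n) = budget by bisection */
        double lo = 2, hi = budget;
        while (hi - lo > 1) {
            double mid = (lo + hi) / 2;
            if (mid * log2(mid) <= budget) lo = mid; else hi = mid;
        }
        printf("f(n) = n log n : n ~ %.3g\n", lo);

        printf("f(n) = n^3     : n ~ %.3g\n", cbrt(budget));
        printf("f(n) = 2^n     : n ~ %.0f\n", log2(budget));
        return 0;
    }

The cliff is the whole point: linear work tolerates n near 10^15, cubic work
only n near 10^5, and exponential work an n of about 50.
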
>Now that we are armed with data about how our algorithm performs, we can decide
>how large a problem we can solve.  That is the very raison d'être of algorithm
>analysis.  We want the answer to a single question:
>"What problem size is workable?"
>
>>What does make sense is to consider the complexity of an algorithm for any
>>Chess-like game (2p0s, i.e. two-player zero-sum: TicTacToe, Othello, Go, ...).
>>And afaik, these problems are known to be worse than NP.
>
>tic-tac-toe is probably not a good example.  Though there are three states for
>each box, there are only nine boxes.  There are fewer than 3^9 final output
>states, since many are illegal, like these:
>
>O | O | O
>--+---+--
>O | O | O
>--+---+--
>O | O | O
>
>O | O | O
>--+---+--
>X | X | X
>--+---+--
>  |   |
>
>Now, this is a microcosm of the whole chess argument, I think, so why don't we
>take a moment to consider the terrifically easier game of tic-tac-toe.
>
>I might write a tic-tac-toe algorithm that takes random moves and keeps trying
>until it has proven a win.  Such an algorithm could be so horrible that it
>basically never completes, even with a microscopic problem set like this.  That
>algorithm is *not* O(1).  If there were a reasonable time limit imposed, that
>tic-tac-toe engine would lose every game on time.
>
>I can do a minimax-search with alpha-beta.  This will complete (for all intents
>and purposes) instantaneously on a modern computer.  That is an artifact of the
>size of the tree, and does *NOT* affect the complexity of the search.  It just
>so happens that when the tree is tiny, it completes very rapidly.  In other
>words, we have a tiny input -- hence it is practical and even efficient.  You
>might argue that such an algorithm is pragmatically O(1) {and indeed, in this
>case, I might be forced to agree} but such an algorithm is ALSO O(exp(n)) (and
>O(f(n)) for any other function which grows faster than the constant function).
>
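
For concreteness, here is a minimal negamax-with-alpha-beta sketch for
tic-tac-toe; it illustrates the algorithm described above, but the code,
board layout, and names are this sketch's own, not anything from the post:

    /* Tic-tac-toe solved by negamax with alpha-beta pruning.  The board
     * is 9 chars: 'X', 'O', or ' '; X moves first from the empty board. */
    #include <stdio.h>

    static const int lines[8][3] = {
        {0,1,2},{3,4,5},{6,7,8},{0,3,6},{1,4,7},{2,5,8},{0,4,8},{2,4,6}
    };

    static int winner(const char *b)   /* +1 if X has a line, -1 if O */
    {
        for (int i = 0; i < 8; i++) {
            char c = b[lines[i][0]];
            if (c != ' ' && c == b[lines[i][1]] && c == b[lines[i][2]])
                return c == 'X' ? 1 : -1;
        }
        return 0;
    }

    /* score from the side-to-move's point of view: -1, 0, or +1 */
    static int search(char *b, char me, char opp, int alpha, int beta)
    {
        int w = winner(b);             /* a line means the last mover won */
        if (w) return (me == 'X') ? w : -w;
        int moved = 0;
        for (int i = 0; i < 9; i++) {
            if (b[i] != ' ') continue;
            moved = 1;
            b[i] = me;
            int score = -search(b, opp, me, -beta, -alpha);
            b[i] = ' ';
            if (score > alpha) alpha = score;
            if (alpha >= beta) break;  /* alpha-beta cutoff */
        }
        return moved ? alpha : 0;      /* board full, no winner: draw */
    }

    int main(void)
    {
        char board[10] = "         "; /* nine blanks */
        printf("value of the empty board: %d\n",
               search(board, 'X', 'O', -2, 2));   /* prints 0: a draw */
        return 0;
    }

It answers instantly, yet the algorithm it implements is still exponential
in the depth; the speed is purely an artifact of the tiny tree.
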
>For tic-tac-toe you could completely calculate every move beforehand, from the
>bare board to every possible outcome, with brute force, and store the table.
>This algorithm *is* Theta(1) in time at least [simple table lookup for every
>board position that is legal].
>
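
Here is a sketch of that precomputed-table idea; the base-3 encoding and the
names are assumptions of this illustration, and filling the table is left to
an offline pass (e.g. the search above run over every legal position):

    /* Theta(1) tic-tac-toe: one byte per raw board encoding, one indexed
     * read per query.  3^9 = 19,683 codes cover every cell coloring. */
    #include <stdint.h>
    #include <stdio.h>

    #define POSITIONS 19683           /* 3^9 */

    static int8_t value[POSITIONS];   /* -1 / 0 / +1, filled offline */

    /* base-3 key: ' ' -> 0, 'X' -> 1, 'O' -> 2, cell i weighted 3^i */
    static int encode(const char *b)
    {
        int key = 0;
        for (int i = 8; i >= 0; i--)
            key = key * 3 + (b[i] == ' ' ? 0 : b[i] == 'X' ? 1 : 2);
        return key;
    }

    int lookup(const char *b)         /* the whole "search": one read */
    {
        return value[encode(b)];
    }

    int main(void)
    {
        /* demo only: mark one position by hand; a real table would be
         * filled once by brute force before any lookups */
        char xwins[10] = "XXXOO    ";
        value[encode(xwins)] = 1;
        printf("stored value: %d\n", lookup(xwins));
        return 0;
    }

Many of the 19,683 codes are unreachable in play, as the illegal boards
above show, but wasting those slots is exactly what keeps every query a
single indexed read.
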
>>It might be more relevant to consider the problem as it is usually posed by us
>>programmers: Given a minimax tree with nominal depth n (there might be
>>extensions and transpositions), determine its minimax value. If we leave out the
>>transpositions, this is still worse than NP. If we factor in transpositions and
>>the correlations between "similar positions", the complexity goes down, but the
>>complexity analysis becomes a total mess. But my guess is that there is no
>>algorithm for this that is P or even NP. Personally, I'd be happy with an
>>algorithm that is O(n^3) as opposed to O(n^12) :)
>
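
Rough numbers for those tree sizes, assuming the usual branching factor of
about 35 for chess (the figures below are this sketch's arithmetic, not
Sven's): plain minimax visits about b^d nodes, while perfectly ordered
alpha-beta visits roughly b^(d/2), still exponential, just with half the
exponent.

    /* node-count back-of-the-envelope for b = 35 */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double b = 35.0;        /* assumed branching factor */
        for (int d = 4; d <= 12; d += 4)
            printf("depth %2d: minimax ~ %.1e nodes, alpha-beta ~ %.1e\n",
                   d, pow(b, d), pow(b, d / 2.0));
        return 0;
    }

Transpositions and extensions shift these numbers around, but they do not
pull the search out of the exponential regime, which is why the full
analysis turns into the mess described above without changing the
conclusion.
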
>Simple: come up with a branching factor that is almost identically 1.  Do like
>the chessplayer {I forget who} who said, "I only consider one move -- the best
>one!"
>
>;-)
>
>You have to have a fully formed tree to start with.  For deep searches, half the
>pain is forming and storing the data.


Another idea...  find a way to search with a branching factor of less than 1.
Then all these "non-exponential" guys will be right.  :)



