Computer Chess Club Archives


Subject: Computational question for mathematicians, philosophers & computer-geeks

Author: Axel Schumacher

Date: 15:59:18 03/02/05


Hi all,
I have two questions regarding the storage requirements for information; I hope somebody can help me answer them. Please excuse me if my questions are stupid.

1. For each data point (e.g. let's say the position of a pawn on the chessboard) one requires 1 bit (either 0 or 1), right? However, that bit alone does not say where the pawn is located. So how much data has to be stored to describe, e.g., the position of a pawn?

2. How much calculation power is needed to calculate a certain amount of data? I know this may sound a little abstract and, of course, it depends on the time factor. But let's say you have 1 terabyte of data in a spreadsheet. What computing power (e.g. how many average desktop computers) is needed to make simple algebraic calculations with such a data table?
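A rough way to frame it (Python back-of-envelope; the throughput figures below are assumptions, not measurements):

    # Assume 1 terabyte of 8-byte numbers and one arithmetic operation
    # per value (e.g. summing a column).
    data_bytes = 1e12
    values = data_bytes / 8              # ~1.25e11 numbers

    cpu_ops_per_sec = 1e9                # assumption: simple ops/s of one desktop CPU
    disk_bytes_per_sec = 50e6            # assumption: sustained disk read ~50 MB/s

    cpu_time = values / cpu_ops_per_sec          # ~125 s of pure arithmetic
    disk_time = data_bytes / disk_bytes_per_sec  # ~20,000 s, roughly 5.5 hours
    print(cpu_time, disk_time)

In other words, for simple per-value arithmetic a single desktop is usually limited by how fast the terabyte can be read from disk, not by the CPU; spreading the data across N machines cuts that read time by roughly a factor of N.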

I hope somebody can help me with this.
I'm writing a paper in which I draw an analogy between biostatistical calculations and calculations in chess (e.g. in a typical chess program). The reason for this is to exemplify how biological data can be stored and how it can be interpreted. In this special case we are dealing with 3.6 x 10^14 raw data points deriving from chemical modifications of the human genome (so-called epigenetics). For example, whether a specific DNA base in the genome is methylated or not gives the state 0 or 1 again (plus this data has to be referenced). These information units could interact in a virtually infinite number of ways, so that it seems impossible to make sense of them.
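Just to put a number on the raw storage for those data points (again a Python back-of-envelope; the per-site index size is only an assumption):

    # 3.6e14 binary data points (methylated: yes/no), 1 bit each
    raw_bits = 3.6e14
    raw_bytes = raw_bits / 8                 # ~4.5e13 bytes, about 45 terabytes

    # If each bit must also be referenced to a genomic coordinate
    # (~3e9 bases -> ceil(log2(3e9)) = 32 bits per position, an assumed
    # index size), storing explicit coordinates inflates this to
    # roughly 1.5 petabytes.
    indexed_bytes = raw_bits * (1 + 32) / 8
    print(raw_bytes, indexed_bytes)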
However, IMHO, the analogy with the game of chess exemplifies that it should still be feasible to approach the problem of complex genetic information. In chess, a small number of rules can generate a huge number of board configurations (states), which are analogous to the configurations of molecules obeying physiological laws. Chess is often said to have a practically unlimited number of possible combinations in its play, but in theory the number is finite, since certain positions are impossible, just as not all (epi)genetic factors can be found in all functional working combinations. E.g. it is said that in chess ‘merely’ ~10^43 to 10^50 states (positions) are needed to describe the state (or the game) of the system. Out of these subsets of possible states, patterns can be established and calculated. So it is not necessary to know every possible state. It is obvious that pure reductionism, the theory that all complex systems can be completely understood in terms of their components, may not be a fully fruitful approach.
Yet, recent developments in the field of complexity (e.g. statistical mechanics) have come up with alternative statistical approaches. These consider the average behaviour of a large number of components rather than the behaviour of any individual component, draw heavily on the laws of probability, and aim to predict and explain the measurable properties of macroscopic systems on the basis of the properties and behaviour of their microscopic constituents. Chess programs don't rely on brute force alone anymore. Maybe such 'pattern recognition' or reduction of legal states can help in making sense of complex data.
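To illustrate what I mean by looking at average behaviour instead of every individual state, a toy Python sketch (not a real biological model; region size and property are purely illustrative):

    import random

    # Toy model: a region of N binary sites (methylated = 1, not = 0).
    # Enumerating all 2**N configurations is hopeless for large N, but the
    # average of a property can be estimated from a modest random sample.
    N = 1000           # assumed region size, purely illustrative
    samples = 10000

    def fraction_methylated(config):
        # stand-in for any measurable property defined on a configuration
        return sum(config) / len(config)

    estimate = sum(
        fraction_methylated([random.randint(0, 1) for _ in range(N)])
        for _ in range(samples)
    ) / samples
    print(estimate)    # close to 0.5 under this uniform toy distribution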
Your opinion? Answers to the questions? :-)

Axel


