Computer Chess Club Archives



Subject: Re: Hyatt vs corbit solving chess

Author: Dann Corbit

Date: 19:08:35 01/25/05



On January 25, 2005 at 20:07:50, Duncan Roberts wrote:

>On January 25, 2005 at 19:47:34, Dann Corbit wrote:
>
>>On January 25, 2005 at 18:42:07, Duncan Roberts wrote:
>>
>>>On January 25, 2005 at 16:51:37, Dann Corbit wrote:
>>>
>>>>On January 25, 2005 at 16:06:42, Duncan Roberts wrote:
>>>>
>>>>>On January 24, 2005 at 12:41:43, Dann Corbit wrote:
>>>>>
>>>>>>On January 24, 2005 at 12:35:56, Dann Corbit wrote:
>>>>>>
>>>>>>>On January 24, 2005 at 12:33:49, Dann Corbit wrote:
>>>>>>>
>>>>>>>>On January 24, 2005 at 12:04:56, Dieter Buerssner wrote:
>>>>>>>>
>>>>>>>>>On January 24, 2005 at 11:53:38, Dann Corbit wrote:
>>>>>>>>>
>>>>>>>>>> It might require the square of that (so 50,000*50,000 acres).
>>>>>>>>>
>>>>>>>>>Dann, think again about this :-) Also, assume for a moment you had given the
>>>>>>>>>area in square miles instead of acres. Now square that area, or give it in square
>>>>>>>>>light years - you will come to the conclusion that almost no space at all will be
>>>>>>>>>needed ... . And of course, if you square an area, you don't have an area
>>>>>>>>>anymore, but rather something with dimension length^4.
>>>>>>>>
>>>>>>>>Actually, a cube is a very good idea.  The particular substance I described for
>>>>>>>>storing data is a doped crystal (rather inexpensive, too).  It is the same thing
>>>>>>>>that is used in dosimeters for people who walk around in nuclear reactors.
>>>>>>>>When ionizing radiation strikes the crystal, it leaves tracks that can be
>>>>>>>>measured.  Using this principle, they are able to record a terabyte in one
>>>>>>>>square centimeter.  Interestingly, you can read the whole crystal at once with
>>>>>>>>CCDs.
>>>>>>>>
>>>>>>>>Now, suppose that we record in layers so that we really record data in 3
>>>>>>>>dimensions.  Instead of a terabyte per square centimeter, we may get 1e36 bytes
>>>>>>>>per cubic centimeter.  Now, suppose that we have some kind of loss with a factor
>>>>>>>>of one million.  That would mean 1e30 bytes per cubic centimeter.
>>>>>>>>
>>>>>>>>A cubic meter of this crystal could store an awful lot of information.
>>>>>>>>Specifically, 1e90 bytes.
>>>>>>>
>>>>>>>Math spasm.  Only 1e45 bytes, since we already had the square.
>>>>>>>But that looks like a pretty nice number for chess.  And a cubic meter of
>>>>>>>crystal is certainly doable.  Even if we need two or three of them.
>>>>>>>
>>>>>>>>So anything is possible, if we put our minds to it.
>>>>>>
>>>>>>Time for yet another retraction.  Since a square centimeter gives 1e12 bytes, a
>>>>>>cubic centimeter is only 1e18 bytes.  So a cubic meter is 1e18*100*100*100 =
>>>>>>1e24 bytes.  Not bad, but a long way to go to store a chess tree.
>>>>>
>>>>>so in cubic kilometers: 1e24 * 1000 * 1000 * 1000 = 1e33 bytes.
>>>>>
>>>>>assume 1e48 for all positions  so 1e15 cubic kilometres needed or a cube of 2.5
>>>>>by 2.5 of crystal should do the trick.
>>>>
>>>>You probably made the same mistake that I did.
>>>>
>>>>1e48/1e33=1e15
>>>>cbrt(1e15) = 1e5
>>>>The cube would have to be 100,000 kilometers on a side.
>>>>Bigger than the volume of the earth, I'm afraid.
>>>
>>>
>>>thanks for the correction. are you still hoping to see this in your life time ?
>>
>>Of course.  Exponential functions
>
>what do you mean by exponential functions ?

Historically, for hundreds of years, compute power has been growing
exponentially as a function of time.

That means that no matter how big a problem is (even one of exponential size),
the growth of compute power from the advancement of the tools alone will
eventually catch up to it.

In 1950, it would have been a chore to get a computer to play a good game of
Tic-Tac-Toe.  Now they can play chess like the best players in the world.
Today, computers play the game of Go like a 5-year-old.  In 40 years, no human
will stand against the computer at Go.

Read this link:
http://www.kurzweilai.net/articles/art0134.html?printable=1

I think Mr. Kurzweil is very definitely right.

I think that this is the most startling piece of information from that link:

Some prominent dates from this analysis include the following:

We achieve one Human Brain capability (2 * 10^16 cps) for $1,000 around the year
2023.
We achieve one Human Brain capability (2 * 10^16 cps) for one cent around the
year 2037.
We achieve one Human Race capability (2 * 10^26 cps) for $1,000 around the year
2049.
We achieve one Human Race capability (2 * 10^26 cps) for one cent around the
year 2059.
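Those dates can be sanity-checked against the CPS-per-$1,000 curve fit quoted further down in this excerpt. A minimal check (constants are Kurzweil's; treating the year as the free variable) inverts the fit to find when $1,000 buys one Human Brain of compute:

```python
import math

# Invert Kurzweil's fit
#   CPS/$1K = 10^(6.00*((20.40/6.00)^((year-1900)/100)) - 11.00)
# to find the year at which CPS/$1K reaches one Human Brain (2e16 cps).
target_exponent = math.log10(2e16)          # about 16.30
x = math.log((target_exponent + 11.00) / 6.00) / math.log(20.40 / 6.00)
year = 1900 + 100 * x
print(round(year))   # early 2020s, consistent with the "around 2023" claim
```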
The Model considers the following variables:

V: Velocity (i.e., power) of computing (measured in CPS/unit cost)
W: World Knowledge as it pertains to designing and building computational
devices
t: Time
The assumptions of the model are:

(1) V = C1 * W
In other words, computer power is a linear function of the knowledge of how to
build computers. This is actually a conservative assumption. In general,
innovations improve V (computer power) by a multiple, not in an additive way.
Independent innovations multiply each other's effect. For example, a circuit
advance such as CMOS, a more efficient IC wiring methodology, and a processor
innovation such as pipelining all increase V by independent multiples.

(2) W = C2 * Integral (0 to t) V
In other words, W (knowledge) is cumulative, and the instantaneous increment to
knowledge is proportional to V.

This gives us:

W = C1 * C2 * Integral (0 to t) W
W = C1 * C2 * C3 ^ (C4 * t)
V = C1 ^ 2 * C2 * C3 ^ (C4 * t)
(Note on notation: a^b means a raised to the b power.)
Simplifying the constants, we get:

V = Ca * Cb ^ (Cc * t)
So this is a formula for "accelerating" (i.e., exponentially growing) returns, a
"regular Moore's Law."
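The step from the integral equation to the exponential form can be checked numerically. This sketch uses arbitrary made-up constants and integrates the model forward, comparing against the closed-form exponential:

```python
import math

C1, C2 = 0.5, 0.4        # arbitrary assumed constants, not from the article
k = C1 * C2              # V = C1*W and W = C2*Integral(V) give dW/dt = k*W

# Forward-Euler integration of dW/dt = k*W from W(0) = 1 up to t = 10
W, dt = 1.0, 1e-4
for _ in range(int(10.0 / dt)):
    W += k * W * dt

# The numeric result matches the closed form W(t) = e^(k*t) ...
print(W, math.exp(k * 10.0))
# ... and exponential growth means a constant doubling time of ln(2)/k
print("doubling time:", math.log(2) / k)
```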

As I mentioned above, the data shows exponential growth in the rate of
exponential growth. (We doubled computer power every three years early in the
twentieth century, every two years in the middle of the century, and close to
every one year during the 1990s.)

Let's factor in another exponential phenomenon, which is the growing resources
for computation. Not only is each (constant cost) device getting more powerful
as a function of W, but the resources deployed for computation are also growing
exponentially.

We now have:

N: Expenditures for computation
V = C1 * W (as before)
N = C4 ^ (C5 * t) (Expenditure for computation is growing at its own exponential
rate)
W = C2 * Integral(0 to t) (N * V)
As before, world knowledge is accumulating, and the instantaneous increment is
proportional to the amount of computation, which equals the resources deployed
for computation (N) * the power of each (constant cost) device.

This gives us:

W = C1 * C2 * Integral(0 to t) (C4 ^ (C5 * t) * W)
W = C1 * C2 * (C3 ^ (C6 * t)) ^ (C7 * t)
V = C1 ^ 2 * C2 * (C3 ^ (C6 * t)) ^ (C7 * t)
Simplifying the constants, we get:

V = Ca * (Cb ^ (Cc * t)) ^ (Cd * t)
This is a double exponential--an exponential curve in which the rate of
exponential growth is growing at a different exponential rate.
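To see what "double exponential" means concretely, here is a small sketch with made-up constants. Since V = Ca * (Cb ^ (Cc * t)) ^ (Cd * t) makes log V grow like t^2, the instantaneous doubling time keeps shrinking as t grows:

```python
import math

Ca, Cb, Cc, Cd = 1.0, 2.0, 0.1, 0.1   # made-up illustrative constants

def V(t):
    # V = Ca * (Cb^(Cc*t))^(Cd*t), i.e. log V grows like t^2
    return Ca * (Cb ** (Cc * t)) ** (Cd * t)

def doubling_time(t, eps=1e-6):
    # instantaneous doubling time = ln(2) / (d(ln V)/dt)
    dlnV_dt = (math.log(V(t + eps)) - math.log(V(t))) / eps
    return math.log(2) / dlnV_dt

# With these constants the doubling time works out to 50/t: it halves
# every time t doubles, i.e. the rate of exponential growth is itself
# accelerating.
print(doubling_time(10), doubling_time(20))
```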

Now let's consider real-world data. Considering the data for actual calculating
devices and computers during the twentieth century:

CPS/$1K: Calculations Per Second for $1,000
Twentieth century computing data matches:

CPS/$1K = 10^(6.00*((20.40/6.00)^((A13-1900)/100))-11.00)
We can determine the growth rate over a period of time:

Growth Rate =10^((LOG(CPS/$1K for Current Year) - LOG(CPS/$1K for Previous
Year))/(Current Year - Previous Year))
Human Brain = 100 Billion (10^11) neurons * 1000 (10^3) Connections/Neuron * 200
(2 * 10^2) Calculations Per Second Per Connection = 2 * 10^16 Calculations Per
Second
Human Race = 10 Billion (10^10) Human Brains = 2 * 10^26 Calculations Per Second
These formulas produce the graph shown in the linked article.
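The arithmetic in those last lines, and the fit itself, are easy to verify. The constants are the excerpt's; the function name is mine, and the spreadsheet cell "A13" in the quoted formula is read as the calendar year:

```python
# Brain and race estimates quoted above:
human_brain_cps = 10**11 * 10**3 * 2 * 10**2   # neurons * connections/neuron * calc/s/connection
human_race_cps = 10**10 * human_brain_cps      # 10 billion brains
print(human_brain_cps, human_race_cps)         # 2e16 and 2e26

# The fitted CPS-per-$1,000 curve, with "A13" read as the year:
def cps_per_1k(year):
    return 10 ** (6.00 * ((20.40 / 6.00) ** ((year - 1900) / 100)) - 11.00)

print(cps_per_1k(1900))   # 1e-5: essentially no compute per $1,000 in 1900
print(cps_per_1k(2000))   # a few billion CPS per $1,000 by 2000
```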




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.