Computer Chess Club Archives



Subject: Re: DIEP (too much power for humans)

Author: Vincent Diepeveen

Date: 12:18:22 07/10/03



On July 10, 2003 at 03:41:27, Omid David Tabibi wrote:

>On July 10, 2003 at 01:53:26, Derek Paquette wrote:
>
>>Just read a great post by the author of DIEP, and how he is getting an
>>Incredible machine, a godly machine I should say, now really,
>>Would a human even have a chance in hell? its going to be dozens of times faster
>>than Deep Blue
>
>He didn't say that.

I'll outsearch Deep Blue by a *big margin*. Note that the machine Deep Blue ran
on was a 32-node cluster with 100MHz processors, 2 of them 120MHz. This machine
is a 1440-processor machine with over 3 teraflops.

Deep Blue's team reported, without proof, that they guessed they got 125 million nodes a
second. Not a single output of theirs proves that. Even if it is true, look at the first
few moves out of book (and they were always out of book quickly compared to
computer chess nowadays): the 10-ply searches they did there in the opening (as
you can see in the output, and this is the *total* depth, as repeatedly told by
Hsu & co), where they perhaps finished 10 ply and sometimes started an 11th ply
(the output doesn't show that), mean the average will be lower.

I need to note that their NPS count is *extrapolated* from 1 node. They observed
what they got at 1 node and then *extrapolated* that to n nodes where n is 30 in
their case.

The NPS of Deep Blue is still very impressive. If we compare, what the program
did then was simplistic mobility and such. The processors in the TERAS machine
deliver between 1 and 5.xx gflops per CPU. Gnuchess, optimized for them and
programmed a bit better, would easily get half a million nodes per second per
CPU (especially without a hashtable it will be sick fast). That is not a joke
amount of NPS. Each CPU has 8MB L2 cache for the R14000 and 3MB L3 for the
Itanium 2 Madison.

If you parallelize just for NPS, then such a machine simply delivers 500k NPS x
500 CPUs (i.e. 1 million NPS x 250) = 250 million NPS. Of course search depth
won't be impressive then.

If they got 125 million on average in 1997, even extrapolated, then that is
still very impressive. They got paid for high NPS, and they got high NPS.
Brilliant job by Hsu & co.

End of story.

However, this is 2003 and there are other demands now.

Take the book: Deep Blue's book had only 4000 moves, entered by hand. Not very
impressive. The chance that any of those 4000 move lines got on the board is
like 0.01% or something.

Compare that to a good player, especially the strong opening-book creators
around now like Arturo Ochoa, Jeroen Noomen, Alexander Kure, and Necchi. These
guys have hand-tuned books with millions of moves.

With a random book like Deep Blue had, if you play mainlines against it,
especially the opening Kasparov plays, the Najdorf (he of course didn't want to
show that against Deep Blue, as in those days it was a waste of his preparation
to show hard-prepared lines against a computer; you show your best lines against
humans is the habit of these guys, a mentality that disgusts me), the result is
that you end up in lines which the GMs play a lot and keep improving. So if you
just follow the most played move in the mainlines, you will, guaranteed, lose a
lot of points in any world championship to commercial programs, as those lines
are already refuted.

Evaluation is a lot better and therefore slower.

Using nullmove AND global shared hashtables.

Every researcher who says he has a great way of parallelizing chess without a
global shared hashtable is a lunatic, as your branching factor (b.f.) is
directly terrible. On a big parallel system the b.f. improvement from the
hashtable is more than twofold.

The first, trivial, thing is the transposition table cutoffs, which provably
make the branching factor better. Secondly, what most tend to forget is that a
lot of processors are searching big crap. A big global hashtable prevents the
processors from re-searching the same crap over and over. Instead they get a
cutoff, or search it more efficiently.
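
To illustrate that first point, here is a minimal sketch of a shared
transposition table giving cutoffs inside alpha-beta (generic C, not DIEP's
actual code; the entry layout, table size, and the absence of any locking are
simplifying assumptions):

/* Minimal shared transposition table sketch (illustrative, not DIEP's code).
   A processor reaching a position that another processor already searched to
   sufficient depth gets an immediate cutoff instead of re-searching it.
   No locking is shown; a real SMP engine needs lockless tricks or locks. */
#include <stdint.h>

enum { BOUND_EXACT, BOUND_LOWER, BOUND_UPPER };

typedef struct {
    uint64_t key;       /* Zobrist hash of the position */
    int16_t  score;
    int8_t   depth;     /* remaining depth the stored score is valid for */
    uint8_t  bound;
} TTEntry;

#define TT_ENTRIES (1u << 20)           /* power of two so we can mask */
static TTEntry tt[TT_ENTRIES];          /* one global table, shared by all CPUs */

/* Returns 1 and fills *score if the stored entry allows a cutoff here. */
int tt_probe(uint64_t key, int depth, int alpha, int beta, int *score)
{
    const TTEntry *e = &tt[key & (TT_ENTRIES - 1)];
    if (e->key != key || e->depth < depth)
        return 0;
    if (e->bound == BOUND_EXACT ||
        (e->bound == BOUND_LOWER && e->score >= beta) ||
        (e->bound == BOUND_UPPER && e->score <= alpha)) {
        *score = e->score;
        return 1;
    }
    return 0;
}

void tt_store(uint64_t key, int depth, int score, int bound)
{
    TTEntry *e = &tt[key & (TT_ENTRIES - 1)];
    e->key   = key;
    e->depth = (int8_t)depth;
    e->score = (int16_t)score;
    e->bound = (uint8_t)bound;
}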

Additionally, when your opponent makes a move you directly get a huge depth out
of the hashtable. This is especially useful in positions where the score is
going down a little. Usually those are the hardest positions: when the score is
around or slightly below zero, like directly out of book against a strong,
well-prepared opponent.

Such searches are important.

Average depth IMHO doesn't count. Minimum depth counts. Currently DIEP doesn't
dubiously forward prune. Despite hundreds of tries, some of which took months
to implement and retune, forward pruning never worked in DIEP. Nullmove, in
contrast to FHR, is a replacing search. It gives the OPPONENT the move, and if
he can't improve, then in combination with zugzwang detection by means of
double nullmove you guarantee a correct search.

Some tactical combinations might take a few ply more, but you already get more
than a few ply more thanks to nullmove.
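
As a rough illustration of the null move plus double-null-move idea described
above, a minimal sketch (generic C; Position, qsearch, in_check, the
make/unmake helpers, full_width_search, and the reduction R = 3 are all
assumptions, not DIEP's actual code or values):

/* Null-move sketch with "double null move" zugzwang handling (illustrative).
   The side to move passes; if even a reduced search then fails high, the node
   is cut off.  Allowing two consecutive null moves, but never three, means a
   zugzwang line eventually gets searched with real moves instead of being
   pruned away. */
typedef struct Position Position;
int  qsearch(Position *pos, int alpha, int beta);          /* assumed hooks */
int  in_check(const Position *pos);
void make_null_move(Position *pos);
void unmake_null_move(Position *pos);
int  full_width_search(Position *pos, int depth, int alpha, int beta);

#define R 3   /* null-move depth reduction; an example value */

int search(Position *pos, int depth, int alpha, int beta, int nulls_in_row)
{
    if (depth <= 0)
        return qsearch(pos, alpha, beta);

    /* Try a null move unless two were just played in a row or we are in check. */
    if (nulls_in_row < 2 && !in_check(pos)) {
        make_null_move(pos);
        int score = -search(pos, depth - 1 - R, -beta, -beta + 1, nulls_in_row + 1);
        unmake_null_move(pos);
        if (score >= beta)
            return score;        /* opponent could not improve: fail high */
    }

    /* Normal move loop (real moves reset the null counter to 0). */
    return full_width_search(pos, depth, alpha, beta);
}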

So if I get 15 ply in the middlegame, it sure will be 15 ply and not a
forward-pruned 15. Add to that a very big qsearch.

From Hyatt we heard unconfirmed stories that Deep Blue forward pruned in
hardware using futility and such.

As we know, Deep Blue for sure forward pruned the last few plies in hardware
using no-progress pruning.

Hyatt claims it was turned off. Recent writings from Hsu suggest it was turned
on.

DIEP avoids all that dubious stuff and simply doesn't use futility at all. Yes,
it eats way more nodes, but I for sure do not run into trouble there.

If DIEP shows 15 ply, it for sure is 15 ply!
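
For context, this is the sort of frontier futility pruning being avoided (a
generic sketch, not Deep Blue's hardware or DIEP's code; the margin is an
arbitrary example value):

/* Classic frontier-node futility pruning -- the kind of forward pruning
   DIEP deliberately does not do. */
typedef struct Position Position;
int static_eval(const Position *pos);   /* assumed static evaluation hook */

#define FUTILITY_MARGIN 200             /* centipawns; an arbitrary example value */

/* Returns 1 if a quiet move at a depth-1 node may be skipped because even the
   static eval plus a margin cannot reach alpha. */
int is_futile(const Position *pos, int depth, int alpha, int gives_check_or_captures)
{
    return depth == 1 && !gives_check_or_captures &&
           static_eval(pos) + FUTILITY_MARGIN <= alpha;
}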

I do not know which depths I will see in the middlegame. Book lines are usually
deep; some book lines already reach the endgame. I would be disappointed then
with depths of 15 ply, knowing how common endgames are, or static closed
positions where transpositions give major depths.

>>I know that speed isn't everything,  but when you are looking 45ply ahead....
>
>And definitely didn't say this. Even if you search 15 plies in one minute
>(impossible today), and have a branching factor of 3, it will take you

???

I want to bet that my average search depth at the tournament will be >= 15.

It is 2 minutes a move on average for the first 60 moves, and though you might
mispredict moves of the opponent, DIEP still fills a couple of hundred
gigabytes with positions. Why wouldn't I get 15 ply simply out of the
hashtables already?

In the endgame the b.f. is < 3 for sure. The deeper you search, the better the
b.f. will get, because the majority of lines you see are already in the
endgame. Somewhere in the line the queens get exchanged, and bang, there goes
the number of possibilities in the position.

In the opening position it is always the worst, but the book handles that; it
covers a lot of moves. Usually you are then closer to the endgame in every
search line.

At a 19-ply search from the opening position I measure an average number of
semi-legal moves of 40. Positions in check are not counted, because you extend
those anyway, so they are not interesting for the average number of moves.
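
That kind of measurement is easy to instrument in any engine; a minimal sketch
(the hook names and Position type are assumptions, not DIEP's code):

/* Counters for measuring the average number of semi-legal moves per searched
   node, skipping in-check positions (those get extended anyway, so they say
   little about the effective branching factor). */
typedef struct Position Position;
int in_check(const Position *pos);       /* assumed engine primitive */

static unsigned long long counted_nodes = 0;
static unsigned long long counted_moves = 0;

void record_move_count(const Position *pos, int semi_legal_moves)
{
    if (in_check(pos))
        return;                          /* in-check nodes are not counted */
    counted_nodes++;
    counted_moves += (unsigned long long)semi_legal_moves;
}

double average_semi_legal_moves(void)
{
    return counted_nodes ? (double)counted_moves / (double)counted_nodes : 0.0;
}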

It is important to note that the average search depth in the *middlegame* from
Shredder against DIEP in the 90 0 game at ICT3 was already something like 18
ply.

Most likely that is with a lot of forward pruning, but still it is a big depth!

>205891132094649 minutes to reach depth 45, i.e., a little under 400 MILLION
>YEARS!

He is quoting the nonsense that the Deep Blue marketing was telling. "Searches
lines up to 32 plies deep," they said, or something. Those are peak depths, or
as they call it, "observed depths". In DIEP they're about 128 ply. In the
opening I already measure 70-80 ply peak depths on a small dual PC at depth 12
or something. Trivially these are not very interesting lines, and there will be
a lot more of them with the many processors, a big part of which are always
searching inefficiently.
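
Tracking such a peak ("observed") depth is just a maximum over the ply reached
by any line, extensions and qsearch included; a tiny sketch (generic, assumed
function name):

/* Peak ("observed") depth: the deepest ply any single line reached, including
   extensions and quiescence.  It is a maximum over visited nodes and says
   little about the iteration depth that was actually completed everywhere. */
static int peak_depth = 0;

void note_ply(int ply)       /* call at every node with the current ply */
{
    if (ply > peak_depth)
        peak_depth = ply;
}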

>>I put all bets on the machine with 500 processors
>
>Honor him and suspect him. (Hebrew proverb)
>
>
>>what do the rest of you think?


