Computer Chess Club Archives



Subject: Re: What is the public's opinion about the result of a match between DB and

Author: Vincent Diepeveen

Date: 19:37:14 04/26/01



>>Neither did Hsu, et. al...  They used some search extensions most of us don't.
>>They used lots of eval terms that we don't/can't...  but the overall framework
>>was the same as the original chess 4.x basically, just like most every other
>>program around.  Except for no real selective forward pruning that we know of.

It's nonsense that they had a good eval. They had something like GNU Chess,
but without even doubled pawns implemented well.

Simplistic mobility and simplistic attack evaluation. Though that's
very effective in games against programs that don't do it, it was obviously
done simplistically. Just analyze the games!

Moves like Qa5? and Bc7? in game 1 are easy to explain by simple evaluation
terms and simple mobility.

No, I am pretty convinced that their evaluation was no longer piece-square-table
based like it was before, but it wasn't even close to what I'm
doing in DIEP.

I'm very sure that they just improved it from piece-square tables to
GNU Chess standards, which is already a big and important step.

Most of the work probably went into the endgame, as it really did well there
compared to other stages of the game.

Old GMs LOVE to teach the endgame... but then Hsu has to understand it
first, and I have yet to find the first GM who can explain what a good
bishop is and what a bad bishop is. Logically, Deep Blue didn't show that
it knew the difference.

Nevertheless, the games show big positional mistakes which even
today's top PSQ programs do not make.

It looks to me like a very badly tested evaluation.

>>Null-move might have been _really_ interesting on that hardware...

>Of course. It is a nonsense IMO to have ignored null move.

>You can say they did not need it, but it is so simple to implement that I cannot
>find a valid excuse for them.

Oh well, let's talk about this.

First of all, some scientists, especially Bob, were telling us 24 hours a day
how dubious nullmove was, as if it were just another pruning
mechanism. Only a few programmers were convinced of it, as they
had experimented with it bigtime.

Also, in many programs the wrong conclusion was drawn that nullmove was
bad, whereas in fact dubious forward pruning in the last few plies, pruning
on alpha, or some kind of lazy evaluation or forward pruning was
causing the problem.

Obviously the combination of a reduction factor for nullmove plus
forward pruning is a deadly one: first you reduce the depth, then the
few plies that are left you throw away by pruning.

Very very dangerous!!!
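The nullmove mechanism under discussion can be sketched in a toy negamax. This is my own illustration, not Deep Blue's or DIEP's code: the game (players alternately take a coin from either end of a row, maximizing their material minus the opponent's), the names, and the numbers are all invented. The null move is a pass, searched at a depth reduced by R with a zero window around beta.

```python
# Toy negamax with null-move pruning on a coin-taking game
# (illustration only -- the game and all numbers are invented).

INF = float("inf")
R = 2  # null-move depth reduction: the "first you reduce" step


def search(coins, me, opp, depth, alpha, beta, use_null, can_null=True):
    """Negamax from the side to move's perspective.

    coins: remaining row of coins; me/opp: material taken so far by the
    side to move and the opponent.
    """
    if depth == 0 or not coins:
        return me - opp  # static eval: material difference so far

    if use_null and can_null and depth - 1 - R >= 0:
        # Null move: pass the turn (sides swap, coins unchanged) and
        # search at reduced depth with a zero window around beta.
        # If even giving away a move still fails high, cut off here.
        score = -search(coins, opp, me, depth - 1 - R,
                        -beta, -beta + 1, use_null, can_null=False)
        if score >= beta:
            return beta

    best = -INF
    for i in (0, len(coins) - 1):  # take the leftmost or rightmost coin
        coin = coins[i]
        rest = coins[1:] if i == 0 else coins[:-1]
        score = -search(rest, opp, me + coin, depth - 1,
                        -beta, -alpha, use_null)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # ordinary beta cutoff
    return best
```

Run full-width (`use_null=False`) and this returns the exact game value; with `use_null=True` the reduced-depth probe can cut off whole subtrees near the leaves, which is where most of the saving comes from. It also shows the danger: the probe is R plies shallower, so stacking further pruning on top of those few remaining plies compounds the error.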

In those days many amateurs were complaining daily about how dubiously
their pruning worked, and especially nullmove...

Anyway, just look at the machine. From Deep Thought being a PSQ program,
it became Deep Blue, a program with a bit more evaluation,
but still a preprocessor to some extent if I understood well.

DEEP BLUE WAS PREPROCESSING!!

HEHE 1000x

That explains why it had problems seeing what swapping queens would result in.

But still it was BETTER than Deep Thought, probably by a large margin.

And then there is the law of the stopping advantage. If I always outsearch
you pathetically, and next year I search 1 ply deeper, then why would I
want more?

I was already outsearching you; next year I search 1 ply deeper, so
who cares? The approach seemed to work. No one asked in 1997 what the
search depth of Deep Blue was!

Only "how many nodes a second, Hsu?".

Of course, the most interesting question, which was recently answered for me
by a hardware designer of graphics cards, is why nullmove wasn't even
'tried'.

Well, let's be clear here. Nullmove gets its biggest reduction out of
the pruning near the leaves.

At least in DIEP it does; I bet it does in other programs too.

As we know, in Deep Blue the leaves are 6-ply hardware searches where
no RAM is used at all...

In hardware, things do not work like in software.

In software it doesn't matter whether a 6-ply search takes 0.05 seconds
one time, 0.50 seconds another, 0.001 seconds the next, etcetera.

In software it just doesn't matter. You just wait until the 6 plies finish
and that's it.

However, we are not dealing with software here. We are dealing with pieces
of sand and metal that depend on bus speed, with each single SP processor
keeping an eye on around 30 processors.

So, to keep it simple, you have to give it a fixed interval in which to
finish its search!

If you use nullmove, sometimes it finishes within 0.001 seconds, sometimes
in 0.50 seconds.

So what the heck? Why not simply always let it search full-width,
as we must wait out that interval anyway!

So whether you use nullmove or not, the worst case will not change!
And the interval of course has to take the worst case into account.
Since it had to wait anyway, you can search full-width anyway!
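The arithmetic behind this argument can be sketched in a few lines. The numbers are invented for illustration, not Deep Blue measurements: if every hardware probe must be allotted its worst-case slot, then variable finish times from nullmove save nothing in total.

```python
# Back-of-envelope sketch of the worst-case-interval argument
# (invented numbers, not Deep Blue measurements).
import random

random.seed(42)
SLOT = 0.50  # seconds reserved per hardware leaf search (the worst case)

# With nullmove, the actual finish times would vary wildly per probe:
finishes = [random.uniform(0.001, 0.50) for _ in range(1000)]

# Software: the next search starts the instant the previous one returns.
software_time = sum(finishes)

# Hardware: every probe occupies its full worst-case slot regardless.
hardware_time = SLOT * len(finishes)

print(f"software: {software_time:.1f}s  hardware: {hardware_time:.1f}s")
```

The hardware total is pinned at the worst case no matter how early the individual probes finish, so on this scheduling model the early finishes nullmove produces cannot be banked.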

Best regards,
Vincent



>    Christophe


