Computer Chess Club Archives


Subject: Re: What is the public's opinion about the result of a match between DB and

Author: Vincent Diepeveen

Date: 07:05:55 04/24/01



On April 24, 2001 at 08:47:06, Uri Blass wrote:

>On April 24, 2001 at 08:20:57, Vincent Diepeveen wrote:
>
>>On April 24, 2001 at 03:47:15, Uri Blass wrote:
>>
>>>the best software that is not IBM.
>>>
>>>Suppose there is a match of 20 games at tournament time control
>>>
>>>I am interested to know how many people expect 20-0 for IBM
>How many people expect 19.5-0.5? ...
>>
>>>If IBM expects to do better than the average result the public expects,
>>>then they can earn something from playing a match of 20 games with Deep Blue.
>>>
>>>I believe that the part of the public who read the claim that Kasparov played
>>>like an IM are not going to expect a good result for IBM.
>>>Uri
>>
>>First of all, IBM would get out of book every game with a -1.0 pawn
>>disadvantage (which is about the average of what Kure and Noomen
>>get in tournaments; sometimes they even get out of book with mate in XXX).
>
>I disagree.
>
>1)It is easy to avoid -1 pawn disadvantage by using a small book.
>It is also easy to get the opponent out of book (for example by lines like 1.c3
>and 2.Qc2)

>2)Kure and Noomen do not get a +1.0 pawn advantage from the opening in every
>game

Oh well, says the beginner who has never visited a world championship.

Please visit a big tournament, or ask what the average out-of-book score of
Nimzo, Fritz, and Tiger was.

Tiger's average score when leaving book at the Dutch Open 2000 was +1.0;
at the world championship it was even more extreme.

Please be a bit less stubborn and either analyze the games yourself or take it
from someone who has already been there many times.

In each game, one of those books, either Noomen's or Kure's, gets out of book
with a mate-in-XXX score against its opponent.

That's not a joke. That's REALITY.

Even with those games not counted, the average over the other games is +1.0.

Most games at a world championship you play against AMATEURS or weak commercial
programs, and the difference between the two is very small. "Amateur" basically
means the income from chess is not enough to feed one's family.

Every programmer who has joined these tournaments has EXPERIENCED this
problem. Programs aren't humans, who can FLEXIBLY pick a new line
somewhere.

I never manage to explain this problem to chess players, for example my own
team members. They always laugh and say: "How can your opening book be at most
1900-rated when the program is way better than that? It must be EASY to make a
2500+ book at home."

It is NOT easy. What my team members have in common is that each plays just one
opening. One of them always plays the Benoni, another always the Accelerated
Dragon, and so on.

Recently, against one of the big critics (rated 2347), I prepared a SINGLE
line at home; I just took it from NCO99. I got out of book with black very
well; ten moves later I was completely winning and won the first pawn.

Now IMAGINE the problem for a chess program if I use the NCO99 book, where
many lines are already refuted when playing against Noomen and Kure nowadays...
...note I already beat all other programs easily.

If I enter such a line one day, I must then update it every week to avoid
getting outbooked in a tournament, because old lines stay in the book of
course. I can't follow daily the latest novelties as published in the
latest books and magazine issues.

If you play a correspondence game, you search for new opening theory at the
very moment you need to play it in the game.

And that's just ONE opening line. Even with a small book I need to be
prepared to face thousands of opening lines. Suppose you start 1000
correspondence games shortly before a tournament: how high would the quality
of the openings in those games be?

I must make my book BEFORE the tournament. I can't tell my opponent:
"Please wait half an hour while I first go buy a good chess book for the
specific line we now have on the board."

So in computer chess the problem is the time between when my book was made and
when it gets to the board. Some lines date back 1.5 years; at home I didn't
recently manage to improve them, and now it shows on the board.

Of course, not playing theory at all is usually an even bigger problem, as
programs play the opening very badly, Deep Fritz most of all.

>>
>>I would expect IBM to lose with 18-2.
>
>I am more optimistic for IBM and Amir Ban's opinion seems to me more realistic.
>>
>>Let's be realistic
>>
>> a) IBM searched 11-13 ply in 97, nowadays programs search deeper
>
>You repeat it again and again when we have no evidence that it is true, and
>Hyatt explained that it is not true.

See the logfiles.

REMEMBER THAT A BRANCHING FACTOR OF 5.0 IN 1997 WAS VERY NORMAL.

And no program with a bit of mobility evaluation reached 11-13 ply in 1997 at
tournament time controls.

Also, the Deep Blue processors were designed years before 1997.

It was really an old machine.

If the design of the new processor started in 1995, then getting 2 extra ply
was very good. In 1995, a program searching 11-13 ply WITH a lot of
extensions was considered very good.

Very few people believed in nullmove or forward pruning back then.

Note that in Deep Blue there is also a technical obstacle to using forward
pruning.

Each chess processor searches 6 ply within a fixed time interval.

Suppose I set the time interval to 0.5 seconds to search those 6 ply, which
is probably very close to what Hsu used.

So let's assume that each SP processor gave its 30 hardware processors a time
interval of 0.5 seconds.

Whether it uses nullmove or not is not interesting: THE TIMING CANNOT BE
CHANGED.

Hsu writes about this too in his 1997 IEEE paper. He couldn't use more
extensions in the last 6 ply because that was 'too dangerous'. Of course Hsu
isn't lying there; the reason is an obvious timeout problem. Just suppose a
search took longer than 0.5 seconds:

then he would be faced with a big problem!

So using nullmove there would be completely useless, not to mention the
dubious fail-high reductions.

FHR is a very dubious form of searching. Any mathematician should be able to
prove it dubious once positions stored in the hashtable are used.

Note I don't even know whether those ideas had already been published in 1997
anyway.

But for sure, in the last 6 ply none of those things could be used.

Whether a 6-ply search finished in 600,000 nodes, or 1,500,000, or with
nullmove sometimes in 50,000 nodes, is completely *irrelevant*:

the full 0.5 seconds (or whatever time Hsu set) was always used.

Why does everyone forget this timing issue?

11-13 ply in 1997 was more than OK.

>I believe that they searched deeper and the output suggests 16-18 plies, but
>I do not believe that 16-18 plies were totally brute force, because it seems
>impossible to me to search 16-18 plies brute force even with their hardware.

Please see the logfiles and analyze.

It's theoretically impossible to search full-width 16-18 ply at just 40M nodes
a second (a 20% parallel efficiency, as Hsu claimed, of the 480 chess
processors' combined 200M nodes a second is very good).

Also, the qsearch was eating big overhead, as Hsu explained in his 1999 IEEE
paper.

Getting 11-13 ply full-width at 40M nps, with no hashtables in the last 6 ply
and loads of extensions in the first 5-7 ply, is VERY good.
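A rough back-of-the-envelope check of these depth figures, using the post's own numbers (40M nps, an effective branching factor of 5.0) plus an assumed three minutes per move; none of these are measured values:

```python
import math

NPS = 40_000_000        # effective speed claimed in the post
SECONDS_PER_MOVE = 180  # roughly tournament time control (assumption)
EBF = 5.0               # effective branching factor "very normal" in 1997

nodes = NPS * SECONDS_PER_MOVE           # nodes available per move
depth = math.log(nodes) / math.log(EBF)  # plies reachable if nodes ~ EBF**depth

# What a full-width 16-ply search would cost at the same branching factor:
minutes_for_16 = EBF ** 16 / NPS / 60

print(round(depth, 1))        # about 14 plies as an optimistic ceiling
print(round(minutes_for_16))  # about 64 minutes for a single move
```

So at branching factor 5, even ignoring extensions and qsearch overhead, 16-18 ply full-width is out of reach at 40M nps, while 11-13 ply plus extensions is at least arithmetically plausible.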

Note that Deep Blue didn't use a sophisticated form of alpha-beta like PVS or
something similar.

It used PLAIN alpha-beta. To reduce the RAM needed on the chip, Hsu DID
decrease the number of parameters shipped to the chess processor: he shipped
just a single bound.

When I use plain alpha-beta, I need loads more nodes.
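The difference can be sketched on a toy game tree (nested lists, integer leaf scores, negamax convention; this is an illustration, not Deep Blue's actual code). PVS searches the first move with a full window and later moves with a null window, re-searching only when a probe lands inside the window:

```python
INF = 10**9

def alphabeta(node, alpha, beta):
    """Plain fail-soft alpha-beta: every move searched with the full window."""
    if isinstance(node, int):
        return node  # leaf score
    best = -INF
    for child in node:
        best = max(best, -alphabeta(child, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:
            break  # beta cutoff
    return best

def pvs(node, alpha, beta):
    """Principal Variation Search: full window on the first move only."""
    if isinstance(node, int):
        return node
    best = -INF
    for i, child in enumerate(node):
        if i == 0:
            score = -pvs(child, -beta, -alpha)
        else:
            score = -pvs(child, -alpha - 1, -alpha)  # cheap null-window probe
            if alpha < score < beta:
                score = -pvs(child, -beta, -alpha)   # re-search on a fail inside
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    return best

tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, -INF, INF), pvs(tree, -INF, INF))  # both print 6
```

With good move ordering the null-window probes mostly fail immediately, which is where PVS saves nodes over plain alpha-beta.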

>They probably used futility pruning.

Complete nonsense from a layman.

As a mathematician, first please prove FHR to be dubious!

Hsu always claimed never to have used forward pruning; he didn't do it in the
past and he didn't do it in 1997 either.

Why would he? In 1997 even Hyatt was saying nullmove was completely dubious.

Nowadays we can easily show (using for example double nullmove) that it isn't
positionally dubious, only that in some weird cases you need a few extra ply
to find something.
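A minimal sketch of the double-nullmove idea, on a hypothetical toy `Pos` class (the class and its methods are stand-ins for a real board, purely for illustration):

```python
INF = 10**9
R = 2  # conventional null-move depth reduction

class Pos:
    """Toy position: static score from the side to move's view, plus one
    child position per legal move. A stand-in for a real board."""
    def __init__(self, score, children=()):
        self.score = score
        self.children = list(children)
    def evaluate(self):
        return self.score
    def in_check(self):
        return False  # toy model: never in check
    def make_null(self):
        # Passing the move flips the point of view; in this toy model the
        # resulting position is only ever evaluated statically.
        return Pos(-self.score, self.children)

def search(pos, depth, alpha, beta, nulls=0):
    """Negamax with DOUBLE nullmove: up to two consecutive null moves are
    allowed (nulls < 2). Two passes in a row return the move to the same
    side, so zugzwang is detected a few plies deeper instead of missed."""
    if depth <= 0:
        return pos.evaluate()
    if nulls < 2 and not pos.in_check():
        score = -search(pos.make_null(), depth - 1 - R,
                        -beta, -beta + 1, nulls + 1)
        if score >= beta:
            return beta  # null-move cutoff
    best = -INF
    for child in pos.children:
        score = -search(child, depth - 1, -beta, -alpha, 0)  # real move resets
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    return best

root = Pos(0, [Pos(0, [Pos(-3), Pos(-7)]), Pos(0, [Pos(-1)])])
print(search(root, 2, -INF, INF))            # full-width value: -1
print(search(Pos(100, [Pos(0)]), 3, 0, 10))  # null-move cutoff fires: 10
```

Allowing the second null (but never a third) is the whole trick: the search stays cheap, yet zugzwang lines are no longer invisible to it.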

>> b) their book is far worse than today's books

>They only need a small book.
>A big book may be good only for blitz; when the time control is slower it is
>better to use a smaller book.

No matter what the level, it's important to have a good book, as the book is
usually the weakest part of the program; a program gets completely annihilated
with black after 1.d4 Nf6 2.c4 g6 if it's out of book there with black!

Just try it at home with the auto232 autoplayer!

Note that when playing against a human I definitely agree a big book is not
very important, as humans get you out of it quickly anyway; and if they don't
get you out of it, they probably know a novelty in the line on the board, so
with a direct advantage for the human...
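The out-of-book mechanics being argued about here can be pictured with a toy dict-based book (the lines, moves, and structure are illustrative only, not from any real book):

```python
# Toy opening book: moves played so far -> replies the book knows.
BOOK = {
    (): ["d4", "e4"],
    ("d4",): ["Nf6"],
    ("d4", "Nf6"): ["c4"],
    ("d4", "Nf6", "c4"): ["g6"],
}

def book_move(history):
    """Return a book reply for the game so far, or None once out of book."""
    replies = BOOK.get(tuple(history))
    return replies[0] if replies else None

print(book_move(["d4", "Nf6"]))       # "c4": still in book
print(book_move(["d4", "d5", "c4"]))  # None: out of book, engine is on its own
```

One sidestep by the opponent and the lookup misses: from that move on, the engine's opening play depends entirely on its search, which is exactly the weakness described above.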

>Uri





Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.