Computer Chess Club Archives



Subject: Re: Wanted: Deep Blue vs. today's top programs recap

Author: Robert Hyatt

Date: 10:57:53 08/27/01



On August 26, 2001 at 08:12:10, Vincent Diepeveen wrote:

>On August 26, 2001 at 05:38:51, Uri Blass wrote:
>
>>On August 26, 2001 at 05:16:15, Mike S. wrote:
>>
>>>On August 25, 2001 at 21:42:43, Vincent Diepeveen wrote:
>>>
>>>>let's look to facts, ignoring it played kasparov
>>>>  - deep blue searched between 11 and 13 ply, most likely 12 ply
>>>
>>>I have read that it searched deeper (the SP/2 8...9 plies during the middlegame,
>>>and the chess hardware 5...7 ply additionally each). This would mean, it should
>>>have searched 13 ply minimum, up to 16.
>>>
>>>I've seen an example of the log files which was published, where DB searched 17
>>>plies for the move 35.Bxd6 in game two of the Kasparov match (full board, only 2
>>>minor pieces were exchanged in that position).
>>>
>>>I doubt 12 or 13 ply, admitting I haven't looked into the log files (yet).
>>>
>>>Regards,
>>>M.Scheidl
>>
>>people do not agree about the meaning of the logfiles and vincent believes that
>>11(6) means 11 plies and not 17 plies.
>
>>IBM claims that 11(6) means 17 plies but vincent does not believe them.
>
>IBM claims *nowhere* it's 17 plies.
>
>Only Bob posted one day something.

IBM _does_ state exactly what was given above.  I specifically asked members
of the DB team and they very specifically responded.  11(6) means 11 plies in
software, with the chess processors adding another 6 on to the end of that.
The first eleven plies have all the normal DB extensions active, as described
in their various papers.  The last 6 plies are very primitive in terms of
just having the normal check/recapture/pawn push extensions, and the q-search
is also restricted by some form of futility pruning.  It is not clear to me
whether the last 6 plies have futility pruning (or anything else) in the non-
capture part of the search, however.
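To make the notation concrete, here is a minimal sketch (my own illustration, not anything from DB's code) of how a depth string like 11(6) decodes under this reading:

```python
def total_depth(notation: str) -> int:
    """Decode a Deep Blue-style depth string such as '11(6)':
    software plies, plus the extra plies the chess processors
    add on at the end of the software search."""
    software, hardware = notation.rstrip(")").split("(")
    return int(software) + int(hardware)

print(total_depth("11(6)"))  # 11 software + 6 hardware = 17 total
```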

But to ease the debate, here is a direct quote from an email:


=============================begin quote==========================
CB is in town this week and I had lunch with him, where we
chatted a bit about DB and the like. A while back, when you
looked over DB's logs (put up by Murray without IBM caring
much), you were impressed by their depth and branching factor.

Well, the depth notation is as I told you and just like it was
in DT, so it really does go *that* deep...
However, the branching factor you inferred, which led to some
speculation about DB using null-move pruning (it does not and
never did) is not correct. The times per depth are obscured by
search inefficiencies and start-up costs, which dominate the
upper portion of the tree. So as DB searches deeper, it also
searches more efficiently, or let's say, the amount of time
wasted is less. Its actual branching factor is much more
mundane.
======================end quote=================================
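The point about start-up costs can be illustrated numerically (assumed numbers, not DB's real timings): if each iteration's time is a fixed overhead plus a term that grows geometrically, then the ratio of successive iteration times, a common way to estimate effective branching factor from a log, looks artificially small at shallow depths and only creeps toward the true factor as the search deepens:

```python
def apparent_ebf(times):
    """Ratio of successive iteration times -- a common (naive)
    effective-branching-factor estimate read off a search log."""
    return [t2 / t1 for t1, t2 in zip(times, times[1:])]

true_ebf = 4.0   # assumed "real" per-ply growth factor
overhead = 2.0   # assumed fixed start-up cost per iteration (seconds)
times = [overhead + 0.01 * true_ebf ** d for d in range(1, 7)]

# Shallow ratios badly understate true_ebf; deeper ones approach it.
print([round(r, 2) for r in apparent_ebf(times)])
# -> [1.06, 1.22, 1.73, 2.68, 3.51]
```

This is exactly the effect described in the quote: the ratios inferred from the upper (shallow) part of the log can suggest a much lower branching factor than the search really has.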

This is from _another_ member of the DB team (CB is what many
people call Hsu; I won't spoil the fun of explaining why until his
book comes out). In any case CB=Hsu, and the writer above is a
DB team member who was there from the beginning of the project.

The reference to "the depth notation is as I told you and just like..."
is exactly what I reported months ago: software-depth(hardware-depth),
with total depth = software depth + hardware depth.

Like it or not, that _is_ what it did...





>
>In all examples ever posted by Hsu, see IEEE99 for example he
>talks about 12 ply search depth.

That is because, as I have stated many times, Hsu _always_ referred to
the _software_ part of the search, since the last N plies were searched
much more simply.  Belle did 2 plies in software, period.  The
rest were done in hardware.  All his PVs had 2 moves unless the first
move was a check or the second was a recapture; in those cases he got a
three-move PV, because, just like Deep Blue (which is designed around the
same hardware search that Belle used), Belle didn't get any PV back from
the hardware, although Belle's hardware _did_ have a hash table.




>
>>If 11(6) means 17 plies then the impression that I got based on analysis of the
>>logfiles is that other programs have better extensions than Deep blue because I
>>saw cases when they can see the same start of the main line at smaller depths.
>>
>>If people are interested in comparing the analysis of Deeper blue with the
>>analysis of Deep Fritz or Junior7 or Crafty18.10 then I have no problem to help
>>but I am not going to do alone all this project of giving top program some hours
>>for every position that Deeper blue pondered.
>>
>>I think that complete analysis of these positions by other programs may give
>>better information and we can see in cases that both saw the same line if the
>>commercial programs are faster or Deeper blue is faster.
>>
>>It seems that nobody or almost nobody except me is interested in the analysis of
>>the positions from these games for hours so I do not do it because if I do it
>>alone or almost alone it is going to take too much time.
>>
>>If I spend 5 hours per position and use my computer 20 hours per week for Deeper
>>blue-kasparov games then I need years to finish the job and I am not interested
>>so much in the problem in order to give my programs more than 20 hours per week
>>or to wait years for all the relevant information.
>>
>>Uri


