Computer Chess Club Archives



Subject: Re: Status of Brutus?

Author: Robert Hyatt

Date: 20:22:45 07/29/03



On July 29, 2003 at 20:56:50, Vincent Diepeveen wrote:

>On July 29, 2003 at 18:17:24, Slater Wold wrote:
>
>>On July 29, 2003 at 16:12:40, Vincent Diepeveen wrote:
>>
>>
>>>>This is just another area where you know nothing, but write as though you are
>>>>an expert.
>>>>
>>>>BTW, Hsu's move generator is _not_ a lot better than Belle.  All you have to
>>>>do is read his paper to see what he did...
>>>
>>>Of course everyone can. It is described in several papers. What Brutus has is a
>>>*lot* better, I can guarantee it.
>>>
>>>Hsu didn't program in Verilog or some other hardware-description language. Because
>>>of that it is amazing he managed to get the thing working bug-free. However, you
>>>can't simply compare all that university stuff with what Donninger has!
>>
>>Are you joking?
>>
>>A hand-laid-out board is *TONS* faster (and more stable) than an auto-placed-and-routed
>>design.  They teach you that in like the 2nd class of EE.
>>
>>University stuff?  Because the knowledge of a few MIT grads working with IBM is
>>probably pre-K stuff, right?  Nothing close to what ChessBase can do with chips.
>
>We are not talking about a small chip here.
>
>The big processors are all laid out by hand, but by a *lot* of people.
>
>Now suppose a single person has nothing at his disposal except transistors: simple
>blocks.
>
>Then make a chess program from that.
>
>That's *very* hard.
>
>How can you *ever* experiment with something?

Ever heard of "prototyping" and "software emulation"???
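You write a software model of the logic first and hammer it with test inputs
long before anything goes near silicon.  A minimal sketch in C (purely
illustrative, not Hsu's actual circuit): a hardware "find-victim" comparator
tree, emulated as a plain function so it can be tested against millions of
random inputs:

/* Software emulation of a hypothetical hardware "find-victim"
   arbiter: a comparator tree that picks the most valuable attacked
   piece.  In silicon this is combinational logic; in the software
   model it is a loop, which makes exhaustive testing trivial. */
int find_victim(const int value[64])  /* value[sq] = 0 if sq unattacked */
{
    int best_sq = -1, best_val = 0;

    for (int sq = 0; sq < 64; sq++)
        if (value[sq] > best_val) {
            best_val = value[sq];
            best_sq = sq;
        }
    return best_sq;  /* square holding the most valuable victim, or -1 */
}

Once the software model and the schematic agree on every test vector, you have
real confidence before you pay for a fab run.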


>
>It is very hard to produce new versions. Perhaps once every 3 months you can
>produce a new version with a few minor changes.

Or, you can put the things that change into RAM, rather than into firmware.
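The idea, as a minimal sketch (illustrative names, not Deep Blue's actual host
interface): the evaluation weights live in a writable table that the host
downloads at startup, so a tuning change is a table reload rather than a new
chip revision:

#include <string.h>

#define N_WEIGHTS 8192

static int eval_ram[N_WEIGHTS];  /* on-chip RAM in the real design */

/* Host downloads a fresh weight set -- no new silicon needed. */
void load_weights(const int *host_table, int n)
{
    memcpy(eval_ram, host_table, (size_t)n * sizeof eval_ram[0]);
}

/* Looked up by the evaluation logic at search time. */
int eval_term(int feature_index)
{
    return eval_ram[feature_index];
}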


>
>Producing a CPU at the time must have cost IBM something like 30000 per CPU anyway.

Not even close.  Too high.  Look up "project MOSIS".

>
>So producing 'test CPUs' is not so simple.
>
>Then, with all that very low-level stuff, you are building something that does
>complex searching.
>
>So it's way harder than making a graphics processor, for example.
>
>Reason: a graphics processor has a clear goal. If some pixel is shown wrong on
>the screen, then you have a bug, period.
>
>In chess programs that is harder, however. It is a black box which just calculates
>moves.
>
>We're not talking about something that was tested in an FPGA.
>
>So you are really talking about beginner-level testing and improvement.
>
>That's one of the things that explains why he didn't have stuff like null move
>working very well, of course, and why his parallelism was so buggy.
>
>'aborting' a search because a search took too long????
>
>I won't ever do that in DIEP :)

You'd better, or you will certainly lose on time.
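Every serious program does some version of this (a minimal sketch with
illustrative names, not DIEP's or anyone else's actual code): poll the clock
every few thousand nodes, set a flag, unwind the search, and play the best move
from the last completed iteration:

#include <time.h>

static clock_t start_time;         /* set when the search starts  */
static double  limit_seconds;      /* time allotted for this move */
static volatile int abort_search;  /* polled inside the search    */

/* Called every few thousand nodes from inside the search. */
void check_time(void)
{
    double elapsed = (double)(clock() - start_time) / CLOCKS_PER_SEC;

    if (elapsed >= limit_seconds)
        abort_search = 1;  /* search unwinds; the move from the last
                              fully completed iteration is played   */
}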


>
>Because of this very low-level design, the problems to solve were so huge that he
>simply didn't have time to experiment with new algorithms, except what they
>invented themselves (no progress, singular extensions).
>
>To date it is unclear how singular extensions were implemented by them.

They spelled it out _clearly_ in their JICCA journal article.  I don't know
why _you_ can't understand what they did.  I understood it just fine and
implemented it in 1993 in Cray Blitz.  It wasn't easy.  But their explanations
were good.

>
>All I know, secondhand, is that they most likely didn't do >= beta singular
>moves. If so, then how did they do the re-search? Because the *original*
>definition, that a move must be a margin S better than the rest, is one you
>can't apply without having a true bound.

Again, you simply don't read.  They did PV singular, and fail-high singular,
which is >= beta.

All you have to do is read their paper.  They didn't do "fail-low" singular,
because (in their words) "they could find no useful definition of a fail-low
singular move".

>
>So all in all, the way it searched was a mess, of course.
>
>The results are like what most academics have been doing in supercomputer chess:
>extrapolation of nodes per second, extrapolation of parallel speedup,
>extrapolation of search depth. In one sentence it says 12.2 ply observed search
>depth on average.
>
>So that's including qsearch.
>
>If I counted the qsearch and extensions toward my search depth, then my depth
>would look huge too, of course.
>
>The actual iteration depth is somewhere between 10 and 12 ply, more like 10.xx ply.
>
>Not 12.2 ply.
>
>*Everywhere* the thing has been overestimated a bit too much.
>
>Sure, I do believe it was tactically strong. Look at how they did extensions:
>created simply to solve mating problems.
>
>But I would love to compare its tactical strength with, for example, a program
>like Ferret on a quad Opteron, after being allowed to overtune some king-safety
>parameters in Ferret by hand.
>
>The search lines from Deep Blue clearly reveal such aggressive king-safety
>tuning.
>
>For its time that was a very good thing, don't get me wrong!
>
>Nimzo, a year later, impressed a lot with such aggressive tuning (and a Kure book).
>
>That the ultra-passive program Shredder nowadays is also aggressively tuned just
>shows that Deep Blue wasn't behind in that respect.
>
>So, in short: while the book on Deep Blue will never be closed, because of the
>IBM propaganda and because it was the first to show clearly that mankind can
>lose to computers, it is obvious that in 2003 we had better not look to Deep
>Blue as a reference.
>
>It just played too many bad moves.


