Author: Robert Hyatt
Date: 10:59:40 04/24/01
On April 24, 2001 at 11:53:42, Vincent Diepeveen wrote:

>that's some practice blitz against a tiger which plays random openings.
>
>Please remember how crafty got lost out of book game after game at
>world champs. Now that's a world champ, and THERE your book must be OK!

Remember that we had a book problem there that was unknown prior to the
event, caused by an error in the code that broke on 64-bit machines. We
couldn't even use the "good" book at all, which never plays the Sicilian
line that caused the problem...

>Of course that's going to be hard, as Noomen and Kure prepared very well
>for that tournament. Kure + Noomen were rewarded bigtime in the world
>champs for their work; in many games they got out of book with a huge
>advantage.
>
>ICC is completely irrelevant for disproving the problems which happen
>at major events, as they prepare for those events, NOT for ICC!
>
>May 18-20 there is another tournament and I'm pretty sure that Kure and
>Noomen are already working hard for that tournament... so am I, but I
>also have to take care of other things like getting Diep to work
>over a network and improving it anyway :)
>
>Best regards,
>Vincent
>
>Diep mated crafty very soon after the opening in the world champs,
>especially after crafty grabbed that pawn on a2. Diep was at +10.0
>already when crafty slowly started to realize it had a worse position.
>
>On the first move out of book Diep also did *not* realize it had a
>bigtime won position; that happened moves later. I see that game as a
>simple book win, despite both programs initially thinking crafty was OK
>after the opening.

That happens on occasion. It happened frequently in London because we
couldn't control the book we used at all. I've been to WMCCC events where
this was not a problem, but there we could control the book better...
i.e., see the Jakarta event.

>Hsu obviously fixed some major drawback in his evaluation and endgame!
>
>By 1997 standards the endgame of Deep Blue was anything but bad.
>I remember how anyone with a fast machine could win a rook endgame in
>1997 from programs playing on ICC on PentiumPro 200 CPUs...
>
>In April 1997 I had a PentiumPro 200 and that was by far the fastest CPU
>for my program. Only when the PII 300 came out at the end of '97 was
>anything a *bit* faster than a Pro 200.
>
>In those days everyone could fool programs in the endgame!

Not everyone. I can show you thousands of wins over GM players in
endgames. I can show you a standard time control win against a GM with
opposite bishops.

>These areas were addressed by Hsu quite a bit, I guess, as we can see
>from the games against Kasparov. Deep Blue plays positionally pathetic
>chess in the middlegame but seems quite well tuned in the endgame.
>
>>>It was really an old machine.
>
>>????
>
>The 0.6 micron technology the DB2 processors were made with was
>outdated even in 1997!

So? The 1997 processor was nothing like the 1996 processor. But they used
the same "process" to make the chips for cost and speed reasons. They were
constrained more by time than by anything else.

>But it tells why a machine which on paper could get over a billion nodes
>a second got in reality 200M nodes a second. With 30 hardware processors
>per SP processor, getting 1/5 of the potential speed is actually very
>good, as the chess CPUs of course search about 500 times faster than a
>software CPU.

Still, the 200M number is an "effective" number, which factors in the
search overhead lost by the parallel search. Just do the math on 480
processors, half at 2M nodes per second, half at 2.4M nodes per second.
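To make that arithmetic concrete, here is a tiny sketch (my own addition;
it uses only the numbers just quoted, 480 chips at 2M/2.4M nodes per
second, and nothing else about the machine):

    #include <stdio.h>

    /* Peak aggregate speed from the figures quoted above: 480 chess
       chips, half rated at 2.0M nodes/sec, half at 2.4M nodes/sec. */
    int main(void) {
        double slow = 240 * 2.0e6;  /* 480M NPS from the slower chips */
        double fast = 240 * 2.4e6;  /* 576M NPS from the faster chips */
        double peak = slow + fast;  /* ~1.056 billion NPS on paper    */
        printf("peak: %.3f billion nodes/sec\n", peak / 1.0e9);
        return 0;
    }

That is where the "over a billion nodes a second on paper" figure comes
from; the 200M "effective" number is that paper peak discounted by duty
cycle and parallel-search overhead, as discussed below.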
>>>Hsu writes about this too in IEEE97. He couldn't use more extensions in
>>>the last 6 plies because that was 'too dangerous'. Of course Hsu isn't
>>>lying there. The reason is an obvious timeout problem.
>>>Because suppose that a search would take longer than 0.5 seconds!
>>
>>.5 seconds is wrong. It is more like .05 seconds.
>
>This is not possible, Bob. A full-width 6-ply search with a big qsearch,
>all without hash, is around 1.5M nodes in the middlegame (don't forget
>the majority of those positions are really stupid compared to a 6-ply
>search at the root).

Not for my program. Futility in the q-search makes a big difference.

>Potentially the CPUs of Hsu could get around 2.5M nodes a second.
>
>That means he more likely needed 0.5 seconds than 0.05 seconds.

Note that they did not always search 6 plies in hardware either; it
varied from 4 to 7 depending...

>Even a very efficient 6-ply search in a stupid middlegame position,
>without hash and with a big qsearch, is 600k nodes *at least*.

And without a "big q-search"? 100K? Or less? I can see that happening
easily.

>No, he first lost a factor of 4 to the hardware timing. That's why IBM
>was so proud to announce the fastest searching chess machine in the
>world, built on (outdated 0.6 micron) IBM technology, getting 200M nodes
>a second.
>
>Hsu writes that he estimated he got a 20% speedup out of that 200M nodes
>a second.

No he didn't. First, his IEEE article says that on average they were able
to drive the chess processors at a 70% duty cycle. Sometimes 100%,
sometimes 50%, but they averaged 70% for a search. That gives well over
700M nodes per second sustained. Those numbers are public and were
confirmed multiple times in the past... and they can be found in the IEEE
paper. So it peaks at over 1B nodes per second and averages 700M, from
the numbers given by them. The 200M number is called the "effective
speed" by Hsu, which suggests that it factors in search overhead. His
thesis said that the least efficiency they saw was on the order of .2, or
20%. That would turn into about 140M nodes per second. But that was the
worst they did. 200M sounds like a good conservative estimate to me...
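Putting those figures together (again a sketch of my own; the 70% duty
cycle and the 20% worst-case parallel efficiency are the numbers cited
above from the IEEE paper and Hsu's thesis, but combining them this way
is my arithmetic, not anything Hsu published):

    #include <stdio.h>

    /* Effective speed from the cited figures: ~1.056B NPS aggregate
       hardware peak (see the earlier sketch), a 70% average duty
       cycle, and a 20% worst-case parallel efficiency. */
    int main(void) {
        double peak = 1.056e9;           /* hardware peak on paper       */
        double sustained = 0.70 * peak;  /* ~739M NPS at 70% duty cycle  */
        double worst = 0.20 * sustained; /* ~148M NPS at 20% efficiency  */
        printf("sustained: %.0fM NPS\n", sustained / 1.0e6);
        printf("worst-case effective: %.0fM NPS\n", worst / 1.0e6);
        return 0;
    }

The worst case lands near the ~140M figure mentioned above, and the
quoted 200M "effective" speed sits between that floor and the ~700M
sustained rate, which is consistent with calling 200M a conservative
estimate.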