Author: Vincent Diepeveen
Date: 10:59:06 10/18/02
On October 17, 2002 at 19:25:11, Robert Hyatt wrote:

>On October 17, 2002 at 12:41:59, Vincent Diepeveen wrote:
>
>>On October 16, 2002 at 11:03:33, emerson tan wrote:
>>
>>Nodes a second is not important. I hope you realize that
>>if you create a special program to go as fast as possible,
>>getting around 40 million nodes a second is easily
>>possible on a dual K7.
>>
>>Do not ask how it plays, though, or how efficiently it searches.
>>
>>The important factors are:
>> - He needs a very good new book. He will not even get
>>   10th at the world championship when his book is from 1997,
>>   and I do not know a single GM in the world who could do the
>>   job for him. You need very special guys in this world to do
>>   a book job. They are unique people, usually with many talents.
>>   Just hiring a GM is not a guaranteed success.
>>   If you look at how long it took for Alterman to contribute something
>>   to the Junior team, you will start crying directly.
>> - The evaluation needs to be improved big time.
>> - To get a billion-nodes-a-second chip he needs around 100 million
>>   dollars. Of course with more CPUs doing around 40 million nodes a
>>   second at, say, 500MHz, he could do it with just 10 million dollars.
>>   But if you can afford 10 million dollars for 40M-nps chips,
>>   you can afford a big parallel machine too. Note that for a single
>>   chip doing about 4 million nodes a second, all he needs is
>>   a cheap 3000-dollar FPGA. If you calculate it out, you will see
>>   that Deep Blue did not get so many nodes a second per chip:
>>   it had 480 chips, and Deep Blue searched around 126 million
>>   nodes a second on average against Kasparov. That is about 265k nodes
>>   a second per chip.
>>
>>   So a single chip getting 4 million nodes a second is very efficient
>>   compared to that.
>>
>> - He needs more like a trillion nodes a second to compensate for
>>   the inefficiency of the hardware: no killer moves, no hashtables, etcetera.
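The per-chip figure in the quoted paragraph is simple division; a quick check of the arithmetic as quoted (480 chips, roughly 126 million nodes a second aggregate):

```python
# Check of the per-chip NPS arithmetic quoted above:
# ~126 million nodes/sec aggregate, spread across 480 chess chips.
total_nps = 126_000_000
num_chips = 480
per_chip = total_nps // num_chips
print(f"{per_chip:,} nodes/sec per chip")  # 262,500 (the post rounds this to ~265k)
```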
>You keep saying that without knowing what you are talking about. Read his book.
>You will find out that the chess processors _did_ have hash table support. He
>just didn't have time to design and build the memory for them. Belle was the
>"pattern" for Deep Thought. It was essentially "Belle on a chip". Belle _did_
>have hash tables in the hardware search...
>
>Given another year (a re-match in 1998) and they would have been hashing in the
>hardware.
>
>Killer moves is not a _huge_ loss. It is a loss, but not a factor of two or
>anything close to that... I can run the test and post the numbers if you want...

You know too that it's not only killer moves. You cannot easily use a SEE in
hardware either (meaning a function that gives a value for an exchange, not a
replacement for the qsearch, where "static exchange evaluation" was originally
defined to do exactly that). In fact he does not describe anything regarding
move ordering in hardware, so I assume he had nothing there. Perhaps he ordered
captures first; that is about the only thing that is easy to do in hardware.

It is baloney to talk about something which he did not use. If he did not use
it, then he did not use it. Simple as that.

A hardware search with a single singular extension, without hashtables, without
nullmove, and with very primitive move ordering is *very* primitive. Or, better
formulated, it requires *loads* more nodes. Way more than a factor of 2 in
total. Killer moves alone are worth about 20%. All those 20%, 40%, 60% losses
add up and up to way, way more than a factor of 2, Bob. You know it and I know
it.

>>Of course the argument that it is possible to make hashtables in
>>hardware is not relevant, as there is a price to that which is simply too
>>big to pay.
>
>Based on what? Memory is not particularly complex. It certainly is not
>expensive...
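The claim that several individually modest losses compound past a factor of two is just multiplication of independent blow-up factors. A small sketch using the illustrative 20%/40%/60% figures from the discussion (the percentages are rhetorical here, not measured):

```python
# Independent search inefficiencies multiply rather than add.
# Illustrative blow-up factors from the discussion above:
# 20% (no killer moves), plus hypothetical 40% and 60% losses
# from other missing techniques.
factors = [1.20, 1.40, 1.60]

blowup = 1.0
for f in factors:
    blowup *= f

print(f"combined node blow-up: {blowup:.3f}x")  # 2.688x, already past a factor of 2
```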
>>Even for IBM it was too expensive to pay for
>>hashtables in hardware, despite that Hsu had created possibilities
>>for it; the RAM was not put on the chips and was not connected to the
>>CPUs. Something that improves the chips of course does get used when
>>it works somehow. Could only price have been the reason? Don't you
>>think that too? If not, what could be the reason not to use hashtables,
>>knowing they improve efficiency?
>
>Lack of time. Hsu completely re-designed the chess chips, got them built,
>tested them, worked around some hardware bugs, suffered through some fab
>problems that produced bad chips, and so forth. All in one year. He got the
>final chips weeks before the Kasparov match.
>
>It was an issue of time. Memory would have cost _far_ less than the chips
>(chess chips).
>
>>The important thing to remember is that if I want to drive to
>>Paris with 2 cars and I just send the cars off in all directions without
>>looking at a map or road sign (representing the inefficiency), then
>>the chance is they land everywhere except on the highway to Paris.
>>
>>Even a trillion nodes a second is not going to work if it is using
>>inefficient forms of search.
>>
>>It is not very nice of Hsu to focus upon how many nodes a second
>>he plans to get. For IBM that was important in 1997 to make marketing
>>with. It is not a fair comparison.
>
>The match was _not_ about NPS. It was purely about beating Kasparov. If they
>could have done it with 10 nodes per second, they would have. I don't know
>where you get this NPS fixation you have, but it is wrong. Just ask Hsu...
>
>>If I go play at the 2003 world championship with like 500 processors, I
>>do not talk about "this program uses up to a terabyte of bandwidth
>>a second (1000000 MB/s) to outpower the other programs, whereas
>>the poor PC programs only have up to 0.000600 terabyte of bandwidth
>>a second (600MB/s)".
>
>First, you had better beat them... That's not going to be easy.
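For readers unfamiliar with what "hashtables in hardware" would have bought, here is a minimal software sketch of a transposition table, the structure under discussion. All names and the always-replace policy are illustrative, not Hsu's actual design:

```python
# Minimal transposition-table sketch: a software analogue of the
# hardware hashing discussed above. Always-replace policy; all
# names here are illustrative.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TTEntry:
    key: int    # full hash key, to detect index collisions
    depth: int  # remaining depth the stored score is valid for
    score: int


class TranspositionTable:
    def __init__(self, size: int = 1 << 20):
        self.size = size
        self.slots: list[Optional[TTEntry]] = [None] * size

    def store(self, key: int, depth: int, score: int) -> None:
        # Always-replace: the simplest scheme, and the cheapest
        # to imagine in hardware.
        self.slots[key % self.size] = TTEntry(key, depth, score)

    def probe(self, key: int, depth: int) -> Optional[int]:
        e = self.slots[key % self.size]
        # Usable only if the full keys match and the stored search
        # was at least as deep as the one we need now.
        if e is not None and e.key == key and e.depth >= depth:
            return e.score
        return None
```

Probing before searching a node lets an engine reuse a score reached earlier via a different move order, which is exactly the saving being argued about here.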
>NUMA has plenty of problems to overcome...
>
>>That is not a fair comparison. Do you see why it is not a fair
>>comparison?
>>
>>He should say what search depth he plans to reach using such
>>chips.
>
>Depth is _also_ unimportant. Otherwise they could have just done like Junior
>does and report some "new" ply definition of their choosing, and nobody could
>refute them at all.
>
>This was about beating Kasparov. Not about NPS. Not about depth. Not about
>_anything_ but beating Kasparov...
>
>Had you talked to them after they went to work for IBM you would know this.
>Those of us that did, do...
>
>>However he quotes: "search depth is not so relevant". If it is not
>>so relevant, then why talk about nodes a second anyway, if
>>the usual goal of more NPS (getting a bigger search depth) is
>>not considered important?
>
>They haven't been talking about NPS except in a very vague way. You have
>made it an issue, not them. They can't really tell you _exactly_ how fast they
>are going since they don't count nodes...
>
>>>EeEk(* DM) kibitzes: kib question from Frantic: According to what was
>>>published, DB was evaluating 200 million positions per second (vs 2.5
>>>to 5 million for the 8-way Simmons server running Deep Fritz). How
>>>fast would Deep Blue be today if the project continued?
>>>CrazyBird(DM) kibitzes: it contains a few references at the end of the
>>>book for the more technically inclined.
>>>CrazyBird(DM) kibitzes: if we redo the chip in, say, 0.13 micron, and
>>>with an improved architecture, it should be possible to do one billion
>>>nodes/sec on a single chip.
>>>CrazyBird(DM) kibitzes: so a trillion nodes/sec machine is actually
>>>possible today.
>>>
>>>If the cost is not that high, maybe Hsu should make a chessmachine-style
>>>device that can be plugged into computers (assuming that he has no legal
>>>obligation to IBM). The desktop PC is a long way from hitting 1 billion
>>>nodes/sec.
>>>I think most of the professional chess players and serious chess hobbyists
>>>will buy it. He can easily get 1 million orders. 1 billion nodes/sec,
>>>mmm.... :)
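The depth-versus-NPS argument running through this exchange is logarithmic in nature: raw speed buys extra plies only through the effective branching factor. A sketch, with both numbers purely illustrative:

```python
import math

# Extra plies bought by a raw speedup, assuming a roughly constant
# effective branching factor (EBF). Both the speedup and the EBF
# below are illustrative, not measured values for any engine.
def extra_plies(speedup: float, ebf: float) -> float:
    return math.log(speedup) / math.log(ebf)

# Going from 1 million to 1 billion NPS (a 1000x speedup) at an
# EBF of 3 buys only about 6 extra plies:
print(f"{extra_plies(1000.0, 3.0):.1f} extra plies")  # ~6.3
```

This is the quantitative core of the efficiency argument: an inefficient search has a larger EBF, and a larger EBF shrinks the depth gained from any speedup, no matter how many nodes per second the hardware delivers.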
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.