Author: Robert Hyatt
Date: 10:42:15 01/23/00
On January 23, 2000 at 12:53:54, Tom Kerrigan wrote:

>On January 23, 2000 at 11:16:50, Chris Carson wrote:
>
>>Dr Hyatt,
>>
>>I remember the CCC discussions. Since you are quoting someone else,
>>please post your source so that everyone can review. If you have no
>>source, then please state it as "opinion only"; everything that can
>>not be verified is only speculation or hearsay, not fact. It is your
>>responsibility to post the source if you use a quote or make a
>>reference. It is not the responsibility of the reader to find your
>>source or to take your word for it. No researcher gets a free ticket,
>>and all research should be scrutinized.
>
>Speaking of evidence...
>
>Hsu basically estimated how fast DB would run on a PC, and it's
>published (!) in the IEEE journal. Now there are two possibilities:
>
>1) Hyatt knew about this estimate but chose to ignore it. He writes
>many FUD (Fear, Uncertainty, Doubt) posts to confuse people into
>thinking that DB is untouchable by mere mortals.
>
>2) Hyatt did not know about this estimate. I have a hard time
>believing this, because I just checked and it's even in the ABSTRACT
>of the article. It took all of 30 seconds to find. Anyway, if we
>compare Hyatt's estimates against Hsu's estimates, we can see that
>Hyatt's estimates are HORRIBLE. Is this really somebody we want to
>trust a lot? Next he'll be telling us that Ferraris are made out of
>diamonds and can go 4.8 million miles per second.
>
>So, is Bob a liar, or is he just really clueless? You make the call...
>
>-Tom

Or perhaps Bob reads more carefully? We have seen the 40,000
instructions per node figure mentioned several times. The question was,
"how hard would it be to emulate DB's hardware stuff _in software_?" I
said that _if_ you used a typical emulator (the kind used to debug/test
chip designs before sending them to a fab for production), it would
probably run a billion times slower than the chip itself.

That isn't a guess. I have used simulators for years doing O/S work...
simulators that have to trap in order to simulate privileged
instructions that a user-mode program can't execute. Our old Xerox took
about 10 seconds to boot normally, back in the 1970s; on the simulator
it took an hour. And that was executing 99% of the instructions
directly on the CPU, with only the remaining 1% handled by complex
emulation code for I/O, interrupts, trap handling, and so forth. To go
one level further and emulate _every_ instruction in software would
have made the thing maybe 100,000X slower than the real CPU. To then
take a CPU design, run it on the kind of software used to test
gate-level hardware designs, and use _that_ to emulate an
instruction-level design, would be horrible. And it would have nothing
to do with porting the eval to software, because software is two levels
removed from the kind of debugging tools he had.

Another angle: yes, you would want something to test the thing with, to
be sure that your idea of the evaluation matched the hardware's output.
But how would you design your software tester? Hint: look at
"validate.c" in Crafty. I use it from time to time to make sure that no
data structures are badly corrupted, that all the bitmaps are
consistent, and so forth. It only slows me down by a factor of 20,
because I didn't want to invest a bunch of time making it fast when it
is only used for testing, where speed doesn't matter. The sketch below
shows the idea.
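To make that concrete, here is a minimal sketch of that style of
checker in C. The structures and names are hypothetical, invented for
illustration; Crafty's real validate.c is organized differently. The
technique is the point: keep redundant representations of the position
and brute-force check that they all agree, with no concern for speed.

/*
 * A minimal sketch of the validate.c idea: a slow, brute-force
 * consistency check over redundant board state.  The structures
 * and names here are hypothetical, invented for illustration;
 * Crafty's real validate.c is organized differently.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define EMPTY 0               /* piece code for an empty square */

typedef struct {
  uint64_t piece_bb[2][6];    /* one bitboard per side per piece type */
  uint64_t occupied[2];       /* redundant: union of each side's bitboards */
  int board[64];              /* redundant: piece code per square */
} Position;

/* encode side/type as the piece code stored in board[] */
static int piece_code(int side, int type) { return side * 6 + type + 1; }

/* Cross-check every redundant representation.  Far too slow to run
   at every node of a real search; it exists only for debugging runs. */
void validate(const Position *p) {
  int side, type, sq;

  for (side = 0; side < 2; side++) {
    uint64_t all = 0;
    for (type = 0; type < 6; type++) {
      /* no square may appear in two different piece bitboards */
      assert((all & p->piece_bb[side][type]) == 0);
      all |= p->piece_bb[side][type];
    }
    /* the per-side occupied bitboard must equal the union */
    assert(all == p->occupied[side]);
  }
  /* the two sides may never occupy the same square */
  assert((p->occupied[0] & p->occupied[1]) == 0);

  /* the square-indexed array must agree with the bitboards */
  for (sq = 0; sq < 64; sq++) {
    int found = EMPTY;
    for (side = 0; side < 2; side++)
      for (type = 0; type < 6; type++)
        if ((p->piece_bb[side][type] >> sq) & 1)
          found = piece_code(side, type);
    assert(p->board[sq] == found);
  }
}

int main(void) {
  Position p = {0};           /* empty board: trivially consistent */
  validate(&p);
  puts("position validated OK");
  return 0;
}

You would wire calls to something like this into debugging builds only;
a factor-of-20 slowdown is acceptable there and irrelevant to the
release build, which is exactly why nobody bothers to make it fast.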
You want to bounce back and forth. He definitely had a hardware netlist
to send to the fab shop. He definitely had some tools (admittedly
primitive, according to his new book) to test that design before
fabbing it, and they were _definitely_ slow. Does he take _that_ to do
the PC eval? That's where you started. He also had some bits and pieces
of code that could compute parts of the eval in software and then get
the same result from the hardware, to confirm that it works or fails.
Does he start with _this_ to produce the PC version?

My guess would be _no_ to both, because _neither_ is what you would
want for a PC implementation of a _real_ chess engine evaluation.
Somehow that isn't coming through. Why, I don't know. I do know it has
nothing to do with "lying" or with being "clueless". At least on _my_
part...

I am reminded of a story that fits here pretty well, about a kid who
had graduated from college years ago, talking to someone about his
experiences since finishing. He said: "When I was in college, I could
not believe how dumb my dad was. He had opinions that made no sense. He
quoted facts I couldn't believe. We never agreed on anything, because
everything he said was wrong. You know... it is _amazing_ how much _he_
learned over the next 10 years."

Think about it...