Author: Robert Hyatt
Date: 06:31:14 01/24/00
On January 24, 2000 at 04:22:14, blass uri wrote:

>On January 23, 2000 at 22:58:42, Robert Hyatt wrote:
>
>>That is where we disagreed. I absolutely said it was not a good idea, and
>>could see _no_ reason Hsu would invest that much work. I know _I_ don't
>>write code just for the heck of it, and I absolutely hate to rewrite
>>already-existing code for the heck of it, which is what we would be asking
>>him to do: "Hsu, please take your hardware design, reproduce it in C, and
>>put it into a program so we can see how it does. We know it will be way
>>slower, that your old search won't work well since it was tuned for a much
>>faster program, and that the eval weights might not be tuned to the new
>>shallower depth. But do it anyway."
>>
>>Berliner reported in a HiTech paper that when he tried testing at very
>>shallow depths, it broke his program, because his eval assumed a certain
>>basic search depth to find some simple tactics; he saw this when he ran
>>the "HiTech vs. LoTech" tests to try to predict the rating increase per
>>ply. I don't think it is possible to just re-do DB in a PC disguise. I
>>think Hsu would start over and end up with something pretty similar to
>>what everybody else has. Evolution has not brought us all to the same
>>'neighborhood' accidentally...
>
>If the evaluation was not good for 100,000 nodes/second but only for
>200,000,000 nodes/second, then the 38:2 or 10:0 results against micros do
>not make sense.

It does in the context of the Hsu/Campbell explanations. They both have said
that these games were decided by king safety (at least the first 10 games I
reported on, which went 10 wins, no losses). Hsu said that the commercial
programs didn't know much about king safety (at the time, and he was
_clearly_ right there), and that DB just mounted strong king-side attacks and
blitzed them off the board. He mentioned that they were especially vulnerable
to the classic bishop sac on h7 or f7.

I have seen this kind of thing happen a bunch when two programs play. If they
are pretty even except for one important (and common) piece of knowledge, the
one with the extra eval term wins far more often than expected. I played a
bunch of games against Nimzo on ICC over the past year. For about a week,
Nimzo won every game. When I stopped to look at them, it seemed that Crafty
was ignoring passed pawns. When I looked at the code, somehow I had set the
scaling factor for passed pawns to zero (probably while debugging something
else) and left it there. Crafty was playing without any knowledge of passed
pawns, and it lost game after game, because in _every_ one Nimzo made a
passed pawn and pushed it, and Crafty would not be concerned until the search
actually saw it promoting. One piece of knowledge, missing, many lost games
in a row. (A small sketch of this kind of bug is at the end of this post.)

I think this is what happened to the commercial programs. Heaven knows that
two years ago they were clueless about king safety. Ask any IM/GM that played
them regularly on ICC; many manual operators would refuse to play certain
GM/IM players for this reason.

>If the evaluation was also good for 100,000 nodes/second, then there is no
>reason not to do it for the PC, because I believe, based on Hsu's words,
>that it can search 100,000 nodes/second in the near future on regular
>computers, and if it is 400 Elo better than a 2450 SSDF rating at 100,000
>nodes/second (I assume that the 38:2 or 10:0 results were against programs
>with a 2450 SSDF rating on a P200 and not against weaker programs like
>Sargon), then it can be the best in comp-comp in the near future.
>
>Uri
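Here is a minimal sketch of the passed-pawn bug described above: how one
zeroed scaling factor can silently erase an entire piece of evaluation
knowledge. This is not Crafty's actual code; the names (passed_pawn_scale,
PassedPawnBonus, EvaluatePassedPawns) and the bonus values are hypothetical.

  /* Sketch only: a single scale factor left at zero removes every
     passed-pawn term from the evaluation, so the program ignores
     passers until the search actually sees one promoting. */
  #include <stdio.h>

  static int passed_pawn_scale = 0;   /* accidentally left at 0 while
                                         debugging something else */

  /* Hypothetical per-rank bonus for a passed pawn (rank 0..7). */
  static const int PassedPawnBonus[8] = {0, 10, 20, 35, 55, 80, 120, 0};

  /* Score one side's passed pawns, given the rank of each one. */
  int EvaluatePassedPawns(const int *ranks, int n) {
    int score = 0;
    for (int i = 0; i < n; i++)
      score += PassedPawnBonus[ranks[i]];
    /* With the scale stuck at 0, this always returns 0, no matter
       how advanced the passers are. */
    return score * passed_pawn_scale / 100;
  }

  int main(void) {
    int ranks[] = {5, 6};   /* two far-advanced passers */
    printf("passed pawn score = %d\n", EvaluatePassedPawns(ranks, 2));
    return 0;
  }

Run as written, this prints a score of 0 for two pawns on the 6th and 7th
ranks; every other eval term still works, which is exactly why this kind of
bug can sit unnoticed until a long string of losses makes you go look.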