Author: Vincent Diepeveen
Date: 08:53:42 04/24/01
On April 24, 2001 at 11:31:28, Robert Hyatt wrote:

>On April 24, 2001 at 10:05:55, Vincent Diepeveen wrote:
>
>>On April 24, 2001 at 08:47:06, Uri Blass wrote:
>>
>>>On April 24, 2001 at 08:20:57, Vincent Diepeveen wrote:
>>>
>>>>On April 24, 2001 at 03:47:15, Uri Blass wrote:
>>>>
>>>>>the best software that is not IBM.
>>>>>
>>>>>Suppose there is a match of 20 games at tournament time control.
>>>>>
>>>>>I am interested to know how many people expect 20-0 for IBM.
>>>>>How many people expect 19.5-0.5?
>>>>>
>>>>>If IBM expects a better result than the average result the public
>>>>>expects, then they can earn something from playing a match of 20
>>>>>games with Deep Blue.
>>>>>
>>>>>I believe that the part of the public who read the claim that
>>>>>Kasparov played like an IM is not going to expect a good result
>>>>>for IBM.
>>>>>
>>>>>Uri
>>>>
>>>>First of all, IBM would get out of book every game with a -1.0 pawn
>>>>disadvantage (which is about the average of what Kure and Noomen
>>>>get in tournaments; sometimes they get out of book with a mate in
>>>>XXX score even).
>>>
>>>I disagree.
>>>
>>>1) It is easy to avoid a -1 pawn disadvantage by using a small book.
>>>It is also easy to get the opponent out of book (for example with
>>>lines like 1.c3 and 2.Qc2).
>>>
>>>2) Kure and Noomen do not get a +1.0 pawn advantage from the opening
>>>in every game.
>>
>>Oh well, says the beginner who never visited a world championship.
>>
>>Please visit a big tournament, or ask what the average score out of
>>book was of Nimzo / Fritz against Tiger.
>>
>>The average score of Tiger out of book in the Dutch Open 2000 was
>>+1.0; at the world championship it was even worse.
>
>I play Tiger all the time. It doesn't get +1 against me very often. I
>have played Nimzo a bunch as well. It doesn't get +1 either. In fact,
>in the first CCT it got -1.5 against me and lost a tough ending.

That's some practice blitz against a Tiger which plays random openings.

Please remember how Crafty got a lost position out of book game after
game at the world championship. Now that's a world championship, and
THERE your book must be OK! Of course that's going to be hard, as
Noomen and Kure prepared very well for that tournament.

Kure and Noomen were rewarded bigtime at the world championship for
their work; in many games they got out of book with a huge advantage.

ICC is completely irrelevant for disproving the problems which happen
at major events, as they prepare for those events, NOT for ICC!

May 18-20 there is another tournament, and I'm pretty sure that Kure
and Noomen are already working hard for it... ...so am I, but I also
have to take care of other things, like getting Diep to work over a
network and improving it anyway :)

Best regards,
Vincent

>
>>
>>Please get a bit less stubborn and either analyze the games or take
>>it from someone who has already been there many times.
>>
>>Each game, one of those books, either Noomen's or Kure's, gets out of
>>book with a mate in XXX score against its opponent.
>
>I will play a 10 game match against _either_ on ICC. I'll bet you
>there will be _no_ mate in X games against my program.

Diep mated Crafty very soon after the opening at the world
championship, especially after Crafty grabbed that pawn on a2. Diep
was at +10.0 already when Crafty slowly started to realize it had a
worse position. On its first move out of book Diep also did *not*
realize it had a bigtime won position; that happened moves later. I
see that game as a simple book win, despite that both programs
initially thought Crafty was OK after the opening.
>I'll bet you there will be no +2.x scores either. And out of 10
>games, I'll bet you there will be no more than 2 +1 games against me,
>and there will also be at least two -1 games to offset them.
>
>That's based on real experience.
>
>>
>>That's not a joke. That's REALITY.
>>
>>Even with those games not counted, the average over the other games
>>is +1.0.
>>
>>Most games at a world championship you play against AMATEURS / weak
>>commercials; the difference between the two is very little. Basically
>>the definition is: the income from chess is not enough to feed one's
>>family.
>>
>>Every programmer who joined these tournaments has EXPERIENCED this
>>problem. Programs aren't humans who can FLEXIBLY pick a new line
>>somewhere.
>>
>>I never manage to explain that problem to chessplayers, like for
>>example my own team members. They always laugh and say: "How can your
>>opening book be at most 1900 rated and the program way better than
>>that? It must be EASY to make a 2500+ book at home."
>>
>>It is NOT easy. My team members all play just one opening each. One
>>of them always plays the Benoni. Another always plays the accelerated
>>dragon, and so on.
>>
>>Recently, against one of the big criticizers (he's rated 2347), I
>>prepared a SINGLE line at home; I just took it from NCO99. I got out
>>of book with black very well. 10 moves later I was bigtime won and
>>won the first pawn.
>>
>>Now IMAGINE the problem for a chess program if I use the NCO99 book,
>>where many lines are already refuted, when playing Noomen and Kure
>>nowadays... ...note I already win easily from all other programs.
>>
>>If I entered that line one day, then I must update it every week to
>>not get outbooked in a tournament, because old lines stay in the book
>>of course. I can't follow daily the latest novelties as published in
>>the latest books and issues of magazines.
>>
>>If you play a correspondence game, then you search at THIS moment for
>>new opening theory to play at THIS moment in a game.
>>
>>Now that's just ONE opening line. Even with a small book I need to be
>>prepared to face thousands of opening lines. Suppose you start 1000
>>correspondence games shortly before a tournament: how high is the
>>quality of the opening in those games?
>>
>>And I must make my book BEFORE the tournament. I can't tell my
>>opponent: "Please wait half an hour, I'll first go buy a good chess
>>book for this specific line which we have on the board now."
>>
>>So in computer chess, the time between when my book was made and when
>>a line gets to the board is the problem. Some lines date back 1.5
>>years. At home I didn't manage to improve such a line recently, and
>>now it happens on the board.
>>
>>Of course not playing theory is usually an even bigger problem, as
>>programs suck bigtime in the opening. Deep Fritz most of all.
>>
>>>>I would expect IBM to lose 18-2.
>>>
>>>I am more optimistic for IBM, and Amir Ban's opinion seems more
>>>realistic to me.
>>>>
>>>>Let's be realistic:
>>>>
>>>> a) IBM searched 11-13 ply in 97; nowadays programs search deeper.
>>>
>>>You repeat it again and again when we have no evidence whether it is
>>>true, and Hyatt explained that it is not true.
>>
>>See the logfiles.

>Here is another excerpt since you keep quoting them:
>
> -13 T=938
>Nf6g4n
> 8(4) #[Bd6](2)######################################### 2 T=6
>Bc7d6 nh2f3 Bh5g6 qc1e1 Kg8g7 ra1b1 Ra8c8 pb3b4
> 9(6) #[Bd6](9)######################################### 9 T=15
>Bc7d6 nh2f3 Bh5g6 qc1e1 Kg8g7 ra1b1 Ra8c8
>10(6) #[Bd6](3)#####################[a5](7)##[TIMEOUT] 7 T=230
>Pa7a5 nh2f3 Bc7d6 pc2c3 Bd6f8 qc1c2 Bh5f3n nd2f3B
>---------------------------------------
>--> 17. .. a5 <-- 23/78:14
>---------------------------------------
>
>That last search was 16 plies deep.

>>
>>REMEMBER THAT A BRANCHING FACTOR OF 5.0 IN 1997 WAS VERY NORMAL.
>>
>>And no program with a bit of mobility got 11-13 ply in 1997 at
>>tournament level.
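As a back-of-envelope check of what that branching factor implies: with
an effective branching factor b, a fullwidth d ply search costs roughly
b^d nodes, so the reachable depth is about log(nodes)/log(b). The
speeds and the 3 minutes per move below are just the figures argued
over in this thread, assumed here for illustration, not measured data:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double b    = 5.0;    /* branching factor called normal for 1997 */
        double secs = 180.0;  /* roughly 3 minutes per move              */
        double nps[] = { 40e6, 200e6, 1e9 };  /* effective .. peak claims */

        for (int i = 0; i < 3; i++) {
            double nodes = nps[i] * secs;
            /* nodes ~ b^d  =>  d ~ log(nodes) / log(b) */
            printf("%5.0fM nps -> %.1e nodes -> ~%.1f ply\n",
                   nps[i] / 1e6, nodes, log(nodes) / log(b));
        }
        return 0;   /* prints ~14.1, ~15.1 and ~16.1 ply */
    }

Each extra ply costs a factor of about 5 in nodes at this branching
factor, so the 40M versus 1B nps dispute is worth about 2 ply of depth.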
>>Also the Deep Blue processors were designed years before 1997.
>
>Wrong again. The DB2 processors were designed _after_ the 1996
>Kasparov match. The machine was barely finished in time for the 1997
>match. This was all in Hsu's articles...

Hsu obviously fixed some major drawbacks in his evaluation and
endgame! By 1997 standards the endgame of Deep Blue was anything but
bad.

I remember how in 1997 anyone with a fast machine could win a rook
endgame against the programs playing on ICC on Pentium Pro 200 CPUs...
In April 1997 I had a Pentium Pro 200, and that was by far the fastest
CPU for my program. Only when the PII 300 came out at the end of '97
was there something a *bit* faster than a Pro 200. In those days
everyone could fool programs in the endgame!

These areas were addressed by Hsu quite a bit, I guess, as we can see
from the games against Kasparov. Deep Blue plays positionally pathetic
in the middlegame but seems quite well tuned for the endgame.

>>
>>It was really an old machine.
>
>????

The 0.6 micron technology the DB2 processors were made with was
outdated even in 1997!

>
>>
>>If the design of the new processor started in 1995, then getting 2
>>ply extra was very good. In 1995, imagining a program searching 11-13
>>ply WITH a lot of extensions was very good.
>>
>>Very few people believed in nullmove or forward pruning.
>>
>>Note that in Deep Blue there is also a technical problem with using
>>forward pruning.
>>
>>Each processor searches 6 ply in a certain time interval.
>>
>>Suppose that I set the time interval to 0.5 seconds to search 6 ply,
>>which is probably very near to what Hsu used.
>>
>>So let's assume that each SP processor gave its 30 hardware
>>processors a time interval of 0.5 seconds.
>>
>>Whether it uses nullmove or not is not interesting. I CANNOT CHANGE
>>THE TIMING.
>
>So? Doesn't affect the _software_ part of the search, ie the first 12
>plies.

But it tells you why a machine which on paper could get over a billion
nodes a second got in reality 200M nodes a second. With 30 hardware
processors per SP processor, getting 1/5 of the potential speed is
actually very good, as the chess processors of course search about 500
times faster than a software CPU.

>
>>
>>Hsu writes about this too in IEEE97. He couldn't use more extensions
>>in the last 6 ply because that was 'too dangerous'. Of course Hsu
>>isn't lying there. The reason is an obvious timeout problem: suppose
>>a search would take longer than 0.5 seconds!
>
>.5 seconds is wrong. It is more like .05 seconds.

This is not possible, Bob. Fullwidth, a 6 ply search with a big
qsearch and all, without hash, is around 1.5M nodes in the middlegame
(don't forget the majority of those positions are real stupid compared
to a 6 ply search from the root). Potentially Hsu's processors could
get around 2.5M nodes a second. That means he more likely needed 0.5
seconds than 0.05 seconds. Even a very efficient 6 ply search in a
stupid middlegame position, without hash and with a big qsearch, is
600k nodes *at least*.
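To put numbers on that: a throwaway check, using the node counts and
the 2.5M nps estimate from the paragraph above (estimates from this
discussion, not measured Deep Blue figures), of how long one 6 ply
hardware probe would take:

    #include <stdio.h>

    int main(void) {
        double nps = 2.5e6;   /* estimated speed of one chess processor */
        double lo  = 600e3;   /* very efficient 6 ply search, in nodes  */
        double hi  = 1.5e6;   /* fullwidth 6 ply with a big qsearch     */

        printf("600k nodes -> %.2f s per probe\n", lo / nps);  /* 0.24 s */
        printf("1.5M nodes -> %.2f s per probe\n", hi / nps);  /* 0.60 s */
        printf("0.05 s buys %.0fk nodes\n", 0.05 * nps / 1e3); /* 125k   */
        return 0;
    }

At these estimates a probe needs 0.24 to 0.6 seconds, while 0.05
seconds would buy only 125k nodes.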
>>
>>Then he would be faced with a big problem!!
>>
>>So using nullmove would be completely useless, not to mention the
>>dubious Fail High Reductions.
>>
>>FHR is a very dubious form of searching. Any mathematician should be
>>able to prove it dubious when positions stored in the hashtable are
>>used.
>>
>>Note I do not know whether those had even been published by 1997
>>anyway.
>>
>>But for sure, in the last 6 ply none of those things could be used.
>>
>>Whether a 6 ply search finished in 600,000 nodes or 1,500,000, or
>>sometimes with nullmove in 50,000 nodes, is completely *irrelevant*.
>>
>>Always that 0.5 seconds was used (or whatever time Hsu had set).
>>
>>Why does everyone forget this timing issue?
>>
>>11-13 ply in 1997 was more than OK.
>>
>>>I believe that they searched deeper, and the output suggests 16-18
>>>plies, but I do not believe that 16-18 plies were totally brute
>>>force, because it seems impossible to me to search 16-18 plies brute
>>>force even with their hardware.
>>
>>Please see the logfiles and analyze.
>>
>>It's theoretically impossible to search fullwidth 16-18 ply with just
>>40M nodes a second (a 20% efficiency, as claimed by Hsu, from 480
>>chess processors getting 200M nodes a second is very good).
>
>Hsu searched at 1B nodes per second max. 700M normally. He factored
>out the search overhead to make that an effective 200M nodes per
>second. Do the math to understand it.

No, he first lost a factor of 4 to the hardware timing. That's why IBM
was so proud to announce the fastest searching chess machine in the
world, built on (outdated 0.6 micron) IBM technology, getting 200M
nodes a second. Hsu writes that he estimated he got a 20% efficiency
out of that 200M nodes a second. That's how I get to 40M nodes a
second, though this is all irrelevant considering it was a fullwidth
search!

>
>>
>>Also the qsearch was causing big overhead, as explained by Hsu in
>>IEEE99.
>>
>>Getting 11-13 ply fullwidth at 40M nps, with no hashtables in the
>>last 6 ply and loads of extensions in the first 5-7 ply, is VERY
>>good.
>>
>>Note Deep Blue didn't use a sophisticated form of alphabeta like PVS
>>or something similar.
>>
>>It used NORMAL alphabeta. To reduce the RAM needed on a chip, Hsu DID
>>decrease the number of parameters shipped to the processor: he
>>shipped just a single bound to it.
>>
>>When I use normal alphabeta I need loads more nodes.
>>
>>>They probably used futility pruning.
>>
>>Complete nonsense from a layman.
>>
>>As a mathematician, first please prove FHR to be dubious!
>>
>>Hsu always claimed never to have used forward pruning; he didn't do
>>it in the past and he didn't do it in 1997 either.
>>
>>Why would he? In 1997 even Hyatt was saying nullmove was completely
>>dubious.
>>
>>Nowadays we can easily show (using for example double nullmove) that
>>it isn't positionally dubious, but that in some weird cases you need
>>some extra plies to find something.
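For reference, a minimal sketch of the double nullmove idea referred to
above: allow a second nullmove directly after the first, but never a
third. Two nulls in a row collapse into a reduced-depth real search, so
zugzwangs cost some extra plies instead of being missed outright. All
types and helpers here (Position, qsearch, gen_moves, make_null and so
on) are hypothetical names assumed for the sketch, not code from any
real engine:

    typedef struct Position Position;

    extern int  qsearch(Position *p, int alpha, int beta);
    extern int  in_check(const Position *p);
    extern int  gen_moves(Position *p, int moves[]);
    extern void make_move(Position *p, int m);
    extern void unmake_move(Position *p, int m);
    extern void make_null(Position *p);
    extern void unmake_null(Position *p);

    #define NULL_R 2    /* nullmove depth reduction, a common choice */

    int search(Position *p, int depth, int alpha, int beta, int nulls)
    {
        if (depth <= 0)
            return qsearch(p, alpha, beta);

        /* Try a nullmove even right after another one, but never make
           three in a row.  A pair of consecutive nulls is just a
           reduced-depth real search, which is what catches zugzwang. */
        if (nulls < 2 && !in_check(p)) {
            make_null(p);
            int v = -search(p, depth - 1 - NULL_R, -beta, -beta + 1,
                            nulls + 1);
            unmake_null(p);
            if (v >= beta)
                return beta;
        }

        int moves[256], n = gen_moves(p, moves);
        for (int i = 0; i < n; i++) {
            make_move(p, moves[i]);
            /* a real move resets the consecutive-null counter */
            int v = -search(p, depth - 1, -beta, -alpha, 0);
            unmake_move(p, moves[i]);
            if (v >= beta)
                return beta;
            if (v > alpha)
                alpha = v;
        }
        return alpha;
    }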
>>>> b) their book is a hell of a lot worse than today's books are
>>>
>>>They only need a small book. A big book may be good only for blitz;
>>>when the time control is slower it is better to use a smaller book.
>>
>>No matter what the level, it's important to have a good book, as
>>usually the book is the weakest part of the program. A program gets
>>completely annihilated with black after 1.d4 Nf6 2.c4 g6 if you're
>>out of book there with black!
>>
>>Just try it at home with the auto232 player!
>>
>>Note that when playing against a human I definitely agree that a big
>>book is not very important, as they get you out of it quickly anyway,
>>and if they don't get you out of it, then they probably know a
>>novelty in the line you have on the board, so with a direct advantage
>>for the human...
>>
>>>Uri