Author: José Carlos
Date: 00:03:52 05/05/03
On May 03, 2003 at 04:31:08, Kurt Utzinger wrote:

>On May 02, 2003 at 13:01:09, José Carlos wrote:
>
>>On May 02, 2003 at 12:24:12, Kurt Utzinger wrote:
>>
>>>On May 02, 2003 at 12:05:22, Peter Berger wrote:
>>>
>>>>On May 02, 2003 at 11:47:46, Djordje Vidanovic wrote:
>>>>
>>>>>On May 02, 2003 at 09:01:29, Mogens Larsen wrote:
>>>>>
>>>>>>On May 02, 2003 at 08:36:30, Djordje Vidanovic wrote:
>>>>>>
>>>>>>>Hello Tony,
>>>>>>>
>>>>>>>which book is Ruffian using? Sorry if I am pestering you with a question
>>>>>>>already asked, but I am really interested. Fritz 8 must be using its own Fritz
>>>>>>>8 book (by A. Kure); how about Ruffian?
>>>>>>>
>>>>>>>Thanks in advance.
>>>>>>
>>>>>>Ruffian is using its own book, available at the website. And since it's installed
>>>>>>as a UCI engine there's no book learning. The essentials from answers to similar
>>>>>>questions like the one above :-).
>>>>>>
>>>>>>Regards,
>>>>>>Mogens
>>>>>
>>>>>Mogens,
>>>>>
>>>>>thanks for confirming my suspicion. Actually, as I can see, Mr. Hedlund did not
>>>>>care to reply to my carefully phrased question. Ruffian is being tested in an
>>>>>unfair manner: partly due to Perola Valfridsson's benign attitude to the SSDF
>>>>>testing methods, which is his own fault, and partly because under UCI the game
>>>>>results are never sent back to the engine. UCI is hopeless there...
>>>>>
>>>>>I only wished to point out this fact, which may open the eyes of the CCC members
>>>>>who were not aware of the technicalities of this match. Fritz 8 keeps on
>>>>>learning and fine-tuning its book, while Ruffian never gets the previous
>>>>>results back... The final score should not be taken very seriously, IMHO.
>>>>>
>>>>>Rgds,
>>>>>
>>>>>Djordje
>>>>
>>>>As far as I know, the SSDF usually contacts the authors and asks with which
>>>>settings they want their engines tested for maximum performance.
>>>>
>>>>If Ruffian doesn't have book learning as a UCI engine, that's a missing feature,
>>>>nothing more, nothing less. It might be difficult to implement, but it's not
>>>>impossible, as others have it. That's similar to the time usage discussed
>>>>somewhere else. Ruffian (only as a UCI engine?) doesn't use its time in an
>>>>optimal way (an understatement) at 40/120. So is using this time control
>>>>unfair? Btw, I expect this problem to be more severe than the book issue because
>>>>Ruffian's book is quite wide.
>>>>
>>>>I am convinced a future version of Ruffian will take revenge anyway :).
>>>>
>>>>Peter
>>>
>>> In my opinion, much noise about nothing. First of all, book learning is
>>> less effective than people think, and people should only make claims after
>>> having seen the games. Ruffian's book is wide, and most probably the same
>>> opening will not occur even once over the 40-game match.
>>> Kurt
>>
>> I don't know about this match, but I disagree about book learning not being
>>effective. I know you didn't say that, but I don't know exactly what other
>>people think.
>> In my own experience, book learning is important for several reasons:
>> -It tends to "forget" bad moves the book contains. The probability of bad
>>moves in professional books is low, but not zero.
>> -It tends to drive the program to positions it plays well (and thus gets good
>>results). Professional programs play almost every position well, but not all.
>> -In long matches, it might be decisive if it finds a hole in the opponent's
>>book (if the opponent can't learn).
>> IMO, it's very similar to human chess. GMs play openings they feel
>>comfortable with, that they understand, and that they get good results with,
>>and in long matches they try to find holes in the opponent's book and take
>>advantage of them.
>> The difference is that programs are basically stupid, so they make the same
>>mistake a million times if they don't have some mechanism that detects it and
>>tells the program to choose another way.
>>
>> José C.
>
> Has there ever been a test of using versus not using book learning? I can't
> remember. It would be interesting, I think. This feature makes sense in
> long engine/engine matches between two programs. To get fair matches
> against the next engines to be tested, the book must be reset to its
> default settings, and furthermore the learning files must be deleted.
> Under such conditions book learning will never play a great role.
> Despite your arguments, I doubt that book learning is that effective. It
> might be, if the opponent's book is not a good one.
> Kurt

In my tests I never delete anything from the engines' directory except the logs. Every new version of Averno I test starts off about 50 rating points below the highest-rated version after the first 100 games or so. After 200 games it's very close to the strongest, and some games later it starts to climb above it. In those tests I don't usually play long matches, but round-robin tournaments between programs of similar rating (±150 points or so). That's all I can "prove".

  José C.
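[To make the mechanism being debated concrete: the usual form of result-based book learning adjusts a per-move weight in the book after each game. The C++ sketch below is purely illustrative; the names, weight scale, and update rule are assumptions for this example, not how Ruffian, Fritz 8, or Averno actually implement learning.]

// book_learning_sketch.cpp -- illustrative only; all names and numbers are
// assumptions, not any real engine's implementation.
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Each book move carries a weight used when picking among alternatives.
struct BookEntry {
    int weight = 100;   // relative probability of playing this move
};

// Book keyed by (position, move); a real book would hash the position.
std::map<std::pair<std::string, std::string>, BookEntry> book;

// After a game, adjust the weight of every book move that was actually played.
// result: +1 win, 0 draw, -1 loss (from the learning engine's point of view).
void updateBook(const std::vector<std::pair<std::string, std::string>>& movesPlayed,
                int result)
{
    for (const auto& key : movesPlayed) {
        auto it = book.find(key);
        if (it == book.end()) continue;      // move came from search, not the book
        it->second.weight += 20 * result;    // reward wins, punish losses
        if (it->second.weight < 1)           // keep a floor: one bad result
            it->second.weight = 1;           // shouldn't erase the line entirely
    }
}

int main() {
    // Two book replies to 1.e4 in a toy "position" string.
    book[{"startpos e2e4", "c7c5"}] = BookEntry{};
    book[{"startpos e2e4", "e7e5"}] = BookEntry{};

    // Suppose the engine answered 1...c5 and lost: that line becomes less likely.
    updateBook({{"startpos e2e4", "c7c5"}}, -1);

    std::cout << "c5 weight: " << book[{"startpos e2e4", "c7c5"}].weight << "\n"   // 80
              << "e5 weight: " << book[{"startpos e2e4", "e7e5"}].weight << "\n";  // 100
}

[Under a scheme like this, the "forgetting" of bad book moves José describes happens by itself: a few losses shrink a line's weight toward the floor, so it is rarely chosen again. The reset Kurt mentions is simply restoring every weight to its default and deleting the learning file.]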