Author: John Coffey
Date: 15:56:20 10/19/98
On October 18, 1998 at 15:27:39, Alessio Iacovoni wrote:

>On October 18, 1998 at 15:11:30, John Coffey wrote:
>
>>On October 18, 1998 at 12:13:34, Alessio Iacovoni wrote:
>>
>>>1) Shouldn't computer strength rather be measured on "average" entry-level
>>>computers, i.e. the ones actually used by the majority of people?
>>
>>Entry level is a moving target. What may be high end now might be much
>>more common 6 months from now. If you test on an "average" machine now
>>then your results will be worthless in 6 months.
>
>What? Why would they be worthless? They would just indicate exactly the same
>ranking as a top-level computer (with different Elo ratings). Or wouldn't they?

The point is that some programs do better on slower machines than other
programs. Right now most people have machines in the Pentium 133 to 200 range,
but that is rapidly changing. Six months from now the top machines of today
will be more like "above average" or even just average, and something else
will be the top machines.

>>>2) Also, do programs benefit in the same way from higher speed and increased
>>>hash tables? If not, tests would not be comparable, and therefore useless.
>>
>>If some programs benefit more from hash tables, then this indicates a better
>>written program. Memory prices are so low now that you could get 256M and
>>not break the bank. It used to be the most expensive component on the
>>machine, but not any longer.
>
>But 256M is not the average memory people usually have. I don't know about
>you guys there in the States, but in the rest of the world people still have
>16-32 megs and a Pentium, some an MMX.

The last I checked, memory was around $1 per meg. People used to spend $1,000
on memory, so 256M doesn't seem so unreasonable. For a new computer I would
assume that 128M would be the minimum. (A rough sketch of how an engine puts
that memory to work in its hash table follows after this post.)

>>>3) Why are books used in tests? Shouldn't a top-level computer program be
>>>capable of doing at least decently in the opening phase *without* resorting
>>>to its book? If the answer is no, then it could easily be beaten by even
>>>lower-performing computers by having it systematically go out of book. Or am
>>>I wrong?
>>
>>The computer's opening book is very much a component of its skill, just as a
>>human player's book is a component of his skill.
>
>I'm not too sure of that: it's a component of somebody else's skill (i.e. the
>international master or grandmaster or team of masters that have helped to
>make the book). Also, I didn't say opening books shouldn't be used, but that
>they shouldn't be used in tests, because they greatly modify the results. The
>message people receive when fed with rankings is that engine x is stronger
>than engine y, not that the book of x is better than that of y. Also, if an
>existing program were given some opening knowledge and a feature to
>systematically force the other program out of its book, it would appear to be
>stronger (while in reality it isn't). I wonder why nobody has come up with
>this yet. Hyatt feels the need to introduce a "no tricks" function in his
>Crafty... why hasn't he developed a switch to "DO tricks" when playing
>against other computers?

If you want to determine how well a program would play in tournaments or
against humans, then it seems to me that the opening book and the ability to
learn are very much a factor.

Best wishes,

John Coffey
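For readers unfamiliar with what "hash tables" means in this thread, here is a
minimal sketch in C of a transposition table, the main consumer of an engine's
memory. It is not taken from Crafty or any other real engine; the struct
layout, the names (TTEntry, tt_init, tt_probe, tt_store) and the replacement
rule are assumptions chosen only to illustrate why table size matters.

    /* Minimal, illustrative transposition table: the bigger the table,
     * the fewer previously searched positions get overwritten and lost. */
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        uint64_t key;     /* Zobrist hash of the position              */
        int16_t  score;   /* score found for this position             */
        int8_t   depth;   /* depth the position was searched to        */
        int8_t   flag;    /* exact score, lower bound, or upper bound  */
        uint16_t move;    /* best move found, packed into 16 bits      */
    } TTEntry;

    static TTEntry *table;
    static size_t   num_entries;

    /* Size the table to however much RAM the user gives the engine. */
    void tt_init(size_t megabytes)
    {
        num_entries = (megabytes * 1024UL * 1024UL) / sizeof(TTEntry);
        table = calloc(num_entries, sizeof(TTEntry));
    }

    /* Probe: if this exact position was stored earlier, its result can
     * be reused instead of searching the subtree again.               */
    TTEntry *tt_probe(uint64_t key)
    {
        TTEntry *e = &table[key % num_entries];
        return (e->key == key) ? e : NULL;
    }

    /* Store: a simple depth-preferred replacement rule; with a small
     * table, useful entries are evicted constantly.                   */
    void tt_store(uint64_t key, int16_t score, int8_t depth,
                  int8_t flag, uint16_t move)
    {
        TTEntry *e = &table[key % num_entries];
        if (e->key != key || depth >= e->depth) {
            e->key = key; e->score = score; e->depth = depth;
            e->flag = flag; e->move = move;
        }
    }

An engine given 256M keeps far more of these entries alive across a search
than one given 16M, and programs differ in how much of their speed comes from
such hits, which is one reason the same two programs can rank differently on
small and large machines.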