Author: Tom Likens
Date: 11:47:51 11/14/03
On November 14, 2003 at 13:31:19, Uri Blass wrote:

>On November 14, 2003 at 12:46:36, Tom Likens wrote:
>
>>On November 14, 2003 at 12:26:53, martin fierz wrote:
>>
>>>we all know computer chess has evolved a lot over the last years. the top
>>>programs are now battling (and beating) the very best players on the planet.
>>>mainly through consistency, but sometimes also with non-materialistic moves that
>>>computers would IMO not have made a few years ago (...Bxh2! by junior against
>>>kasparov, ...OO! giving up the exchange by fritz yesterday).
>>>
>>>question: is this progress more due to hardware or more due to software
>>>advancements?
>>>
>>>or in other words: if you took a top program of today (e.g. the current
>>>fritz/shredder/junior) and ran it on 5-year old hardware against a 5-year old
>>>fritz/shredder/junior version on today's hardware: which combination would win?
>>>
>>>cheers
>>>  martin
>>
>>I believe the advances in hardware have allowed the programs to evaluate
>>things they wouldn't have attempted years ago, because of the resultant
>>reduction in search depth (the kiss of death).
>
>I do not believe it.

I don't think you can radically reduce the depth a program searches and not
suffer somewhat strengthwise. A very sophisticated evaluation can compensate
for this to a point, but there is a definite crossover point where
significantly increasing the amount of knowledge in the evaluation, and thus
dramatically reducing the search depth, will result in a weaker program. Lazy
eval, hashing, extensions and pruning reduce this effect somewhat, but they
don't completely eliminate it. The trick is to hit the sweet spot, which is
different for every program.

>>A quick example,
>>a fair number of programs used to be largely piece-table driven (i.e. they
>>performed a large amount of evaluation at the root and used those results
>>at the tip of the search tree). While this is still a component of all the
>>top programs, most have been moving in the direction of more accurate full-
>>leaf evaluation because the hardware is fast enough to support it without
>>sacrificing too much depth.
>
>I do not think top programs of today sacrifice depth by full leaf evaluation.
>Top programs of today are better than the top programs of the past at all time
>controls, including blitz.
>
>Uri

I don't think they sacrifice depth either, because they are running on *much*
faster hardware. It wasn't that long ago when 50,000 nps was blazingly fast.
Now *everybody* does that, and in fact 50k nodes/sec is considered slow. There
have been tremendous software advances that Slate and Atkin had no inkling of
(null-move pruning is a good example). But I still contend that the penalty
for adding sophisticated knowledge to the evaluation function isn't as steep
today as it was 20 yrs. ago. Or perhaps another way to think about it is that
the difference between searching 12 plies deep vs. 15 plies deep isn't as
dramatic strengthwise as the difference between searching 8 plies vs. 5 plies.
Of course, if you can encode the knowledge *without* reducing the search
depth, then it's a win-win (and your name is probably Fritz ;-)

--tom
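To make the piece-table-driven versus full-leaf-evaluation distinction above
concrete, here is a minimal C sketch. It is not taken from Fritz, Shredder,
Junior, or any other program mentioned in the thread; the board layout, table
values, and helper names are all illustrative placeholders.

#include <stdint.h>

/* Piece codes for the toy board representation used only in this sketch. */
enum { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };
#define EMPTY 255

typedef struct {
    uint8_t piece_on[64];   /* 0..5 = white P,N,B,R,Q,K; 6..11 = black; EMPTY */
} Position;

/* One 64-entry table per piece type, filled in once at the root from slow,
 * global knowledge (game phase, pawn structure, ...); values are placeholders. */
static int piece_square[6][64];

/* Style 1: "piece-table driven" -- the leaf evaluation is just material plus
 * table lookups, scored from White's point of view.  Nearly free per node,
 * so the saved time goes into search depth. */
int eval_piece_table(const Position *pos)
{
    static const int material[6] = { 100, 320, 330, 500, 900, 0 };
    int score = 0;
    for (int sq = 0; sq < 64; ++sq) {
        uint8_t p = pos->piece_on[sq];
        if (p == EMPTY) continue;
        int type = p % 6;
        if (p < 6)
            score += material[type] + piece_square[type][sq];
        else
            score -= material[type] + piece_square[type][sq ^ 56]; /* mirror */
    }
    return score;
}

/* Placeholder dynamic terms; a real engine would compute attack maps, pawn
 * shields, open files, trapped pieces, etc.  Stubbed so the sketch compiles. */
static int mobility(const Position *pos, int side)    { (void)pos; (void)side; return 0; }
static int king_safety(const Position *pos, int side) { (void)pos; (void)side; return 0; }

/* Style 2: full leaf evaluation -- recompute the expensive, position-specific
 * terms at every leaf.  Costs nodes per second, buys accuracy. */
int eval_full_leaf(const Position *pos)
{
    int score = eval_piece_table(pos);                   /* cheap static core */
    score += mobility(pos, 0)    - mobility(pos, 1);     /* 0 = White, 1 = Black */
    score += king_safety(pos, 0) - king_safety(pos, 1);
    return score;
}

The trade-off the thread is arguing about shows up in the per-node cost: the
table version is a few dozen additions per leaf, while the full version
re-derives position-specific knowledge at every leaf and pays for it in
nodes per second.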
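Null-move pruning is cited above as the kind of software advance Slate and
Atkin never had, so here is a hedged sketch of the basic recipe. It is a
separate, self-contained toy (its Position type is not the one from the
previous sketch), and every name in it is a stand-in invented for the example,
not anyone's actual engine code.

/* Toy scaffold so the sketch compiles on its own. */
typedef struct { int side_to_move; } Position;

static int  evaluate(const Position *pos)   { (void)pos; return 0; } /* static eval stub */
static int  in_check(const Position *pos)   { (void)pos; return 0; } /* side to move in check? */
static void make_null_move(Position *pos)   { pos->side_to_move ^= 1; }
static void unmake_null_move(Position *pos) { pos->side_to_move ^= 1; }

#define R 2   /* the classic null-move depth reduction */

/* Null-move pruning inside a plain fail-hard negamax search: hand the
 * opponent a free move, search to reduced depth with a zero window around
 * beta, and if the score still comes back >= beta, assume the real search
 * would fail high too and cut the node off. */
int search(Position *pos, int depth, int alpha, int beta)
{
    if (depth <= 0)
        return evaluate(pos);            /* a real engine drops into qsearch here */

    if (!in_check(pos) && depth > R) {   /* the null move is illegal in check */
        make_null_move(pos);
        int score = -search(pos, depth - 1 - R, -beta, -beta + 1);
        unmake_null_move(pos);
        if (score >= beta)
            return beta;                 /* fail high without generating a move */
    }

    /* ...normal move generation and recursive search would go here; engines
     * also skip the null move in pawn endings to dodge zugzwang. */
    return alpha;
}

The payoff is that many nodes fail high without a single move being generated,
which is part of why the programs in this discussion reach the depths being
compared above.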