Author: Vincent Diepeveen
Date: 18:39:07 01/10/02
On January 10, 2002 at 17:50:46, Dan Andersson wrote:
>One reason that search is discussed so much is, IMO, that the static scores of
>evaluators are 'always' wrong. It means that an efficient and intelligent search
>(including extensions) will trump a less efficient search almost all the time,
>due to the fact that the search essentially 'mines' the search space for a more
>accurate evaluation. A much better approach is to tailor your evaluator to your
>search. Granted, a good evaluator is preferable to a bad one. But making it
>behave consistently inside your search framework is the number one priority.
Not exactly the truth. A simple alpha-beta search + nullmove + hashtables
+ a simple qsearch is going to beat any other program if its evaluation is
really good and the opponent's is not.
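For readers who want to see what that baseline is, here is a minimal sketch in C of exactly those ingredients: plain alpha-beta, null move, a hash-table probe/store and a captures-only qsearch. All the helpers (evaluate, generate_moves, make_move, tt_probe, ...) are assumed placeholders, not code from DIEP or any other engine, and move ordering, bound flags and mate/stalemate handling are left out on purpose.

    #define INF 100000
    #define R   2                            /* null-move depth reduction */

    extern int  evaluate(void);              /* static eval, side to move */
    extern int  generate_moves(int *moves);  /* all moves, returns count  */
    extern int  generate_captures(int *moves);
    extern void make_move(int m);
    extern void unmake_move(int m);
    extern void make_null(void);
    extern void unmake_null(void);
    extern int  tt_probe(int depth, int alpha, int beta, int *score);
    extern void tt_store(int depth, int score, int move);
    extern int  in_check(void);

    int qsearch(int alpha, int beta)
    {
        int moves[256], n, i, score;

        score = evaluate();                  /* stand-pat */
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;

        n = generate_captures(moves);
        for (i = 0; i < n; i++) {
            make_move(moves[i]);
            score = -qsearch(-beta, -alpha);
            unmake_move(moves[i]);
            if (score >= beta) return beta;
            if (score > alpha) alpha = score;
        }
        return alpha;
    }

    int search(int depth, int alpha, int beta)
    {
        int moves[256], n, i, score, best = -INF, bestmove = 0;

        if (tt_probe(depth, alpha, beta, &score))
            return score;                    /* hash hit */

        if (depth <= 0)
            return qsearch(alpha, beta);

        /* null move: give the opponent a free move, search reduced;
           if we still fail high we cut off immediately */
        if (!in_check() && depth > R) {
            make_null();
            score = -search(depth - 1 - R, -beta, -beta + 1);
            unmake_null();
            if (score >= beta) return beta;
        }

        n = generate_moves(moves);
        for (i = 0; i < n; i++) {
            make_move(moves[i]);
            score = -search(depth - 1, -beta, -alpha);
            unmake_move(moves[i]);
            if (score > best) { best = score; bestmove = moves[i]; }
            if (score > alpha) alpha = score;
            if (alpha >= beta) break;        /* beta cutoff */
        }
        tt_store(depth, best, bestmove);
        return best;
    }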
Whatever your search, remember that the pros are 99% busy with just
evaluation and testing.
Search is like 0.001% of the time invested.
The reasons why most like to fiddle with search are:
- lossless speedups are easy to measure
- it is easy to modify something and test it
- there are great tactical test sets to see whether your
  search is finding more tactics (while at the same time this
  says nothing about engine strength in tournament results)
The reasons why most do not discuss evaluation much:
- the pros keep it a secret that they win because of evaluation;
  they say nothing anyway. But take Tiger as an example. What is
  the BIG difference between Tiger 0.x and the current Tiger2?
  Right, it is evaluation. What did Christophe post here not too long
  ago? Right: "only searching deeper works". In the meantime the only thing
  improved in Tiger is the evaluation. Of course congrats, Christophe, that
  you keep managing to improve it!
Try endgames with Fritz3 versus Fritz7a. By induction Fritz7a is everywhere
better; it even slowed down.
On today's hardware Fritz3 would easily search like 20 ply, also in the
middlegame (provided you improve its hashtables a bit by rewriting the
hash table to a better approach).
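What "a better approach" to the hash table means is not spelled out, but one common scheme (an assumption on my side, not necessarily what Fritz or DIEP do) is a two-slot bucket: one slot is only replaced by a deeper search, the other is always overwritten, so deep endgame entries survive:

    #include <stdint.h>

    typedef struct {
        uint64_t key;        /* zobrist hash of the position      */
        int16_t  score;
        int8_t   depth;
        int8_t   flag;       /* exact / lower bound / upper bound */
        uint16_t move;
    } TTEntry;

    typedef struct {
        TTEntry deep;        /* replaced only by a deeper search  */
        TTEntry always;      /* replaced on every store           */
    } TTBucket;

    static TTBucket *table;
    static uint64_t  num_buckets;            /* power of two */

    void tt_store2(uint64_t key, int score, int depth, int flag, int move)
    {
        TTBucket *b = &table[key & (num_buckets - 1)];
        TTEntry   e;

        e.key = key;  e.score = (int16_t)score;  e.depth = (int8_t)depth;
        e.flag = (int8_t)flag;  e.move = (uint16_t)move;

        if (depth >= b->deep.depth || b->deep.key == 0)
            b->deep = e;     /* deeper (or empty slot): keep it here    */
        else
            b->always = e;   /* otherwise fall back to always-replace   */
    }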
The reason Fritz3 would search so deep is what I call the Peter Gillgasch
lemma (he gave this to me as the reason why the version of DarkThought he
programmed in Alpha assembly at the time searched so deep): if the eval
sucks then *nearly everything* gives a cutoff, especially if you are
material ahead.
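A toy illustration of the lemma, with made-up numbers: a material-only eval returns the same score for every quiet position in a subtree, so once you are material ahead the stand-pat test in qsearch cuts off almost everywhere and the tree collapses into depth.

    #include <stdio.h>

    int main(void)
    {
        int beta          = 50;         /* bound inherited from the parent  */
        int material_eval = 100;        /* a pawn up, nothing else measured */
        int rich_eval     = 100 - 80;   /* same pawn, minus e.g. a bad king */

        /* the stand-pat test exactly as it appears in quiescence search */
        printf("material-only eval: %s\n",
               material_eval >= beta ? "cutoff, search nothing" : "keep searching");
        printf("richer eval:        %s\n",
               rich_eval >= beta ? "cutoff, search nothing" : "keep searching");
        return 0;
    }

With the crude eval the cutoff fires at once; with the richer eval the same node has to be searched, which is part of why a better evaluation costs plies.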
I remember that on a 4 x 400MHz Linux machine at the world champs in
Paderborn 1999 I searched *any* endgame to like 20 ply easily. This with
like 350MB of hash and a machine totalling less than 1.6GHz.
Right now I have 2 x 1.2GHz at home but I sure do not get even *close*
to 20 ply in the same endgames.
DIEP 1999's endgame was major crap. Only after the world champs 2000
did I start improving DIEP's endgame. Right now it definitely is way
stronger there than it used to be.
- But the 2 biggest reasons why evaluation hardly gets discussed are
  that it is hard work and that the average guy posting here has a rating
  way less than half of mine.
Best regards,
Vincent
>But
>discussions of evaluation factors are always good. As for good/bad bishops a
>dynamically computed piece square table is an option. And not all that expensive
>if you hash it, or make an 'el cheapo' function. The bishop might not be bad if
>it occupies an active square. Or it might be very bad in an open position if it
>is acting as a blockading piece for a pawn.
>
>MvH Dan Andersson
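The 'el cheapo' bad-bishop function Dan mentions can be as simple as counting own pawns on the bishop's square colour. The sketch below is only one way to do that; the names and the 4-centipawn weight are my own assumptions, not anything from Dan's post or from any engine named here.

    #include <stdint.h>

    #define DARK_SQUARES 0xAA55AA55AA55AA55ULL   /* bitboard of all dark squares */

    static int popcount64(uint64_t b)
    {
        int n = 0;
        while (b) { b &= b - 1; n++; }
        return n;
    }

    /* penalty for a bishop hemmed in by its own pawns: count own pawns
       standing on the bishop's square colour (bishop_sq is 0..63,
       own_pawns is a bitboard of the side's pawns) */
    int bad_bishop_penalty(int bishop_sq, uint64_t own_pawns)
    {
        uint64_t same_colour = ((DARK_SQUARES >> bishop_sq) & 1)
                             ? DARK_SQUARES : ~DARK_SQUARES;

        return 4 * popcount64(own_pawns & same_colour);
    }

Hashing the result per pawn structure, as Dan suggests, would make even a fancier version essentially free.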