Computer Chess Club Archives


Subject: Re: Algorithms vs. knowledge - What to do next? [correction]

Author: Robert Henry Durrett

Date: 12:54:35 06/04/02



On June 04, 2002 at 15:28:03, Robert Henry Durrett wrote:

>On June 04, 2002 at 15:11:31, Uri Blass wrote:
>
>>On June 04, 2002 at 14:35:08, Robert Henry Durrett wrote:
>>
>>>On June 04, 2002 at 13:12:14, Uri Blass wrote:
>>>
>>>>On June 04, 2002 at 10:49:00, Vincent Diepeveen wrote:
>>>>
>>>>>On June 04, 2002 at 08:54:57, José Carlos wrote:
>>>>>
>>>>>What I read in Dann's words is that he believes more in search
>>>>>than in knowledge. If that's the case, then I think he is wrong.
>>>>>
>>>>>I do not see how to easily improve search either.
>>>>>
>>>>>Let's compare DIEP 1998 with DIEP 2002.
>>>>>
>>>>>Of course, when talking about eval we are quickly finished: it's
>>>>>way bigger now and way better. Let's just compare the SEARCH now.
>>>>>
>>>>>DIEP 2002: an 8-probe hashtable, null move with R=3 always, 2 killer
>>>>>moves, complex move ordering (but not that much changed in the last
>>>>>years), and some complex extensions, but those do not contribute much
>>>>>to the game; at most they solve test sets a bit sooner. The quiescence
>>>>>search is pretty complex, but compared to 1998 it is very simple, as I
>>>>>do way more there now.
>>>>>
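For readers less familiar with the terms above: null-move pruning gives the
side to move a "free pass" and searches the resulting position to a reduced
depth; if even that reduced search fails high, the node is cut off. A minimal
sketch in C, assuming a plain fail-hard negamax framework; the helper routines
are hypothetical placeholders, not DIEP's actual code, and R is the tunable
reduction (2 or 3 in most programs of this era, as discussed further down).

  /* Minimal sketch of null-move pruning with a fixed reduction R = 3.
     make_null_move, undo_null_move, in_check, quiesce, generate_moves,
     make_move and undo_move are assumed engine internals, not DIEP's
     actual code.  Zugzwang safeguards are left out. */

  #define R 3

  extern int  in_check(void);
  extern void make_null_move(void);
  extern void undo_null_move(void);
  extern int  quiesce(int alpha, int beta);
  extern int  generate_moves(int moves[]);
  extern void make_move(int move);
  extern void undo_move(int move);

  int search(int alpha, int beta, int depth)
  {
      int i, n, score, moves[256];

      if (depth <= 0)
          return quiesce(alpha, beta);

      /* Null move: let the opponent move twice.  If the reduced-depth
         search still fails high, a real move will almost surely fail
         high as well, so cut off.  Skipped when in check. */
      if (!in_check() && depth > R) {
          make_null_move();
          score = -search(-beta, -beta + 1, depth - 1 - R);
          undo_null_move();
          if (score >= beta)
              return beta;
      }

      n = generate_moves(moves);
      for (i = 0; i < n; i++) {
          make_move(moves[i]);
          score = -search(-beta, -alpha, depth - 1);
          undo_move(moves[i]);
          if (score >= beta)
              return beta;              /* fail-hard beta cutoff */
          if (score > alpha)
              alpha = score;
      }
      return alpha;
  }
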
>>>>>Now DIEP 1998: that was a very complex search. First of all, I made
>>>>>all kinds of efforts not to search too shallowly. It was not getting
>>>>>enough depth at tournament level to even see basic tactics which I see.
>>>>>
>>>>>So I did all kinds of difficult forward pruning. Weird things like
>>>>>special killer tables were also used. Special information was gathered
>>>>>in order to search less in the last few plies, and the qsearch was way
>>>>>more limited. Nearly no check was extended in the main search, because
>>>>>this was too expensive. Hardly any extension was done there.
>>>>>
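The killer tables mentioned here are, in their plain form, just a small
per-ply table of quiet moves that recently produced beta cutoffs, so that
they can be tried early in move ordering. A generic two-slot sketch of the
standard heuristic, not the "special" tables described above; MAX_PLY and
the move encoding are assumptions:

  /* Generic two-killers-per-ply table: remember quiet moves that caused
     a beta cutoff so they can be tried early at the same ply later on. */

  #define MAX_PLY 64

  static int killer[MAX_PLY][2];

  void store_killer(int ply, int move)
  {
      if (killer[ply][0] != move) {        /* keep the two slots distinct */
          killer[ply][1] = killer[ply][0];
          killer[ply][0] = move;
      }
  }

  int is_killer(int ply, int move)
  {
      return move == killer[ply][0] || move == killer[ply][1];
  }
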
>>>>>Of course it was not a parallel engine, but that is about the only
>>>>>thing which has become more complex in the search, though it is in
>>>>>fact still the same type of search.
>>>>>
>>>>>In short, my search has become much simpler, especially the
>>>>>quiescence search. I do not blink an eye now at having a bigger
>>>>>overhead there!
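
The quiescence search referred to throughout is the capture-only search run
at the leaf nodes, so that the static evaluation is never applied in the
middle of an exchange. A bare-bones sketch in C, again with hypothetical
helper routines and none of the refinements (checks, pruning of bad
captures, and so on) that a real engine adds:

  /* Bare-bones quiescence search: stand pat on the static evaluation,
     then try captures only, so evaluate() is never applied in the
     middle of an exchange. */

  extern int  evaluate(void);
  extern int  generate_captures(int moves[]);
  extern void make_move(int move);
  extern void undo_move(int move);

  int quiesce(int alpha, int beta)
  {
      int i, n, score, moves[256];

      score = evaluate();                  /* "stand pat" score */
      if (score >= beta)
          return beta;
      if (score > alpha)
          alpha = score;

      n = generate_captures(moves);
      for (i = 0; i < n; i++) {
          make_move(moves[i]);
          score = -quiesce(-beta, -alpha);
          undo_move(moves[i]);
          if (score >= beta)
              return beta;
          if (score > alpha)
              alpha = score;
      }
      return alpha;
  }
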
>>>>
>>>>Better search rules do not always mean more complex rules.
>>>>
>>>>The right rules may also depend on the evaluation, and I think that this is
>>>>a good reason, after getting to the level of programs like mine (which is at
>>>>a similar level to the Baron), to start improving the evaluation. Otherwise
>>>>you may waste time implementing rules that are good for your stupid program,
>>>>only to have to change them when you have a better program.
>>>>
>>>>I do not like the idea of writing a lot of evaluation code for a lot of
>>>>cases; I think a better idea is to think of a few rules that generalize
>>>>over many cases.
>>>>
>>>>
>>>>Uri
>>>
>>>Is it fair to infer from the above that you are doing the following?
>>>
>>>(1)  First evaluate a position,
>>>
>>>(2)  Then choose a small set of "search rules" from a large set of available
>>>rules [available in your software], with this selection based on the findings of
>>>the position evaluation,
>>>
>>>(3)  And, finally, perform a search from that position using the selected
>>>"search rules"?
>>>
>>>Bob D.
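
In code, the scheme being asked about might look roughly like this: a purely
illustrative sketch with made-up helpers (evaluate_root, is_endgame,
search_with_rules) and arbitrary parameter choices. As the reply below makes
clear, this is not how Uri's program actually works.

  /* Illustration of the question only: pick search parameters from a
     static look at the root position, then search with those parameters. */

  struct search_rules {
      int null_move_R;      /* null-move reduction to use            */
      int check_extension;  /* whether to extend checks in this game */
  };

  extern int evaluate_root(void);   /* static evaluation of the root */
  extern int is_endgame(void);      /* crude game-phase test         */
  extern int search_with_rules(const struct search_rules *rules, int depth);

  int think(int depth)
  {
      struct search_rules rules;
      int root_score = evaluate_root();

      /* Choose the "rules" from the evaluation findings. */
      rules.null_move_R     = is_endgame() ? 2 : 3;   /* arbitrary choices */
      rules.check_extension = (root_score < 0);       /* for illustration  */

      return search_with_rules(&rules, depth);
  }
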
>>
>>No.
>>
>>When I said that the right rules may depend on the evaluation, I meant that
>>the optimal search rules after improving the evaluation may be different (for
>>example, it is possible that null move with R=2 is better for a bad evaluation,
>>while null move with R=3 is better for a better evaluation).
>>
>>I do not like to waste time on testing when the new search rules may be worse
>>for the new evaluation.
>>
>>What you suggest may be a good idea, but today there is only a small set of
>>search rules, not a large set.
>>
>>Uri
>
>Thanks.
>
>What I was trying to do was to better understand which positions are evaluated
>[only the position which occurs immediately after a move? other positions
>occurring during the search?], the form/format/content of the evaluation
>findings, and how these might be processed or utilized.
>
>Probably different for each chess engine?
>
>***Could there be room for new ideas here?***
>
>Bob D.

After thinking about what I wrote above, it occurred to me that my questions may
appear somewhat dumb.

I assume that chess computers, like humans, evaluate positions after each move
in a search, in order to obtain the next moves in the tree.  How else would they
get the next moves?

But your bulletin and others seem to be saying that the initial position [after
a move but before any search] is evaluated much more completely.  Am I reading
this right?

If so, then maybe the extent of evaluation for later positions occurring DURING
the search should be different for different positions, at least sometimes.

For example, in human analysis, there sometimes comes a position which is
perceived as "critical" in some sense by the human evaluator.  In that case, the
human does "a deep think" on that particular position, or at least does a better
job at evaluating that position.  I wonder if similar situations occur during
the search, at least for some engines.

In essence, the question is whether or not the amount of effort expended in
evaluating positions might be deliberately different for different positions.
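
Something along these lines is in fact done in a number of programs under the
name "lazy evaluation": a cheap score (say, material plus a margin) is computed
first, and the expensive positional terms are added only when the cheap score
lies close enough to the alpha-beta window to matter. A minimal sketch, with
hypothetical helper functions and an arbitrary margin; whether and how
individual engines do this varies:

  /* Sketch of "lazy evaluation": spend full evaluation effort only
     where it can matter. */

  #define LAZY_MARGIN 200   /* centipawns; tuning is engine-specific */

  extern int material_score(void);
  extern int full_positional_score(void);

  int evaluate_lazy(int alpha, int beta)
  {
      int score = material_score();

      /* Far outside the window even with a generous margin: the exact
         positional score cannot change the search result here. */
      if (score + LAZY_MARGIN <= alpha || score - LAZY_MARGIN >= beta)
          return score;

      return score + full_positional_score();
  }
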

I apologize in advance if I am not making myself clear.

Bob D.


