Computer Chess Club Archives



Subject: Re: More doubts with gandalf

Author: Uri Blass

Date: 21:54:13 02/26/01



On February 26, 2001 at 23:00:15, Christophe Theron wrote:

>On February 26, 2001 at 17:50:09, Miguel A. Ballicora wrote:
>
>>On February 26, 2001 at 16:47:28, Christophe Theron wrote:
>>
>>[deleted a lot]
>>
>>>>It is possible that the same program is going to be the best at long time
>>>>control and not the best at short time control because it uses ideas that
>>>>make it better at long time control.
>>>
>>>
>>>That's something some people want you to believe.
>>>
>>>As for myself, and I think I have tried A LOT, I have never seen any idea that
>>>makes a program better at long time controls if it does not make it better at
>>>short time controls.
>>
>>First of all, let me say that this is an interesting discussion.
>>I am an amateur at C.C. Still, I am not sure about the above statement. For
>>instance, a better replacement scheme for the hash tables will have a great
>>impact at deeper searches. In shorter ones, it won't have any effect.
>>In general, I think that any idea that reduces the tree by decreasing the
>>branching factor will have an impact on longer searches, even though
>>implementing the idea consumes CPU time (hurting short searches).
>>Couldn't SEE ordering be an example?
>>Wouldn't preprocessed information help a lot in short searches but get
>>in the way in long searches?
>
>
>What you say here seems reasonable at first glance, but closer examination
>leads to a different opinion:
>
>* Better replacement scheme for the hash tables: actually the best
>replacement schemes known are not very expensive in terms of CPU resources,
>and I doubt that an "expensive" replacement scheme could provide a dramatic
>improvement. Imagine that a new replacement scheme is so good that it is
>like doubling the size of the hash table. Then it will provide (on average)
>a 6 to 7% speedup to your program. Expect a 5 Elo point improvement for
>this, at best.
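
For concreteness, a cheap scheme of the kind described above is the classic
two-tier bucket: one depth-preferred slot plus one always-replace slot. This
is a minimal sketch in C with illustrative names, not code from any
particular engine:

#include <stdint.h>

typedef struct {
    uint64_t key;    /* Zobrist hash of the position */
    int16_t  score;
    int8_t   depth;  /* depth the entry was searched to */
    uint8_t  flags;  /* exact score / lower bound / upper bound */
} TTEntry;

typedef struct {
    TTEntry deep;    /* kept unless the new entry searched deeper */
    TTEntry recent;  /* always overwritten by the newest entry */
} TTBucket;

void tt_store(TTBucket *b, uint64_t key, int16_t score,
              int8_t depth, uint8_t flags)
{
    TTEntry e = { key, score, depth, flags };
    if (b->deep.key == 0 || depth >= b->deep.depth)
        b->deep = e;      /* depth-preferred slot */
    else
        b->recent = e;    /* always-replace slot */
}

The store costs one comparison and one copy, which illustrates the point:
good replacement schemes are nearly free in CPU terms.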
>
>* Reducing the branching factor: current programs on current hardware can
>search 8 to 10 plies on average in blitz and 12 to 14 plies at long time
>controls. Imagine you find an improvement to your branching factor (maybe at
>the cost of some speed). If you can't measure the improvement at blitz (8 to
>10 plies), I doubt you will be able to measure an improvement at long time
>controls (12 to 14 plies).
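
The arithmetic behind this can be made concrete. Assuming time to depth d
grows roughly like b^d for an effective branching factor b, this small C
sketch (the 5% reduction and the 10% per-node cost are made-up numbers)
shows how the net gain from a smaller b changes with depth:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double b_old = 3.00;      /* assumed effective branching factor */
    double b_new = 2.85;      /* assumed 5% reduction */
    double overhead = 1.10;   /* assumed 10% slowdown per node */

    for (int d = 8; d <= 14; d += 2) {
        double gain = pow(b_old, d) / (overhead * pow(b_new, d));
        printf("depth %2d: net speedup %.2fx\n", d, gain);
    }
    return 0;
}

With these numbers the net gain is already about 1.4x at depth 8 and about
1.9x at depth 14: a branching-factor improvement big enough to matter at
long time controls is normally measurable at blitz depths too, which is the
argument being made here.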
>
>* Preprocessing: the same reasoning applies to preprocessing. The drawback
>of preprocessing is that as depth increases, the evaluation is more and more
>off, because it is using information computed at the root. I don't
>understand why preprocessing could be helpful at depths of 8 to 10 plies,
>and suddenly a disaster at 12 to 14 plies. Either preprocessing works or it
>does not work, but I don't believe it can work at 8-10 plies and not work at
>12-14 plies.
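
To make the drawback concrete, here is a minimal sketch of root
preprocessing in C (illustrative names and weights, not any engine's actual
code): a piece-square table is filled once at the root, and leaf evaluation
then reduces to cheap table lookups.

#include <stdlib.h>

typedef struct { int enemy_king_sq; } Position; /* reduced for the sketch */

static int knight_psq[64]; /* per-square knight bonus, set at the root */

static int king_distance(int a, int b)
{
    int df = abs(a % 8 - b % 8), dr = abs(a / 8 - b / 8);
    return df > dr ? df : dr;
}

void preprocess_root(const Position *root)
{
    /* Reward knight squares near the enemy king AS IT STANDS AT THE ROOT.
     * If the king walks away 12-14 plies into the search, every leaf
     * below that point still uses these stale bonuses -- this is the
     * "more and more off" effect described above. */
    for (int sq = 0; sq < 64; sq++)
        knight_psq[sq] = 8 - king_distance(sq, root->enemy_king_sq);
}

/* Leaf evaluation is then just a lookup: */
int eval_knight_on(int sq) { return knight_psq[sq]; }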

The reason is that the evaluation at a distance of 8-10 plies from the root
is more correct than the evaluation at a distance of 12-14 plies from the
root.

It is possible that being twice as fast at the price of a less accurate
evaluation is a good deal when the distance to the root is 8-10 plies, but
not a good deal when the distance is 12-14 plies and the evaluation has
become even more wrong.
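
One way to put rough numbers on this tradeoff (a back-of-the-envelope
sketch with assumed branching factors, not figures from the thread): a
factor-of-two speedup buys about log(2)/log(b) extra plies when time to
depth grows like b^d, so somewhere around half a ply to three quarters of
a ply.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* extra depth from a 2x speedup, for several branching factors */
    for (double b = 2.5; b <= 4.0; b += 0.5)
        printf("b = %.1f: 2x speed ~ %.2f extra plies\n",
               b, log(2.0) / log(b));
    return 0;
}

Whether that fraction of a ply outweighs a less accurate evaluation then
depends on how wrong the evaluation becomes far from the root, which is
exactly the distance effect described above.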

Uri


