Computer Chess Club Archives


Subject: Re: The law of diminishing returns

Author: José Carlos

Date: 04:09:02 07/13/02


On July 13, 2002 at 05:35:24, Uri Blass wrote:

>On July 12, 2002 at 19:16:31, José Carlos wrote:
>
>>On July 12, 2002 at 14:56:11, Ed Schröder wrote:
>>
>>>Hi CCC,
>>>
>>>In Rebel I maintain a statistics file: on every iteration a counter is
>>>incremented by 1 (see column 2), representing the iteration depths Rebel has
>>>searched. When a new best move is found, a second counter is incremented by 1
>>>(see column 3), representing how many times a new best move has been found at
>>>the given iteration depth; the percentage is given between brackets.
>>>
>>>As you can see, in the very first plies Rebel often changes to a new best move;
>>>however, as the depth increases the chance that Rebel will change its mind
>>>keeps dropping. From 16 plies on, the chance that a new better move is found
>>>is below 2%.
>>>
>>>I wonder what this all means. It is still said (and believed by many) that a
>>>doubling in computer speed gives 30-50-70 elo. That could very well be true
>>>for lower depths, but the statistic below seems to imply something totally
>>>different: a sharply diminishing return at deeper depths.
>>>
>>>Also interesting is column 4 (Big Score Changes): whenever a big score
>>>difference is measured (0.50 up or down), the percentage is calculated. This
>>>item seems to be less sensitive than the change in best move. However, the
>>>maintained "Big Score Changes" statistic is not fully reliable, as it also
>>>counts situations like being a rook or queen up (or down), and in such
>>>positions you naturally get (too) many big score fluctuations. I have changed
>>>that and have limited the system to scores in the range of -2.50 / +2.50, but
>>>for the moment I have too few games played to show the new statistic.
>>>
>>>Anyway, the number of positions calculated seems more than sufficient (over
>>>100,000) to be reliable. The data comes from extensive testing of the latest
>>>Rebel via self-play at various time controls.
>>
>>  Hi Ed, if I get this right, the second column (moves searched) is the number
>>of positions in which the program has reached the depth given by column 1. If
>>it were really "moves", there would be about 3x as many at depth 2 as at depth 1.
>>  Then the idea is that many more changes happen at low depths simply because
>>the program reaches those depths many more times, so (ignoring "Big Changes")
>>I calculated a couple of other numbers:
>>  The ratio moves changed / moves searched, and the relative % of changes from
>>ply n-1 to ply n:
>>
>>                 SEARCH OVERVIEW
>>                 ===============
>>
>>  (A)     (B)            (C)           (D)             (E)
>>Depth    Moves          Moves     Moves Changed /   rel % of changes from
>>       Searched        Changed    Moves Searched    ply n-1 to n
>>
>> 1     113768         0 =  0.0%        0
>> 2     113768     44241 = 38.9%    0.388870333
>> 3     113768     34262 = 30.1%    0.30115674        77.44%
>> 4     113194     32619 = 28.8%    0.288168984       95.69%
>> 5     113191     30697 = 27.1%    0.271196473       94.11%
>> 6     108633     28516 = 26.2%    0.262498504       96.79%
>> 7     108180     25437 = 23.5%    0.235135885       89.58%
>> 8     102782     22417 = 21.8%    0.218102391       92.76%
>> 9      82629     15400 = 18.6%    0.186375244       85.45%
>>10      59032      9144 = 15.5%    0.154899038       83.11%
>>11      39340      5183 = 13.2%    0.131748856       85.05%
>>12      23496      2350 = 10.0%    0.100017024       75.91%
>>13      12692       957 =  7.5%    0.075401828       75.39%
>>14       6911       396 =  5.7%    0.057299957       75.99%
>>15       4032       193 =  4.8%    0.047867063       83.54%
>>16       2471        72 =  2.9%    0.029138001       60.87%
>>17       1608        26 =  1.6%    0.016169154       55.49%
>>18       1138        17 =  1.5%    0.014938489       92.39%
>>19        921         6 =  0.7%    0.006514658       43.61%
>>20        795         7 =  0.9%    0.008805031      135.16%
>>21        711         1 =  0.1%    0.00140647        15.97%
>>22        636         2 =  0.3%    0.003144654      223.58%
>>23        574         5 =  0.9%    0.008710801      277.00%
>>24        507         1 =  0.2%    0.001972387       22.64%
>>25        451         3 =  0.7%    0.006651885      337.25%
>>26        394         1 =  0.3%    0.002538071       38.16%
>>27        343         2 =  0.6%    0.005830904      229.74%
>>28        296         2 =  0.7%    0.006756757      115.88%
>>29        269         0 =  0.0%    0                  0.00%
>>
>>  Column (D) is the probability, according to your data, of getting a
>>best-move change at a certain depth in a random position (I assume the
>>positions can be treated as random, because this data comes from real games).
>
>No
>
>I assume that the positions that were searched to big depths like 16 are only
>positions that the program had enough time in the game to search to depth 16.
>
>These positions are not random positions from games.
>I expect to see at least 10% changes at depth 16 in random positions from games.
>
>Uri
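
  To be explicit about the table I posted above: column (E) at depth n is
simply column (D) at depth n divided by column (D) at depth n-1, expressed as
a percentage. For example, at depth 4: 0.288169 / 0.301157 ≈ 0.9569 = 95.69%,
so the per-position chance of changing the best move at depth 4 is about 96%
of what it was at depth 3.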

  It's interesting that Ed, who has been doing chess programming for many
years, relies on statistical data, while you, an absolute newbie to chess
programming, can 'expect'. Quite amazing.
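
  P.S. In case anyone wants to collect the same kind of numbers in their own
engine, the bookkeeping Ed describes boils down to something like the sketch
below. This is only a rough illustration in plain C with made-up names (it is
certainly not Rebel's actual code): one counter per iteration depth, bumped
each time that depth is completed, and a second counter bumped whenever the
iteration returns a different best move than the previous one did.

#include <stdio.h>

#define MAX_DEPTH 64

/* Sketch only: move encoding, names and output format are placeholders. */
static long iterations_done[MAX_DEPTH + 1];   /* column 2: times this depth was completed    */
static long best_move_changes[MAX_DEPTH + 1]; /* column 3: times it produced a new best move */

/* Call once per completed iteration of the iterative-deepening loop.
   'depth' is the iteration just finished, 'best' the move it returned,
   'prev_best' the best move of the previous iteration. */
void record_iteration(int depth, int best, int prev_best)
{
    if (depth < 1 || depth > MAX_DEPTH)
        return;
    iterations_done[depth]++;
    if (depth > 1 && best != prev_best)
        best_move_changes[depth]++;
}

/* Dump the counters in the same shape as Ed's first three columns. */
void print_stats(void)
{
    int d;
    for (d = 1; d <= MAX_DEPTH; d++) {
        if (iterations_done[d] == 0)
            continue;
        printf("%2d %10ld %8ld = %5.1f%%\n", d,
               iterations_done[d], best_move_changes[d],
               100.0 * best_move_changes[d] / iterations_done[d]);
    }
}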

  José C.


