Computer Chess Club Archives



Subject: Re: Crafty modified to Deep Blue - Crafty needs testers to produce outputs

Author: Robert Hyatt

Date: 21:40:32 06/18/01



On June 18, 2001 at 16:20:17, Albert Silver wrote:

>On June 18, 2001 at 15:46:13, Bas Hamstra wrote:
>
>>On June 18, 2001 at 13:05:38, Robert Hyatt wrote:
>>
>>>On June 18, 2001 at 10:51:12, Bas Hamstra wrote:
>>>
>>>>On June 18, 2001 at 08:33:21, Ulrich Tuerke wrote:
>>>>
>>>>>On June 18, 2001 at 08:28:08, Bas Hamstra wrote:
>>>>>
>>>>>>On June 17, 2001 at 01:09:50, Robert Hyatt wrote:
>>>>>>
>>>>>>>On June 16, 2001 at 22:59:06, Vincent Diepeveen wrote:
>>>>>>>
>>>>>>>>Hello,
>>>>>>>>
>>>>>>>>From Gian-Carlo I received tonight a cool version of Crafty 18.10,
>>>>>>>>namely a modified version of Crafty. The modification is that it
>>>>>>>>uses a limited form of singular extensions, following a 'Moreland'
>>>>>>>>implementation.
>>>>>>>>
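For anyone who hasn't seen the "Moreland" style of singular extensions
being referenced, here is a rough sketch of the idea (my reconstruction,
not Gian-Carlo's actual patch; Position, Move, make(), unmake() and
search() are stand-ins for whatever engine this lands in):

  /* Moreland-style singular-extension test: once a best move is known,
     re-search every alternative at reduced depth against a bound just
     under the best score.  If they all fail low, no other move comes
     close, so the best move is "singular" and earns an extra ply. */

  typedef struct Position Position;   /* engine-specific */
  typedef int Move;                   /* engine-specific */
  void make(Position *p, Move m);
  void unmake(Position *p, Move m);
  int  search(Position *p, int alpha, int beta, int depth);

  #define SE_MARGIN 50   /* margin defining "singular" (a guess) */
  #define SE_REDUCE 4    /* depth reduction for the verification search */

  int is_singular(Position *pos, Move moves[], int n,
                  int best_index, int best_score, int depth) {
    int bound = best_score - SE_MARGIN;
    for (int i = 0; i < n; i++) {
      if (i == best_index) continue;
      make(pos, moves[i]);
      int score = -search(pos, -bound, -bound + 1, depth - SE_REDUCE);
      unmake(pos, moves[i]);
      if (score >= bound) return 0;   /* a rival comes close: not singular */
    }
    return 1;                         /* extend the best move one ply */
  }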
>>>>>>>
>>>>>>>
>>>>>>>Instead of modifying Crafty to simulate Deep Blue, why didn't you
>>>>>>>modify Netscape?  Or anything else?  I don't see _any_ point in
>>>>>>>taking a very fishy version of Crafty and trying to conclude _anything_
>>>>>>>about Deep Blue from it...
>>>>>>>
>>>>>>>Unless you are into counting chickens to forecast weather, or something
>>>>>>>else...
>>>>>>
>>>>>>I don't agree here. It is fun. Maybe not extremely accurate, but it says
>>>>>>*something* about the efficiency of their search, which I believe is horrible. I
>>>>>>think using SE without nullmove is *inefficient* compared to using nullmove. We
>>>>>>don't need 100.0000% accurate data when it's obviously an order of magnitude
>>>>>>less efficient.
>>>>>
>>>>>Maybe you are right, if the program is running on a PC. However, if you can
>>>>>reach a huge depth anyway because of the hardware, maybe you can afford to use
>>>>>this, because wasting one ply of depth doesn't matter too much?
>>>>
>>>>I don't see why inefficiency becomes less of a problem at higher depths.
>>>>Nullmove pruning reduces your effective branching factor to 2.5 where brute
>>>>force gets 4.5. So you would expect that at higher depths the difference in
>>>>search depth grows: starting at 2 ply, up to how much, 5 ply?
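To put rough numbers on that widening gap: with a fixed node budget N,
the depth you reach is about log(N) / log(EBF), so the difference between
an EBF of 2.5 and 4.5 grows with the budget.  A quick check (my own
back-of-envelope, using Bas's two figures):

  #include <math.h>
  #include <stdio.h>

  /* Depth reachable for node budget N is roughly log(N) / log(EBF).
     Compare nullmove (EBF ~2.5) against brute force (EBF ~4.5). */
  int main(void) {
    for (double n = 1e6; n <= 1e12; n *= 1e3) {
      double d_null = log(n) / log(2.5);
      double d_bf   = log(n) / log(4.5);
      printf("N=%.0e  nullmove ~%4.1f ply  brute force ~%4.1f ply  gap %.1f\n",
             n, d_null, d_bf, d_null - d_bf);
    }
    return 0;
  }

At a million nodes the gap is already about 6 plies, and it keeps growing.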
>>>
>>>Several things here.  First, a normal alpha/beta program does _not_ have a
>>>branching factor of 4.5... it is roughly sqrt(n_root_moves), which is closer
>>>to 6.
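For reference, the classical result behind that sqrt: Knuth and Moore
showed that a perfectly ordered alpha/beta tree of depth d with b moves
per position visits about

    b^ceil(d/2) + b^floor(d/2) - 1

leaf positions, so the growth per ply is roughly sqrt(b).  With b around
36 legal moves, sqrt(36) = 6; imperfect move ordering lands a real
program somewhere between that and b.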
>>
>>Not at all. Have you ever tried Crafty without nullmove? See my engine output
>>below, from the root position. Where is the 6? I have never seen 6; in my
>>opinion 4.5 on average is normal.
>>
>>Hash 8 Mb
>>PawnHash 4 Mb
>>Hashing 524288 positions
>>PHash 65536 records
>>Evaluation learning 50
>>Could not open Tao.bk for input
>>Ok[book]
>>Book[No]
>>Ok[analyze]
>>TargetTime set[4750]
>> 1.     25        0        25  d2d4
>> 2       0        0        52  d2d4 d7d5
>> 2.      0        0        91  d2d4 d7d5
>> 3      27        0       173  d2d4 d7d5 c1g5
>> 3.     27        0       661  d2d4 d7d5 c1g5
>> 4      -2        0      1219  d2d4 d7d5 c1g5 g8f6
>> 4.     -2        0      2206  d2d4 d7d5 c1g5 g8f6
>> 5      22        0      4611  d2d4 d7d5 g1f3 c8f5 c1g5
>> 5.     22        0     13223  d2d4 d7d5 g1f3 c8f5 c1g5
>> 6       0        0     24494  d2d4 d7d5 g1f3 c8f5 c1f4 g8f6
>> 6       8        1     52306  e2e4 e7e5 g1f3 g8f6 f1c4 b8c6
>> 6.      8        1     64702  e2e4 e7e5 g1f3 g8f6 f1c4 b8c6
>> 7      19        2    185234  e2e4 d7d5 e4d5 d8d5 g1e2 c8g4 d2d4
>> 7.     19        3    311857  e2e4 d7d5 e4d5 d8d5 g1e2 c8g4 d2d4
>> 8      11        6    686116  e2e4 b8c6 d2d4 d7d5 e4d5 d8d5 g1f3 c8f5
>> 8.     11        9    978711  e2e4 b8c6 d2d4 d7d5 e4d5 d8d5 g1f3 c8f5
>>Ok[quit]
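Plugging the completed iterations from that output into a quick check
(assuming the fourth column is cumulative nodes searched) gives a growth
rate close to Bas's figure:

  #include <math.h>
  #include <stdio.h>

  /* Geometric-mean branching factor between the completed iterations
     4. and 8. above, assuming column four is cumulative nodes. */
  int main(void) {
    double n4 = 2206.0, n8 = 978711.0;
    double ebf = pow(n8 / n4, 1.0 / 4.0);  /* growth per ply over 4 plies */
    printf("EBF ~ %.2f\n", ebf);           /* prints about 4.59 */
    return 0;
  }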
>>
>>>Second, if you look at DB's log files for the kasparov match, you will find
>>>their branching factor is well below 4.0...
>>
>>So 4 is close to normal, I would say.
>>
>>>>Of course nullsearch has holes, but they are certainly not big enough to offset
>>>>a couple of plies, or no one would use nullmove! In practice an n-ply nullmove
>>>>search sees more than an (n-2)-ply brute-force search.
>>>>
>>>>Keeping that in mind, give Crafty 1000x faster hardware. It would search at
>>>>least 20 ply (normally 13 on average according to Bob, plus at least 7). I can
>>>>tell you DB does not search 18 ply brute force. Therefore Crafty would in
>>>>principle see more, given the same eval. The SE thing only makes it worse.
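For the record, the arithmetic behind "plus at least 7": a 1000x speedup
buys roughly log(1000) / log(EBF) extra plies, and with an effective
branching factor of 2.5 that is ln(1000) / ln(2.5) = 6.91 / 0.92, about
7.5 plies, hence the 13 + 7 = 20 estimate.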
>>>
>>>Again, I can tell you that DB did search 16-18 plies deep.  We have that in
>>>the log files and as direct quotes from the team members.  If you can get that
>>>deep without null-move, is another couple of plies _really_ worth all the
>>>nonsense that null-move search causes?  Zugzwang problems.  Fail-high/fail-low
>>>problems.  Tactical oversights.  Etc.
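For anyone weighing that trade-off, this is the shape of the trick in
question (a generic textbook sketch, not Crafty's actual code; the helper
names are hypothetical):

  /* Null-move pruning: let the opponent move twice.  If the position is
     still good enough to fail high after a reduced-depth search, prune. */

  typedef struct Position Position;
  void make_null(Position *p);
  void unmake_null(Position *p);
  int  quiesce(Position *p, int alpha, int beta);
  int  in_check(const Position *p);
  int  has_non_pawn_material(const Position *p);

  #define R 2   /* typical null-move depth reduction */

  int search(Position *pos, int alpha, int beta, int depth) {
    if (depth <= 0) return quiesce(pos, alpha, beta);

    /* The zugzwang guard: skip the null move in check, or when the side
       to move has only king and pawns, where "passing" can actually be
       best and the cutoff would be unsound. */
    if (!in_check(pos) && has_non_pawn_material(pos) && depth > R) {
      make_null(pos);                  /* hand the opponent a free move */
      int score = -search(pos, -beta, -beta + 1, depth - 1 - R);
      unmake_null(pos);
      if (score >= beta) return beta;  /* even a free move fails high */
    }

    /* ... normal move loop goes here ... */
    return alpha;
  }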
>>
>>Not convincing. Several times you have said deeper is better, and that there is
>>no "tactical barrier". I agree. And nullmove gets you there, which is more
>>important than the side effects. What evidence do you have that beyond 15 ply
>>suddenly different laws apply?



I missed the above, so I will respond here.  I didn't suggest that beyond 15
plies something different happens.  I suggested that with a _different_ search
something different might happen.  And their search is definitely different.




>
>Didn't Heinz recently publish a paper on diminishing returns?
>
>                                         Albert


