Computer Chess Club Archives


Subject: Re: Crafty modified to Deep Blue - Crafty needs testers to produce outputs

Author: Ulrich Tuerke

Date: 08:00:14 06/18/01


On June 18, 2001 at 10:51:12, Bas Hamstra wrote:

>On June 18, 2001 at 08:33:21, Ulrich Tuerke wrote:
>
>>On June 18, 2001 at 08:28:08, Bas Hamstra wrote:
>>
>>>On June 17, 2001 at 01:09:50, Robert Hyatt wrote:
>>>
>>>>On June 16, 2001 at 22:59:06, Vincent Diepeveen wrote:
>>>>
>>>>>Hello,
>>>>>
>>>>>From Gian-Carlo I received tonight a cool version of Crafty 18.10,
>>>>>namely a modified version of Crafty. The modification is that it
>>>>>uses a limited form of singular extensions, via a 'Moreland'
>>>>>implementation.
>>>>>
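(For anyone not familiar with the term: below is a rough sketch of what a
singular-extension test looks like in general. It is purely illustrative --
this is not Crafty 18.10's actual patch and certainly not Deep Blue's hardware
logic; the types, helpers and margin value are all invented for the example.)

  /* Sketch of a singular-extension test: the hash (best) move is extended one
     ply only if every alternative fails well below its score in a reduced-
     depth probe.  All names below are placeholders, not real Crafty code. */

  typedef int Move;
  typedef struct Position Position;                     /* opaque placeholder */

  int  search(Position *pos, int alpha, int beta, int depth);    /* assumed   */
  void make_move(Position *pos, Move m);                          /* assumed   */
  void unmake_move(Position *pos, Move m);                        /* assumed   */
  int  generate_moves(Position *pos, Move *list);                 /* assumed   */

  #define SINGULAR_MARGIN 50          /* centipawns; arbitrary example value  */

  int is_singular(Position *pos, Move hash_move, int hash_score, int depth)
  {
      Move list[256];
      int n = generate_moves(pos, list);
      int target = hash_score - SINGULAR_MARGIN;   /* rivals must reach this  */

      for (int i = 0; i < n; i++) {
          if (list[i] == hash_move)
              continue;                        /* test only the alternatives  */
          make_move(pos, list[i]);
          /* reduced-depth, zero-window probe just below the hash score */
          int score = -search(pos, -target, -target + 1, depth / 2 - 1);
          unmake_move(pos, list[i]);
          if (score >= target)
              return 0;    /* another move is nearly as good: not singular    */
      }
      return 1;            /* the hash move stands alone: extend it one ply   */
  }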
>>>>
>>>>
>>>>Instead of modifying Crafty to simulate Deep Blue, why didn't you
>>>>modify Netscape?  Or anything else?  I don't see _any_ point in
>>>>taking a very fishy version of Crafty and trying to conclude _anything_
>>>>about Deep Blue from it...
>>>>
>>>>Unless you are into counting chickens to forecast weather, or something
>>>>else...
>>>
>>>I don't agree here. It is fun. Maybe not extremely accurate, but it says
>>>*something* about the efficiency of their search, which I believe is horrible.
>>>I think using SE instead of null move is *inefficient* compared to null move.
>>>We don't need 100.0000% accurate data when it's obviously an order of
>>>magnitude more inefficient.
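(And for contrast, the null-move test being compared against, again only in
textbook form with invented helper names -- not Crafty's real implementation:)

  /* Textbook null-move pruning: give the opponent a free move and search with
     reduced depth; if he still cannot reach beta, prune.  Placeholders only. */

  typedef struct Position Position;                     /* opaque placeholder */

  int  search(Position *pos, int alpha, int beta, int depth);    /* assumed   */
  int  in_check(Position *pos);                                   /* assumed   */
  void make_null_move(Position *pos);                             /* assumed   */
  void unmake_null_move(Position *pos);                           /* assumed   */

  #define R 2                     /* classic null-move depth reduction        */

  int null_move_cutoff(Position *pos, int beta, int depth)
  {
      /* Skip the test when in check or at very low depth (a real engine also
         worries about zugzwang positions). */
      if (depth <= R || in_check(pos))
          return 0;

      make_null_move(pos);                /* hand the opponent a free move    */
      int score = -search(pos, -beta, -beta + 1, depth - 1 - R);
      unmake_null_move(pos);

      /* If even a free move does not get the opponent above beta, assume a
         full search would not either, and fail high. */
      return score >= beta;
  }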
>>
>>Maybe you are right, if the program is running on a PC. However, if you can
>>reach a huge depth anyway because of the hardware, maybe you can afford to use
>>this, because wasting one ply of depth doesn't matter too much?
>
>I don't see why inefficiency becomes less of a problem at higher depths.
>Null-move pruning reduces your effective branching factor to 2.5 where brute
>force gets 4.5. So you could suspect that at higher depths the difference in
>search depths grows, starting with 2 ply, up to how much, 5 ply?
>
>Of course the null-move search has holes, but they are certainly not big
>enough to offset a couple of plies, or nobody would use null move! In practice
>an n-ply null-move search sees more than an (n-2)-ply brute-force search.
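(A quick back-of-envelope check of that claim, taking the quoted factors of
2.5 and 4.5 at face value -- real effective branching factors of course vary
by program and position:)

  /* How deep does a brute-force search get for the node count of a d-ply
     null-move search, assuming effective branching factors of 2.5 and 4.5? */
  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      const double ebf_null = 2.5, ebf_bf = 4.5;
      for (int d = 5; d <= 20; d += 5) {
          double nodes    = pow(ebf_null, d);         /* cost of d-ply null move */
          double bf_depth = log(nodes) / log(ebf_bf); /* same nodes, brute force */
          printf("%2d ply null move ~ %4.1f ply brute force (gap %3.1f)\n",
                 d, bf_depth, d - bf_depth);
      }
      return 0;
  }

  /* Prints gaps of roughly 2, 4, 6 and 8 ply at depths 5, 10, 15 and 20, so
     the depth difference does indeed grow with depth. */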
>
>Keeping that in mind, give Crafty 1000x faster hardware. It would search at
>least 20 ply (normally 13 on average according to Bob, plus at least 7). I can
>tell you DB does not search 18 ply brute force. Therefore Crafty would in
>principle see more, given the same eval. The SE thing only makes it worse.
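(The same arithmetic for the 1000x figure, again with the illustrative 2.5
factor rather than a measured Crafty number:)

  /* Extra depth a 1000x speedup buys at an effective branching factor of 2.5:
     solve 2.5^x = 1000.  Illustrative numbers only. */
  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      double extra = log(1000.0) / log(2.5);               /* about 7.5 ply   */
      printf("1000x ~ %.1f extra ply, so roughly %.1f ply from a 13-ply base\n",
             extra, 13.0 + extra);
      return 0;
  }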
>
>>I rather doubt that you can really learn something about Deep Blue this way.
>
>I don't see why not. He simply shows how inefficient their search is. Where does
>Vincent's "emulated" search fundamentally differ from DB's, in your opinion?

Except for the authors, nobody knows. That's the problem.
We can't even be sure whether they had some kind of pruning.

If I got it right, their "engine" was a combination of software- and
hardware-implemented parts. So you cannot just scale the Crafty results by some
factor and compare them with the DB results. DB ran on a platform which is very
different from today's PCs.

IMHO, the idea of simulating DB with some modified Crafty is just ridiculous. I
think it's rather one of Vincent's jokes.

Best regards,
Uli

>Tell him and he will adjust it. He is not emulating DB, of course, just their
>search.
>
>
>Best regards,
>Bas.


