Computer Chess Club Archives


Subject: Re: Crafty modified to Deep Blue - Crafty needs testers to produce outputs

Author: Bas Hamstra

Date: 15:57:12 06/19/01



On June 19, 2001 at 13:30:20, Robert Hyatt wrote:

>On June 18, 2001 at 16:00:56, Bas Hamstra wrote:
>
>>On June 18, 2001 at 13:09:24, Robert Hyatt wrote:
>>
>>>On June 18, 2001 at 11:45:25, Bas Hamstra wrote:
>>>
>>>>On June 18, 2001 at 11:00:14, Ulrich Tuerke wrote:
>>>>
>>>>>On June 18, 2001 at 10:51:12, Bas Hamstra wrote:
>>>>>
>>>>>>On June 18, 2001 at 08:33:21, Ulrich Tuerke wrote:
>>>>>>
>>>>>>>On June 18, 2001 at 08:28:08, Bas Hamstra wrote:
>>>>>>>
>>>>>>>>On June 17, 2001 at 01:09:50, Robert Hyatt wrote:
>>>>>>>>
>>>>>>>>>On June 16, 2001 at 22:59:06, Vincent Diepeveen wrote:
>>>>>>>>>
>>>>>>>>>>Hello,
>>>>>>>>>>
>>>>>>>>>>From Gian-Carlo I received tonight a modified version of Crafty
>>>>>>>>>>18.10. The modification is that it uses a limited form of singular
>>>>>>>>>>extensions, using a 'Moreland' implementation.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>Instead of modifying Crafty to simulate Deep Blue, why didn't you
>>>>>>>>>modify Netscape?  Or anything else?  I don't see _any_ point in
>>>>>>>>>taking a very fishy version of Crafty and trying to conclude _anything_
>>>>>>>>>about Deep Blue from it...
>>>>>>>>>
>>>>>>>>>Unless you are into counting chickens to forecast weather, or something
>>>>>>>>>else...
>>>>>>>>
>>>>>>>>I don't agree here. It is fun. Maybe not extremely accurate, but it says
>>>>>>>>*something* about the efficiency of their search, which I believe is
>>>>>>>>horrible. I think using SE without nullmove is *inefficient* compared to
>>>>>>>>using nullmove. We don't need 100.0000% accurate data when it's obviously
>>>>>>>>an order of magnitude less efficient.
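
(For reference, a singular-extension test works roughly like the sketch below.
It is generic and illustrative only: the margin, the depth/2 reduction, and
every identifier are assumptions, not the Crafty modification under discussion
and not Deep Blue's hardware version.)

/* Generic sketch of a singular-extension test.  All identifiers
   (Position, Move, generate_moves, make_move, unmake_move, search)
   and the margin are hypothetical, not taken from any cited code. */
typedef struct Position Position;
typedef int Move;

int  generate_moves(Position *pos, Move *moves);
void make_move(Position *pos, Move m);
void unmake_move(Position *pos, Move m);
int  search(Position *pos, int depth, int alpha, int beta);

#define SE_MARGIN 50  /* half a pawn, in centipawns; engine-specific */

/* A move is "singular" when every alternative, searched to reduced
   depth with a zero-width window, scores well below the best move's
   score.  Singular moves are then extended by one ply, so forcing
   lines get searched more deeply. */
int is_singular(Position *pos, Move best, int best_score, int depth)
{
    Move moves[256];
    int n = generate_moves(pos, moves);
    int target = best_score - SE_MARGIN;

    for (int i = 0; i < n; i++) {
        if (moves[i] == best)
            continue;
        make_move(pos, moves[i]);
        int score = -search(pos, depth / 2, -target, -target + 1);
        unmake_move(pos, moves[i]);
        if (score >= target)
            return 0;  /* some rival move comes close: not singular */
    }
    return 1;  /* no rival comes close: extend 'best' by one ply */
}

Note that nothing is pruned here: singular extensions only add depth to
forcing lines, which is why, unlike nullmove, they cannot lower the effective
branching factor.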
>>>>>>>
>>>>>>>Maybe you are right, if the program is running on a PC. However, if you can
>>>>>>>reach a huge depth anyway because of hardware, maybe you can afford to use
>>>>>>>this, because wasting one ply of depth doesn't matter too much?
>>>>>>
>>>>>>I don't see why inefficiency becomes less of a problem at higher depths.
>>>>>>Nullmove pruning reduces your effective branching factor to about 2.5, where
>>>>>>brute force gets 4.5. So you would expect the difference in search depth to
>>>>>>grow at higher depths: starting at 2 ply, up to how much, 5 ply?
>>>>>>
>>>>>>Of course nullmove search has holes, but they are certainly not big enough to
>>>>>>offset a couple of plies, or no one would use nullmove! In practice an n-ply
>>>>>>nullmove search sees more than an (n-2)-ply brute-force search.
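
(To make the mechanism concrete, here is a minimal recursive nullmove sketch.
R = 2 and all helper names are assumptions for illustration, not any
particular engine's code.)

/* Minimal sketch of recursive null-move pruning in a negamax search.
   Position, quiesce, in_check, make/unmake_null_move and R = 2 are
   assumptions for illustration. */
typedef struct Position Position;

int  quiesce(Position *pos, int alpha, int beta);
int  in_check(const Position *pos);
void make_null_move(Position *pos);   /* pass the turn to the opponent */
void unmake_null_move(Position *pos);

#define R 2  /* depth reduction for the null search */

int search(Position *pos, int depth, int alpha, int beta, int can_null)
{
    if (depth <= 0)
        return quiesce(pos, alpha, beta);

    /* Null move: give the opponent a free extra move.  If the
       reduced-depth search still fails high, a real move almost
       certainly would too, so cut off here.  Skipping it in check
       avoids the worst of the "holes" (zugzwang being the other). */
    if (can_null && !in_check(pos) && depth > R) {
        make_null_move(pos);
        /* can_null = 0 forbids two null moves in a row */
        int score = -search(pos, depth - 1 - R, -beta, -beta + 1, 0);
        unmake_null_move(pos);
        if (score >= beta)
            return beta;
    }

    /* ... normal move loop goes here, children searched with can_null = 1 ... */
    return alpha;
}

Because the reduced search calls the same search(), null moves recur at deeper
levels as well, and that recursion is what drives the effective branching
factor down toward the 2.5 figure above.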
>>>>>>
>>>>>>Keeping that in mind, give Crafty 1000x faster hardware. It would search at
>>>>>>least 20 ply (normally 13 on average according to Bob, plus at least 7). I can
>>>>>>tell you DB does not search 18 ply brute force. Therefore Crafty would in
>>>>>>principle see more, given the same eval. The SE thing only makes it worse.
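
(A quick check on that figure, assuming time-to-depth grows like EBF^depth: a
1000x speedup buys about log(1000)/log(EBF) extra plies, i.e.
log(1000)/log(2.5) ≈ 7.5 plies with nullmove but only log(1000)/log(4.5) ≈ 4.6
plies brute force. So 13 plus at least 7 is consistent with an EBF of 2.5.)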
>>>>>>
>>>>>>>I rather doubt that you can really learn something about Deep Blue this way.
>>>>>>
>>>>>>I don't see why not. He simply shows how inefficient their search is. Where does
>>>>>>Vincent's "emulated" search fundamentally differ from DB's, in your opinion?
>>>>>
>>>>>Except for the authors, nobody knows. That's the problem.
>>>>>We can't even be sure whether they used some kind of pruning.
>>>>
>>>>As far as I know the only pruning they did was futility in the qsearch. At
>>>>least they seem to have told Bob Hyatt FP was a win, therefore they probably
>>>>used it.
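
(For concreteness: futility pruning in a qsearch usually looks something like
the sketch below. The margin and every identifier here are assumptions, not
Deep Blue's actual scheme.)

/* Sketch of futility pruning in a quiescence search.  Identifiers
   and the margin are hypothetical illustrations. */
typedef struct Position Position;
typedef int Move;

int  evaluate(const Position *pos);
int  generate_captures(Position *pos, Move *moves);
int  piece_value(Position *pos, Move m);   /* value of the captured piece */
void make_move(Position *pos, Move m);
void unmake_move(Position *pos, Move m);

#define FUTILITY_MARGIN 200  /* about two pawns, in centipawns */

int quiesce(Position *pos, int alpha, int beta)
{
    int stand_pat = evaluate(pos);
    if (stand_pat >= beta)
        return beta;
    if (stand_pat > alpha)
        alpha = stand_pat;

    Move moves[256];
    int n = generate_captures(pos, moves);
    for (int i = 0; i < n; i++) {
        /* Futility: if even winning the captured piece outright,
           plus a safety margin, cannot raise the score to alpha,
           searching this capture is pointless -- skip it. */
        if (stand_pat + piece_value(pos, moves[i]) + FUTILITY_MARGIN <= alpha)
            continue;

        make_move(pos, moves[i]);
        int score = -quiesce(pos, -beta, -alpha);
        unmake_move(pos, moves[i]);

        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}

The point is to use a cheap static bound to prove a capture cannot raise alpha
before paying for a search of it.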
>>>
>>>Obviously so.  But what else did they use that we don't know about?  I.e., how
>>>did they get their effective branching factor under 4.0?  With so many
>>>unanswered questions, running such a basically flawed experiment is really a
>>>waste of time.
>>
>>>>>If I got it right, their "engine" was a combination of software- and
>>>>>hardware-implemented parts. So, you cannot just scale the Crafty results by
>>>>>some factor and compare them with DB results. DB executed on a platform which
>>>>>is very different from today's PCs.
>>>>
>>>>But we can compare search models A and B and talk about them.
>>>
>>>But you don't know much about "B".  Which means you have no idea how
>>>close "A" and "B" are.  So you can "talk about it", yes.  But you can't
>>>learn anything useful from it.
>>
>>I don't know what exactly Vincent is trying to prove, but suppose the modified
>>Crafty does worse on all test suites compared to the normal Crafty, and loses
>>all games against it. For me that raises the suspicion that, though DB was
>>good, it probably could have been even better. Hsu is not God, he doesn't know
>>everything, and nullmove wasn't as popular back then as it is now. And Hsu had
>>no competition with comparable nps, else he would have learned it pretty fast.
>>
>>
>>Best regards,
>>Bas.
>
>
>First, you do know that null-move was used in the early 1980s?  I used it in
>the 1983 world championship tournament.  It was suggested to me by someone who
>had already experimented with it after reading something by Don Beal.
>
>It was around.  Hsu knew of it.  Which means that if he didn't use it, he had
>other reasons than "not knowing about it".  Hsu did have competition in NPS.
>In 1987 he was doing not quite 2M nodes per second.  We were doing 1/2M
>ourselves.  In 1989 he was still doing roughly 2M; we were doing about 1M on
>the C90 at the WCCC in Canada that year.
>
>I.e., he didn't have a huge NPS advantage against us.  In 1987 we probably
>should have won, but didn't.  From then on they were pretty convincing when we
>played them.

Ok, so at the time you had an nps that was not too far behind: did you try
recursive nullmove against them? If you didn't, he would have learned it if you
had! If you did, I have some more questions :-)

Bas.