Computer Chess Club Archives



Subject: Re: Crafty modified to Deep Blue - Crafty needs testers to produce outputs

Author: Robert Hyatt

Date: 21:38:34 06/18/01



On June 18, 2001 at 15:46:13, Bas Hamstra wrote:

>On June 18, 2001 at 13:05:38, Robert Hyatt wrote:
>
>>On June 18, 2001 at 10:51:12, Bas Hamstra wrote:
>>
>>>On June 18, 2001 at 08:33:21, Ulrich Tuerke wrote:
>>>
>>>>On June 18, 2001 at 08:28:08, Bas Hamstra wrote:
>>>>
>>>>>On June 17, 2001 at 01:09:50, Robert Hyatt wrote:
>>>>>
>>>>>>On June 16, 2001 at 22:59:06, Vincent Diepeveen wrote:
>>>>>>
>>>>>>>Hello,
>>>>>>>
>>>>>>>From Gian-Carlo I received tonight a cool version of Crafty 18.10,
>>>>>>>namely a modified version of Crafty. The modification is that it
>>>>>>>uses a limited form of singular extensions, using a 'Moreland'
>>>>>>>implementation.
>>>>>>>
>>>>>>
>>>>>>
>>>>>>Instead of modifying Crafty to simulate Deep Blue, why didn't you
>>>>>>modify Netscape?  Or anything else?  I don't see _any_ point in
>>>>>>taking a very fishy version of Crafty and trying to conclude _anything_
>>>>>>about Deep Blue from it...
>>>>>>
>>>>>>Unless you are into counting chickens to forecast weather, or something
>>>>>>else...
>>>>>
>>>>>I don't agree here. It is fun. Maybe not extremely accurate, but it says
>>>>>*something* about the efficiency of their search, which I believe is horrible.
>>>>>I think using SE without nullmove is *inefficient* compared to nullmove. We
>>>>>don't need 100.0000% accurate data when it's obviously an order of magnitude
>>>>>more inefficient.
>>>>
>>>>Maybe you are right, if the program is running on a PC. However, if you can
>>>>reach a huge depth anyway because of the hardware, maybe you can afford to use
>>>>this, because wasting one ply of depth doesn't matter too much?
>>>
>>>I don't see why inefficiency becomes less of a problem at higher depths.
>>>Nullmove pruning reduces your effective branching factor to 2.5, where brute
>>>force gets 4.5. So you could suspect that at higher depths the difference in
>>>search depth grows, starting at 2 plies and going up to, what, 5 plies?
>>
>>Several things here.  First, a normal alpha/beta program does _not_ have a
>>branching factor of 4.5... it is roughly sqrt(n_root_moves), which is closer
>>to 6.
>
>Not at all. Have you never tried Crafty without nullmove? See my engine output
>below, from the root position. Where is the 6? I have never had 6; in my opinion
>4.5 on average is normal.
>

The "norm" is sqrt(W) where W is the number of moves at a position in  the
tree.  In the middlegame, this is around 40 to 50.  In the endgame, much
less.  The simple formula for total nodes searched will convince you that
one more ply takes sqrt(W) more nodes.
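
A minimal sketch of that claim, assuming the Knuth/Moore result that a d-ply
search of a uniform tree of width W with perfect move ordering visits about
W^ceil(d/2) + W^floor(d/2) - 1 leaf positions (a hypothetical stand-alone C
program, not Crafty code):

#include <stdio.h>
#include <math.h>

/* Leaves of the minimal alpha-beta tree (Knuth & Moore, 1975) for a
   uniform tree of width W searched to depth d with perfect ordering. */
static double minimal_tree_leaves(double W, int d)
{
    return pow(W, ceil(d / 2.0)) + pow(W, floor(d / 2.0)) - 1.0;
}

int main(void)
{
    const double W = 40.0;               /* typical middlegame mobility */
    for (int d = 2; d <= 12; d++) {
        double n      = minimal_tree_leaves(W, d);
        double n_prev = minimal_tree_leaves(W, d - 1);
        printf("depth %2d  leaves %12.0f  growth %6.1f\n", d, n, n / n_prev);
    }
    printf("sqrt(W) = %.2f\n", sqrt(W));  /* about 6.3 for W = 40 */
    return 0;
}

The growth factor alternates between roughly 2 and W/2 from one ply to the
next, so the average cost of an extra ply comes out to about sqrt(W), a bit
over 6 for W = 40.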


>Hash 8 Mb
>PawnHash 4 Mb
>Hashing 524288 positions
>PHash 65536 records
>Evaluation learning 50
>Could not open Tao.bk for input
>Ok[book]
>Book[No]
>Ok[analyze]
>TargetTime set[4750]
> 1.     25        0        25  d2d4
> 2       0        0        52  d2d4 d7d5
> 2.      0        0        91  d2d4 d7d5
> 3      27        0       173  d2d4 d7d5 c1g5
> 3.     27        0       661  d2d4 d7d5 c1g5
> 4      -2        0      1219  d2d4 d7d5 c1g5 g8f6
> 4.     -2        0      2206  d2d4 d7d5 c1g5 g8f6
> 5      22        0      4611  d2d4 d7d5 g1f3 c8f5 c1g5
> 5.     22        0     13223  d2d4 d7d5 g1f3 c8f5 c1g5
> 6       0        0     24494  d2d4 d7d5 g1f3 c8f5 c1f4 g8f6
> 6       8        1     52306  e2e4 e7e5 g1f3 g8f6 f1c4 b8c6
> 6.      8        1     64702  e2e4 e7e5 g1f3 g8f6 f1c4 b8c6
> 7      19        2    185234  e2e4 d7d5 e4d5 d8d5 g1e2 c8g4 d2d4
> 7.     19        3    311857  e2e4 d7d5 e4d5 d8d5 g1e2 c8g4 d2d4
> 8      11        6    686116  e2e4 b8c6 d2d4 d7d5 e4d5 d8d5 g1f3 c8f5
> 8.     11        9    978711  e2e4 b8c6 d2d4 d7d5 e4d5 d8d5 g1f3 c8f5
>Ok[quit]
>
>>Second, if you look at DB's log files for the Kasparov match, you will find
>>their branching factor is well below 4.0...
>
>So 4 is close to normal, I would say.
>
>>>Of course nullmove search has holes, but they are certainly not big enough to
>>>offset a couple of plies, or no one would use nullmove! In practice an n-ply
>>>nullmove search sees more than an (n-2)-ply BF search.
>>>
>>>Keeping that in mind, give Crafty 1000x faster hardware. It would search at
>>>least 20 plies (normally 13 on average according to Bob, plus at least 7 more).
>>>I can tell you DB does not search 18 plies BF. Therefore Crafty would in
>>>principle see more, given the same eval. The SE thing only makes it worse.
>>
>>Again, I can tell you that DB did search 16-18 plies deep.  We have that in
>>the log files and as direct quotes from the team members.  If you can get that
>>deep without null-move, is another couple of plies _really_ worth all the
>>nonsense that null-move search causes?  Zugzwang problems.  Fail-high/fail-low
>>problems.  Tactical oversights.  Etc.
>
>Not convincing. Several times you have said deeper is better, and there is no
>"tactical barrier". I agree. And nullmove gets you there, which is more important
>than the side effects. What evidence do you have that beyond 15 plies suddenly
>different laws apply?
>
>About the 18 plies: are you saying that in all its Kasparov games DB searched at
>least 18 plies full width, all the time? That it always completed at least 18
>full-width iterations? Just making sure, and very curious about your answer...
>
>>>>I rather doubt that you can really learn something about Deep Blue this way.
>>>I don't see why not. He simply shows how inefficient their search is. Where
>>>does Vincent's "emulated" search fundamentally differ from DB's, in your
>>>opinion?
>>>Tell him, he will adjust it. He is not emulating DB, of course, just their
>>>search.
>>
>>It differs in _many_ ways.
>>
>>1.  Different move ordering approaches.
>>2.  Different SE implementation.
>>3.  Different search extensions.
>>4.  Different quiescence search (drastically different).
>>5.  Different evaluation.
>>
>>In fact, there is more different than there is similar.
>
>I am sorry, but none of this convinces me. I presume Vincent tries to prove DB
>did not get very deep with that search model. Then:
>
>1. Move ordering is not very important, as long as it's good, which Crafty's is.
>2. SE is implemented as described by Hsu (or could be).
>3. There is no reason to assume that Crafty extends drastically more than DB.
>4. Crafty's qsearch is the smallest possible. So if that gives a lousy depth, it
>is even worse for DB.
>5. If the experiment gives a terrible depth, it's clearly not the eval that's the
>problem, it is the search model.
>
>>How can you conclude _anything_ from such a scientifically flawed experiment?
>>There is so much speculation about what is being tested that you could just as
>>well flip a coin to produce the answers and have an equal probability of getting
>>it right or wrong.
>
>I don't see the problem in comparing
>
>a) Standard Crafty
>b) Crafty with no pruning, with SE (as described by Hsu), and with the last 6
>plies not hashed
>
>and then seeing which one performs better at extreme time controls, and what
>depths they reach. No one is expecting it to produce DB output. He could let it
>run some hard suites, and they could play a couple of games. FUN!!
>
>Best regards,
>Bas.
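
Two quick sanity checks on the numbers argued about above, as a minimal sketch
(a hypothetical stand-alone C program using only the standard library; the node
counts are copied from the completed iterations of the Tao listing quoted above,
and the branching factors 2.5, 4.5 and 6 are the ones mentioned in this thread):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Completed-iteration node counts from the quoted output, depths 1..8. */
    const double nodes[] = { 25, 91, 661, 2206, 13223, 64702, 311857, 978711 };
    const int n = sizeof nodes / sizeof nodes[0];

    /* Geometric mean of the per-ply growth from depth 1 to depth 8. */
    double ebf = pow(nodes[n - 1] / nodes[0], 1.0 / (n - 1));
    printf("average growth per ply, depths 1-8: %.2f\n", ebf);

    /* Extra depth bought by a speedup s at effective branching factor b
       is roughly log(s) / log(b) plies. */
    const double speedup = 1000.0;
    const double ebfs[] = { 2.5, 4.5, 6.0 };
    for (int i = 0; i < 3; i++)
        printf("EBF %.1f: %.0fx faster hardware buys about %.1f extra plies\n",
               ebfs[i], speedup, log(speedup) / log(ebfs[i]));
    return 0;
}

For the listing above this gives an average growth of about 4.5 per ply, and a
1000x speedup is worth roughly 7.5 extra plies at an EBF of 2.5 versus about
4.6 plies at an EBF of 4.5.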


