Author: Christophe Theron
Date: 12:52:34 04/26/01
On April 25, 2001 at 15:55:52, Robert Hyatt wrote:
>On April 25, 2001 at 14:11:56, Christophe Theron wrote:
>
>>
>>
>>
>>Futility has a minor impact on the BF, as it can only be done near the horizon.
>>
>
>Depends on the definition. I "read" it to mean what I do in my q-search,
>where I toss out captures that can't possibly bring the score within the
>alpha/beta window.
When I say "near the horizon", I mean you start using futility at ply=depth-N, and
N can have any value (including negative ones). Depth is your current nominal
ply depth. Ply=0 is the root. Ply=depth is the first ply of QSearch.
In your case, N=0 (you apply it only when you have reached the horizon).
In the definition of extended futility pruning given by Heinz, N=3.
I'm not sure that values of N above 2 or 3 make any sense.
It does not matter much. As I said, it does not affect the branching factor
for depths greater than N.
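To make the definition concrete, here is a minimal sketch in C of the test being
described. It is not code from any engine in this thread; the margins, the helper
name is_futile, and its parameters are illustrative assumptions (the cap at N=3
matches the Heinz definition above).

#include <stdbool.h>

/* Margins indexed by plies-to-horizon (the N above); the values
   (roughly a pawn, a minor piece, a rook) are assumed, with index 3
   corresponding to Heinz's extended futility pruning. */
static const int futility_margin[4] = { 0, 125, 300, 500 };

/* True if a quiet move may be skipped: this close to the horizon,
   when the static evaluation plus an optimistic margin still cannot
   reach alpha, searching the move cannot change the result. */
bool is_futile(int static_eval, int alpha, int plies_to_horizon,
               bool is_capture, bool gives_check)
{
    if (plies_to_horizon < 1 || plies_to_horizon > 3)
        return false;              /* N above 2-3 rarely pays off */
    if (is_capture || gives_check)
        return false;              /* tactical moves are never cut */
    return static_eval + futility_margin[plies_to_horizon] <= alpha;
}

The q-search case described above (N=0 in this notation) is the same test applied
to captures: a capture is tossed when the static evaluation plus the value of the
captured piece plus a margin still cannot reach alpha.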
>>> Maybe that was something
>>>I concluded rather than something he said. I will try to find the original
>>>email and read it again carefully...
>>>
>>>I did specifically ask about "null-move" and got an absolute "no" from
>>>everybody.
>>
>>
>>
>>So here we have a real mystery, something more interesting than SE IMO.
>>
>>>>>>And it turns out that ply depth 13 can be routinely reached by today's PC
>>>>>>programs at standard time controls.
>>>>>
>>>>>Yes.. but don't forget DB was reaching depths of 15-18 in the middlegame,
>>>>>as their logs from 1997 clearly show...
>>>>
>>>>
>>>>
>>>>Yes, it looks like I was badly informed.
>>>>
>>>>Knowing that, I now believe that no micro program of today would have any
>>>>reasonable chance against Deep Blue.
>>>>
>>>>Against a single chip maybe, not against the whole thing.
>>>>
>>>>
>>>>
>>>
>>>
>>>It was a monster, as I have often said. Of course, I got to see what it
>>>was doing on more than one occasion. It was enough to make me dream of 20
>>>years from now of course. :)
>>>
>>>
>>>
>>>>
>>>>
>>>>>>But there are 2 things I'm not really taking into account here:
>>>>>>1) selective search is less reliable than brute force search
>>>>>>2) Deep Blue uses something called "Singular extensions" which increases its
>>>>>>branching factor dramatically over the BF of a simple alpha beta brute force
>>>>>>algorithm.
>>>>>>
>>>>>>
>>>>>>Point 1 is hard to evaluate.
>>>>>>
>>>>>>About point 2 we have some data suggesting that "singular extensions" is an
>>>>>>extremely expensive algorithm: while PC programs have no problem reaching ply
>>>>>>depth 13 on current hardware, Deep Blue could not go beyond ply depths 11-12 in
>>>>>>the 1997 match. Of course in some lines it was computing much deeper.
>>>>>
>>>>>Not again. Look at the logs. 11(6) is a +seventeen+ ply search.
>>>>>
>>>>>>
>>>>>>It remains to be seen if "Singular extensions" is such an improvement. So far I
>>>>>>think that nobody has managed to prove that it is. Some people speculate it
>>>>>>could be effective only if you have a very fast computer, but only the Deep Blue
>>>>>>experiment suggests this, without further scientific proof.
>>>>>
>>>>>No. I used them in later cray blitz versions. HiTech used them as well.
>>>>>They have their good and bad points. Some micro programs have/do use them
>>>>>as well...
>>>>
>>>>
>>>>
>>>>I'm sorry, but I have yet to read a study showing the success of this extension.
>>>>
>>>>Anyway, if it is such a small improvement that it is not even clear by now
>>>>whether it is good or not, then in my opinion the main (and maybe only) strength of
>>>>Deep Blue is its huge NPS.
>>>
>>>partially. Don't forget that their NPS was a product of their very fast
>>>hardware. And that if they wanted to do something we consider time-consuming,
>>>they simply built hardware for it. IE king safety. They know how many pieces
>>>attack squares near their king, how many pieces are in battery behind the front
>>>attacker, etc. I did this in Cray Blitz as the vector hardware made it pretty
>>>efficient. I don't do it in Crafty yet. It is expensive enough that it makes
>>>me think about the depth loss vs the knowledge gain...
>>>
>>>So their NPS _was_ all-important. They could keep it up at > 200M nodes per
>>>second, while adding evaluation feature after feature... as the hardware could
>>>do most anything at no additional cost. That was the main reason Hsu redesigned
>>>the chip before the 1997 match. Benjamin convinced him that more evaluation
>>>was needed...
>>>
>>>He reported that they used less than half of the evaluation hardware in the
>>>chip for the 1997 match, as they didn't have time to include more, although
>>>it was already there in the hardware chips...
>>
>>
>>
>>I'm not really sure that evaluation makes such a big difference.
>>
>>I'm also not sure their evaluation was so much better than ours. Remember that
>>our evaluations are more flexible (we can change the terms and add new terms,
>>whereas all they can do once the chip is designed is change the weights of their
>>terms), and some of us have been able to work on our evaluations for a longer
>>time.
>
>Actually they can do much more. But the weights can _all_ be modified at will,
>and features can be disabled/enabled at will to simulate much of what we do...
>Yes, if you find something totally new to evaluate, they might have to redo the
>chip again... but in the DB2 chip Hsu said that he included everything they
>could think of, even if it at first appeared to be useless (as the weights could
>be set to zero to not use them)...
>
>>
>>Maybe their evaluation was better, but I'm not sure. Maybe it could have been
>>much better due to their hardware-wired eval, but they would have needed more
>>time.
>>
>
>
>I think it could have been _much_ stronger than what it showed against
>Kasparov. They got it put together just weeks prior to the match. I would
>personally prefer a year or two to tune the evaluation to match the search
>speed better...
>
>>
>>Of course. It is nonsense IMO to have ignored null move.
>>
>>You can say they did not need it, but it is so simple to implement that I cannot
>>find a valid excuse for them.
>>
>> Christophe
>
>
>It introduces error. That was Hsu's only complaint. And he was right.
>He initially didn't like my q-search losing-capture elimination either, but
>he later adopted it. I'd bet that null-move would have eventually ended up in
>the thing (if it wasn't already in it in some form or another in 1997).
It does not introduce error.
If you take playing strength as the measure of how many errors you make (and
I can hardly think of any other useful definition), then it is clear that brute
force makes many more errors than null-move searches do.
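Since null move is the point of disagreement, a minimal sketch of the test may
help; the helper names (make_null_move, only_pawns_left, etc.) are assumptions
rather than any engine's real API, and R=2 is just the typical reduction of this
era.

#include <stdbool.h>

#define INF_SCORE 32000
#define R 2                        /* null-move depth reduction */

typedef struct Position Position;  /* engine's position type, assumed */
extern void make_null_move(Position *p);   /* let the opponent move twice */
extern void undo_null_move(Position *p);
extern bool in_check(const Position *p);
extern bool only_pawns_left(const Position *p);  /* crude zugzwang guard */
extern int  search(Position *p, int alpha, int beta, int depth);

/* Called from search() before generating moves: if the side to move
   can pass and a reduced-depth search still fails high, the position
   is taken to be >= beta and the caller cuts off immediately. */
int try_null_move(Position *pos, int beta, int depth)
{
    /* Zugzwang is the error source Hsu objected to: when every real
       move loses, passing looks fine, so the probe is skipped in pawn
       endings (and in check, where passing is illegal anyway). */
    if (depth < 2 || in_check(pos) || only_pawns_left(pos))
        return -INF_SCORE;

    make_null_move(pos);
    int score = -search(pos, -beta, -beta + 1, depth - 1 - R);
    undo_null_move(pos);
    return score;    /* caller: if (score >= beta) return beta; */
}

The whole device is a dozen lines, which is the point: the only real cost is the
zugzwang risk, and whether that node-level error outweighs the extra plies it
buys is exactly what playing strength measures.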
Christophe