Computer Chess Club Archives


Subject: Re: The new version of Junior is significantly better

Author: Robert Hyatt

Date: 07:05:23 08/06/98



On August 05, 1998 at 08:38:28, Amir Ban wrote:

>On August 05, 1998 at 06:35:02, Robert Hyatt wrote:
>
>>On August 05, 1998 at 04:45:24, Amir Ban wrote:
>>
>
>>>
>>>Scale has nothing to do with it. Not all lines are of equal importance. Some
>>>lines have to be analyzed to great depth to decide on what they are worth,
>>>others need much less. It's a question of balance. Doing brute-force (all lines
>>>the same length) is not the right balance. Making the tree too unbalanced by
>>>looking at some lines too little and others too much is also wrong. All
>>>programmers know that, and found the right balance for them through theoretical
>>>arguments, but mostly by trial-and-error.
>>>
>>
>>
>>we may have a terminology problem, in that I (and several others) don't consider
>>"brute force" to be a full-width search following all branches to a fixed depth.
>>I've always considered "brute force" to be the opposite of "forward pruning"
>>and that's how it generally shows up in the literature I use...
>>
>>In the early days (Greenblatt's program is an example) heavy forward pruning
>>was used...  right on up through chess 3.x in 1975.  In 1976 you find another
>>reference to "brute force rather than forward pruning" in the slate/atkin paper,
>>even though they did the check extension...
>>
>
>>I was a "forward-pruner" until 1978, when hardware speeds made full-width work
>>for a change...
>>
>>>The technique used to unbalance the search tree is by extensions and forward
>>>pruning. These are just flip sides of the same coin. The right way to look at it
>>>is this: We are doing iterative deepening, so, with enough time, we are going to
>>>find everything there has to be found. We are extending this line because we
>>>want to find what happens here now, not at the next iteration. We are pruning
>>>this line because we can wait till next iteration to find more about it.
>>>
>>
>>I agree (to an extent) with your "flip side of the coin" in that this is
>>fiddling with the search depth, reducing some branches so that others can be
>>searched deeper.  But it is not the same as forward pruning, because there you
>>can miss a 2-mover even though you search 6 full moves deep along the shallowest
>>branches.
>>
>
>Not if you're doing anything sensible. The forward pruning methods that are in
>wide use today, null-move and razoring, rely on a reduced depth search, and they
>won't fall for that, unless you are talking about the null-move zugzwang
>weakness.
>
>>
>>
>>
>>>There are some bad pruning methods that have problems built into them. A pruning
>>>method that acts the same way in every iteration is bad, because it means that
>>>when you prune a move, you won't take a deeper look at it next iteration, you
>>>are pruning it for good. It's a bad idea to use such methods, because no matter
>>>how deep you search, you will not cover errors you make this way. Null-move
>>>pruning, for example, has such a problem with zugzwang positions, and every
>>>null-mover needs some practical solution to this.
>>>
>>>But, as long as you stay away from such pruning mistakes, the fact that you do
>>>pruning doesn't place any limit on how good your program can be, and if you
>>>select a reasonable algorithm, you will certainly be stronger than brute-force
>>>search at any scale.
>>
>>this I'm not sure about, because there's no evidence to support nor contradict
>>it.  IE what says we *must* do some sort of forward pruning, and that if we
>>do, we will be stronger than programs that don't?  In 1975 everyone was doing
>>forward pruning, in 1976 chess 4.0 walked over everyone by not making any 6
>>ply tactical mistakes at all.
>>
>
>They did pruning based on static data, such as rank-ordering the moves on static
>evaluation and other immediate information. You can say they based the pruning
>on a 0-depth search. This is a weak pruning method, that becomes even weaker
>with increasing depth. It only took one who didn't do that to prove that it is
>bad.
>
>Everybody is back to doing forward pruning now, but in a way that is more
>sensible.
>
>>


This wasn't the only method used "back then".  "Blitz" had a causality-analysis
algorithm that used backed-up results from the search to control what was
pruned in the forward direction.  I.e., if a search showed a pawn being lost at
ply 5, it used that result to help select (and cull) moves at plies 1 and 3 as
needed.

Tactics were not the problem for my program, at least; positional play was.
The forward pruning would let it overlook positional "traps" that it could
recognize, but which the causality algorithm couldn't be used to avoid
because of the time constraints...
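
(For illustration, a rough sketch of that general idea, in C.  This is not the
actual "Blitz" causality code; the types and helper names here, ThreatInfo,
is_capture(), defends_square(), are made-up placeholders.)

/* Rough, hypothetical sketch of result-directed forward pruning: a loss
 * discovered deep in the previous search is backed up, and at plies 1 and 3
 * quiet moves that neither capture nor address the threatened square are
 * culled.  Not the actual Blitz algorithm, just the general shape of it.
 */

typedef struct { int from, to; } Move;
typedef struct Position Position;        /* opaque board state (placeholder) */

int is_capture(const Position *pos, Move m);              /* placeholder */
int defends_square(const Position *pos, Move m, int sq);  /* placeholder */

/* Filled in when the previous iteration backed up, say, a pawn lost at ply 5. */
typedef struct {
    int active;           /* 1 if the last search showed a concrete loss */
    int target_square;    /* square on which the material was lost       */
} ThreatInfo;

/* Return 1 if a quiet move at ply 1 or 3 may be culled from the move list. */
int cull_move(const Position *pos, Move m, int ply, const ThreatInfo *threat)
{
    if (ply != 1 && ply != 3)
        return 0;                    /* only cull near the root           */
    if (!threat->active)
        return 0;                    /* no backed-up threat to work with  */
    if (is_capture(pos, m))
        return 0;                    /* tactical moves are always kept    */
    if (defends_square(pos, m, threat->target_square))
        return 0;                    /* move addresses the known threat   */
    return 1;                        /* quiet and irrelevant: cull it     */
}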





>>
>>>
>>>The Deep-Thought/Deep-Blue programmers did not want to do any pruning, and this
>>>was an article of faith for them. To me this was an illogical stand, because
>>>they did do extensions (they even claimed to have invented them, quite falsely I
>>>think). If you do extensions, but otherwise brute-force, you are still pruning
>>>the entire tree in a sense, because you can search to less brute-force depth in
>>>the same time.
>>
>>
>>I'm not sure what you mean about "claimed to have invented them, quite
>>falsely..."
>
>There was hype around the publication of singular extensions, to the effect that
>this means the "end of the era of brute-force". This also refers to your
>statement above that brute-force does not mean "no extensions". Apparently they
>took an opposite view at the time.
>
>
>> but doing extensions and forward pruning are different things
>>entirely.  Because they affect two different dimensions of the search tree.
>>
>
>To see this, consider razoring which is the exact opposite of extending.
>Null-move is more indirect, but still a reduced-depth search.


I agree... but the effects are very different.  With depth-limiting things like
razoring and null move, you can out-search the depth where something bad happens
and see it.  With "forward pruning" you typically don't search culled moves to a
reduced depth; you don't search them at all.  An example is the quiescence
search, where anything but captures is tossed out...
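
(A minimal sketch of such a captures-only quiescence search, in C.  The helper
names, evaluate(), generate_captures(), make_move(), unmake_move(), are assumed
placeholders rather than any particular program's API.  The point of contrast:
the non-captures are never visited at all, whereas a razored or null-move line
is still searched, just to a reduced depth.)

/* Minimal sketch of a captures-only quiescence search: non-captures are not
 * searched to a reduced depth, they are simply never generated at all.
 */

#define MAX_CAPTURES 64

typedef struct { int from, to; } Move;
typedef struct Position Position;      /* opaque board state (placeholder) */

int  evaluate(const Position *pos);                        /* static eval   */
int  generate_captures(const Position *pos, Move *list);   /* captures only */
void make_move(Position *pos, Move m);
void unmake_move(Position *pos, Move m);

int quiesce(Position *pos, int alpha, int beta)
{
    /* "Stand pat": the side to move may decline all captures. */
    int score = evaluate(pos);
    if (score >= beta)
        return beta;
    if (score > alpha)
        alpha = score;

    /* Everything but captures has already been culled by the generator. */
    Move captures[MAX_CAPTURES];
    int n = generate_captures(pos, captures);

    for (int i = 0; i < n; i++) {
        make_move(pos, captures[i]);
        score = -quiesce(pos, -beta, -alpha);
        unmake_move(pos, captures[i]);
        if (score >= beta)
            return beta;             /* fail-hard beta cutoff */
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}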




>
>>
>>>
>>>DT/DB was mainly a hardware project, and that was its main achievement. There
>>>was never much to be impressed by their software, and I think their attitude was
>>>a waste of silicon. They were spoiled by lack of competition and isolation, and
>>>as a result they were also relatively inexperienced, something which shines
>>>through from every second sentence in which they talk about the subject.
>>>
>>>Amir
>>
>>
>>"Inexperienced" doesn't come close to the truth.  I've known em all.  Murray
>>was the one that first suggested the PVS algorithm to me while in Washington
>>DC at an ACM event in 1978.  We made the changes to "blitz" (at the time) and
>>tried them and it looked quite decent.  He's been around a long time now and
>>probably knows as much about "search" as any of us.  Hsu is another story alto-
>>gether.  Once you get to know him, you discover that he is not just bright, he
>>is bright as hell...
>>
>>What they did wasn't "trivial" and it sure wasn't "easy".
>
>I mean that they talked and acted as if they didn't go through enough
>situations, and didn't play enough games, even test games. You don't need to be
>bright for that, just experienced.
>
>
>> I only wish that
>>"DB Junior" would come out and play some on ICC so that everyone can see just
>>how strong that box is
>
>So would I.
>
>Amir


Perhaps one day...  that would answer a lot of questions in a definitive way...


