Computer Chess Club Archives



Subject: Re: Checks in the Qsearch

Author: Robert Hyatt

Date: 11:45:21 07/03/02



On July 03, 2002 at 13:39:44, Christophe Theron wrote:

>On July 02, 2002 at 17:54:27, Robert Hyatt wrote:
>
>>On July 02, 2002 at 16:02:05, Christophe Theron wrote:
>>
>>>On July 02, 2002 at 09:02:34, Robert Hyatt wrote:
>>>
>>>>On July 01, 2002 at 20:26:44, Christophe Theron wrote:
>>>>
>>>>>On July 01, 2002 at 12:21:09, Keith Evans wrote:
>>>>>
>>>>>>On June 30, 2002 at 23:59:59, Christophe Theron wrote:
>>>>>>
>>>>>>>On June 30, 2002 at 12:28:32, Robert Hyatt wrote:
>>>>>>>
>>>>>>>>On June 29, 2002 at 14:18:53, Christophe Theron wrote:
>>>>>>>>
>>>>>>>>>On June 28, 2002 at 17:54:56, Keith Evans wrote:
>>>>>>>>>
>>>>>>>>>>On June 28, 2002 at 16:33:10, Scott Gasch wrote:
>>>>>>>>>>
>>>>>>>>>>>Another idea that I read was that generating non-capturing checks in the
>>>>>>>>>>>qsearch against a side that has had a chance to stand pat already is a waste.  I
>>>>>>>>>>>really don't understand this idea and disagree with it.  Imagine black has had
>>>>>>>>>>>an opportunity to stand pat but instead plays RxN (the N appears hung).  Well this
>>>>>>>>>>>looks really good unless white then generates Qd4+, forking black's R and K and
>>>>>>>>>>>winning the R.  If you neglect to generate checks against a side that has already
>>>>>>>>>>>had the chance to stand pat, you let him get away with RxN and like it.  If the
>>>>>>>>>>>only reason to add checks to the qsearch is to find mates then I agree -- checking
>>>>>>>>>>>after a side could stand pat is wasted.  But if the goal is to improve tactical
>>>>>>>>>>>play then I think this idea is not sound.
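
To make the idea concrete, a minimal qsearch sketch that still generates
non-capturing checks for the side to move (even though the opponent was allowed
to stand pat one ply earlier) might look something like this -- just an
illustration of the idea above, not code from any particular engine, and every
helper name here is made up:

  int Quiesce(POSITION *pos, int alpha, int beta, int depth) {
      int score;
      int i;
      MOVELIST list;

      if (!InCheck(pos)) {
          score = Evaluate(pos);             /* stand-pat score                */
          if (score >= beta) return beta;    /* side to move may stand pat     */
          if (score > alpha) alpha = score;
      }
      /* a full implementation would generate all evasions when in check */
      GenerateCaptures(pos, &list);
      if (depth > -2)                        /* only near the qsearch root     */
          GenerateNonCaptureChecks(pos, &list);  /* finds Qd4+ after ...RxN    */

      for (i = 0; i < list.count; i++) {
          MakeMove(pos, list.moves[i]);
          score = -Quiesce(pos, -beta, -alpha, depth - 1);
          UnmakeMove(pos, list.moves[i]);
          if (score >= beta) return beta;
          if (score > alpha) alpha = score;
      }
      return alpha;
  }

The point of the RxN/Qd4+ example is that the checking reply is still searched
even though black was allowed to stand pat a ply earlier, so the refutation is
not missed.
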
>>>>>>>>>>
>>>>>>>>>>I'll be very interested to see what responses this generates. Hsu took the time
>>>>>>>>>>to design and implement special logic to help generate checking and check
>>>>>>>>>>evasion moves in Deep Blue which I assume was used in qsearch. This was not a
>>>>>>>>>>trivial undertaking - it adds both additional logic and additional interconnect.
>>>>>>>>>>He probably had a good reason for doing it, since he could have used that time
>>>>>>>>>>for something else like implementing a small hash table.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>And maybe he had no good reason to do it.
>>>>>>>>>
>>>>>>>>>As far as I know, there are many amateur programmers here who have spent much
>>>>>>>>>more time trying and validating ideas (to say nothing of the commercial
>>>>>>>>>programmers) than Hsu.
>>>>>>>>>
>>>>>>>>>I think Hsu and his team have done a great job in implementing a chess program
>>>>>>>>>in a chip.
>>>>>>>>>
>>>>>>>>>However I think taking him and his team as a reference in chess programming is a
>>>>>>>>>big mistake.
>>>>>>>>>
>>>>>>>>>As I have said, I think there are many chess programmers here who are much more
>>>>>>>>>skilled than Hsu and his team in chess programming.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>    Christophe
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>Hmmm.. I would _never_ make that statement.  Have you _ever_ talked with
>>>>>>>>Hsu or Campbell?  I suspect not because if you had, you would not think
>>>>>>>>them quite that incapable.
>>>>>>>
>>>>>>>
>>>>>>>I did not say that they are incapable. They have done things I will never be
>>>>>>>able to do.
>>>>>>>
>>>>>>>However I have read their description of the Deep genealogy (the document
>>>>>>>published last year describing their creatures in detail, in particular
>>>>>>>their evaluation function and search algorithms).
>>>>>>>
>>>>>>>I think it's probably as good as or even better than having a talk with them, or
>>>>>>>else what is the purpose of their publication?
>>>>>>
>>>>>>I assume that you're referring to an unpublished version of the paper "Deep
>>>>>>Blue" Artificial Intelligence 134 (2002) 57-83 (available from
>>>>>>www.elsevier.com/locate/artint)
>>>>>
>>>>>
>>>>>
>>>>>Yes.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>Do you have any opinion regarding the part related to checks in the qsearch?:
>>>>>>
>>>>>>"The main parameters of the hardware search are described below:
>>>>>>...
>>>>>>5. Number of "mating" checks allowed for each side in the quiescence search.
>>>>>>A mating check is a checking move which allows zero escape squares for the king
>>>>>>or any checking move which is a "contact" [15] check by the queen. This
>>>>>>parameter is used to control the size of the quiescence search.
>>>>>>6. Number of singular checking moves allowed in the quiescence search (king has
>>>>>>one escape square, or queen or rook contact check, or any check given while the
>>>>>>checked side has a hung [16] piece). This parameter is used to control the size of
>>>>>>the quiescence search.
>>>>>>...
>>>>>>
>>>>>>[15] A contact check is a checking move to a square immediately adjacent to the
>>>>>>opposing king."
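
Taken literally, the two quoted definitions could be coded up roughly like this.
This is only my own sketch of what the parameters describe, not anything from
the actual chips, and every helper name is invented:

  enum check_class { NORMAL_CHECK, MATING_CHECK, SINGULAR_CHECK };

  int ClassifyCheck(POSITION *pos, MOVE checking_move) {
      int escapes = KingEscapeSquares(pos);             /* after the check     */
      int piece   = CheckingPiece(pos, checking_move);
      int contact = IsContactCheck(pos, checking_move); /* adjacent to the king */

      /* 5. "mating" check: zero escape squares, or any queen contact check */
      if (escapes == 0 || (contact && piece == QUEEN))
          return MATING_CHECK;

      /* 6. "singular" check: one escape square, queen or rook contact check,
            or any check while the checked side has a hung piece             */
      if (escapes == 1 ||
          (contact && (piece == QUEEN || piece == ROOK)) ||
          CheckedSideHasHungPiece(pos))
          return SINGULAR_CHECK;

      return NORMAL_CHECK;
  }
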
>>>>>
>>>>>
>>>>>
>>>>>Yes I have an opinion.
>>>>>
>>>>>This paragraph, as well as many other things I have read in their document, shows
>>>>>that they had some ideas, but due to lack of time to develop and try them, they
>>>>>ended up with something only half baked. So it turned out not to work and/or to be
>>>>>just a waste of time.
>>>>>
>>>>>The paragraph above can be interpreted in different ways, but all of them end up
>>>>>uncovering an inefficiency somewhere:
>>>>>
>>>>>1) (most probable) do they call any contact check by the queen against the
>>>>>opponent's king a "mating check"? Even in the case where the king or any other
>>>>>piece can simply capture the queen? If yes, generating them is on average a waste
>>>>>of time everywhere in the QSearch and the optimal number of mating checks for
>>>>>this case is 0.
>>>>>
>>>>>2) a queen contact check is defined as a mating check only if the king
>>>>>has no escape and the queen cannot be captured. In this case it is a checkmate,
>>>>>and if you can detect this in hardware there is no point in generating such moves.
>>>>>Just return +MATE and you are done. The optimal number of QSearch moves (plies)
>>>>>to apply this detection is +INFINITE (apply it everywhere in the QSearch).
>>>>>
>>>>>When I read what has been put in the chips (most of which has never been used),
>>>>>I can easily see that they had no idea about what would work and what
>>>>>would not.
>>>>>
>>>>>Any skilled chess programmer would have quickly known. That's what the job is
>>>>>all about: generate ideas, test them, start again.
>>>>>
>>>>>If they had invited a real chess programmer in the team, the last version of the
>>>>>chips would have been much more efficient.
>>>>>
>>>>>What this means is that Hsu and his team were at a very high, professional level
>>>>>when it comes to hardware design. They were at a very amateurish level when it
>>>>>comes to chess search algorithms.
>>>>>
>>>>>
>>>>>
>>>>>    Christophe
>>>>
>>>>
>>>>What a review.  Have you _read_ anything they wrote?  "amateurish level
>>>>on the matter of chess search algorithms"?  From the people that wrote
>>>>_more_ about current search ideas than anybody else?  Null-move.  Singular
>>>>extensions.  Lots of other extensions.  You only have to read what they
>>>>have written to get a totally different view of what they are capable of.
>>>
>>>
>>>I have read that.
>>>
>>>I have only seen rehashing of old stuff, and also that:
>>>1) they do not use any efficient pruning technique (what a waste).
>>>2) they have tried to introduce an extension technique with the sole purpose of
>>>generating spectacular long lines (and to try to introduce something new), but
>>>which is most probably not efficient.
>>>
>>>So to sum it up, it was mainly about showbiz. Huge NPS, singular extensions to
>>>impress the public, but eventually no effort to make something really efficient.
>>>I mean as efficient as the good micro programs.
>>
>>I'm personally not a fan of forward pruning.  I lived thru it in the early
>>1970's, until someone noticed that pure brute-force produced a better
>>program.  I don't think forward pruning is a prerequisite for a strong
>>engine.  It just gains some depth.  As does faster hardware such as that
>>built for the DB project.  The advantage for the DB project is that the
>>faster hardware gives better depth _without_ any error whatsoever...  No
>>ugly oversights due to forward pruning at all...
>
>
>
>Then why don't you remove any pruning in Crafty?

Read thru the comments in main.c...  you will find I _did_ remove null-move
several times.  Then tried it again with different ideas to make it safer...
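
The usual way people try to make it "safer" is some restriction on when the
null move is tried plus a verification re-search.  A generic sketch follows --
this is not Crafty's actual code; the reduction R, the helper names, and the
verification idea shown here are just one common variant:

  #define R 2   /* null-move depth reduction */

  int Search(POSITION *pos, int alpha, int beta, int depth) {
      int i, score;
      MOVELIST list;

      if (depth <= 0) return Quiesce(pos, alpha, beta);

      /* skip the null move when in check, or in likely zugzwang
         (side to move has only king and pawns)                   */
      if (!InCheck(pos) && HasNonPawnMaterial(pos)) {
          MakeNullMove(pos);
          score = -Search(pos, -beta, -beta + 1, depth - 1 - R);
          UnmakeNullMove(pos);
          if (score >= beta) {
              /* one simple form of verification: re-search without the
                 null move at reduced depth before trusting the cutoff */
              if (Search(pos, beta - 1, beta, depth - 1 - R) >= beta)
                  return beta;
          }
      }

      GenerateMoves(pos, &list);
      for (i = 0; i < list.count; i++) {
          MakeMove(pos, list.moves[i]);
          score = -Search(pos, -beta, -alpha, depth - 1);
          UnmakeMove(pos, list.moves[i]);
          if (score >= beta) return beta;
          if (score > alpha) alpha = score;
      }
      return alpha;
  }
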


>
>Pruning makes a program stronger. The days of brute force have been over for 20
>years.
>
>DB would have been much better with strong pruning.

How much better than "unbeatable by other programs" does it need to be?
I don't think programs 20 years from now will rely on "strong pruning"
either.  KISS is a good principle when possible.






>
>
>
>
>>>They just relied on their huge NPS to get chess strength. Once they had got
>>>that, they did not make significant efforts (or did not have the time) to get
>>>the best out of this hardware.
>>>
>>>They could have done it differently, given the time they had.
>>>
>>>After the first match against Kasparov, they could have spent some time
>>>improving the search algorithm. It would have been even better than multiplying
>>>the number of processors by ten.
>>
>>First, they didn't multiply the number of processors by 10.
>
>
>I know. What I said is that a good pruning system would have been like
>multiplying the number of processors by ten.
>
>Instead, they just physically multiplied the processors by 2.
>
>
>
>
>
>>  Hsu completely
>>re-designed the chess chip between the 1996 and 1997 match.  The number of
>>chess processors only doubled over that time-span, while the evaluation and
>>move generator on the hardware chip were made more sophisticated.
>
>
>
>I know.
>
>
>
>
>>>I repeat that their search was very amateurish and comparable to what was state
>>>of the art twenty years ago.
>>
>>
>>Their search was so "amateurish" that they saw so many things the micros
>>could not at the ACM events that it _really_ put them at a disadvantage.
>>Ask Don Dailey what their amateurish search did to his program in Cape May
>>in 1994.
>
>
>
>I don't care.
>
>I can write some kind of special extension code that will make my program
>weaker, but that will produce spectacular combinations from time to time.


That isn't the point.  They were not "weaker" based on watching their output
for several years at ACM/WCCC events.  They just saw _more_ in the games.  Which
is where it counts...




>
>CSTal can do that sometimes, but it is not as strong as these bright moments
>would seem to indicate.
>
>And it's also easy to look very bright tactically when your program runs several
>times faster than your opponent.

So?  They _never_ intended to play on a "level NPS field".  That was the
justification for designing the hardware in the first place... to be faster
than everybody by a really significant margin.  Which they were.  Doing it
was not easy yet it gets totally written off as "they were fast but our
search is better."  That is _very_ short-sighted...  Their speed offset
the "good searches" by a huge margin.



>
>Just try to give your program a 2x time advantage when playing against itself
>and you will see what I mean.

Yes.  Then try to give it a 200x+ time advantage like Hsu did.  You will
_really_ see what _I_ mean...





>
>I say their search was amateurish based on the way it was designed, which they
>explain in the document I have read. It's basically brute force with an
>experimental, very expensive, and probably inefficient extension system.

Why "probably inefficient" without having tried their ideas?



>
>I'm not basing this judgement on a few bright moves I could have seen.
>
>All the regular readers of CCC know that it is not possible to evaluate the
>strength of a program based on a few positions. You know that Bob, don't you?


Of course.  I base my judgement on many _games_ they played, as I looked on
to see what their program/hardware was actually "seeing" during the game.



>
>
>
>
>
>>>There are many amateur programmers here on CCC who are much better than Hsu and
>>>his team on this matter.
>>>
>>
>>
>>:)
>>
>>Probably many better than Dr. John Kirkland here at UAB in doing heart
>>transplants too.  Even though he is over 75 and has done more than any
>>single doctor in the world...
>
>
>
>Most chess programmers reading CCC know how to write an efficient search.
>
>I'm sorry, but the DB team was not able to do it, or did not have enough
>time to do it, or did not judge that it was important to do it, which is a
>mistake.


Based on what?  Selective search is _not_ a necessary condition for a very
strong chess program.  Rather than work on selective rules, they chose to
work hard on extension rules.  It sure seems to have worked well, based on
some of the wild tactics they managed in the Kasparov games...




>
>The result is that DB was using an outdated and inefficient search algorithm.
>


with sophisticated and very advanced extension algorithms.  Perhaps their
extensions offset the lack of selectivity...  Again, 1975 saw a revolution
_away_ from selective search.




>
>
>
>>>I would like to say to all the chess programmers reading this that they should
>>>not be impressed by Deep Blue. Its only strength was in the speed of the
>>>specialized processors. Apart from that, the rest was a very classical chess
>>>program with some really outdated parts (like the search algorithm).
>>>
>>>Many of us here are able to write a better search than what Deep Blue was using.
>>>
>>>Hsu's achievement is historical. But my eyes are still open and I can see where
>>>this project could have been improved. It's not a personal attack.
>>>
>>
>>
>>
>>Anything _can_ be improved.  However, when it stomps everything around it into
>>the mud, it is hard to complain that it was not...
>
>
>
>They won the second Kasparov match, but it was close.
>
>Obviously, DB2 was just "good enough", but they could not have known
>beforehand that they would win. Further, you could start the match again and DB2
>could simply lose.
>

absolutely...


>This is why they should have done everything possible to make it stronger.

When there are time constraints, you can't do "everything possible".  You
can only do what is possible within the given time constraints.  No doubt
DB would be impossible to handle today, had development continued for another
5 years.  But it didn't...




>
>Instead of doubling the number of processors for example, they could have
>designed a better search. This would have been much more efficient.


Doubling the number of processors costs _nothing_ but money.  No time
whatsoever.  So your remark leaves me not understanding what you mean.  If
I can go twice as fast for zero cost, I will take it every time...





>
>There are two possibilities:
>
>1) either it was useless, given the NPS they already had, to try to search any
>deeper, and it was better to invest time exclusively on the evaluation function.
>In this case, why did they invest any effort in doubling the number of
>processors? NOTE: doubling the number of processors, given the big number they
>already had and the poor scalability of parallel search, would not even give a
>1/2 ply improvement. Maybe not even 1/4 ply.

Probably 1/2 based on the overall efficiency numbers.  But 1/2 is 1/2...  and
in some positions it will be 1 or more.  But they didn't invest any "effort"
when they doubled the number of processors.  They just ordered 480 rather than
240 and waited for them to be delivered, while they worked on other things.

I don't get the "wasted effort" idea at all since there was no effort involved,
nor time...
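
For what it's worth, the back-of-the-envelope arithmetic, assuming an effective
branching factor of around 4 and (generously) a perfect 2x speedup from the
extra processors, is:

  extra depth  =  log(speedup) / log(effective branching factor)
               =  log(2) / log(4)
               =  0.5 ply

With poorer parallel efficiency or a larger branching factor it is less, which
is where the 1/4 ply guess comes from.
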






>
>2) or it is very useful to be able to search deeper, even when your name is Deep
>Blue, and in this case improving the search would have been a MAJOR improvement.
>It is known that good pruning systems can improve your search depth by 3 or 4
>plies. It is also known to be a superior approach to brute force.
>


I would _not_ say that is a "given".  It is an "assumption" at best, not a
proven fact...




>
>I insist that their search was amateurish.
>
>Their choice to double the number of processors was based only on PR
>considerations.


Hardly.  It was based on "lets go as fast as we can..."



>
>You can impress the media by saying that you have doubled the number of
>processors (even if it eventually ends up being a waste). You cannot easily
>explain that you have "improved your search".


They could easily have hyped the "evaluation was completely re-done with
suggestions from our GM advisors" also...  As Hsu has said, the DB2 effort
was _totally_ on evaluation...



>
>IBM builds supercomputers and wants to be known as the most capable manufacturer
>in this regard. For them it was important to stack up processors and NPS and to
>announce big numbers in the media.
>
>So instead of making DB much more efficient in an area where it was really
>lacking (the search algorithm), the DB team was forced to double the number of
>processors and to make them run as fast as possible regardless of the real
>efficiency.



I think their search proved itself to be "up to the task of beating Kasparov"
in 1996.  But they made positional mistakes that the deep search could not
overcome.  For 1997 they chose to focus on this problem alone and they seemed
to have made progress...  based on internal GM comments as well as the game
results.





>
>Bob, you know what the real benefit is of doubling the number of processors
>from 240 to 480.

Sure... you go roughly twice as fast...





>
>And you know that is abysmally less efficient than switching from brute force to
>a reasonable pruning system (take an easy one: null-move).
>

You go deeper but make some obvious tactical mistakes that you would not
make without null-move.





>Please stop the knee-bending.
>
>DB2 was a terrific computer, but it was based on a very poor search algorithm.
>"Amateurish search" is really the word that comes to mind.
>
>
>
>    Christophe


I only wish I was so "amateurish" is what comes to _my_ mind...


