Computer Chess Club Archives


Subject: Re: Checks in the Qsearch

Author: Robert Hyatt

Date: 07:16:39 07/04/02



On July 04, 2002 at 03:40:53, Uri Blass wrote:

>On July 03, 2002 at 14:45:21, Robert Hyatt wrote:
>
>>On July 03, 2002 at 13:39:44, Christophe Theron wrote:
>>
>>>On July 02, 2002 at 17:54:27, Robert Hyatt wrote:
>>>
>>>>On July 02, 2002 at 16:02:05, Christophe Theron wrote:
>>>>
>>>>>On July 02, 2002 at 09:02:34, Robert Hyatt wrote:
>>>>>
>>>>>>On July 01, 2002 at 20:26:44, Christophe Theron wrote:
>>>>>>
>>>>>>>On July 01, 2002 at 12:21:09, Keith Evans wrote:
>>>>>>>
>>>>>>>>On June 30, 2002 at 23:59:59, Christophe Theron wrote:
>>>>>>>>
>>>>>>>>>On June 30, 2002 at 12:28:32, Robert Hyatt wrote:
>>>>>>>>>
>>>>>>>>>>On June 29, 2002 at 14:18:53, Christophe Theron wrote:
>>>>>>>>>>
>>>>>>>>>>>On June 28, 2002 at 17:54:56, Keith Evans wrote:
>>>>>>>>>>>
>>>>>>>>>>>>On June 28, 2002 at 16:33:10, Scott Gasch wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>>Another idea that I read was that generating non-capturing checks in the
>>>>>>>>>>>>>qsearch against a side that has already had a chance to stand pat is a waste.  I
>>>>>>>>>>>>>really don't understand this idea and disagree with it.  Imagine black has had
>>>>>>>>>>>>>an opportunity to stand pat but instead plays RxN (the N appears hung).  This
>>>>>>>>>>>>>looks really good unless white then generates Qd4+, forking black's R and K and
>>>>>>>>>>>>>winning the R.  If you neglect to generate checks against a side that has
>>>>>>>>>>>>>already had the chance to stand pat, you let him get away with RxN and like it.
>>>>>>>>>>>>>If the only reason to add checks to the qsearch is to find mates then I agree --
>>>>>>>>>>>>>checking after a side could stand pat is wasted.  But if the goal is to improve
>>>>>>>>>>>>>tactical play then I think this idea is not sound.
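The scenario described above can be sketched in code. The following toy C quiescence search is a minimal sketch, not taken from any of the programs discussed: the node and move structures are hypothetical stand-ins for a real position and move generator, and for simplicity the checked side is still allowed to stand pat. Quiet checking moves are searched only while a small per-line budget lasts, so the side that just played RxN can still be hit by the Qd4+ fork:

```c
/* Toy sketch: a quiescence search that keeps generating non-capturing
   checks for a bounded number of plies. Structures are hypothetical. */

#define MAX_KIDS 4

enum qmove_kind { CAPTURE, QUIET_CHECK };

struct qnode {
    int stand_pat;                    /* static eval, side to move */
    int nkids;
    enum qmove_kind kind[MAX_KIDS];
    struct qnode *kid[MAX_KIDS];
};

int qsearch(struct qnode *n, int alpha, int beta, int check_budget) {
    int best = n->stand_pat;          /* side to move may stand pat */
    if (best >= beta) return best;
    if (best > alpha) alpha = best;
    for (int i = 0; i < n->nkids; i++) {
        /* captures are always searched; quiet checks only while the
           per-line budget lasts */
        if (n->kind[i] == QUIET_CHECK && check_budget == 0) continue;
        int budget = (n->kind[i] == QUIET_CHECK) ? check_budget - 1
                                                 : check_budget;
        int s = -qsearch(n->kid[i], -beta, -alpha, budget);
        if (s > best) best = s;
        if (best >= beta) return best;
        if (best > alpha) alpha = best;
    }
    return best;
}

/* The RxN example: the side to move looks a knight down (-300), but a
   quiet check (the hypothetical Qd4+ fork) recovers a rook, leaving it
   +200 after the forced sequence. */
static struct qnode q_after_check = { -200, 0, {CAPTURE}, {0} };
static struct qnode q_root = { -300, 1, {QUIET_CHECK}, {&q_after_check} };
```

With one check ply allowed, `qsearch(&q_root, -10000, 10000, 1)` finds the fork and returns +200; with a budget of 0 it returns -300, which is exactly the "RxN and like it" failure described above.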
>>>>>>>>>>>>
>>>>>>>>>>>>I'll be very interested to see what responses this generates. Hsu took the time
>>>>>>>>>>>>to design and implement special logic to help generate checking and check
>>>>>>>>>>>>evasion moves in Deep Blue, which I assume was used in the qsearch. This was not
>>>>>>>>>>>>a trivial undertaking - it adds both additional logic and additional
>>>>>>>>>>>>interconnect. He probably had a good reason for doing it, since he could have
>>>>>>>>>>>>used that time for something else, like implementing a small hash table.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>And maybe he had no good reason to do it.
>>>>>>>>>>>
>>>>>>>>>>>As far as I know, there are many amateur programmers here (not even speaking of
>>>>>>>>>>>the commercial programmers) who have spent much more time trying and validating
>>>>>>>>>>>ideas than Hsu.
>>>>>>>>>>>
>>>>>>>>>>>I think Hsu and his team have done a great job in implementing a chess program
>>>>>>>>>>>in a chip.
>>>>>>>>>>>
>>>>>>>>>>>However I think taking him and his team as a reference in chess programming is a
>>>>>>>>>>>big mistake.
>>>>>>>>>>>
>>>>>>>>>>>As I have said, I think there are many chess programmers here who are much more
>>>>>>>>>>>skilled than Hsu and his team in chess programming.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>    Christophe
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>Hmmm.. I would _never_ make that statement.  Have you _ever_ talked with
>>>>>>>>>>Hsu or Campbell?  I suspect not because if you had, you would not think
>>>>>>>>>>them quite that incapable.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>I did not say that they are incapable. They have done things I will never be
>>>>>>>>>able to do.
>>>>>>>>>
>>>>>>>>>However I have read their description of the Deep genealogy (the document
>>>>>>>>>published last year and describing their creatures in details, in particular
>>>>>>>>>their evaluation function and search algorithms).
>>>>>>>>>
>>>>>>>>>I think it's probably as good as or even better than having a talk with them, or
>>>>>>>>>else what is the purpose of their publication?
>>>>>>>>
>>>>>>>>I assume that you're referring to an unpublished version of the paper "Deep
>>>>>>>>Blue" Artificial Intelligence 134 (2002) 57-83 (available from
>>>>>>>>www.elsevier.com/locate/artint)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>Yes.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>>Do you have any opinion regarding the part related to checks in the qsearch?:
>>>>>>>>
>>>>>>>>"The main parameters of the hardware search are described below:
>>>>>>>>...
>>>>>>>>5. Number of "mating" checks allowed for each side in the quiescence search.
>>>>>>>>A mating check is a checking move which allows zero escape squares for the king
>>>>>>>>or any checking move which is a "contact" [15] check by the queen. This
>>>>>>>>parameter is used to control the size of the quiescence search.
>>>>>>>>6. Number of singular checking moves allowed in the quiescence search (king has
>>>>>>>>one escape square, or queen or rook contact check, or any check given while the
>>>>>>>>checked side has a hung [16] piece). This parameter is used to control the size of
>>>>>>>>the quiescence search.
>>>>>>>>the quiescence search.
>>>>>>>>...
>>>>>>>>
>>>>>>>>[15] A contact check is a checking move to a square immediately adjacent to the
>>>>>>>>opposing king."
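The classification quoted above is mechanical enough to sketch in C. In this sketch the inputs (escape-square count, contact status, hung-piece flag) are assumed to be computed by a real attack detector, which is the part Hsu actually built in hardware; only the two quoted rules appear here:

```c
/* Sketch of the check classification quoted from the Deep Blue paper.
   All inputs are assumed computed elsewhere; the function just applies
   the two quoted rules in order. */

enum check_class { MATING_CHECK, SINGULAR_CHECK, ORDINARY_CHECK };

enum check_class classify_check(int escape_squares,   /* for the checked king */
                                int is_contact,       /* adjacent to the king */
                                int by_queen,
                                int by_rook,
                                int defender_hung_piece) {
    /* rule 5: zero escape squares, or any queen contact check */
    if (escape_squares == 0 || (is_contact && by_queen))
        return MATING_CHECK;
    /* rule 6: one escape square, queen or rook contact check,
       or any check while the checked side has a hung piece */
    if (escape_squares == 1 || (is_contact && (by_queen || by_rook))
        || defender_hung_piece)
        return SINGULAR_CHECK;
    return ORDINARY_CHECK;
}
```

Note that under rule 5 a queen contact check classifies as "mating" even when the queen can simply be recaptured, which is precisely the ambiguity discussed further down the thread.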
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>Yes I have an opinion.
>>>>>>>
>>>>>>>This paragraph, as well as many other things I have read in their document, shows
>>>>>>>that they had some ideas but, for lack of time to develop and test them, ended
>>>>>>>up with something only half baked. So it turns out not to work and/or to be
>>>>>>>just a waste of time.
>>>>>>>
>>>>>>>The paragraph above can be interpreted in different ways, but all of them end up
>>>>>>>uncovering an inefficiency somewhere:
>>>>>>>
>>>>>>>1) (most probable) do they call any contact of a queen with the opponent's king
>>>>>>>a "mating check"? Even in the case where the king or any other piece can simply
>>>>>>>recapture the queen? If yes, generating them is on average a waste of time
>>>>>>>everywhere in the QSearch and the optimal number of mating checks for this case
>>>>>>>is 0.
>>>>>>>
>>>>>>>2) a contact of the queen and king is defined as a mating check only if the king
>>>>>>>has no escape and the queen cannot be captured. In this case it is a checkmate
>>>>>>>and if you can detect this in hardware there is no point in generating them.
>>>>>>>Just return +MATE and you are done. The optimal number of QSearch moves (plies)
>>>>>>>to apply this detection is +INFINITE (apply it everywhere in the QSearch).
>>>>>>>
>>>>>>>When I read what was put into the chips (and most of it was never used), I can
>>>>>>>easily see that they had no idea what would work and what would not.
>>>>>>>
>>>>>>>Any skilled chess programmer would have quickly known. That's what the job is
>>>>>>>all about: generate ideas, test them, start again.
>>>>>>>
>>>>>>>If they had invited a real chess programmer in the team, the last version of the
>>>>>>>chips would have been much more efficient.
>>>>>>>
>>>>>>>What this means is that Hsu and his team were at a very high, professional level
>>>>>>>when it comes to hardware design. They were at a very amateurish level on the
>>>>>>>matter of chess search algorithms.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>    Christophe
>>>>>>
>>>>>>
>>>>>>What a review.  Have you _read_ anything they wrote?  "amateurish level
>>>>>>on the matter of chess search algorithms"?  From the people that wrote
>>>>>>_more_ about current search ideas than anybody else?  Null-move.  Singular
>>>>>>extensions.  Lots of other extensions.  You only have to read what they
>>>>>>have written to get a totally different view of what they are capable of.
>>>>>
>>>>>
>>>>>I have read that.
>>>>>
>>>>>I have only seen rehashing of old stuff, and also that:
>>>>>1) they do not use any efficient pruning technique (what a waste).
>>>>>2) they have tried to introduce an extension technique with the sole purpose of
>>>>>generating spectacular long lines (and to try to introduce something new), but
>>>>>which is most probably not efficient.
>>>>>
>>>>>So to sum it up, it was mainly about showbiz. Huge NPS, singular extensions to
>>>>>impress the public, but eventually no effort to make something really efficient.
>>>>>I mean as efficient as the good micro programs.
>>>>
>>>>I'm personally not a fan of forward pruning.  I lived thru it in the early
>>>>1970's, until someone noticed that pure brute-force produced a better
>>>>program.  I don't think forward pruning is a prerequisite for a strong
>>>>engine.  It just gains some depth.  As does faster hardware such as that
>>>>built for the DB project.  The advantage for the DB project is that the
>>>>faster hardware gives better depth _without_ any error whatsoever...  No
>>>>ugly oversights due to forward pruning at all...
>>>
>>>
>>>
>>>Then why don't you remove any pruning in Crafty?
>>
>>Read thru the comments in main.c...  you will find I _did_ remove null-move
>>several times.  Then tried it again with different ideas to make it safer...
>>
>>
>>>
>>>Pruning makes a program stronger. The days of brute force have been over for
>>>20 years.
>>>
>>>DB would have been much better with strong pruning.
>>
>>How much better than "unbeatable by other programs" does it need to be?
>
>This is exactly the problem.
>
>They had no real competition because of their hardware
>advantage, so they did not need to care about improving
>the search algorithms.

Please see my previous response in the sub-thread right above this one.

You are making a classic mistake that nobody directly involved in AI
research would make:  Selective search is _not_ better than a full-width
search with good search extension rules.  Selective search and a full-width
search with good extension rules are absolutely identical.  I don't see
why anybody claims that if they don't do a selective search, they can't be
as good as someone that does.  That is simply _wrong_.




>
>>I don't think programs 20 years from now will rely on "strong pruning"
>>either.  KISS is a good principle when possible.
>
>I believe that programs 20 years from now will rely
>on strong pruning.

Or on good search extensions.  Same thing.   Different approach.
Equal result.


>
>Pruning does not mean not seeing everything
>if you search deep enough and even today there are program that use
>zunzwang detection.
>
>>
>>
>>
>>
>>
>>>
>>>
>>>
>>>
>>>>>They just relied on their huge NPS to get chess strength. Once they had got
>>>>>that, they did not make significant efforts (or did not have the time) to get
>>>>>the best out of this hardware.
>>>>>
>>>>>They could have done it differently, given the time they had.
>>>>>
>>>>>After the first match against Kasparov, they could have spent some time
>>>>>improving the search algorithm. It would have been even better than multiplying
>>>>>the number of processors by ten.
>>>>
>>>>First, they didn't multiply the number of processors by 10.
>>>
>>>
>>>I know. What I said is that a good pruning system would have been like
>>>multiplying the number of processors by ten.
>>>
>>>Instead, they just physically multiplied the processors by 2.
>>>
>>>
>>>
>>>
>>>
>>>>  Hsu completely
>>>>re-designed the chess chip between the 1996 and 1997 match.  The number of
>>>>chess processors only doubled over that time-span, while the evaluation and
>>>>move generator on the hardware chip was made more sophisticated.
>>>
>>>
>>>
>>>I know.
>>>
>>>
>>>
>>>
>>>>>I repeat that their search was very amateurish and comparable to what was up to
>>>>>date twenty years ago.
>>>>
>>>>
>>>>Their search was so "amateurish" that they saw so many things that the
>>>>micros could not at the ACM events; it _really_ put them at a disadvantage.
>>>>Ask Don Dailey what their amateurish search did to his program in Cape May
>>>>in 1994.
>>>
>>>
>>>
>>>I don't care.
>>>
>>>I can write some kind of special extension code that will make my program
>>>weaker, but that will produce spectacular combinations from time to time.
>>
>>
>>That isn't the point.  They were not "weaker" based on watching their output
>>for several years at ACM/WCCC events.  They just saw _more_ in the games.  Which
>>is where it counts...
>
>There was no fair test to prove that they were not weaker
>in search algorithms, because their opponents suffered
>from inferior hardware.
>
>You need to give the opponents 200 times faster hardware
>and play in order to compare the search algorithms.
>
>
>
>>>
>>>
>>>
>>>>>There are many amateur programmers here on CCC who are much better than Hsu and
>>>>>his team on this matter.
>>>>>
>>>>
>>>>
>>>>:)
>>>>
>>>>Probably many better than Dr. John Kirkland here at UAB in doing heart
>>>>transplants too.  Even though he is over 75 and has done more than any
>>>>single doctor in the world...
>>>
>>>
>>>
>>>Most chess programmers reading CCC know how to write an efficient search.
>>>
>>>I'm sorry, but the DB team has not been able to do it, or did not have enough
>>>time to do it, or has not judged that it was important to do it, which is a
>>>mistake.
>>
>>
>>Based on what?  Selective search is _not_ a necessary condition for a very
>>strong chess program.  Rather than work on selective rules, they chose to
>>work hard on extension rules.  It sure seems to have worked well, based on
>>some of the wild tactics they managed in the Kasparov games...
>>
>>
>>
>>
>>>
>>>The result is that DB was using an outdated and inefficient search algorithm.
>>>
>>
>>
>>with sophisticated and very advanced extension algorithms.  Perhaps their
>>extensions offset the lack of selectivity...  Again, 1975 saw a revolution
>>_away_ from selective search.
>
>There is no proof that their algorithms were good.
>The evidence says that they are not good.
>
>I do not know of a single programmer who
>uses the algorithms that they used in Deep Thought,
>and programs have enough speed to compete with Deep Thought,
>so the excuse that they are not fast enough to gain from
>the extensions is not convincing.
>
>Why does Bruce use "inferior" singular extensions?
>
>The only possible reason that I can see
>is that their "superior" extensions do not help him.
>
>Uri




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.