Computer Chess Club Archives



Subject: Re: No Go (was Re: Markoff -- Botvinnik -- Kaissa -- Hsu -- ABC -- Berliner)

Author: Omid David Tabibi

Date: 11:01:30 06/13/03



On June 13, 2003 at 07:01:08, Walter Faxon wrote:

>On June 12, 2003 at 19:22:21, Omid David Tabibi wrote:
>
>>On June 12, 2003 at 18:49:26, Uri Blass wrote:
>>
>>>On June 12, 2003 at 02:40:10, Walter Faxon wrote:
>>>
>>>>On June 10, 2003 at 19:19:41, Omid David Tabibi wrote:
>>>>
>>>>>On June 09, 2003 at 04:01:12, Walter Faxon wrote:
>>>>>
>>>>>>Musings on nonstandard computer chess techniques.
>>>>>>
>>>>>>What's new on the computer chess front?  I note that Sergei S. Markoff's new
>>>>>>program SmarThink (http://www.aigroup.narod.ru/detailse.htm) is supposed to use
>>>>>>(among many other things) some of former world chess champion M.M. Botvinnik's
>>>>>>ideas.  Botvinnik's "Computers, Chess and Long-Range Planning" (Springer, 1970)
>>>>>>and "Chess: Solving Inexact Search Problems" (Springer, 1983) described a method
>>>>>>that apparently only Botvinnik's programmer/protege Boris Stilman believed would
>>>>>>work, which Stilman later generalized in his own book "Linguistic Geometry: From
>>>>>>Search to Construction" (Kluwer, 2000).  Markoff's own on-line writings on chess
>>>>>>algorithms (http://www.aigroup.narod.ru/indexe.htm) are only in Russian, so far.
>>>>>> (I am assuming that the SmarThink download doesn't include source.)
>>>>>>
>>>>>>Markoff also writes that his first program included ideas from the authors of
>>>>>>"Kaissa".  Those authors published papers in the 1970's on "the method of
>>>>>>analogies" to reduce search work, but they did not use it in their competitive
>>>>>>program. If you recall, Hsu wrote in "Behind Deep Blue" (Princeton Univ. Pr.,
>>>>>>2002) that he had implemented a stripped-down version of the analogies method
>>>>>>for Deep Blue.  It is the unpublished intellectual property of IBM.
>>>>>>
>>>>>>Sometimes I wonder if chess program authors mention intriguing nonsense just to
>>>>>>throw their competitors off the track.  I recall someone once letting slip that
>>>>>>he had used Botvinnik's method for an early hardware-limited microcomputer
>>>>>>program.  That seems unlikely.  Nearly 15 years ago an author (Kittinger?)
>>>>>>dropped hints that he had adopted McAllester's 1988 method "conspiracy number
>>>>>>search" (aka conspiracy search) for his program, using the term "nodulation".
>>>>>>Published results indicate that plain conspiracy numbers don't work very well
>>>>>>for chess.  As far as I know, today only experiments on multiprocessor machines
>>>>>>are being conducted; no competitive microcomputer program uses it at all.  So
>>>>>>was it a mirage -- or a trick?
>>>>>>
>>>>>>David McAllester and Deniz Yuret did finally publish their revised work
>>>>>>(Alpha-Beta-Conspiracy Search. ICGA Journal, Vol. 25, No. 1 (March 2002), pp.
>>>>>>16--35), nearly ten years after their initial experiments with the
>>>>>>multiprocessor program Star-Socrates.  And ten years from now?...
>>>>>>
>>>>>>And what about Berliner's B* algorithm?  (Actually Palay's probabilistic B*
>>>>>>using a probability distribution for evaluation instead of a simple range, today
>>>>>>suggestive that techniques from fuzzy logic might be applied.)  The chess
>>>>>>machine Hitech was originally built for it in the early 1980's (equal first on
>>>>>>points but second on tiebreak, WCCC 1986) -- and finally began using it.  As of
>>>>>>mid-1993 it was "almost as good as regular Hitech".   In mid-1995 it was still
>>>>>>"not quite as good as brute force searching."   In the abstract of his last word
>>>>>>on the subject (Hans J. Berliner and Chris McConnell.  B* probability based
>>>>>>search.  Artificial Intelligence, Volume 86, Issue 1, September 1996, Pages
>>>>>>97-156) Berliner writes, "Analysis of the data indicates that should additional
>>>>>>power become available, the B* technique will scale up considerably better than
>>>>>>brute-force techniques."  Berliner is now retired.  More power is available.
>>>>>>Where are the later papers?  Where is B* today?
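Just to make that parenthetical concrete: the shift Walter describes is from a single evaluation (or a pessimistic/optimistic range) to a probability distribution over values. A toy sketch of my own, and emphatically not Berliner's or Palay's actual formulation:

#include <math.h>

/* Illustrative only: model an evaluation as a Gaussian with a mean and an
   uncertainty (sigma > 0), then ask how likely the node is to beat some
   target value. */
typedef struct {
    double mean;    /* expected evaluation, say in pawns */
    double sigma;   /* uncertainty of that evaluation */
} ProbEval;

/* P(true value > target) under the Gaussian model above */
double prob_better_than(ProbEval e, double target)
{
    return 0.5 * erfc((target - e.mean) / (e.sigma * sqrt(2.0)));
}

A selective search can then expand the child most likely to prove better than the current best alternative, which is roughly the flavor of probabilistic B*.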
>>>>>>
>>>>>>My suggestion:  you are writing a chess program.  Go ahead, put in negascout,
>>>>>>null-move pruning, IID, everything everybody is already doing.  Then, look to
>>>>>>the literature and find some method that everybody is _not_ doing.  Implement
>>>>>>it, experiment with it, and _publish_ your results.  Please.
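Incidentally, the "everything everybody is already doing" part really is only a few lines each. For example, a minimal null-move pruning fragment (hypothetical helper names, the usual R = 2, zugzwang handling omitted, so purely illustrative):

/* A fragment, not a complete engine: Position and the helpers below are
   placeholders for whatever the engine provides. */
typedef struct Position Position;
int  quiescence(Position *pos, int alpha, int beta);
int  in_check(const Position *pos);
void make_null_move(Position *pos);
void unmake_null_move(Position *pos);
int  search_moves(Position *pos, int depth, int alpha, int beta);  /* normal move loop */

#define R 2    /* null-move depth reduction */

int search(Position *pos, int depth, int alpha, int beta)
{
    if (depth <= 0)
        return quiescence(pos, alpha, beta);

    /* Give the opponent a free move; if a reduced-depth search still
       fails high, trust the cutoff and prune the whole subtree. */
    if (!in_check(pos) && depth > R) {
        make_null_move(pos);
        int score = -search(pos, depth - 1 - R, -beta, -beta + 1);
        unmake_null_move(pos);
        if (score >= beta)
            return beta;
    }

    return search_moves(pos, depth, alpha, beta);   /* regular alpha-beta */
}

The hard part is not this code; it is the experiments and the write-up that follow.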
>>>>>
>>>>>A nice post.
>>>>>
>>>>>Junghanns gives a good overview of all the alternatives to alpha-beta at:
>>>>>
>>>>>"Are There Practical Alternatives to Alpha-Beta?"
>>>>>ICCA Journal, Vol. 21, No. 5, 1998, pp. 14--32.
>>>>>http://citeseer.nj.nec.com/junghanns98are.html
>>>>>
>>>>>Just take a look at all the chess-related research published in ICGA in the last
>>>>>year:
>>>>>
>>>>>ICGA 25(1):
>>>>>            Alpha-Beta Conspiracy Search
>>>>>            (McAllester & Yuret)
>>>>>            [an interesting, but old article]
>>>>>
>>>>>            A Lockless Transposition-Table Implementation for Parallel Search
>>>>>            (Hyatt & Mann)
>>>>>            [a smart transposition table idea]
>>>>>
>>>>>ICGA 25(2):
>>>>>            Nothing!
>>>>>
>>>>>ICGA 25(3):
>>>>>            Verified Null-Move Pruning
>>>>>            (David Tabibi & Netanyahu)
>>>>>
>>>>>ICGA 25(4):
>>>>>            Nothing!
>>>>>
>>>>>ICGA 26(1):    [haven't received the issue yet, just looked at
>>>>>                http://www.cs.unimaas.nl/icga/journal/contents/content26-1.htm]
>>>>>
>>>>>            Nothing!
>>>>>
>>>>>
>>>>>I believe that this lack of research stems from the Deep Blue - Kasparov
>>>>>match. Deep Blue's victory convinced many that nothing is left to be done in
>>>>>chess, so let's move on. The new trend seems to be Go; just take a look at the
>>>>>two latest ICGA issues: it's all about Go. Maybe that's the reason why the name
>>>>>ICCA was changed to ICGA ;)
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>-- Walter
>>>>
>>>>
>>>>Hello, Omid.
>>>>
>>>>First, thanks for your thoughtful response to my recent post.  I have been from
>>>>time to time disappointed that there is not more AI content on CCC.  It very
>>>>much seems to be a hobbyist board, playing with the current programs and
>>>>building duplicates of them, albeit with minor differences in emphasis and
>>>>implementation.  I've been guilty of this myself in the bitscan threads.  But
>>>>the few times I have tried to raise the level of discussion have been met with,
>>>>mostly, silence.  My last post was, to an extent, a "last gasp".
>>>>
>>>>My response regarding Go.  In my opinion, tackling Go is a mistake for one reason and
>>>>one reason only:  very few Westerners play Go at better than a weak amateur
>>>>level, and most don't play at all.  In contrast, in the West chess is well-known
>>>>as an "intellectual" pastime, and hundreds of thousands of people play it well
>>>>enough to at least appreciate much of the practice of the masters.  Chess is a
>>>>good AI testbed because chess exhibits many understandable problems amenable to
>>>>AI methods while eliciting non-AI interest -- in both the readers and writers of
>>>>AI papers.
>>>>
>>>>I have not reread Andreas Junghanns' 1998 "Are There Practical Alternatives To
>>>>Alpha-Beta in Computer Chess?" but there is one obvious objection to the thesis
>>>>("no"):  a small number of human beings can (still) play chess at least as well
>>>>as the best chess computers.  That amounts to an existence proof that brute
>>>>force is not the only way.  And finding out how to emulate the best human
>>>>techniques in chess may be reasonably expected to have applicability in other
>>>>areas of problem solving -- maybe even Go!
>>>>
>>>>But of course we now know that chess can be effectively tackled using
>>>>(relatively smart) brute-force techniques.  Using any more complicated AI
>>>>methods tends to lose vs. the finely-tuned fast searchers, so the papers
>>>>experimenting with new methods tend to be of the "one-shot" variety, with no
>>>>follow-ups.
>>>>
>>>>Thus, my solution (mentioned in a CCC post some months ago):  a competition in a
>>>>new field of study:  Limited Search Computer Chess (LSCC), with rules similar to
>>>>those of the "RoboCup" international robot soccer competitions, the most
>>>>important of which is:  all code is made public after the tournament is over.
>>>>First cut:  100 positions per second, max.
>>>
>>>I think that limiting the number of positions is not a good idea.
>>>
>>>Nodes are not the same for every program (some count null moves as nodes and some
>>>do not count them).
>>>It is more logical to hold the competition at 1 second per game, with the
>>>hardware equal for all participants.
>>>
>>
>>
>>>I also do not think that trying to do the same as humans is a good idea.
>>>Computers have the potential to do things better than humans (for example, in
>>>backgammon computers found evaluations better than what humans knew).
>>
>>This was exactly the point Jonathan Schaeffer made in the Haifa symposium
>>(report: http://www.cs.biu.ac.il/~davoudo/pubs/haifa.pdf). Intelligence should
>>be measured by behavior, not by thought process.
>>
>>
>>>
>>>
>>>Uri
>
>
>Gentlemen:
>
>I guess my point is being lost so I have to reiterate it.
>
>First of all, I have nothing against further research on refinements to the
>brute-force approach to computer chess.  Good methods combined with ever-faster
>hardware are producing grandmaster-level results; this is chess-specific
>"intelligence" by any reasonable definition.
>
>The problem is that, as far as chess serving as a general testbed for AI
>research goes, the fact that fast searchers do so well is actually a drawback.  Any
>experimental method that takes more time -- mimicking human thought processes,
>or that of monkeys or martians or whatever -- will almost certainly lose vs. the
>now well-tuned searchers.  But that is not true of Go, where effective search
>without a specific goal and likely path is simply impossible.  Go requires other
>methods, other sources of "intelligence".  That's why Go threatens to usurp
>chess as the "drosophila" of AI.
>
>Like most Westerners, I know and like chess a lot more than Go.  I would like to
>be able to study AI and chess together, not Go.  But I already understand the
>basics of superfast search and huge endgame tables.  I know there's more to
>chess than that.  I know there is more to learn, there is more to be done.
>
>My proposal for Limited Search Computer Chess (LSCC) is that we deliberately tie
>one hand behind our backs, so to speak, by limiting the search aspect of the
>computer's intelligence.  This forces us to adopt other methods, perhaps ones a
>lot closer to those of the best human players, perhaps not.  But this would
>certainly reinvigorate chess as far as AI is concerned.  And the methods that we
>discover and invent for chess will almost certainly have additional utility
>outside our game -- and perhaps the best new methods can be added back into
>traditional fast-search programs to help create that super-grandmaster we seek.
>They might even prove useful for computer Go! ;)
>
>According to published studies, humans look at 1-2 chess positions per second
>(Anand claims 5).  Well, we're just starting this so that's why I suggested 100
>positions per second, with the idea of reducing that over time.  Perhaps I
>should have said 10 or less.  But the idea is the same.
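To make the proposed limit concrete, here is my own illustrative sketch (hypothetical names, nothing official) of how an engine could enforce such a node budget; it also shows exactly where Uri's question of what counts as a node (null-move positions included or not?) has to be pinned down:

#define LSCC_NODES_PER_SECOND 100   /* the "first cut" figure above */

static long node_budget;
static long node_count;

void start_search(double seconds_for_this_move)
{
    node_budget = (long)(LSCC_NODES_PER_SECOND * seconds_for_this_move);
    node_count  = 0;
}

/* call at every position the engine examines; nonzero means stop searching */
int count_node(void)
{
    return ++node_count > node_budget;
}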
>
>Anybody could do what I propose already.  You could write a program and write a
>paper, etc.  So why have an LSCC competition?  Well, why have computer chess
>competitions of any sort?  It's one thing to solve problem sets and play the odd
>human or standard chess program.  Head-to-head competitions help show which
>methods are really better in practice.  The researchers can get together and
>compare notes and swap stories.  It's fun!
>
>A series of LSCC competitions might well exhibit the "competitive evolution" we
>saw in the early years of computer chess.  (In layman's terms, it's hard to
>learn if you're completely outclassed.)  Following the RoboCup plan of making
>all software public after each tournament should supercharge this process.
>
>Anyway, that's my idea for bringing computer chess out of its now isolated
>backwater and back into the mainstream of AI.  It's not a criticism of what
>we're doing so much as a plea to do more.

One thing that everyone would agree on is that current scientific computer
chess research is almost nonexistent, and this situation is unacceptable. I
believe that this should be remedied by turning to more chess-specific research,
and by publishing even the smallest improvements.

For example, take Hyatt and Mann's "Lockless transposition table" article
published last year in ICGA 25(1). One could call it a mere transposition table
"trick", but this short paper (3 pages) was one of the most interesting articles
published last year in the journal.
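For readers who have not seen the paper, the idea as I understand it is to XOR the hash signature with the data word, so that a torn (half-written) entry simply fails the probe. A rough sketch in my own code, not the authors':

#include <stdint.h>

typedef struct {
    uint64_t key_xor_data;   /* hash key ^ data */
    uint64_t data;           /* packed score/depth/move/bound */
} TTEntry;

#define TT_SIZE (1u << 20)   /* number of entries; a power of two */
static TTEntry tt[TT_SIZE];

void tt_store(uint64_t key, uint64_t data)
{
    TTEntry *e = &tt[key & (TT_SIZE - 1)];
    e->key_xor_data = key ^ data;
    e->data         = data;
}

/* returns 1 and fills *data on a valid hit, 0 otherwise */
int tt_probe(uint64_t key, uint64_t *data)
{
    TTEntry *e = &tt[key & (TT_SIZE - 1)];
    uint64_t d = e->data;
    if ((e->key_xor_data ^ d) != key)
        return 0;   /* empty slot, different position, or a torn write */
    *data = d;
    return 1;
}

No locks are needed: a corrupted entry just looks like a miss.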

I believe that this kind of small improvement should be encouraged and
published. If you find a way to generate better opening books, publish it; if
you find a way to better evaluate the pawn structure, publish it; publish
everything, no matter what it is; it will definitely be more interesting than
those Go articles...
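To show how small such a contribution can be, here is a purely illustrative pawn-structure fragment (my own made-up penalty values and names, bitboard layout with a1 = bit 0):

#include <stdint.h>

static const uint64_t FILE_A = 0x0101010101010101ULL;

static int popcount64(uint64_t b)
{
    int n = 0;
    while (b) { b &= b - 1; n++; }
    return n;
}

/* penalty in centipawns for one side's doubled and isolated pawns */
int pawn_structure_penalty(uint64_t pawns)
{
    int penalty = 0;
    for (int file = 0; file < 8; file++) {
        uint64_t on_file = pawns & (FILE_A << file);
        int      count   = popcount64(on_file);

        if (count > 1)                      /* doubled (or tripled) pawns */
            penalty += 15 * (count - 1);

        uint64_t neighbours = 0;
        if (file > 0) neighbours |= FILE_A << (file - 1);
        if (file < 7) neighbours |= FILE_A << (file + 1);
        if (count > 0 && (pawns & neighbours) == 0)   /* isolated pawns */
            penalty += 10 * count;
    }
    return penalty;
}

Even a refinement of something this small, measured honestly over a few thousand games, is worth a short paper.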

In computer chess we have advanced far beyond any other AI field. Shall we
"punish" ourselves for this advancement and desert the area? No; in fact we
can afford the luxury of researching more chess-specific problems.

>
>-- Walter


