Computer Chess Club Archives



Subject: Re: The best program of all the times

Author: Blass Uri

Date: 07:39:30 04/16/99




On April 16, 1999 at 01:19:01, Dave Gomboc wrote:

>On April 15, 1999 at 23:56:39, Milton Zucker wrote:
>
>>On April 15, 1999 at 18:29:11, Dave Gomboc wrote:
>>
>>>On April 15, 1999 at 12:35:57, Milton Zucker wrote:
>>>
>>>>
>>>>On April 15, 1999 at 09:45:53, Bruce Moreland wrote:
>>>>
>>>>>
>>>>>On April 15, 1999 at 09:34:12, Christophe Theron wrote:
>>>>>
>>>>>>On April 15, 1999 at 09:27:03, Bruce Moreland wrote:
>>>>>
>>>>>>>What is "knowledge based" ?
>>>>>>>
>>>>>>>bruce
>>>>>>
>>>>>>Good remark, Bruce. Here is another one that has no idea of what he is talking
>>>>>>about.
>>>>>
>>>>>It's an interesting term and I'd like to explore its meaning.  I am not trying
>>>>>to blast anyone.
>>>>>
>>>>>The programs that have this term applied to them are either extremely weak
>>>>>ancient research programs, or very strong commercial programs that nobody knows
>>>>>anything about.
>>>>>
>>>>>bruce
>>>>
>>>>I will propose a naive definition of a "knowledge-based" chess program, which I
>>>>invite others to knock down.  If two programs A and B have the same rating, the
>>>>slower program that searches fewer plies per unit of time is the more "knowledge
>>>>based" in the sense that it plays at the same strength as the faster program
>>>>without seeing the longer-term tactical consequences of its moves. Presumably its
>>>>decisions are based more on positional knowledge and less on tactical
>>>>consequences.
>>>>...Milton...
>>>
>>>Alright, here's the truck to knock it down with. :)
>>>
>>>The proposal measures the amount of knowledge in software by search depth.  This
>>>has some intuitive appeal, the logic behind it being that programs that "know
>>>more" will take longer at each node, so they will search less deep.
>>>Unfortunately, there are several difficulties with the proposal:
>>>
>>>1) Search depth is not uniform throughout a search tree.
>>>2) Programs search differently-shaped trees, so their search depths are not
>>>(usefully) directly comparable.
>>>3) What is considered "knowledge" is left unclear.  Does this include measures
>>>such as futility pruning -- the understanding that one's position is so good
>>>that there is no need to finish expanding the last couple of ply here?  This
>>>"knowledge" increases the speed of your search.
>>>4) The purpose of chess-specific terms in the evaluation is to guide the search.
>>> This has no more claim to knowledge than non-chess-specific features that guide
>>>the search.  A hash table has nothing to do with chess, but it does more to
>>>guide the iteratively deepening search than any chess-specific term.
>>>5) Software that searches fewer plies per time unit may be doing so because it is
>>>extending certain continuations further.  So, it might well understand the
>>>long-term tactical consequences of a move even though its reported depth is
>>>shallow relative to some other chess software.
>>>6) What constitutes a "node" for reporting purposes varies from program to
>>>program.  Therefore, node count is not an acceptable substitute for search depth
>>>as a measurement of knowledge.
>>>
>>>I could go on, but the point has (I hope) been made.  No doubt, a correlation
>>>between search depth and search effort exists, but the relationship is specific
>>>to each individual program, and should not be misleadingly generalized.
>>>
>>>With regard to the state of the art in computer chess today, "knowledge-based"
>>>is a marketing buzzword, nothing more.  IMO, M-Chess, Hiarcs, Rebel, CSTal2, and
>>>any other mainstream commercial chess product have about as much claim to
>>>"knowledge-based" as a hole in the ground.
>>>
>>>Dave
>>
>>  OK, so maybe my definition has some holes in it :-). But there are
>>individuals, including respected chess programmers, who talk about the degree to
>>which chess programs possess chess knowledge.  There was a thread in rgcc not so
>>long ago in which Ed Schroder alluded to the fact that he removed some chess
>>knowledge from Rebel 10c to speed up its search and help it play better against
>>computers. He also felt that this had some adverse effects on the program's
>>playing style.  I am not a chess programmer and will not make a serious
>>attempt at defining a knowledge-based chess program. However, intuitively, I feel
>>that some programs seem to "know" more about chess positions than other programs
>>and use that information to good effect.  The definition I proposed of a
>>knowledge-based chess program may be bad (your response convinced me), but it
>>seems too easy to summarily dismiss the concept as a "marketing buzzword".
>
>I think that when programmers like Ed talk in this manner, they are talking
>about how well their static evaluator assesses positions.  They are focusing on
>chess-specific "knowledge" that the program uses to end up with a scalar value.
>Another thing they might or might not include in "knowledge" is the
>understanding of when it is safe and when it is not safe to stop searching and
>perform a static evaluation.
>
>Ed removed chess "knowledge" to speed up the search, and reported a strength
>increase against computers.  I don't recall him saying definitively that Rebel
>10.0c plays worse against humans, but I don't read the Rebel web board
>regularly.  It's not clear that 10.0c would play worse against humans than
>10.0b: it might understand less, but the search speed may compensate.
>
>And it is true that Ed's static evaluator is very good.  Compared to other chess
>software I have used, Rebel is the best at having a "real score", that is, one
>that accurately reflects what is happening in the position.

I do not understand what "accurately reflects what is happening in the
position" means.

Is there a way to prove it?

It is possible to prove that one static evaluation function is better than the
other by playing a match between two programs in which both choose their move
based on a 1-ply search with no extensions.
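
As a rough illustration of what such a 1-ply chooser could look like, here is a
minimal Python sketch.  It uses the python-chess library and a toy
material-only static evaluation; the library choice and the evaluation terms
are only illustrative assumptions, not how Rebel or any other program discussed
here actually works.

    import chess

    # Toy material values in pawns; a real evaluator would use far richer terms.
    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def static_eval(board: chess.Board) -> float:
        """Material balance from the point of view of the side to move."""
        score = 0.0
        for piece in board.piece_map().values():
            value = PIECE_VALUES[piece.piece_type]
            score += value if piece.color == board.turn else -value
        return score

    def one_ply_move(board: chess.Board) -> chess.Move:
        """Pick the move whose resulting position scores best statically:
        a 1-ply search with no quiescence and no extensions."""
        best_move, best_score = None, float("-inf")
        for move in board.legal_moves:
            board.push(move)
            score = -static_eval(board)  # negate: it is now the opponent's turn
            board.pop()
            if score > best_score:
                best_move, best_score = move, score
        return best_move

    if __name__ == "__main__":
        # With material only, every opening move ties; this prints the first one found.
        print(one_ply_move(chess.Board()))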

It does not prove that the winner has better positional knowledge, because you
can hide tactical knowledge in the static evaluation (for example, one program
may identify stalemate in its static evaluation while the other program does
not).
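
To make the stalemate example concrete, the "static" evaluation itself can be
given a small tactical check along the following lines.  This is again only an
illustrative Python sketch using python-chess; the test position and the
placeholder positional terms are made up for the example.

    import chess

    DRAW_SCORE = 0.0

    def positional_terms(board: chess.Board) -> float:
        # Placeholder for whatever material/positional terms the evaluator uses.
        return 0.0

    def static_eval_with_stalemate(board: chess.Board) -> float:
        # Tactical knowledge hidden inside a "static" evaluation: if the side to
        # move is not in check and has no legal moves, the position is stalemate,
        # so it is scored as a draw no matter what the other terms say.
        if not board.is_check() and not any(board.legal_moves):
            return DRAW_SCORE
        return positional_terms(board)

    if __name__ == "__main__":
        # Black to move, not in check, no legal moves: a textbook stalemate.
        board = chess.Board("7k/5Q2/6K1/8/8/8/8/8 b - - 0 1")
        print(static_eval_with_stalemate(board))  # 0.0, the draw score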

Uri


