Computer Chess Club Archives



Subject: Re: Fritz5 cooking at SSDF and Nunn test set

Author: Vincent Diepeveen

Date: 15:58:09 02/24/99



On February 24, 1999 at 18:22:10, Dann Corbit wrote:

>What you are referring to here is another issue -- the autoplayer controversy.
>I have no idea whether hanky-panky is involved here, but I do not see a
>connection to the cooked position problem.

The connection I see is one of policy.

>If programs are reporting a wrong outcome that is a serious bug.  But don't we
>have the game listings for the SSDF to know really for sure what the outcomes of
>the games are?  If any outcomes are reported wrong, then these can clearly be
>found and reported.  After the reporting, the ELO scores will be adjusted
>accordingly, I am sure.

I'm not sure how the SSDF can prevent this. In any case, if the opponent
cannot prove anything because it cannot store its games, then I'm sure it
will happen.

>If one game learns better than another, that does not seem to be a fault but
>rather a strength.  What am I missing?

How many tournaments do you play where you meet the same opponent 20 times,
without your opponent being able to learn?

I see a few problems:
  a) playing the same lines more than once
  b) disallowing the opponent to learn while doing (a)
  c) human learning can't be compared to computer learning
  d) playing thousands of games at home, only to play a few games at the SSDF

And (d) is something no human can do. So we cannot compare computers to
humans, and we shouldn't allow computers to do these things on the grounds
that humans learn too.

I'm not blaming SSDF here, as they are volunteers who do their best.

The main fact is that what the commercial programs do at the SSDF is clear.
Not only ChessBase, but also MChess, Rebel, Nimzo, and Genius. They are more
or less forced to do it.

Suppose I now joined with DIEP in Sweden; what would happen?

It would get slaughtered, as I don't have top-down learning implemented.
And since I don't collect the scores automatically, the scores reported to
Karlsson would get distorted even further, because humans make reading
mistakes from the screen. That reading problem is probably where I would
lose the most points.

So it doesn't matter how well the program plays. What matters are other
things:
   a) interpretation of the tester what programs print out to the
      screen

   b) killer book (playing at home far more games than the opponent's
      tournament book contains lines; this is very non-human, as you
      cannot play your human opponents at home and then get into
      a tournament and beat them the same way you did at home)

   c) learning

   d) tricks to win by using the protocol (disallowing opponents
      to learn, for example, or the timeout trick, where you simply don't
      move when you come out of the opening badly)

   e) then finally the strength of the program

So in my eyes that's four things that matter before the strength of a
program becomes an issue.

Now for the Nunn test set, it's obvious that 5.32 has been tuned for it.
Such score differences from commercial programs are no coincidence.

Now it's very important not to forget that (b) has a lot to do with (c).
The smaller your book, the more important your learning becomes. This is
clearly what happened at the SSDF: first came the small home-prepared and
home-proven killer books, then came the learning to take advantage of your
own killer book.






