Computer Chess Club Archives


Subject: Re: CSS WM TEST - a technical view

Author: Vincent Diepeveen

Date: 05:09:01 06/17/04



On June 16, 2004 at 16:49:28, Steve Glanzfeld wrote:

To score hundreds of points more in the WM test, all I need to do is add the
following at the end of my evaluation:

 score -= (lightpiecescores>>1);                      /* devalue the minor pieces by half */
 score += (passedpawnscore>>1)+kingsafety+mobility;   /* boost passed pawns, king safety and mobility */

Please tell me whether you understand what I mean above.
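
To spell out the arithmetic, here is a minimal, self-contained sketch in C. All
the values below are hypothetical assumptions chosen only for illustration, not
DIEP's real weights; only the two adjustment lines come from the snippet above.

 #include <stdio.h>

 int main(void) {
     int lightpiecescores = 300;   /* assume one minor piece, worth 300 centipawns */
     int passedpawnscore  = 200;   /* assumed bonus for an advanced passed pawn    */
     int kingsafety       = 60;    /* assumed king-safety bonus                    */
     int mobility         = 40;    /* assumed mobility bonus                       */

     int score = 0;
     score -= (lightpiecescores>>1);                      /* -150 */
     score += (passedpawnscore>>1)+kingsafety+mobility;   /* +200 */

     /* Net +50: with these numbers the halved minor piece (150) is outweighed
      * by the boosted positional terms (200), so giving the piece away now
      * looks good to the evaluation. */
     printf("net change: %+d centipawns\n", score);
     return 0;
 }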

I feel this is the fundamental problem you do not want to understand.

In 99% of all games you win when you capture a piece.
In test sets you solve the position by giving away a piece.

That is the fundamental reason why test sets do not work.

The WM test is the biggest patzer test in that respect.

Look at the Tiger1 versus Tiger2 scores in the endgame positions.

At the first release of the WM test, Tiger1 was the best endgame program
according to the test.

What is the difference?

Tiger1 just gives a 3-pawn bonus for a passer on the 6th rank :)

So very stupid, simple bluffing knowledge solves everything there.
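
For concreteness, a term like that takes only a few lines. The sketch below is
purely hypothetical (Tiger's source is not public) and assumes centipawn units
with ranks counted from the pawn's own side:

 /* Hypothetical sketch of the kind of "bluffing" knowledge described above:
  * a flat three-pawn bonus (300 centipawns) for any passed pawn that has
  * reached the 6th rank, whether or not it can ever promote. */
 #define PASSER_6TH_RANK_BONUS 300   /* "3 pawns" in centipawns */

 int passer_bonus(int rank, int is_passed) {
     /* rank counted from the pawn's own side: 2 = start square, 8 = promotion */
     if (is_passed && rank >= 6)
         return PASSER_6TH_RANK_BONUS;
     return 0;
 }

A bonus like this pushes the engine straight toward the advance that the test
position rewards, which is exactly the bluffing effect described above.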

That's what the entire WM test is about.

The proof has already been delivered by Tiger1.

>On June 15, 2004 at 17:28:38, Vincent Diepeveen wrote:
>
>>On June 15, 2004 at 16:26:09, Steve Glanzfeld wrote:
>>
>>>No normal program will choose an unusual move (i.e. a queen sac) "out of the
>>>blue" in a normal position, unless the program is completely broken.
>>>
>>>You guys are arguing as if it would be DOWNRIGHT BAD when a chess program finds
>>>good moves (quickly)... I wonder what a chess program looks like, when it is
>>>based on that philosophy :)) Does it try to avoid the good moves? So, if there's
>>>a lack of success, the chances are good that we have found a major reason here
>>>:)
>
>>"I created a version that was tactical brilliant. It solved *everything* in the
>>testsuites. Then i started playing with it and it was hundreds of points weaker
>>in games." Stefan Meyer Kahlen a few months ago.
>
>No engine can solve everything in every testsuite. There are not only tactical
>tests, for example (big surprise eh? :)))
>
>>
>>So the answer to your question is: the version that scores hundreds of points
>>more on testsuites is NOT the version to play with at tournaments, because in
>>testsuites all those patzer moves work, as we know, and they do not in tournaments.
>
>Again, don't you understand that those moves HAVE WORKED in games? :) These are
>World Champion's winning moves! What are you talking about "do not work in
>tournaments"...???
>
>Which program, in several versions, do you think ranks #2, #5 and #7 in the WM
>test results? Shredder! :)) Note, that the version ranking #2 has the same
>number of solutions as the leader. Ranks #1/3/6/8/9/10 are Fritz versions. Next
>best are CM versions, Hiarcs 9, and Deep Juniors. At the bottom of the list we
>find oldies and weaker freeware.
>
>So we find the same engines at the top of that test's ranking list (from a
>total of 230 results in the currently available download) that we also find
>in many ranking lists based on games.
>
>I wonder why some people here have so much trouble understanding or accepting
>this. Strange.
>
>Steve


