Computer Chess Club Archives



Subject: Re: Everything you know is wrong

Author: Vincent Diepeveen

Date: 05:47:41 12/18/02


On December 18, 2002 at 08:29:17, Sune Fischer wrote:

>On December 18, 2002 at 07:47:01, Vincent Diepeveen wrote:
>
>>Right, he has wrongly split it in order to keep even the worst proofreader
>>from smelling the truth.
>
>Or to hide the real truth, that it stabilises the search and will be really
>really great in games :)
>
>>>First he designs an algorithm to make a smaller tree and then he verifies that
>>>it's also better (solving more positions).
>>
>>In neither of the two tests is the algorithm superb, so he could not
>>draw the conclusion that verification search is better. *No way*.
>
>IIRC the conclusion was that it was better than R=2 and that R=2 was better than
>R=3. Nothing more, nothing less.
>
>>>Those are very tough demands; you _can_ get an overall improvement by, say,
>>>searching 5% more nodes but in return getting far less instability in your search.
>>
>>He doesn't hash whether he has done a verification, so his implementation
>>is surely buggy. I wonder why the proofreaders didn't see that.
>
>??

Please read what I answered very clearly in a reply to Uri here; I do not
like repeating myself ten times.

I explained to Uri very clearly, elsewhere in a posting here, what the
bug is.

It is a crucial bug which many so-called experienced game researchers
quickly miss, but it is crucial because you can prove on paper that the
thing is then incorrect.
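
To spell out here what "hash whether he has done a verification" means: the
scores from the verified search and from the unverified search end up in the
same hashtable, so the entry must record which kind of search produced it, or
a probe can return a score from the wrong kind. A minimal sketch in C of what
that means (all names hypothetical; this is not Omid's code and not DIEP's):

/* Sketch of a transposition table entry that records whether its
   score came from a search where the verification was done.
   All names hypothetical. */
typedef struct {
    unsigned long long key;   /* Zobrist hash of the position      */
    short score;              /* stored search score               */
    unsigned char depth;      /* remaining depth of that search    */
    unsigned char bound;      /* exact / lower bound / upper bound */
    unsigned char verified;   /* 1 if verification was done        */
} tt_entry;

/* Without the 'verified' field this test cannot be written, so an
   unverified score can satisfy a hashtable probe at a node where the
   search still demands verification: */
int tt_usable(const tt_entry *e, int depth, int need_verification)
{
    return e->depth >= depth && (!need_verification || e->verified);
}

Without that flag the verified and unverified searches share one table
entry, and on paper you can construct a position where the wrong one
gets returned.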

>>>Such an improvement would never be discovered by this method, because more
>>>nodes is bad _by definition_.
>>
>>Not always. Time to solution counts; nothing else counts.
>>Of course there are other conditions that must be satisfied too,
>>like testing on the same machine.
>>
>>If I do not nullmove in DIEP at 1 ply of depthleft, then DIEP needs fewer
>>nodes to complete a search, but more time. The number of nodes is a trivially
>>bad way to measure things when we talk about nullmove and similar methods
>>that all put stuff into the hashtable.
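
To show concretely what is meant there, here is a minimal sketch (hypothetical
names, not actual DIEP code) where the whole difference is one depth guard on
the null-move probe:

/* Hypothetical engine interface, declarations only. */
typedef struct Position Position;
int  null_ok(const Position *p);    /* is a null move allowed here? */
void make_null(Position *p);
void unmake_null(Position *p);
int  search(Position *p, int alpha, int beta, int depth);

#define R 2  /* null-move depth reduction */

/* Returns 1 on a fail high from the null-move search. The guard
   "depth > 0" tries the null move even at 1 ply of depthleft, as in
   the text above; changing it to "depth > 1" reduces the node count
   but increases the time, because the cheap hashtable transpositions
   that those shallow null searches leave behind are lost. */
int null_move_probe(Position *p, int beta, int depth)
{
    if (depth > 0 && null_ok(p)) {
        int score;
        make_null(p);
        score = -search(p, -beta, 1 - beta, depth - 1 - R);
        unmake_null(p);
        if (score >= beta)
            return 1;
    }
    return 0;
}
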
>>
>>Nothing is as cheap as a hashtable transposition: just 500 clocks or
>>so. There are plenty of other algorithmic changes to be found which
>>reduce the number of nodes and increase the time to search. Another good
>>example of what decreased my node count *considerably*, but is just too
>>expensive to do, is ordering all moves at every position in the main
>>search based upon the evaluation of the position after the move (of
>>course also using a SEE in combination with it). So: ordering by
>>evaluation.
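
A sketch of that ordering scheme (hypothetical interface, not actual DIEP
code; the additive mix of eval and SEE below is just one simple way to
combine them):

/* Hypothetical engine interface, declarations only. */
typedef struct Position Position;
int  eval(const Position *p);          /* static eval, side to move  */
int  see(const Position *p, int move); /* static exchange evaluation */
void make_move(Position *p, int move);
void unmake_move(Position *p, int move);

typedef struct { int move; int score; } ScoredMove;

/* Score every move by the evaluation of the position after it, plus a
   SEE term; the expensive part is one make/eval/unmake per legal move
   at every interior node of the main search. */
void order_by_eval(Position *p, ScoredMove *list, int n)
{
    int i, j;
    for (i = 0; i < n; i++) {
        int s = see(p, list[i].move);   /* exchange score, before moving */
        make_move(p, list[i].move);
        s -= eval(p);                   /* eval is now from the opponent's
                                           view, so negate it for the mover */
        unmake_move(p, list[i].move);
        list[i].score = s;
    }
    /* insertion sort, best score first */
    for (i = 1; i < n; i++) {
        ScoredMove m = list[i];
        for (j = i - 1; j >= 0 && list[j].score < m.score; j--)
            list[j + 1] = list[j];
        list[j + 1] = m;
    }
}
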
>
>There are several ways to do this research.
>You are the end user; you don't care about the photoelectric effect, you just
>want your television to work.
>
>What Omid is showing here is that if you do this, or if you do that, then this
>will happen to the node count.
>
>Secondly, did it _improve_ the program?
>
>You can do a not-very-scientific-but-highly-user-orientated paper on "if you
>have a top-notch program with a low branching factor and a very good search and
>all the prunings and extensions you can think of, then this method X is good..."
>
>Tell me you don't see a problem with that kind of article?
>
>>Even if I do not do that at depthleft == 1 ply, it slows me
>>down by nearly a factor of 2, but it reduces the node count by around
>>30% or so.
>
>I suggest you do a paper on:
>"Heavy use of evaluation will lower branch factor".
>
>That's perfectly fine; the only thing is that it is a bit obvious and has been
>done by everybody. However, there might be a paper in it somewhere if you e.g.
>examine things like: are there diminishing returns on more evaluation? With all
>the knowledge in DIEP this might actually be a very relevant question for you.
>
>-S.


