Author: Vincent Diepeveen
Date: 14:36:44 07/19/99
On July 19, 1999 at 17:00:32, Dave Gomboc wrote:

>On July 19, 1999 at 16:06:49, Vincent Diepeveen wrote:
>
>>On July 19, 1999 at 13:41:37, Dave Gomboc wrote:
>>
>>>On July 19, 1999 at 06:14:45, Vincent Diepeveen wrote:
>>>
>>>>On July 19, 1999 at 00:14:28, Dave Gomboc wrote:
>>>>
>>>>>On July 18, 1999 at 21:11:20, Vincent Diepeveen wrote:
>>>>>
>>>[snip]
>>>
>>>>>>That's not true, as in the worst case your window is a pawn off.
>>>>>>So the first 5 re-searches search space which is completely useless,
>>>>>>whereas in PVS the search overhead only depends upon move ordering.
>>>>>
>>>>>The more useless the search, the quicker it terminates. If you're a pawn
>>>>>off, a cutoff should be extremely quick. Searches that are close to the
>>>>>proper value take longer, precisely because they are searching space that
>>>>>is not "completely useless", as you put it. And that effort will be caught
>>>>>in the hash table, so you don't have to search it twice.
>>>>
>>>>That's not entirely true.
>>>>
>>>>a) it's overhead which PVS doesn't search
>>>
>>>mtd(f) searches minimal trees to prove what it needs to prove. PVS also has
>>>a lot of overhead that mtd(f) doesn't search.
>>
>>You really eat that?
>
>Yeah, I do. Of course, pointing out that Tony _invented_ PVS, that Jonathan
>worked a lot with Aske on mtd(f), and that both of them are my professors
>wouldn't affect your opinion of what is reasonable for me to eat, would it?
>
>I'm not trying to say that I know everything because of these people or
>something. What I am saying is that I am learning from people who really know
>what they are talking about.
>So when I'm told in my Heuristic Search class that over the long run, mtd(f)
>should do better than PVS, and our class discusses the various issues involved
>over a period of time, and several programs developed at the U of A back up
>the claim of the professor, then I think I have a pretty good reason to
>believe that "PVS also has a lot of overhead that mtd(f) doesn't search."
>
>Compare that to, say, some of your claims. You claim "improvements to
>cray-blitz parallel search", even though Diep's parallel search is completely
>buggy, by your own admission. So when you say that mtd(f) sucks, it's no
>wonder that I'm going to take it with a grain or three of salt.
>
>>>>b) you do not get it from the hashtable, as you search with a far lower
>>>>   bound the next iteration, and then another lower re-search, where
>>>>   nullmove has the habit of giving back scores for alpha and beta which
>>>>   are simply equal to alpha and beta. So you do need to re-search and
>>>>   can't use the hashtable cheaply.
>>
>>>When the score doesn't move far, a ton of the re-search is in the hash
>>>table. When it does, the first search didn't take very long anyway.
>>
>>Please reread what I wrote. Your assumption is not true. The score is namely
>>far off.
>
>I read what you wrote. You still haven't quantified the performance loss of
>mtd(f) vs. PVS for Diep. I think that if that loss is huge, something in your
>software is seriously broken (i.e. not the algorithm itself, which works
>fine.)
>
>Dave "quantified"

Write yourself a decent program and see:

- MTD needs more re-searches than PVS (fact).
- MTD only has one bound, which means that something is either better than it
  or worse than it, which means that the bounds in the hashtable are usually
  not very reliable, so they ask for a re-search which is not necessary in PVS
  (fact).
- MTD's worst case is obviously in positions where you get a fail low, and
  where the score drops a lot even *though* it might be temporary. (fact)
- MTD only works well for programs with a simple evaluation that doesn't have
  many terms that produce big scores (my claim).

That worst-case behaviour is a big pain for me to swallow, apart from the fact
that on average it doesn't need fewer nodes. I have seen cases where it needed
up to 6 times as much.

Note that when I test something, I usually test it at analysis level, so not
at blitz level, nor at bullet level. I don't care about blitz or bullet
performance. Tests that are done using only searches of 6/7 ply are kind of
laughable, I think. Sure, any change to a search algorithm can cause me to use
15120 nodes instead of 15121 nodes at 6 ply. I'm not interested in that. I'm
interested in how it does at bigger depths than that.

I'm also interested in: *when* does MTD act differently from PVS, for example?
MTD works, for example, a little less badly when not using nullmove. I guess
because it then profits more from the hash, which gives more cutoffs, whereas
nullmove would have ensured that the stored hash scores are near the bound.

Cilkchess at 64 processors in the Dutch Open searched 10-12 ply. So worst case
10 ply. Occam (Don's own program, at 280k nodes a second) already gets 10 ply
in blitz on a PII266! Talking about differences in search depth!

Greetings,
Vincent
Last modified: Thu, 15 Apr 21 08:11:13 -0700
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.