Author: martin fierz
Date: 20:05:56 09/10/02
On September 10, 2002 at 20:50:01, Vincent Diepeveen wrote:

>On September 10, 2002 at 18:23:51, martin fierz wrote:
>
>i always overwrite too, but if you can choose from 8 entries
>that's a major difference with 2.

i just checked my code and noticed i had actually turned off that it always
replaced. hmm. i ran some tests, with N=2 or N=8 probes, and the always
replace scheme was better than the don't replace scheme, by about 9% (N=2)
and 12% (N=8). i wonder why i turned it off :-(

aloha
  martin
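A minimal sketch of the two schemes being compared here, in C; the entry
layout and all names are illustrative, not taken from either program. The
only difference between the schemes is whether the store bails out when the
new result is shallower than everything already in the bucket:

    #include <stdint.h>

    #define BUCKET_SIZE 8            /* N probes: 2 in martin's tests, 8 in Vincent's */

    typedef struct {
        uint64_t key;                /* position hash; 0 means "empty slot" */
        int16_t  value;
        int8_t   depth;              /* draft (ply left) of the stored search */
        uint8_t  flags;              /* exact score, lower bound or upper bound */
    } HashEntry;

    /* Store into an N-slot bucket: reuse an empty or matching slot if there
       is one, otherwise pick the shallowest slot as the victim. */
    void hash_store(HashEntry *bucket, uint64_t key,
                    int16_t value, int8_t depth, uint8_t flags)
    {
        int victim = 0;
        for (int i = 0; i < BUCKET_SIZE; i++) {
            if (bucket[i].key == key || bucket[i].key == 0) {
                victim = i;          /* same position or empty slot: take it */
                break;
            }
            if (bucket[i].depth < bucket[victim].depth)
                victim = i;          /* remember the shallowest slot so far */
        }
        /* The "don't replace" variant would refuse to evict a deeper entry:
           if (bucket[victim].key != 0 && bucket[victim].key != key
               && bucket[victim].depth >= depth) return;
           "always replace" just falls through and overwrites. */
        bucket[victim].key   = key;
        bucket[victim].value = value;
        bucket[victim].depth = depth;
        bucket[victim].flags = flags;
    }

The 9%/12% numbers above say the always-replace fall-through won the test:
occasionally losing a deep entry apparently costs less than keeping stale
ones around.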
>
>>On September 10, 2002 at 17:49:52, Vincent Diepeveen wrote:
>>
>>>On September 10, 2002 at 17:45:55, Vincent Diepeveen wrote:
>>>
>>>>On September 10, 2002 at 17:36:49, martin fierz wrote:
>>>>
>>>>>On September 10, 2002 at 17:16:00, Vincent Diepeveen wrote:
>>>>>
>>>>>>On September 10, 2002 at 16:06:51, martin fierz wrote:
>>>>>>
>>>>>>I need to refer to the extensive proof i wrote down at CCC
>>>>>>which refuted the claim that you overwrite the mainline.
>>>>>>
>>>>>>For a search of 20 ply with a loading factor which is pretty
>>>>>>high, it is still true that with near certainty you have at
>>>>>>least a 19 ply line (assuming no extensions; otherwise
>>>>>>the line is longer).
>>>>>>
>>>>>>That's true for bounds too of course.
>>>>>>
>>>>>>The chance you overwrite an entry with 1 ply left is
>>>>>>considerably smaller than the chance you overwrite something
>>>>>>with 0 ply left.
>>>>>>
>>>>>>In fact i do 8 probes.
>>>>>>
>>>>>>What loading factor are you talking about here? Then fill in
>>>>>>the chances.
>>>>>
>>>>>i'm talking about doing a search of ~10N nodes for a hashtable with N entries.
>>>>
>>>>First of all i don't know the level in checkers, but in international
>>>>checkers it's about 60 moves in 1 hour. Or 90 seconds a move
>>>>initially (of course exchanges are for free).
>>>>
>>>>I don't see how i get a loading factor of 10 there at all, which is
>>>>a *huge* loading factor. With 12 bytes an entry at my dual K7
>>>>i have about 35 million entries. Well, i don't get 350 million nodes
>>>>in a search of 90 seconds at all.
>>>>
>>>>Anyway, to use your doom scenario:
>>>>
>>>>For 8 probes it means that even at 0 ply left there is a chance it
>>>>isn't overwritten. But how many nodes with a depthleft of 1 ply do
>>>>you have? Right, way fewer - and we're not counting qsearch here,
>>>>obviously.
>>>>
>>>>How many nodes with depthleft 2 ply do you have? Right, even fewer
>>>>than at 1 ply. In fact fewer by roughly your branching factor.
>>>>
>>>>Get the idea of the math?
>>>>For a fully written out probability see my writings elsewhere.
>>>
>>>I hope you realize this is only if you do 1 probe. For 8 probes
>>>there is an additional thing that makes the chance smaller:
>>>
>>>at depthleft==1,
>>>
>>>suppose your hashtable is filled with about 50% searches of 1 ply
>>>left (which would be an insanely full hashtable already).
>>>
>>>What is the chance i overwrite my search result here with another
>>>1-ply-left entry?
>>>
>>>Right, that's another (1/2)^8.
>>>
>>>So
>>>  a) the chance that bigger depths get overwritten is really small
>>>  b) the chance it is still in the hashtable is huge.
>>>
>>>A loading factor of 10 is insanely high.
>>>
>>>Do you always use very small hashtables, or play at 10 minutes a move?
>>>
>>>If so, why?
>>
>>thanks for the explanation. i understand what you mean now.
>>
>>the reason i use smaller hashtables than you think i should is that in checkers
>>you have an 8-piece database, which is 4GB in compressed form. you want to probe
>>that in ram, so you load most of your ram with that database - you just get more
>>out of it than from more hashtable.
>>
>>i only use 2 probes in the HT instead of 8 as you do. i tested this a lot and it
>>gave better results speedwise on normal time controls. for longer time controls
>>it probably would be a good idea to adjust that...
>>
>>my hash replacement scheme seems to be different than yours. when i do an N-fold
>>probe of my table, i ALWAYS overwrite the least important of the N stored
>>values, even if i replace it with a less valuable entry. i tested this and it
>>worked better than not storing if you don't find a less valuable entry. i found
>>that weird, but that was what my test said... obviously, under this scheme i can
>>overwrite pv nodes easily under the high load conditions described. maybe you
>>are right and my hashtable management sucks. maybe my test was broken. let me
>>repeat it... i have a compile switch for these two replacement schemes :-)
>>
>>aloha
>> martin
>>
>>>>>say again - why wouldn't i overwrite mainline nodes under these circumstances?
>>>>>
>>>>>aloha
>>>>> martin
>>>>>
>>>>>>>On September 10, 2002 at 15:41:42, Vincent Diepeveen wrote:
>>>>>>>
>>>>>>>>On September 10, 2002 at 15:19:21, martin fierz wrote:
>>>>>>>>
>>>>>>>>>On September 10, 2002 at 14:45:27, Omid David wrote:
>>>>>>>>>
>>>>>>>>>>On September 10, 2002 at 14:30:56, martin fierz wrote:
>>>>>>>>>>
>>>>>>>>>>>On September 10, 2002 at 09:26:14, Eli Liang wrote:
>>>>>>>>>>>
>>>>>>>>>>>>A couple of chess programming questions:
>>>>>>>>>>>
>>>>>>>>>>>hmm, i only wrote a checkers program, but here's my take:
>>>>>>>>>>>
>>>>>>>>>>>>(1) Are there any uses for ProbCut and/or Multi-ProbCut in chess positions where
>>>>>>>>>>>>the variance of leaf-nodes is low?
>>>>>>>>>>>
>>>>>>>>>>>i've tried multi-probcut and it works well in checkers. i never tuned it as much
>>>>>>>>>>>as my own pruning algorithm, and it doesn't perform quite as well - but it is BY
>>>>>>>>>>>FAR better than no pruning. i'll be trying to tune it in the near future. for
>>>>>>>>>>>games where the eval doesn't swing wildly, MPC is a fantastic algorithm.
>>>>>>>>>>>
>>>>>>>>>>>>(3) Reading Aske Plaat's search & re-search paper, it really seems like mtd(f)
>>>>>>>>>>>>is something of a magic bullet. But I note it seems that more programs don't
>>>>>>>>>>>>use it than do (for example Crafty). What is wrong with mtd(f) which Plaat
>>>>>>>>>>>>doesn't say?
>>>>>>>>>>>
>>>>>>>>>>>i'm using MTD. i tried windowed search, PVS and MTD. in my tests, in long engine
>>>>>>>>>>>matches, MTD performed marginally (no statistical significance...) better than
>>>>>>>>>>>PVS. it typically searched a low 1-digit % fewer nodes for a given depth than
>>>>>>>>>>>PVS.
>>>>>>>>>>>i don't know how to get a PV out of MTD. in normal searches, a pv node is where
>>>>>>>>>>>the value is > alpha but < beta. in MTD, you never get this condition.
>>>>>>>>>>>retrieving a PV from the hashtable is possible, but in all probability, you will
>>>>>>>>>>>not get the full PV. which is really bad for debugging if you want to know what
>>>>>>>>>>>the program was thinking at the time... i once asked here how to get a pv from
>>>>>>>>>>>MTD but got no answer - and if you can't get the pv, then that is a major
>>>>>>>>>>>drawback.
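The driver loop under discussion is short enough to give here. This is a
sketch following Plaat's search & re-search paper, not code from any of the
programs above; the Position type, the score bound INF and alphabeta() are
all assumed, with alphabeta() standing in for a hashtable-backed search:

    typedef struct Position Position;   /* opaque, engine-specific */

    #define INF 30000                   /* assumed score bound */

    /* assumed interface: a transposition-table-backed alpha-beta search */
    extern int alphabeta(Position *pos, int alpha, int beta, int depth);

    /* MTD(f): converge on the minimax value using only zero-window searches */
    int mtdf(Position *pos, int first_guess, int depth)
    {
        int g = first_guess;
        int lower = -INF, upper = +INF;

        while (lower < upper) {
            int beta = (g == lower) ? g + 1 : g;
            g = alphabeta(pos, beta - 1, beta, depth);  /* null-window probe */
            if (g < beta)
                upper = g;     /* fail low: g is a new upper bound */
            else
                lower = g;     /* fail high: g is a new lower bound */
        }
        return g;              /* lower == upper: the minimax value */
    }

The zero-width window is exactly why the pv node test above never fires: no
node can finish with alpha < value < beta when beta is always alpha+1, so
the hashtable ends up holding only bounds.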
>>>>>>>>>>
>>>>>>>>>>I haven't tried getting the PV out of MTD(f), but just a thought: why should
>>>>>>>>>>there be any problem in getting the PV out of the hash table? Play the first
>>>>>>>>>>move, update the position, get the next best move from the hash table, and
>>>>>>>>>>so on... ?!
>>>>>>>>>
>>>>>>>>>there's no problem with that except that on any reasonably deep search, you will
>>>>>>>>>not have been able to store all pv nodes in the hashtable. so you end up with a
>>>>>>>>>search which says it was 23 ply deep and have e.g. 15 pv moves. if you just want
>>>>>>>>>to display it for the user, that's fine. but if your program plays a bad move,
>>>>>>>>
>>>>>>>>but then your hashtable management sucks, sorry to say so.
>>>>>>>
>>>>>>>but you don't use MTD! which means you *know* when you have a pv node, because
>>>>>>>"pvnode <=> alpha<value<beta". and then you can make sure it doesn't get
>>>>>>>overwritten in the hashtable. if you use MTD, you don't have this information -
>>>>>>>all your hashtable entries are either lower or upper bounds... so how do i know
>>>>>>>which ones i have to keep? i'd be really glad to learn how to do this :-)
>>>>>>>so if you can tell me how to do it instead of saying i suck (well, possible...),
>>>>>>>i'd love to try!
>>>>>>>
>>>>>>>aloha
>>>>>>> martin
>>>>>>>
>>>>>>>>In Napoleon i also only get mainlines out of the hashtable (with PVS).
>>>>>>>>wasting system time in the search to update all kinds of arrays for the
>>>>>>>>mainline is a waste of time, and in the next iteration you get
>>>>>>>>true bounds, so you can't keep the mainline in arrays anyway (mtd
>>>>>>>>is different here). finding a win in 50 ply is no problem to display...
>>>>>>>>
>>>>>>>>>and you want to know what line it was considering as being best, e.g. because
>>>>>>>>>you want to know if your static eval is bad in the final node of the pv, you
>>>>>>>>>can't do it. IMO debugging your program and finding eval problems like this is
>>>>>>>>>MUCH more important than something like 5% more speed.
>>>>>>>>>
>>>>>>>>>aloha
>>>>>>>>> martin
>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>(6) Has anyone found any real "practical" benefits to fractional-ply extensions?
>>>>>>>>>>>
>>>>>>>>>>>yes. i tried recapture extensions of different depths, and half a ply gave the
>>>>>>>>>>>best result. don't ask me why, it's just an observation.
>>>>>>>>>>>
>>>>>>>>>>>aloha
>>>>>>>>>>> martin
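Omid's walk-the-table idea above, as a sketch; the probe and move-making
interface is assumed, not taken from any of the programs in this thread.
martin's objection shows up as the early break: the loop stops at the first
missing entry, which is how a nominal 23-ply search can end up showing only
15 pv moves.

    #include <stdio.h>

    #define MAX_PV 64                 /* also guards against repetition cycles */

    typedef struct Position Position; /* opaque, engine-specific */
    typedef int Move;                 /* placeholder move encoding */

    /* assumed engine interface */
    extern int  hash_probe_move(const Position *pos, Move *best); /* 1 if found */
    extern void make_move(Position *pos, Move m);
    extern void print_move(Move m);

    /* Recover the pv by replaying best moves out of the hashtable. */
    void print_pv_from_hash(Position *pos, int max_len)
    {
        Move m;
        for (int ply = 0; ply < max_len && ply < MAX_PV; ply++) {
            if (!hash_probe_move(pos, &m))
                break;               /* entry overwritten: pv truncated here */
            print_move(m);
            make_move(pos, m);       /* work on a scratch copy of the position */
        }
        printf("\n");
    }

With PVS one can shield exact-score entries (alpha < value < beta) from
replacement so this walk reaches full depth; under MTD(f) every entry is a
bound, which is exactly martin's complaint.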
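On question (6): fractional extensions like the half-ply recapture extension
are usually implemented by counting depth in units finer than a ply. A
sketch, with the unit size, the names and the recapture test all
illustrative:

    #define PLY            4          /* internal depth units per ply (assumed) */
    #define RECAPTURE_EXT  (PLY / 2)  /* the half-ply value martin found best */

    typedef struct Position Position; /* opaque, engine-specific */
    typedef int Move;

    extern int is_recapture(const Position *pos, Move m);  /* assumed predicate */

    /* Depth left for the child node: one ply less, plus any extensions. */
    int child_depth(const Position *pos, Move m, int depth)
    {
        int ext = 0;
        if (is_recapture(pos, m))
            ext += RECAPTURE_EXT;     /* other extensions add their fractions here */
        return depth - PLY + ext;
    }

A single half-ply extension does nothing by itself; only when two of them
accumulate along a line does the search go a whole ply deeper, which damps
the extension compared to granting a full ply per recapture.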