Author: Omid David Tabibi
Date: 07:13:27 04/13/03
On April 13, 2003 at 10:07:27, Vincent Diepeveen wrote:

>On April 13, 2003 at 10:01:48, Omid David Tabibi wrote:
>
>>On April 13, 2003 at 07:41:00, Vincent Diepeveen wrote:
>>
>>>On April 13, 2003 at 00:14:52, Omid David Tabibi wrote:
>>>
>>>May I remind you that many programs use R=3, basically with the occasional
>>>exception of the null move at depthleft == 4.
>>>
>>>I'm doing more in qsearch than you do.
>>>
>>>Further, your verification search is using R=3 too, with a bug in the hash
>>>table management. Because of that bug (you do not store in the hash table
>>>whether a search result is based upon a verification or not), the worst-case
>>>performance of verification search is R=3.
>>
>>One of the reviewers of the article asked several questions regarding the use
>>of hash tables in conjunction with verified null-move pruning. I told him
>>exactly what I told you, i.e., the depth stored in the hash table is the depth
>>after reduction (e.g., if we were at depth 6, got a fail-high report, and
>>reduced the depth to 5, then the final stored depth in the hash table is 5).
>>That might be too conservative a method, but it guarantees that no hash table
>>bugs (of the form you mention) are encountered.
>
>That is not fixing it at all, of course. You need to record in the hash table,
>for *every* position that gets stored, whether you did or did not do a
>verification.
>
>Then, based upon this bit and whether you have already done a verification in
>the current search, you can decide whether you can give a cutoff or not.
>
>So your method does not fix the problem at all. It fixes it for a *few*
>positions only.

Your method might yield greater savings; feel free to try it. The method I
used, however, *guarantees* that no hash table bugs are encountered. In other
words, I always take a hash table result with a grain of salt. (A small sketch
contrasting the two policies is at the end of this post.)

>>However, I did not explicitly mention it in the article, as I preferred that
>>it remain as an exercise for the reader (and the reviewer also agreed that it
>>might be best to leave it to the reader to experiment). The method I used
>>might be too conservative, so other programmers might achieve even better
>>results by making more aggressive use of hash tables.
>>
>>>Best regards,
>>>Vincent
>>>
>>>>On April 12, 2003 at 22:45:19, Vincent Diepeveen wrote:
>>>>
>>>>>On April 12, 2003 at 13:20:51, Omid David Tabibi wrote:
>>>>>
>>>>>>On April 12, 2003 at 10:02:43, Uri Blass wrote:
>>>>>>
>>>>>>>On April 12, 2003 at 09:17:31, Omid David Tabibi wrote:
>>>>>>>
>>>>>>>>Quite an interesting read, Vincent.
>>>>>>>>
>>>>>>>>I'm afraid you are investing too much in the parallel speedup, though.
>>>>>>>>Any hardware speedup will be linear (at best), while algorithmic
>>>>>>>>enhancements are exponential. If you manage to search one ply deeper
>>>>>>>>by an algorithmic improvement, the gain will be more than any parallel
>>>>>>>>speedup can yield.
>>>>>>>
>>>>>>>I agree that the hardware speedup from parallel search will be linear
>>>>>>>at best, but a linear improvement is not always less than one ply.
>>>>>>
>>>>>>Diep is already parallel. I assume that he will get far less than a 4x
>>>>>>speedup for his latest work on massive parallelism. Assuming an effective
>>>>>>branching factor of 4, that speedup will equal one ply.
>>>>>
>>>>>b.f. = 2.9
>>>>
>>>>Because you are using standard R=3; but is the search reliable? That
>>>>branching factor will not be of much use if it causes Diep to find the
>>>>correct move two plies later than its competitors. When was the last time
>>>>you compared Diep's performance to other engines using test suites?
>>>>
>>>>BTW, I can get an even smaller branching factor than yours in no time. I
>>>>will just use standard R=6 :)
>>>>
>>>>>>>If the number of processors is big, then it can be more than one ply.
>>>>>>>
>>>>>>>I believe that it is possible to get a lot more than one ply by pruning
>>>>>>>and extensions, but I decided that I prefer first to improve Movei's
>>>>>>>evaluation and only later to improve Movei by better pruning and
>>>>>>>extensions, because evaluation is one of the things that is used in
>>>>>>>decisions about pruning and extensions.
>>>>>>>
>>>>>>>I believe that Movei's main problem in games against programs of similar
>>>>>>>strength is in the endgame, so I will probably do some improvement in
>>>>>>>that stage before going back to search.
>>>>>>>
>>>>>>>Uri
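For anyone who wants to experiment, here is a minimal C sketch contrasting the
two hash-table policies discussed above. All names (TTEntry,
tt_cutoff_conservative, tt_cutoff_flagged, would_verify) are mine, invented
for illustration; neither engine's actual code is shown. The conservative
policy stores only the depth that was actually searched after the verification
reduction, so a plain depth comparison can never overstate the search. The
flagged policy is one way to realize what Vincent describes: store an extra
bit saying whether the score came from an unverified null-move cutoff, and
refuse the cutoff when the current node would still have to verify.

#include <stdint.h>

typedef struct {
    uint64_t key;        /* zobrist key of the position */
    int16_t  score;
    int8_t   depth;      /* conservative policy: the depth actually searched
                            (i.e., already reduced, 5 in the example above) */
    uint8_t  unverified; /* flagged policy: 1 if the score is based on an
                            unverified null-move cutoff */
} TTEntry;

/* Conservative probe: the stored depth is never inflated, so a plain
   depth test is always safe. */
static int tt_cutoff_conservative(const TTEntry *e, int depth)
{
    return e->depth >= depth;
}

/* Flagged probe: an entry produced without verification may only stand
   in for a search that would also skip verification at this node. */
static int tt_cutoff_flagged(const TTEntry *e, int depth, int would_verify)
{
    if (e->unverified && would_verify)
        return 0;        /* a verified result is required here: no cutoff */
    return e->depth >= depth;
}

The flagged policy can recover the ply that the conservative policy gives
away, at the cost of the extra bookkeeping at every store and probe; the
conservative policy needs no extra bit and, as said above, can never return a
score for more depth than was really searched.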
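And just to make the speedup-versus-depth arithmetic above explicit: if the
tree grows by roughly a factor b per ply, a speedup of s buys about
log(s)/log(b) extra plies in the same time. The throwaway snippet below
(plain C, nothing engine-specific; compile with -lm) reproduces the numbers
quoted above: with b = 4 a 4x speedup is exactly one ply, and with b = 2.9 it
is about 1.3 plies.

#include <math.h>
#include <stdio.h>

/* Extra plies reachable in the same time, given a speedup and an
   effective branching factor: the depth gain d satisfies b^d = s. */
static double extra_plies(double speedup, double branching_factor)
{
    return log(speedup) / log(branching_factor);
}

int main(void)
{
    printf("b.f. 4.0, 4x speedup: %.2f plies\n", extra_plies(4.0, 4.0)); /* 1.00  */
    printf("b.f. 2.9, 4x speedup: %.2f plies\n", extra_plies(4.0, 2.9)); /* ~1.30 */
    return 0;
}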