Author: Bruce Moreland
Date: 15:43:26 12/18/02
On December 18, 2002 at 13:13:09, Robert Hyatt wrote:

>Actually I found it "non-conclusive" as I reported. It helped in some places,
>hurt in others, and the end-of-term stuff here (and then the SMT stuff on this
>new machine) side-tracked me for a while... I still have plans to play with it
>further.

So you took an initial stab at repeating this and failed, if I can read between those lines.

I implemented this in Gerbil last night and ran it. I found that it was inferior to both R=2 and R=3 at every one-second interval, with ECM, between 1 and 20 seconds. Meaning that it never produces more solutions in a given number of seconds. General R=3 is also never better than R=2, given this testing methodology, in Gerbil.

It is possible that I implemented it wrongly. There are a couple of things that I don't understand and had to guess about:

1) When the null-move search comes back with a fail high, and verify is true, I do a regular search with reduced depth. If this search fails high, I cut off as normal. If it does not fail high, I have to re-search at the original depth. What I don't understand is what I do if the initial reduced-depth search modified alpha. I am assuming that I put alpha back the way it was and start over.

2) I don't know whether this implementation allows two consecutive null moves. Specifically, I don't know what "null_ok()" does. I am assuming that if I already don't allow two null moves in a row, I can continue to not allow two null moves in a row.

3) I am assuming that if the search cuts off while doing the reduced-depth searches, the depth recorded in the hash table should be the original depth, not the reduced depth.

I can't find any bugs in my implementation, if my assumptions were correct.

bruce
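
P.S. For concreteness, here is a rough sketch of the control flow I described above, with my guesses from points 1 through 3 marked in comments. The helper names (MakeNullMove, GenerateMoves, StoreHash, and so on) and the Search signature are placeholders made up for this post, not Gerbil's actual routines, and R is 3 as in the paper. Mate and stalemate detection are omitted.

#define R 3

/* Placeholder routines -- not Gerbil's real ones. */
extern int  MakeNullMove(void);      /* returns 0 if a null move is not allowed here */
extern void UnmakeNullMove(void);
extern int  GenerateMoves(int moves[]);
extern int  MakeMove(int move);      /* returns 0 if the move is illegal */
extern void UnmakeMove(int move);
extern int  Evaluate(void);
extern void StoreHash(int depth, int value);

int Search(int alpha, int beta, int depth, int verify)
{
    int original_alpha = alpha;   /* saved for point 1 */
    int original_depth = depth;
    int fail_high = 0;
    int value, i, nmoves, moves[256];

    if (depth <= 0)
        return Evaluate();        /* quiescence would go here */

    /* Point 2: I assume MakeNullMove() already refuses two nulls in a row. */
    if (MakeNullMove()) {
        value = -Search(-beta, -beta + 1, depth - R - 1, verify);
        UnmakeNullMove();
        if (value >= beta) {
            if (!verify)
                return value;     /* ordinary null-move cutoff */
            /* Don't trust the null move yet: search this node normally
             * at reduced depth, with verification off in the subtree. */
            depth--;
            verify = 0;
            fail_high = 1;
        }
    }

re_search:
    nmoves = GenerateMoves(moves);
    for (i = 0; i < nmoves; i++) {
        if (!MakeMove(moves[i]))
            continue;
        value = -Search(-beta, -alpha, depth - 1, verify);
        UnmakeMove(moves[i]);
        if (value >= beta) {
            StoreHash(original_depth, value);  /* point 3: original depth */
            return value;
        }
        if (value > alpha)
            alpha = value;
    }

    if (fail_high && alpha < beta) {
        /* The reduced-depth verification did not fail high, so the null-move
         * cutoff was unsafe.  Point 1: put alpha back the way it was and
         * start over at the original depth. */
        alpha = original_alpha;
        depth = original_depth;
        verify = 1;
        fail_high = 0;
        goto re_search;
    }

    StoreHash(original_depth, alpha);
    return alpha;
}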