Author: Robert Hyatt
Date: 20:32:01 06/16/02
On June 16, 2002 at 13:19:21, Christophe Theron wrote:

>On June 15, 2002 at 23:59:45, Robert Hyatt wrote:
>
>>On June 15, 2002 at 11:36:29, Christophe Theron wrote:
>>
>>>On June 15, 2002 at 00:20:48, Robert Hyatt wrote:
>>>
>>>>On June 13, 2002 at 23:58:46, Christophe Theron wrote:
>>>>
>>>>>On June 13, 2002 at 09:13:43, Robert Henry Durrett wrote:
>>>>>
>>>>>>On June 13, 2002 at 06:00:13, Jorge Pichard wrote:
>>>>>>
>>>>>>>Hiarcs 8 was NOT made for slow computers such as an AMD 450 MHz, as the
>>>>>>>SSDF decided to test it against Nimzo 8.
>>>>>>
>>>>>>What, exactly, causes this problem?
>>>>>>
>>>>>>Do other chess engines have this same problem too?
>>>>>>
>>>>>>Bob D.
>>>>>
>>>>>The problem is that the problem described above does not exist.
>>>>>
>>>>>    Christophe
>>>>
>>>>Here we disagree significantly.
>>>>
>>>>One trivial case: take a program that uses null move with R=2 or 3, and
>>>>run it on a very slow machine, then on a very fast machine. The slow
>>>>machine will make significant blunders because the R=2 or R=3 depth
>>>>reduction will be a killer. But as the depth increases, the tactical
>>>>oversights go away, and the null-move program benefits more from the
>>>>extra speed than a non-null-move program would.
>>>>
>>>>I watched this happen personally. I almost gave up on R=2 for that very
>>>>reason, until the P6/200 came along and bumped the depth up enough that
>>>>R=2 or R=3 didn't cause tactical blunders nearly as often.
>>>>
>>>>That is but _one_ example. Other obvious cases come to mind. At a
>>>>specific depth, you need some tactical evaluation to avoid blunders.
>>>>I.e., if you can't search 2 plies, you need to evaluate forks statically.
>>>>But as the depth increases, the search handles this, and doing it in the
>>>>eval simply slows the program down. But not doing it in the eval will
>>>>kill it on slow hardware.
>>>>
>>>>The list goes on and on...
>>>
>>>I agree with you on the principle (some changes are needed for a program
>>>to perform not too badly at shallow search depths), but not on the
>>>examples you give.
>>>
>>>I think you are basing your remark about null move on impressions. It is
>>>true that you can notice dramatic tactical blunders because of it, but
>>>you do not take into account the number of games where it helped more
>>>than it hurt.
>>
>>Null move helps, of course. But R=2 or R=3 hurt at shallow depths. And
>>this wasn't an "impression"; it was actual observation based on lots of
>>games played on ICC back when I was using an original Pentium/133... I
>>punted on R=2 for a good while, until hardware made it more acceptable...
>>
>>>You would need to play long matches with some statistical reliability to
>>>be sure (and I have done it).
>>
>>So did I back then, both on ICC against other computers and on my machine
>>using xboard to run a long match... R=1 beat R=2 consistently (for me)
>>back then...
>>
>>>Also, null move as it is done in Crafty has some big weaknesses because
>>>of your QSearch. Your philosophy is that the QSearch should be as fast
>>>and simple as possible, and that amplifies the null-move weaknesses.
>>
>>Of course. But that is beside the point here. It definitely was weaker on
>>slower hardware, which was what was being discussed...
>
>I think that is very much to the point.
>
>Your results showed that null move was bad because you do not have the
>right kind of QSearch.
>
>My results were different because my QSearch helped.
>
>>>About the "fork" example: I would not treat it by evaluation. I have
>>>tried it and it does not work. The solution to this problem must be found
>>>in search or QSearch improvements.
>>
>>Not if you can only search 1 ply full-width. That is the problem. A bit
>>deeper and you don't need that static evaluation term. The same happens
>>with other ideas as the search gets better...
>
>If you can't get deeper than 1 ply, I agree with you.
>
>But if you can reach 3 plies (which is almost always possible even on slow
>hardware), then you can stop at 1 or 2 plies in some lines and go deeper
>(5, 6 or even 7) in some other lines, and take care of forks and other
>tactics that way.

However, if you can only get to three plies, you miss my "threat" to fork
you, because you need another two plies to see my forking move and then my
winning of material. The further you push this from the root, the less
likely you are to see the problem, and can therefore claim "search has
solved it." But it does take depth...

>On the other hand, I agree that some evaluation terms can be factored out
>slowly as search depth increases.
>
>It's just that I believe that tactics should not be solved by evaluation,
>in general.

Most tactics, I agree. Passed pawns running are a tactical idea, however,
that _must_ be done in the eval, or serious trouble will show up.

>That being said, I must add that I solve some tactics by evaluation, but
>only in the endgame (passed pawn evaluation).
>
>>>I know it very well because, of all the programmers out there, I must be
>>>the only one to have an up-to-date program targeted to run on both very
>>>slow computers and very fast ones (Chess Tiger and Chess Tiger for Palm
>>>share exactly the same engine code).
>>
>>Nothing wrong with the idea. But, in my opinion, the speed of the search
>>is one parameter of the engine. It seems unlikely that you can dynamically
>>change one parameter alone and not require adjustments elsewhere. At least
>>for an "optimal setting"...
>
>That's right.
>
>I have such adjustments in Chess Tiger to make it able to deal with a wide
>range of search depths.
>
>    Christophe