Author: James T. Walker
Date: 05:30:36 12/15/99
On December 14, 1999 at 23:55:51, Dann Corbit wrote:

>On December 14, 1999 at 23:38:55, Christophe Theron wrote:
>[snip]
>>Both subjects are related in my opinion. Here is what I think. It is not the
>>word of god of course, but I have this in mind when I work, and so far it works
>>rather well for me:
>>
>>* I think that a good balanced program will be good at any time control.
>>
>>* I think that playing with long time controls gives results much closer to 50%
>>than playing with short time controls.
>>
>>These points mean that (amongst other things):
>>
>>* You must also pay attention to the results your program gets at fast time
>>controls. I have heard many people saying that blitz games mean nothing, but if
>>a program does much better at long time controls than in blitz, in my opinion
>>it is not a balanced program, and it has a structural weakness somewhere.
>>
>>* At long time controls, "outside" effects will have a bigger impact on the
>>results and must be worked out carefully: book and learning come to my mind.
>>
>>I have nothing to back up these points, except that so far it has been my
>>philosophy and at least it is not worse than other points of view. At least it
>>stands the test of real life...
>
>If the algorithms employed are very similar, the above will be mostly true. But
>there are some exceptions.
>
>0. If the fundamental algorithm employed by program A has better O(f(n))
>behavior than program B, at *some* point program A will overtake program B. For
>instance, if I have a hash table implemented as a linked list (stupid, I know),
>then as long as there are only 100 entries in both tables, there won't be a
>noticeable difference. But if there are one hundred thousand it will be
>noticeable, and if there are twenty million it will dominate.
>
>1. Similarly, a program may have some clever extensions that don't even kick in
>until we get to ten ply. So at short time controls they don't even happen, but
>at long time controls they take effect.
>
>2. The effect of time might even be parabolic. Take the example of sorting by
>multiple list insertion. Shell sort will clobber it at tiny set sizes; then, in
>a middle range of sizes, it is fast as greased lightning. But as set size
>increases, the extra memory and memory-management costs cause it to be overtaken
>by counting-sort variants. So algorithms are optimal for a certain domain.
>
>The biggest reason that programs level out at long time controls is the
>exponential nature of chess. If program A can get to ply 8 in 8 seconds, ply 9
>in 16 seconds, and ply 10 in 32 seconds... at some point the advancement to the
>next ply will take a really long time. So some inferior program may catch up
>as far as ply completion because of the nature of the time control used. E.g.
>program A gets to ply 15 in 1024 seconds (17 minutes) but takes 34 minutes to
>get to ply 16. In the meantime, program B gets to ply 15 in 30 minutes, and the
>time for the move is up at 1/2 hour per move, just before A finishes ply 16. As
>you can see, the "slop" gets bigger and bigger. Eventually, both programs will
>virtually hang -- accomplishing nothing.

Hello Dann,

This sounds good to me, but isn't it true that if you sort your moves so that you
search the most promising moves first, then a 15.5 ply search may be just as good
as a 16 ply search? That is, the moves at the end of the list are usually just
give-away moves or wild sacs. I'm obviously not a programmer, so excuse my
ignorance on the subject.

Jim Walker
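To make Dann's point 0 concrete, here is a minimal sketch of the two lookup
strategies he contrasts. The entry layout, field names, and bucket count are
made up for illustration; nothing here comes from an actual engine.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical transposition-table entry; fields are illustrative only. */
    typedef struct Entry {
        uint64_t key;          /* position hash                         */
        int      score;        /* stored evaluation                     */
        struct Entry *next;    /* used only by the linked-list variant  */
    } Entry;

    /* Variant 1: "hash table" kept as one big linked list -- O(n) per probe. */
    static Entry *list_lookup(Entry *head, uint64_t key)
    {
        for (Entry *e = head; e != NULL; e = e->next)
            if (e->key == key)
                return e;
        return NULL;
    }

    /* Variant 2: real table indexed by the low bits of the key -- O(1) per
       probe, ignoring collisions and empty-slot handling. NUM_BUCKETS would
       be a power of two in practice.                                        */
    #define NUM_BUCKETS (1u << 20)

    static Entry *table_lookup(Entry *table, uint64_t key)
    {
        Entry *e = &table[key & (NUM_BUCKETS - 1)];
        return (e->key == key) ? e : NULL;
    }

With 100 entries both probes are instant; with twenty million, the linked-list
walk touches ten million nodes on average per probe, which is the crossover
where the difference in O(f(n)) behavior starts to dominate.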
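Dann's doubling numbers can be tabulated directly. The sketch below assumes, as
his example does, an 8-second ply-8 iteration and an effective branching factor
of 2; both figures are illustrative, not measurements.

    #include <stdio.h>

    /* Time to complete each iteration of an iterative-deepening search,
       assuming (as in Dann's example) that every extra ply doubles the cost.
       Base values are illustrative, not measured.                          */
    int main(void)
    {
        double seconds = 8.0;      /* assumed cost of the ply-8 iteration  */
        const double ebf = 2.0;    /* assumed effective branching factor   */

        for (int ply = 8; ply <= 16; ++ply) {
            printf("ply %2d: %8.0f s (%.1f min)\n", ply, seconds, seconds / 60.0);
            seconds *= ebf;
        }
        /* By ply 15 an iteration costs about 1024 s (17 min) and ply 16 about
           2048 s (34 min): the gap ("slop") between consecutive completed
           plies is now larger than a whole tournament move budget.          */
        return 0;
    }

At a fixed budget of half an hour per move, every program ends up parked one
iteration short of the next ply, which is the "slop" that lets a slower program
catch up in completed plies.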
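On Jim's question: with alpha-beta, searching the most promising moves first is
exactly what makes the tail of the move list cheap, because those moves are
refuted almost immediately instead of being searched in full. The sketch below
uses a toy game with hypothetical move generation and evaluation (none of it is
real engine code) just to show where the cutoff happens.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Toy stand-ins so the sketch is self-contained: a "position" is a 64-bit
       state, moves are small integers, and the evaluation is a hash of the
       state, treated as the score from the side to move's point of view.    */
    typedef uint64_t Position;

    #define NUM_MOVES 8        /* fixed branching factor for the toy game */

    static Position make_move(Position p, int move)
    {
        /* SplitMix64-style mixing to derive a child "position". */
        uint64_t z = p + 0x9E3779B97F4A7C15ULL * (uint64_t)(move + 1);
        z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
        z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
        return z ^ (z >> 31);
    }

    static int evaluate(Position p)  /* toy static evaluation in [-100, 100] */
    {
        return (int)(p % 201) - 100;
    }

    /* Order moves so the most promising (by a cheap one-ply guess) come first.
       With good ordering, alpha-beta refutes the remaining moves quickly.    */
    static Position order_parent;

    static int cmp_moves(const void *a, const void *b)
    {
        int sa = -evaluate(make_move(order_parent, *(const int *)a));
        int sb = -evaluate(make_move(order_parent, *(const int *)b));
        return sb - sa;        /* higher guessed score first */
    }

    static long nodes = 0;

    static int alphabeta(Position p, int depth, int alpha, int beta)
    {
        ++nodes;
        if (depth == 0)
            return evaluate(p);

        int moves[NUM_MOVES];
        for (int i = 0; i < NUM_MOVES; ++i)
            moves[i] = i;
        order_parent = p;
        qsort(moves, NUM_MOVES, sizeof moves[0], cmp_moves);

        for (int i = 0; i < NUM_MOVES; ++i) {
            int score = -alphabeta(make_move(p, moves[i]), depth - 1, -beta, -alpha);
            if (score >= beta)
                return beta;   /* cutoff: the rest of the moves are never searched */
            if (score > alpha)
                alpha = score;
        }
        return alpha;
    }

    int main(void)
    {
        int score = alphabeta(1234567ULL, 6, -1000, 1000);
        printf("score %d, nodes searched %ld\n", score, nodes);
        return 0;
    }

Once an early, well-ordered move is strong enough to reach beta, the remaining
moves at that node are never searched at all, so the give-away moves and wild
sacs at the end of the list add very little to the cost of the iteration.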