Author: Quenton Fyfe
Date: 06:32:50 11/22/98
On November 21, 1998 at 14:53:29, Richard A. Fowell (fowell@netcom.com) wrote:

>On November 20, 1998 at 23:42:35, Dann Corbit wrote:
>
>>On November 20, 1998 at 04:15:14, Mike Stoker wrote:
>>
>>>Hi Dann,
>>>
>>>I just heard that S.E.T.I. (Search for Extra-Terrestrial Intelligence) has
>>>produced a screen saver which ploughs through their data and produces
>>>results whilst the machine is otherwise inactive. They intend to
>>>distribute this screen saver to a wide community.
>>>
>>>I thought this might be a good way to get faster results on your project.
>>>It could possibly be expanded to automatically request further data to
>>>process and send results to you as they were obtained, removing the need
>>>for human intervention.
>>>
>>>What thinks you?
>>
>>I read about this and thought about it, but I don't know how I would
>>implement it. First of all, it takes a long piece of uninterrupted time
>>just to analyze one position. The machine will be much less responsive
>>during this interval, possibly causing annoyance. Also, I don't know how I
>>could coordinate the workload. I have found, further, that end users are
>>_very_ apprehensive about downloading an executable. I can cajole people
>>into downloading Crafty because it is well known and the source is
>>available. People are very fearful of computer viruses, and for good
>>reason. Since I only send data and people can use their own programs,
>>there is less of a problem in that regard.
>>
>>However, I am sure that all of the problems and obstacles could be worked
>>out for this approach. Does anyone want to do it for me? I'm pretty sure I
>>don't have the time. If we could have a tool like this, it would be yet
>>another avenue for gathering data.
>>
>>Any tools, utilities, or ideas that anyone has to contribute I would be
>>more than happy to entertain.
>
>Here's a bunch of observations/ideas.
>It seems to me that suggestion 4A could be implemented immediately with no
>changes to user software - only your book-keeping structure might be
>impacted.
>
>Ideally, you'd like the software to soak up almost every free cycle,
>without noticeably interfering with the computer's responsiveness to the
>user, and without sitting idle because it has run out of problems to work
>on.
>
>Here's a list of desirable features - note that they don't all have to be
>implemented to be helpful, and they don't even have to be in the chess
>program itself.
>
>(1) A program that can run in the background, while the user uses the
>    machine in the foreground. Several on the Mac will do this; I presume
>    some on other platforms do too.
>
>(2) A feature where you can set analysis time to the total MACHINE time the
>    chess program gets, rather than the total ELAPSED time. I think Crafty
>    has this feature - other programs may, but I haven't seen it. The
>    point is, you want problems that are worked on while other programs
>    are using most of the cycles to be as thoroughly analyzed as those
>    that aren't.
>
>    *** Immediately implementable suggestion here ***
>    Most programs do have an "analyze to N ply" feature. There are many
>    advantages to switching your analysis yardstick from (normalized)
>    thinking time to "analyze to N ply (plus extensions)", and also a few
>    disadvantages - here they all are.
>
>    Pros:
>    - In the case where all users are using the same program, the same
>      settings (e.g., hash table) and randomization turned off, you should
>      always get the same results. Benefits:
>      - any analysis questioned can be checked easily by duplicating it
>      - you don't need to "normalize" problems between machines or settings
>      - people can run the analysis in the background without fear of
>        lowering the quality of results.
>    - !!! No "wasted" computer time. Right now, much of the machine time
>      is "wasted", because the computer is in the middle of exploring
>      a branch when it is timed out.
>      So the last N minutes are wasted - you have the same conclusion that
>      you would have if you had timed out N minutes earlier. By analyzing
>      to a fixed ply (plus extensions), the result reported by the computer
>      will incorporate the information from all the work it has done.
>
>    Cons:
>    - you will have to normalize between computer programs to equalize
>      between analysis from "fast searchers" and "deep thinkers".
>    - the appropriate number of ply for a given quality of analysis will
>      likely vary between positions - in a simple position, more ply can
>      be searched in a given time.
>    - it is harder to plan batch analyses - with N problems at M seconds
>      each, you know exactly when the batch will be completed. With N
>      problems at M ply, you won't.
>    - you already have a large database analyzed to the "normalized time"
>      metric.
>
>(3) Something to give the machine less time when the user is present, and
>    more time when the user is absent. This amounts to the sort of
>    "activity detector" that screensavers use, plus the ability to vary
>    the fraction of the processor taken by the chess program - like "nice"
>    in Unix.
>
>    *** Note for chess software programmers ***
>    This is a feature that can be built into your chess software (together
>    with a "machine time" vs. "elapsed time" selection on the time
>    controls) that would make your software more attractive to any user
>    who wants to use it for position/game analysis. That way, after the
>    typical 5-round U.S. Swiss, the games can be loaded for post-mortem
>    analysis and run continuously until done, and the user can use the
>    computer in the meantime without worrying that the responsiveness
>    will be low, or that the quality of the chess analysis will be
>    compromised.
>
>(4) Some convenient C.A.P. procedure for checking out "N blocks" of
>    problems without human intervention.
>    When I was on the project (and I should be back before Christmas) I
>    had lots of machine time go to waste because of dead time when:
>
>    - the computer had analyzed the problem set, but I was asleep/at work
>      and wasn't there to send in the results and get a new problem set
>
>    - I had sent in my results, and was waiting to receive a new problem
>      set.
>
>    Two suggestions:
>
>    (A) let a user have two uncompleted blocks checked out at a time.
>        That way, they can check out blocks A and B, and after completing
>        block A, they can turn it in and have the computer work on block
>        B while waiting for you to provide block C.
>
>    (B) provide a way for the user to get a new block as soon as they
>        turn in a completed one - like an ftp site or a web interface
>        (compatible with my text-only Lynx browser, please).
>
>Richard A. Fowell

Some good points here, Richard - some of which I (and possibly some other
CAPers) are already doing:

1) I'm running Crafty on NT, and can therefore set it to low priority - it
   runs without any noticeable impact on my foreground apps.

2) I have Crafty set to use CPU time instead of elapsed time (thanks Bob!)
   with the "time cpu" command - thus ensuring that other stuff I'm doing
   in the foreground doesn't degrade the quality of the analysis I'm
   submitting.

That leaves my personal wish-list being some method for automatically
retrieving new blocks to work on, and sending back completed ones, like
"Distributed Net" do. I tend to just run the stuff at night (apart from the
big weekend batch), because if I need to re-boot my machine, much of the
work is lost, and there's no easy way to re-start (without a lot of messing
about with text editors). Of course, you can always return a partial batch -
but that's just more wasted CPU cycles...

I reckon I'm currently contributing only about 30% of the CPU cycles that I
could contribute with a more automatic method of getting blocks and
returning results.
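For what it's worth, Richard's suggestion (4A) - each contributor holding
two uncompleted blocks at once - needs almost no server-side state. Here's
a rough sketch of the book-keeping in Python; the class and method names
are invented for illustration, nothing like this exists in C.A.P. today:

```python
class BlockServer:
    """Toy coordinator for suggestion (4A): each user may hold
    at most two uncompleted blocks at any one time."""

    MAX_CHECKED_OUT = 2

    def __init__(self, blocks):
        self.pending = list(blocks)   # blocks nobody is working on yet
        self.checked_out = {}         # user -> set of block ids held

    def checkout(self, user):
        """Hand the user a new block, or None if they already hold two
        (or no work remains)."""
        held = self.checked_out.setdefault(user, set())
        if len(held) >= self.MAX_CHECKED_OUT or not self.pending:
            return None
        block = self.pending.pop(0)
        held.add(block)
        return block

    def turn_in(self, user, block):
        """Accept a completed block, freeing a slot for the next one."""
        self.checked_out[user].discard(block)
```

With this scheme the client is always analyzing block B while block A's
results are in the mail - which is exactly the dead time Richard describes.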
If it were easier to participate, people with a more casual interest might
be tempted. At the moment it would be far too much effort for someone with
a lab full of computers to run this on them all (three's enough for me!);
with a transparent system, we'd get a lot more analysis done.

Of course, I recognise that it's one thing to say this - and quite another
to do something about it...

Regards,

Quenton Fyfe
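P.S. The "low priority background process" trick from Richard's points (1)
and (3) is available on Unix too, via nice. A rough sketch in Python -
the function name is mine, and the engine command line is invented for
illustration:

```python
import os
import subprocess

def run_niced(cmd, niceness=15):
    """Launch an analysis engine with a raised nice value, so it only
    soaks up cycles the foreground user isn't using (Unix only)."""
    # preexec_fn runs in the child just before exec, so only the
    # engine's priority is lowered, not the launcher's.
    return subprocess.Popen(cmd, preexec_fn=lambda: os.nice(niceness))

# e.g. run_niced(["crafty"])  -- hypothetical invocation
```

The same effect on NT is the "low priority" setting I mentioned above.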