Author: Uri Blass
Date: 19:04:24 09/25/03
On September 25, 2003 at 20:52:26, Christophe Theron wrote:

>On September 25, 2003 at 13:37:49, Uri Blass wrote:
>
>>I decided to take back the changes that I made in the last months and
>>start again from version 8080a.
>>
>>I am going to redo part of the changes, but the changes about incremental
>>data are probably not going to be done again.
>>
>>The changes included, for every square, information about the queen
>>directions in which it is blocked by white and the queen directions in
>>which it is blocked by black, with the information calculated
>>incrementally.
>>
>>They also included the directions in which every piece can go.
>>
>>I thought that this information was good because it helped me generate
>>moves faster (I do not need to check moves of the rook in a direction it
>>cannot go), and I thought of it as an investment for the future: it may
>>be a good idea to make the code slower now if a lot of the things I am
>>going to do later become faster because of it (like evaluating trapped
>>pieces).
>>
>>I decided that it is not a good idea, because the speed of the program is
>>not proportional to the number of calculations, and it seems that having
>>more arrays also makes the other things that I want to add slower,
>>because the program cannot use the fast memory for them.
>>
>>This caused me to change my opinion about slow searchers. I had the
>>conception that a slow searcher is the better design decision because it
>>is possible to add knowledge at almost no price, but this was based on a
>>wrong assumption (the assumption that speed is proportional to the number
>>of calculations).
>
>
>The wrong conception is to make a distinction between slow and fast
>searchers.
>
>Knowing that a program is a fast or slow searcher tells you absolutely
>nothing about what it is doing inside. It also tells you absolutely
>nothing about its strengths and weaknesses.


I guess that I did not explain it well. I will explain my theory in other
words.

1) Suppose some calculation takes 1/1,000,000 of a second per node. That is
a small price if you search 10,000 nodes per second (a node already costs
100 microseconds, so it only makes your program 1% slower), but a big price
if you search 1,000,000 nodes per second (a node costs exactly 1
microsecond, so it makes you twice as slow).

I made my program slower in nodes per second by incrementally calculating
some data that I did not calculate before (a sketch of the kind of data I
mean is in the P.S. below). My theory was that by searching fewer nodes per
second I could do future calculations relatively faster (or at the same
speed, or faster, thanks to the data that is calculated incrementally). I
am willing to pay the linear price of making the program 50% slower in the
near future if I know that later I only gain speed, and I assumed that
later I might gain speed by using the arrays to do fewer calculations.

2) The only problem is that I cannot simply say that a calculation takes
1/1,000,000 of a second per node: for a slow searcher the same calculation
can take more time, because the slow searcher may need to use slow memory
for it or for other things, while the fast searcher can keep everything in
fast memory (the CPU cache).

3) I think that I now understand better the reason for your rule not to
calculate now things that you can calculate later. The cost can be more
than the time the calculation takes now: the problem is that calculating
now may make other things slower as the program grows bigger (for example,
if the calculation uses big arrays now).

Uri
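P.S. To make the incremental data idea more concrete, here is a small
sketch of the kind of arrays I mean. This is not my real code: the 10x12
mailbox layout and all the names are only for illustration, and real code
would also have to handle undo, captures and the other piece types.

#include <stdio.h>
#include <stdint.h>

enum { EMPTY, WHITE_PIECE, BLACK_PIECE, OFFBOARD };

static int board[120];              /* 10x12 mailbox board                  */
static uint8_t blocked_by[2][120];  /* per square: bitmask of the 8 queen   */
                                    /* directions whose adjacent square     */
                                    /* holds a piece of that color          */

static const int dir[8] = { 1, -1, 10, -10, 9, -9, 11, -11 };

/* Incremental update: when a piece of 'color' (0 = white, 1 = black) is
   added to or removed from 'sq', flip the matching direction bit on each
   of the 8 neighbouring squares.                                          */
static void toggle_blockers(int color, int sq)
{
    for (int d = 0; d < 8; d++) {
        int from = sq - dir[d];     /* neighbour that looks toward sq       */
        if (board[from] != OFFBOARD)
            blocked_by[color][from] ^= (uint8_t)(1 << d);
    }
}

/* Move generation can skip whole rays: if the first square of a ray holds
   a friendly piece, there is no rook move at all in that direction.       */
static int count_rook_moves(int sq, int us)
{
    int own = (us == 0) ? WHITE_PIECE : BLACK_PIECE;
    int n = 0;
    for (int d = 0; d < 4; d++) {   /* dirs 0..3 are the rook directions    */
        if (blocked_by[us][sq] & (1 << d))
            continue;               /* blocked by own piece: skip the ray   */
        for (int to = sq + dir[d]; board[to] != OFFBOARD; to += dir[d]) {
            if (board[to] == own)
                break;
            n++;                    /* quiet move or capture                */
            if (board[to] != EMPTY)
                break;              /* capture ends the ray                 */
        }
    }
    return n;
}

int main(void)
{
    for (int i = 0; i < 120; i++)
        board[i] = OFFBOARD;
    for (int r = 2; r <= 9; r++)    /* mark the real 8x8 squares empty      */
        for (int f = 1; f <= 8; f++)
            board[10 * r + f] = EMPTY;

    board[21] = WHITE_PIECE; toggle_blockers(0, 21);  /* rook on a1         */
    board[22] = WHITE_PIECE; toggle_blockers(0, 22);  /* pawn on b1         */

    printf("rook moves from a1: %d\n", count_rook_moves(21, 0));
    return 0;
}

With a white rook on a1 and a white pawn on b1 it prints "rook moves from
a1: 7": the ray toward the pawn is skipped without ever scanning it.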
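The arithmetic in point 1 is also easy to check; nothing here is engine
specific, it is only the numbers:

#include <stdio.h>

int main(void)
{
    double overhead = 1e-6;                    /* added cost per node (s)   */
    double nps[2]   = { 10000.0, 1000000.0 };  /* speed before the change   */

    for (int i = 0; i < 2; i++) {
        double node_time = 1.0 / nps[i];         /* seconds per node today  */
        double slowdown  = overhead / node_time; /* relative extra cost     */
        printf("%10.0f nps: node costs %g us, +1 us => %.0f%% slower\n",
               nps[i], node_time * 1e6, slowdown * 100.0);
    }
    return 0;
}

/* Output:
        10000 nps: node costs 100 us, +1 us => 1% slower
      1000000 nps: node costs 1 us, +1 us => 100% slower
*/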
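And my reading of the rule in point 3, as a skeleton (every function here
is a stub and the names are made up; only the call structure matters): with
the eager policy the cost sits in make_move(), so it is paid at every node,
while with the lazy policy it sits in evaluate(), which runs at fewer nodes.

#include <stdio.h>

typedef int Move;

/* Stand-in for a full recomputation of the blocked-direction data.        */
static int compute_blocked_dirs(void) { return 0; }

static int blocked_dirs;            /* the incrementally maintained copy   */

/* Eager policy: the data is refreshed inside make_move(), so the cost is
   paid at every node of the search, whether the data is used there or not. */
static void make_move_eager(Move m)
{
    (void)m;
    blocked_dirs = compute_blocked_dirs();
}

/* Lazy policy: make_move() maintains nothing; the data is derived on
   demand inside evaluate().                                               */
static void make_move_lazy(Move m)
{
    (void)m;
}

static int evaluate_lazy(void)
{
    return compute_blocked_dirs();  /* computed only when actually needed  */
}

int main(void)
{
    make_move_eager(0);
    make_move_lazy(0);
    printf("lazy score: %d\n", evaluate_lazy());
    return 0;
}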