Author: Robert Hyatt
Date: 17:19:35 12/23/02
On December 23, 2002 at 13:14:30, Vincent Diepeveen wrote:

>On December 23, 2002 at 12:06:58, Eugene Nalimov wrote:
>
>Crafty has more influences by Diepeveen than DIEP by Hyatt.
>Let's be clear about it.

Absolute garbage. Crafty has _no_ influences by Diepeveen. Of course, I might ask you questions about your parallel search. You only bugged me for six months trying to understand what I did in Cray Blitz. Of course, my "bad memory" probably has that confused, and you were really responsible for all the stuff in Cray Blitz, but I forgot about it.

>The only thing i have from crafty in DIEP is the assembly lock()
>function.
>
>it is typical that that contained a bug too :)

It didn't contain _any_ bug. You have the most difficult of times trying to follow simple threads here. The "bug" I mentioned last week was _not_ in any distributed code...

>I was referring to 1991 statements with regards to compiler passes.

No you weren't; you botched even that.

>But the influences in crafty by statemetns of me are *big*.

Care to name a few???

>wac 002 is a good example of it. I said for years that crafty
>solved it for the wrong reason. Then some years later bob fixes it.

Again, simply and egotistically wrong. Crafty doesn't solve wac 2 because, over time, I found exceptions to the rule it was using, and I slowly refined it to be conservative rather than optimistic. Nothing you said had _anything_ to do with it.

>I am posting now regurarly that doing 4 probes within 1 cache line
>is faster than doing a 2 table probes which eats 2 cache lines.
>
>Will be probably take another 2 years then bob puts it in crafty.

I did it years before _you_ did, in fact. I just didn't like it, and I don't intend to use it again unless memory changes in some unexpected way...

>I remember another 10 algorithmic things and ideas i contributed which
>are all worked out in crafty.

Please point them out...

>I remember a very dubious last ply pruning which crafty was doing and
>hurting its performance.
>After extensive testing i concluded it to not
>work for DIEP. i told bob and also why it hurted crafty. He took it out.

What last ply pruning? Futility? It has been in for quite a while _again_, contributed by someone who did a decent job of testing and debugging it.

>That was done silently.

When I removed the futility pruning, it was _not_ done silently. It was discussed at length on the Crafty mailing list, and here, and you had exactly nothing to do with it.

>There is another dozens of things.

Right... :)

>But obviously i never focussed too much upon implementation details
>like the hashtable implementation of crafty is now.
>
>This where you contributed probably that much by now to crafty that
>you should get co-authored :)
>
>There is another thing that should get taken out of crafty by the way
>which will probably be done pretty soon.
>
>In not a single top program history tables work. Only exception is
>crafty. Matter of time before it's taken out. I just posted it 10
>times here so that might not be enough. i didn't check though whether
>it's already taken out of crafty :)

It isn't out and it won't be out. It works for me. It worked for Bruce the last time we talked...

>But to be clear, crafty is doing so little last few plies before the
>qsearch that with 1 afternoon of work i would be capable of losslessly
>search 1 ply deeper with it and also get a better speedup than it
>currently gets (of course bob and i are disagreeing about his
>actual speedup at 4 processors. 30 testpositions from GCP indicated
>it was 2.8 and 3.0 when without nullmove). His own positions
>indicated something else. But i would get it better with just an
>afternoon of toying.

If your skills only matched your ego, you would probably have a world-champion program...

>Yet i would have done those changes already years ago if such versions
>would not get publicly posted with source code.
>
>That's the main reason why crafty is still in the 80s with regards to
>algorithms.
>
>>I'd recommend you to read something before saying that it agrees with you. The
>>web page which URL you posted contains
>>
>>"In the case of the example multiple-pass compiler,
>>
>>Pass 1: The compiler driver calls the syntactic analyzer (which in turn makes
>>use of the lexical analyzer) which reads the original source program, parses it,
>>and constructs an abstract syntax tree (AST).
>>Pass 2: The compiler driver then calls the semantic analyzer which traverses the
>>AST, checks it for errors, and annotates it.
>>Pass 3: The compiler driver then calls the code generator which traverses the
>>annotated AST and generates the code"
>>
>>As you see, syntax analyzer is called exactly once, for the first pass. All
>>other passes work on the ASTs.
>>
>>The definition of "single-pass" and "multi-pass" compiler had not changed for a
>>long time. I just checked 1959 paper I have at home, and it is exactly the same.
>>
>>I have my own opinion about yours and Bob's algorithmic computer chess
>>knowledge. For example, I thought that Bob teached you and helped you the last
>>two years, not vice verse, am I right?
>>
>>Thanks,
>>Eugene ("f***ing idiot")
>>
>>On December 23, 2002 at 09:28:30, Vincent Diepeveen wrote:
>>
>>>On December 21, 2002 at 22:54:05, Eugene Nalimov wrote:
>>>
>>>>Wrong.
>>>>
>>>>In all compiler textbooks number of passes means "how much times compiler goes
>>>>through the program code" regardless of the program's representation -- be it
>>>>source or some intermediate form (quads, tuples, triades, ASTs, etc.).
>>>>
>>>>Thanks,
>>>>Eugene
>>>
>>>That's a different form of passes which have more to do with the difficulty
>>>of optimizing high level languages.
>>>
>>>Note i just quoted a statement from some researchers in the field
>>>of compiler optimizations.
>>>
>>>Of course that was from a few years ago. Let's be clear there.
>>>My knowledge
>>>is of course very limited with regards to todays compilers, like Bob's
>>>algorithmic computerchess knowledge is too.
>>>
>>>>On December 21, 2002 at 21:20:26, Vincent Diepeveen wrote:
>>>>
>>>>>On December 21, 2002 at 17:45:43, Matt Taylor wrote:
>>>>>
>>>>>>On December 21, 2002 at 17:29:11, Vincent Diepeveen wrote:
>>>>>>
>>>>>>>On December 21, 2002 at 14:32:18, Matt Taylor wrote:
>>>>>>>
>>>>>>>checkout the compiler faq at :
>>>>>>>
>>>>>>>http://www.cs.strath.ac.uk/~hl/classes/52.358/FAQ/passes.html
>>>>>>>
>>>>>>>[off topic nonsense removed]
>>>>>>
>>>>>>Ok, the FAQ explains to me principles which were self-evident. When you read the
>>>>>>FAQ, you realize that an optimizing single-pass C compiler is not possible.
>>>>>>
>>>>>>"Optimization: Only really possible with a multi-pass compiler"
>>>>>>
>>>>>>It also reaffirms what I'd already stated -- multi-pass compilers are EASIER to
>>>>>>write because the code is more modular and has less coupling. Just about the
>>>>>>only data structure that you're going to rely on to go between stages is the
>>>>>>AST, and that's not that difficult.
>>>>>>
>>>>>>This is quite familiar for me as I've been working on a compiler implementation
>>>>>>for a C-like language. (Actually it's more like C++, but it lacks multiple
>>>>>>inheritance and templates.)
>>>>>>
>>>>>>-Matt
>>>>>
>>>>>If you have 'so much' experience with compilers, whereas i consider myself
>>>>>a layman; i just wrote a few very very primitif compilers (and no assembly
>>>>>output of them even); i wonder why you do not know what 'single pass
>>>>>compiler' means. It has to do with how many times a compiler reads
>>>>>the source code. Not so much how many high level optimizations
>>>>>you apply to it.
>>>>>
>>>>>So now you learned again something.
>>>>>
>>>>>Best regards,
>>>>>Vincent.
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.