Author: James Swafford
Date: 14:32:52 06/02/04
On June 02, 2004 at 17:25:55, Dann Corbit wrote:

>On June 02, 2004 at 16:58:01, Dann Corbit wrote:
>
>>On June 02, 2004 at 16:07:14, James Swafford wrote:
>>
>>>On June 02, 2004 at 16:03:10, James Swafford wrote:
>>>
>>>>On June 02, 2004 at 10:06:22, Dann Corbit wrote:
>>>>
>>>>>On May 29, 2004 at 16:13:24, James Swafford wrote:
>>>>>
>>>>>>On May 29, 2004 at 14:15:37, Frank Phillips wrote:
>>>>>>
>>>>>>>On May 29, 2004 at 12:53:54, James Swafford wrote:
>>>>>>>
>>>>>>>>On May 29, 2004 at 11:53:29, Frank Phillips wrote:
>>>>>>>>
>>>>>>>>>On May 29, 2004 at 11:44:58, James Swafford wrote:
>>>>>>>>>
>>>>>>>>>>On May 29, 2004 at 11:35:07, Frank Phillips wrote:
>>>>>>>>>>
>>>>>>>>>>>On May 29, 2004 at 04:00:31, Gian-Carlo Pascutto wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>I don't think so. The program still has weaknesses that a bit of
>>>>>>>>>>>>extra hardware will not overcome.
>>>>>>>>>>>>
>>>>>>>>>>>>GCP
>>>>>>>>>>>
>>>>>>>>>>>What are these weaknesses?
>>>>>>>>>>>
>>>>>>>>>>>Bob may even be able to fix them before the event.
>>>>>>>>>>
>>>>>>>>>>He was talking about his program, not Crafty.
>>>>>>>>>
>>>>>>>>>Thanks. I misread the post.
>>>>>>>>>
>>>>>>>>>But I am still interested in the weaknesses being referred to by GCP, which are
>>>>>>>>>resistant to faster hardware. I have so many myself. If only I knew what they
>>>>>>>>>were :-)
>>>>>>>>
>>>>>>>>
>>>>>>>>As in, "I can't seem to mate Shredder, even with faster hardware!" ?? :)
>>>>>>>>
>>>>>>>>--
>>>>>>>>James
>>>>>>>
>>>>>>>
>>>>>>>I guess the answer is yes, although I have never had better hardware - and am
>>>>>>>not SMP, so probably never will.
>>>>>>>
>>>>>>>See you tonight at ICC author's only tournament ? :-)
>>>>>>
>>>>>>Not as a competitor -- my thing is nowhere near strong enough
>>>>>>to compete yet. I'm hoping to be able to compete in the next
>>>>>>CCT, though.
>>>>>
>>>>>Are you still doing the learning stuff?
>>>>
>>>>I've been working with TDLeaf quite a bit. At some point I'll
>>>>post something with some meat to it, but to sum it up, I'm
>>>>not nearly as optimistic about it as I once was.
>>>>
>>>>In my experience, TDLeaf can train the material weights, and it
>>>>can even produce an evaluation vector that's superior to a
>>>>'material only' vector. I am not convinced it's useful for
>>>>training a complex vector, nor am I convinced it does a better
>>>>job than hand tuning. For that matter, I am not even
>>>>convinced it converges to the optimal vector!
>>>>
>>>>Caveat: it's possible (though I think it's unlikely) that
>>>>my implementation is flawed. My engine will become open source
>>>>at some point (maybe after the next CCT), so you can judge
>>>>for yourself then.
>>>>
>>>>Will Singleton and I had a bet on this... I conceited defeat
>>>
>>>
>>>Gah! I "conceded" defeat.
>>>
>>>>the other day. The original bet was for the loser to fly
>>>>the winner and spouse across country for drinks. :) I'm
>>>>pretty sure Will's decided he'll forego that if I show up
>>>>at a tourney, but that's his call.
>>>>
>>>>I'm still very interested in learning algorithms, but I'll
>>>>be focusing on improving my evaluation for a while.
>>>>
>>>>Again- I will post some data at some point.
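
For anyone who wants to see what the TDLeaf(lambda) update being discussed
actually looks like, here is a minimal sketch in C. It assumes a linear
evaluation (a dot product of the weight vector with a feature vector taken at
each principal-variation leaf), so the gradient of the eval is just that
feature vector; the constants and the way the caller supplies feat[] and ev[]
are placeholders, not code from any of the engines mentioned here. Real
implementations typically also squash the eval (e.g. through tanh) and use the
game result in place of the final position's value.

#define NUM_WEIGHTS 16    /* size of the evaluation weight vector (placeholder) */
#define MAX_PLIES   512   /* longest game we record (placeholder) */

/* One TDLeaf(lambda) update over a recorded game.
 *
 * feat[t][k] holds the k-th feature of the principal-variation leaf reached
 * from position t; ev[t] is the evaluation of that leaf under the current
 * weights.  For a linear evaluation eval = w . features, the gradient with
 * respect to w is just the feature vector, so the update is
 *
 *     w += alpha * sum_t feat[t] * sum_{j >= t} lambda^(j-t) * d_j
 *
 * where d_j = ev[j+1] - ev[j] are the temporal differences along the game.
 */
void tdleaf_update(double w[NUM_WEIGHTS],
                   const double feat[MAX_PLIES][NUM_WEIGHTS],
                   const double ev[MAX_PLIES],
                   int plies, double alpha, double lambda)
{
    for (int t = 0; t < plies - 1; t++) {
        /* discounted sum of the temporal differences from t onward */
        double td = 0.0, decay = 1.0;
        for (int j = t; j < plies - 1; j++) {
            td += decay * (ev[j + 1] - ev[j]);
            decay *= lambda;
        }
        /* gradient step: for a linear eval the gradient is the feature vector */
        for (int k = 0; k < NUM_WEIGHTS; k++)
            w[k] += alpha * feat[t][k] * td;
    }
}

A training loop would call tdleaf_update() once per self-play game, with a
small alpha and a lambda below 1 so that distant temporal differences count
for less.
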
>>
>>I am doing a computer guided optimization for Beowulf.
>>
>>It takes ~12,000 positions from super-GM games and SSDF games among the top
>>computers where all the participants chose the same move (no other moves chosen
>>for that position).
>>
>>For each of about 100 parameters, I vary the value from too small up to too
>>large (e.g. a knight might go from 200 centipawns to 450). At some optimal
>>point, the largest number of positions will be chosen. I fit a parabola
>>through the data and solve for the maxima (if any).
>>
>>Often, the variance of the parameter has no effect on the solution scores (for
>>instance, I might get 5500 solutions no matter what the parameter is, or the
>>number of solutions may vary randomly). So I also solve for the minima of the
>>time curve. As an example, a depth 4 search using NULL MOVE will probably solve
>>a few LESS positions than not using NULL MOVE, but it will take 1/3 of the time
>>at some optimal prune level.
>>
>>I have had lots of bugs in my curve analysis, but I am slowly working it out.
>>
>>Before, I solved for a smaller subset of tactical positions which made it great
>>at solving those tactical positions but lousy at playing. I am hoping for a
>>better result this time (especially since some of my result calculations were
>>backwards, making the fits enormously unstable).
>
>Here is the binary and source for the current project:
>ftp://cap.connx.com/pub/chess-engines/new-approach/beocurve.zip

Alright: I'll take the bait... I'll download it and check it out.

>
>There is one more correction in the file compared to my last runs -- It now
>compares the minimum of time fitted by the curve with the absolute minimum found
>in the raw data (before, that bit was wrong).
>
>Here is the curve for Bishop piece value:
>
>bishop_score=352 at 4; stddev=16.409341 : -0.0665458*x^2 + 46.9104*x + -2711.28
>(x=291.000000, y=5298.000000), t=1137.000000
>(x=307.000000, y=5431.000000), t=1141.000000
>(x=323.000000, y=5504.000000), t=1143.000000
>(x=339.000000, y=5519.000000), t=1145.000000
>(x=355.000000, y=5566.000000), t=1145.000000
>(x=371.000000, y=5539.000000), t=1146.000000
>(x=387.000000, y=5473.000000), t=1145.000000
>(xmax=355.000000, ymax=5566.000000), xmax seen verses curve xmax [355 352.968]
>
>I believe that the score will reduce at deeper plies (I have seen this trend at
>least for shallower plies so far).
>The result "bishop_score=352 at 4" means a bishop_score of 352 centipawns is
>optimal for this test set at 4 plies deep searching.

How long does it take to complete a test set at 4 ply?

And- you only vary one parameter at a time, right?

--
James
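
To make the curve-fitting step concrete, here is a small sketch in C of the
general technique (not the beocurve source): a least-squares quadratic fit
through the (parameter value, positions solved) samples, followed by the
vertex -b/(2a) as the optimum. The data in main() is the bishop-value sample
quoted above; fitting the time column instead and taking the minimum of an
upward-opening parabola works the same way.

#include <stdio.h>
#include <math.h>

/* Least-squares fit of y = a*u^2 + b*u + c through n samples (u centered),
 * via the 3x3 normal equations solved by Gauss-Jordan elimination. */
static int quad_fit(const double *u, const double *y, int n,
                    double *a, double *b, double *c)
{
    double S[5] = {0}, T[3] = {0};       /* S[k] = sum u^k, T[k] = sum y*u^k */
    for (int i = 0; i < n; i++) {
        double up = 1.0;
        for (int k = 0; k < 5; k++) {
            S[k] += up;
            if (k < 3) T[k] += y[i] * up;
            up *= u[i];
        }
    }
    double m[3][4] = {
        { S[4], S[3], S[2], T[2] },
        { S[3], S[2], S[1], T[1] },
        { S[2], S[1], S[0], T[0] },
    };
    for (int col = 0; col < 3; col++) {
        int piv = col;                                /* partial pivoting */
        for (int r = col + 1; r < 3; r++)
            if (fabs(m[r][col]) > fabs(m[piv][col])) piv = r;
        for (int k = 0; k < 4; k++) {
            double t = m[col][k]; m[col][k] = m[piv][k]; m[piv][k] = t;
        }
        if (fabs(m[col][col]) < 1e-12) return -1;     /* degenerate data */
        for (int r = 0; r < 3; r++) {
            if (r == col) continue;
            double f = m[r][col] / m[col][col];
            for (int k = col; k < 4; k++) m[r][k] -= f * m[col][k];
        }
    }
    *a = m[0][3] / m[0][0];
    *b = m[1][3] / m[1][1];
    *c = m[2][3] / m[2][2];
    return 0;
}

int main(void)
{
    /* bishop-value samples quoted above: (centipawns, positions solved) */
    double val[]    = { 291, 307, 323, 339, 355, 371, 387 };
    double solved[] = { 5298, 5431, 5504, 5519, 5566, 5539, 5473 };
    enum { N = 7 };
    double u[N], mean = 0.0, a, b, c;

    for (int i = 0; i < N; i++) mean += val[i];
    mean /= N;
    for (int i = 0; i < N; i++) u[i] = val[i] - mean;  /* center for stability */

    if (quad_fit(u, solved, N, &a, &b, &c) != 0) {
        fprintf(stderr, "degenerate fit\n");
        return 1;
    }
    /* The u^2 coefficient is unchanged by the shift, so it is directly
     * comparable to the quoted curve; the vertex maps back as mean - b/(2a). */
    printf("quadratic coefficient a = %g\n", a);
    if (a < 0.0)
        printf("optimal parameter value (vertex) = %g\n", mean - b / (2.0 * a));
    else
        printf("curve opens upward -- no interior maximum\n");
    return 0;
}

On those seven samples the quadratic coefficient comes out around -0.0665,
matching the quoted curve, and the vertex lands a little above 352 --
consistent with the reported bishop_score=352 (the 352.968 figure presumably
reflects beocurve's own vertex extraction).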