Author: martin fierz
Date: 07:42:21 08/06/03
On August 06, 2003 at 05:04:38, Uri Blass wrote:

>On August 06, 2003 at 03:23:12, martin fierz wrote:
>
>>On August 05, 2003 at 12:37:52, Keith Evans wrote:
>>
>>>On August 05, 2003 at 03:42:56, martin fierz wrote:
>>>
>>>>On August 04, 2003 at 14:55:07, Keith Evans wrote:
>>>>
>>>>>A page describing it is at:
>>>>>
>>>>>http://www.digenetics.com/products/chess/about.htm
>>>>>
>>>>>They seem to imply a connection with Fogel's work on checkers which is described
>>>>>in the book "Blondie24: Playing at the Edge of AI." Is this really true, or is
>>>>>this more about grafting something like the Microsoft paperclip onto a chess
>>>>>program? I don't know too much about checkers, so maybe Fogel's work doesn't
>>>>>even amount to much as far as playing strength goes. (I just got that book as a
>>>>>present and haven't read it yet.)
>>>>
>>>>the checkers program "blondie24" is very weak compared to any decent checkers
>>>>program out there. the book is an interesting read and all that, but the thing
>>>>really can't play checkers! i'd be surprised if it was any different with
>>>>chess...
>>>>
>>>>cheers
>>>>  martin
>>>
>>>How much knowledge would you need to add to a checkers program for it to match
>>>the strength of blondie24? And roughly how long would it take to add that
>>>knowledge? Is it one day's work? (I haven't read the book, but I assume that
>>>Fogel was just addressing the evaluation function.)
>>>
>>>Thanks,
>>>Keith
>>
>>fogel & co used a plain alpha-beta search IIRC, and did the eval with a neural
>>network which self-tuned itself. they report one (IMO faulty) experiment in the
>>book which is a comparison of their program with one which has a material-only
>>eval, their program coming out on top (as was to be expected...). the fault in
>>the experiment is that their search engine is very slow. if your eval is VERY
>>VERY slow as theirs is, that doesn't matter since your speed is limited by the
>>eval. if your eval is very fast (material only), then the slow search is a
>>serious problem. if they had a decent search, then their neural-network version
>>would search to the same depth but the material-only version would search much
>>deeper.
>>i don't own the final version of blondie24, i just looked at the games given in
>>the book. my own checkers program thinks those games are full of errors. i
>>assume that if you give me one hour to write an evaluation function, my program
>>would beat blondie24 easily.
>>
>>cheers
>>  martin
>
>I do not understand.
>I know nothing about blondie24 but do you say that you think that you can do
>something clearly better in one hour by only modifying the evaluation without
>changing the slow search engine?

no. i said if i took my checkers program, which has a fast search, and wrote a
new eval for it in one hour, my program would beat blondie. even if i removed
all search improvements (pruning, extensions) and removed the endgame databases.

>If I understand correctly blondie24 has problem of slow search that has nothing
>to do with the search algorithm and when you say slow search you mean that it is
>easy to do the same algorithm faster and not that it is using bad search
>algorithm(for example not using null move pruning or another known pruning
>algorithm if null move is not good in checkers).
>
>Uri

let me elaborate a bit: blondie has a VERY slow eval, because it's a complicated
neural network (NN). because this eval is so slow, it doesn't matter how slow the
search of blondie is (a perft for checkers if you like). therefore, the search of
blondie was never optimized. now if you remove the very expensive neural net eval
and replace it with a material-only eval, you are suddenly limited by your slow
search, which means you're in fact cheating the material-eval-only version of the
program by not letting it run at full speed. if you used a fast search with both
blondie and the material-only version, blondie would not gain a bit, because the
NN eval is so slow. the material-only version might run at 10 times the speed,
seeing 3 ply deeper...
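to make this concrete, here's a back-of-envelope cost model in C. every number in
it is made up for illustration -- the per-node search overhead, eval cost, time
budget and branching factor are NOT measurements of blondie or of my program. the
point is only the shape of the result: with a millisecond-class NN eval, making
the search 50x faster buys essentially nothing, while with a material-only eval
it is worth several ply.

/* illustrative cost model: time per node = search overhead + eval cost,
   and reachable depth grows roughly like log_b(nodes) for an effective
   branching factor b. all constants below are assumptions, not measurements. */
#include <math.h>
#include <stdio.h>

static double reachable_depth(double budget_s, double search_s,
                              double eval_s, double b)
{
    double nodes = budget_s / (search_s + eval_s); /* nodes searchable in the budget */
    return log(nodes) / log(b);                    /* depth ~ log_b(nodes) */
}

int main(void)
{
    const double budget      = 10.0;   /* assumed: seconds per move */
    const double slow_search = 50e-6;  /* assumed: unoptimized search, cost per node */
    const double fast_search = 1e-6;   /* assumed: optimized search, cost per node */
    const double nn_eval     = 1e-3;   /* assumed: expensive NN eval, cost per call */
    const double mat_eval    = 0.1e-6; /* assumed: material-only eval, cost per call */
    const double b           = 3.0;    /* assumed: effective branching factor with alpha-beta */

    printf("NN eval,  slow search: %4.1f ply\n", reachable_depth(budget, slow_search, nn_eval, b));
    printf("NN eval,  fast search: %4.1f ply\n", reachable_depth(budget, fast_search, nn_eval, b));
    printf("material, slow search: %4.1f ply\n", reachable_depth(budget, slow_search, mat_eval, b));
    printf("material, fast search: %4.1f ply\n", reachable_depth(budget, fast_search, mat_eval, b));
    return 0;
}

with these (invented) numbers the NN version gains well under a tenth of a ply
from the faster search, while the material-only version gains roughly 3-4 ply --
which is exactly why comparing the two inside a slow search framework handicaps
the material-only version.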
there are a LOT of things fogel could (or even should!) have done. FYI: the cool
part about blondie is that it uses a self-learned neural network to evaluate
checkers positions. the search is taken as it is, a plain alpha-beta search. now,
what would have been really interesting is for example:

- if you replace the NN eval function with a simple eval (like what i would write
in one hour -- a sketch of what i mean is at the bottom of this post) and play
that program against blondie, which one wins? i.e. is the NN eval really any good
compared to a human eval? you can do this at fixed search depth to really compare
the eval only.

- you can do the same with constant search time, to see whether it pays off to do
the more expensive NN eval.

instead, they compared their program with weak humans who usually blundered
pieces. then they compared their program with what they call "a commercial
checkers playing program" in their paper. that's some kind of package with games
for kids in it, which is sold in the US, but since it's for kids, the games are
not good at all, else the kids would not have any fun with them. they could have
compared it with a good free program, like mine. i know why they didn't :-)

to be fair to fogel & co: they wanted to demonstrate that a NN eval can learn by
itself, nothing else. they did that. however, it would have been really
interesting to compare the quality of their eval with that of a human expert.

cheers
  martin
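PS: for concreteness, this is roughly the kind of one-hour evaluation function
meant above -- plain material plus a king bonus. the struct layout and the
weights are assumptions for illustration only, not code from my program or from
blondie:

#include <stdio.h>

/* minimal hand-written checkers eval: material plus a king bonus.
   position layout and weights are assumed for illustration. */
typedef struct {
    int my_men, my_kings;   /* piece counts for the side to move */
    int opp_men, opp_kings; /* piece counts for the opponent */
} Position;

/* score from the side to move's point of view, in hundredths of a man */
static int evaluate(const Position *p)
{
    const int MAN  = 100;   /* assumed value of a man */
    const int KING = 130;   /* assumed value of a king */
    return (p->my_men   - p->opp_men)   * MAN
         + (p->my_kings - p->opp_kings) * KING;
}

int main(void)
{
    Position p = { 7, 1, 8, 0 };        /* made-up example position */
    printf("eval: %d\n", evaluate(&p)); /* (7-8)*100 + (1-0)*130 = 30 */
    return 0;
}

an eval like this costs next to nothing per call, so at constant search time the
engine using it is limited only by how fast its search is -- which is the whole
point of the fixed-depth vs. fixed-time comparison suggested above.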