Author: Ed Schröder
Date: 04:10:18 07/22/00
On July 21, 2000 at 07:38:23, Graham Laight wrote:

>On July 20, 2000 at 15:15:23, Ed Schröder wrote:

>>On July 20, 2000 at 08:57:21, Graham Laight wrote:

>>>On July 20, 2000 at 08:22:57, Ed Schröder wrote:

>>>>On July 20, 2000 at 06:56:59, Graham Laight wrote:

>>>>>On July 20, 2000 at 02:38:07, Ed Schröder wrote:

>>>>>>On July 19, 2000 at 18:36:28, Graham Laight wrote:

>>>>>>>On July 19, 2000 at 15:02:14, Amir Ban wrote:

>>>>>>>>On July 19, 2000 at 11:06:13, Alvaro Polo wrote:

>>>>>>>>>I'll believe that adding new knowledge to Junior is almost free. I then have two questions.

>>>>>>>>>1. Why isn't Junior's evaluation much better, then? Please don't misunderstand me. I am sure it has a great evaluation, but one may think that when things are almost free you could just add any bit of knowledge that you might consider useful under any circumstance and have a really astounding, hypergreat, out-of-this-world evaluation.

>>>>>>>>Because the problem is not writing evaluation terms but deciding which ones are right, or formulating them correctly. Not to mention giving them correct weight.

>>>>>>>>I don't know where many posters in this newsgroup get the idea that "knowledge" in chess is obvious and it's just a matter of coding it. In fact the opposite is true: the *true* and *correct* rules of evaluation are more like a hidden mystery that programmers look for, like the mathematicians looked for the proof of Fermat's theorem. I can easily add 20 new elements to my evaluation, but I expect 19 of them will prove to be wrong, or (what amounts to the same thing) badly tuned.

>>>>>>>>The best way to become familiar with this problem is to write a chess program.

>>>>>>>I've seen programmers complain about this problem before. They also complain that they apply knowledge to fix a problem in one type of position, and it plays worse in another type of position.

>>>>>>>I've long believed that the solution is to be more systematic in the creation, storage, and selection of knowledge.

>>>>>>>I'm afraid I'm not going to write a chess program in the foreseeable future, but here's my idea to store knowledge systematically, then address the problem of getting to the right knowledge in the right positions:

>>>>>>>* write a large number of evaluation functions (eval fns), and store them in a database (db) so that they can be managed and maintained (one eval fn per db record)

>>>>>>>* for each record, as well as the eval fn, also store some "indexes" to indicate the type of position where this eval fn will be useful

>>>>>>>* generate a chess position to evaluate (we're playing a game now)

>>>>>>>* compare this position with the indexes of the eval fns, creating a (percentage?) match value between each eval fn and the present position

>>>>>>>* sort the eval fns into order of match value, and pick the best-matching eval fns

>>>>>>>* apply each of the selected eval fns to the present position, weighting them according to their match value

>>>>>>>* do all this with some sort of "in memory" database with fast APIs, and do some tricks with the database indexing, so that it can all be done quickly in "real time" (I can post a web site of a product that can do this, if anyone is interested).

>>>>>>>I hope this stimulates further discussion - I personally think that this is a very important issue. After all, we're trying to do "Artificial Intelligence" here, and you can't really be intelligent without having both plenty of knowledge and the ability to select the right sliver of that knowledge for the present circumstances.

>>>>>>>-g
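To make the scheme sketched above concrete, here is a minimal sketch in C of how such a database of evaluation functions could be matched against a position and applied with weights. Everything in it (the index vector, the dot-product similarity, the names EvalRecord, position_indexes and evaluate) is an illustrative assumption, not code from any program discussed in this thread.

#include <stdlib.h>

#define NUM_INDEXES  8      /* e.g. material balance, pawn structure, ...    */
#define MAX_RECORDS  1024
#define TOP_RECORDS  16     /* how many of the best-matching eval fns to run */

typedef struct Position Position;            /* engine-specific, not shown   */

typedef struct {
    int (*eval_fn)(const Position *);        /* one piece of knowledge       */
    int index[NUM_INDEXES];                  /* "type of position" profile   */
} EvalRecord;

typedef struct { int match; const EvalRecord *rec; } Candidate;

/* crude similarity between a record's profile and the current position      */
static int match_value(const int a[], const int b[])
{
    int i, sum = 0;
    for (i = 0; i < NUM_INDEXES; i++)
        sum += a[i] * b[i];
    return sum;
}

static int by_match_desc(const void *x, const void *y)
{
    return ((const Candidate *)y)->match - ((const Candidate *)x)->match;
}

/* compute the position's indexes, rank all records, run only the best few,
   each weighted by its match value                                          */
int evaluate(const Position *pos, const EvalRecord db[], int n,
             void (*position_indexes)(const Position *, int out[]))
{
    int pos_idx[NUM_INDEXES];
    Candidate cand[MAX_RECORDS];
    int i, count, score = 0;

    if (n > MAX_RECORDS)
        n = MAX_RECORDS;

    position_indexes(pos, pos_idx);
    for (i = 0; i < n; i++) {
        cand[i].match = match_value(db[i].index, pos_idx);
        cand[i].rec   = &db[i];
    }
    qsort(cand, (size_t)n, sizeof cand[0], by_match_desc);

    count = (n < TOP_RECORDS) ? n : TOP_RECORDS;
    for (i = 0; i < count; i++)
        score += cand[i].match * cand[i].rec->eval_fn(pos);

    return score;
}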
>>>>>>You are missing some main points which Amir explained quite well. I will give a very simple example about one of the main problems of implementing new chess knowledge.

>>>>>>Suppose you are adding new chess knowledge concerning rook mobility on open files. It will have an effect on all the other chess knowledge you have written before. You will, for instance, get unwanted side effects concerning pawn structure, king safety etc. Unwanted negative side effects can be: a) the engine will accept doubled pawns too easily because the rook wants to have the open file; b) making unsound moves regarding king safety because the open rook file is rewarded too highly with respect to king safety.

>>>>>>The opposite is also true for (a) and (b). Often a position requires accepting the weakness of a doubled pawn; often a position requires an open rook file above king safety.

>>>>>>This is just one of the simple cases. CC is not about just coding new chess knowledge (that is the easiest part) but about how it correlates with all the other stuff you have. And the more you have, the more difficult it becomes to add new chess knowledge, because of these unwanted side effects. If, for instance, you forget to check (test) whether chess knowledge you programmed 5 years ago still correlates with the new stuff, and it does not correlate, you end up with a program that is weaker without realizing it (for years).

>>>>>>The main problem of adding new chess knowledge is the correlation between all the chess knowledge you have, and your suggested database system will have the same trouble.

>>>>>>Ed
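A toy illustration of the interaction Ed describes: the same decision that wins an open file often costs a doubled pawn, so the judgement the program makes depends entirely on the relative weights of two otherwise reasonable terms. The values below are invented purely for illustration.

#define ROOK_OPEN_FILE_BONUS  35   /* centipawns, hypothetical value         */
#define DOUBLED_PAWN_PENALTY  20   /* centipawns, hypothetical value         */

/* If the bonus is tuned too high relative to the penalty, the engine will
   happily wreck its pawn structure to grab the file (Ed's case (a)); tune
   it too low and it declines the file when it should take it.               */
int open_file_vs_structure(int rooks_on_open_files, int doubled_pawns)
{
    return rooks_on_open_files * ROOK_OPEN_FILE_BONUS
         - doubled_pawns       * DOUBLED_PAWN_PENALTY;
}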
>>>>>I have to disagree STRONGLY with this assessment of my idea!!!

>>>>>The problem you are describing is the classic problem of "knowledge management" when you have a huge amount of knowledge. My thinking is explicitly designed to address this very issue.

>>>>>I'll admit right now that there is a maintenance problem with my scenario, which I quietly neglected to mention: every time you realise that you need new indexes in your knowledge db, you are going to have to "retro-fit" those indexes to all the knowledge records in the database that you already have. That's a maintenance problem which I haven't conceptually solved yet. Even though this is a "model" chess program, not a "real" one, I'd be interested to hear if anyone has any ideas about this.

>>>>>However, when it comes to keeping the knowledge under control, well organised, maintainable, and selected for use on just the right occasion, my idea is the tops! (imho)

>>>>Weighting is a matter of practice: playing games and then noticing the thousands and thousands of exceptions. Just do yourself a favor and write a QBASIC or JAVA program (easy to learn), then get the shock of your life after you have gone through the initial trouble (an engine that only plays legal chess) and experience the exceptions until you can't stand it any longer, pick up a hammer and smash the thing that is torturing you into as many pieces as the exceptions you have noticed. Then write a posting in CCC with the header "You are all idiots" to those whom we usually call chess programmers here.

>>>We're not getting a little bit upset here, are we? :)

>>I am still climbing the walls and eating the wallpaper, if that is what you mean :)

>>>Actually, the serious point I want to make in reply to your paragraph above is that the whole point of my idea is to address the very issue that knowledge is all about "exceptions".

>>>If a few simple, generalised rules could give you appropriate behaviour in almost every situation, there would be no point in us having large brains, and evolution would not have given them to us.

>>>My design (above) is all about getting to the RIGHT piece of knowledge at the RIGHT time.

>>The "right piece of knowledge at the right time", as you call it, requires a REFERENCE.

>>Where is your reference? From the chess books? Your own interpretation of chess knowledge? Both are based on previously learned things (a reference),

>The way I see it, a "reference" is a piece of knowledge, or expertise. Of course, you need plenty of this to make an "expert system".

>This is only a model chess program we're discussing, not a real one. If I may, I'd like to assume that I can get knowledge from the same sources as everyone else (books, tutorials, human experts, analysis of why the program lost previous games).

>>something a computer isn't aware of.

>>In 2 almost identical positions, the first having a white pawn on a3, the second having the white pawn on a4, it can be a matter of 1-0 or 0-1, and we humans see it at a (single) glance based on a reference (previously learned things). So how do you get a reference for each piece of knowledge you program?

>Yes - this type of situation is tremendously important!

>A simple example would be whether a king can catch a pawn before it queens. So in this instance, hopefully, the eval fn at the top of the ordered list would be the one that checks whether a king can catch a pawn.

Assuming you are talking about pawn endings, I can say that some programs use the so-called "quadrant rule": if the king is outside the quadrant of the enemy pawn, the enemy pawn is counted as a queen. Rebel has done this since the beginning of time; others use other ways.
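The quadrant rule Ed mentions (also known as the rule of the square) can be written down in a few lines. The sketch below is a generic textbook version, not code from Rebel or any other program in the thread; squares run 0..63 with a1 = 0, only a white pawn versus the black king is considered, and no other pieces are assumed to interfere.

#include <stdlib.h>

static int file_of(int sq) { return sq & 7; }
static int rank_of(int sq) { return sq >> 3; }
static int imax(int a, int b) { return a > b ? a : b; }

/* Returns 1 if the black king can no longer catch the white pawn, in which
   case the evaluation may score the pawn as if it were already a queen.     */
int white_pawn_is_unstoppable(int pawn_sq, int black_king_sq, int black_to_move)
{
    int prank = rank_of(pawn_sq);
    int pfile = file_of(pawn_sq);
    int pawn_dist, king_dist;

    if (prank == 1)              /* a pawn on its start rank may move two    */
        prank = 2;               /* squares, so treat it as one rank higher  */

    pawn_dist = 7 - prank;       /* moves the pawn needs to promote          */

    /* Chebyshev distance from the king to the promotion square, i.e. the
       test for being inside the "quadrant" (square) of the pawn.            */
    king_dist = imax(abs(file_of(black_king_sq) - pfile),
                     7 - rank_of(black_king_sq));

    /* With the move, the king gets one free step toward the square.         */
    return king_dist - (black_to_move ? 1 : 0) > pawn_dist;
}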
>As you point out, there are thousands of such pieces of knowledge required (50,000 to become a GM, according to "Chess Skill In Man And Machine"). This is why I believe that programmers need to think about how to build and organise this knowledge methodically.

Why do you believe programmers don't organize knowledge methodically? It is rule number one if you want to have a strong program, and there are many roads that lead to Rome (Dutch expression).

>>The only way to get a "reference" is to create (a set of) positions which will: a) prove the knowledge does what the programmer intended; b) prove the knowledge does not bite the other stuff you have programmed.

>>(a) is easy, (b) is a royal pain. Database or not, you have to go through this process anyway, and most of the time this is the most time-consuming part of the job. If you do a good job on (a) and especially (b), you have a reference that is, say, 70% reliable.

>In my model program, you can very easily check which eval fns are going to have an impact (and with what weighting) in a given position. If a particular eval fn comes up too high (or too low), you can inspect the indexes to find out what's gone wrong, and adjust them accordingly.

>>Then you are ready for the real work: (c) playing games, which will show you the incompleteness of (a) and the exceptions of (b). Then back to work, fix it - me changing some table values (see below), you updating your database.

>Absolutely! I'm under no illusion that building a chess knowledge base will be easy. It takes a GM about 13 years - and they can only get to that level if they start young, and study chess to the exclusion of developing social skills in their formative years.

>I merely stipulate that I have designed a good way to manage all this knowledge.

Okay.
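The (a)/(b)/(c) process Ed describes above amounts to keeping a fixed suite of reference positions and replaying it after every evaluation change. A minimal sketch of such a regression check follows; set_position() and static_eval() are placeholders for whatever the engine actually provides, and the FEN strings and score bands would be the programmer's own.

#include <stdio.h>

struct RefPosition {
    const char *fen;        /* position under test                           */
    int         min_score;  /* evaluation (centipawns, from White's point    */
    int         max_score;  /* of view) must stay inside this band           */
    const char *why;        /* which piece of knowledge the position checks  */
};

extern void set_position(const char *fen);   /* placeholder                  */
extern int  static_eval(void);               /* placeholder                  */

int run_reference_suite(const struct RefPosition *suite, int n)
{
    int i, failures = 0;

    for (i = 0; i < n; i++) {
        int score;
        set_position(suite[i].fen);
        score = static_eval();
        if (score < suite[i].min_score || score > suite[i].max_score) {
            printf("FAIL %s: score %d outside [%d,%d]\n",
                   suite[i].why, score, suite[i].min_score, suite[i].max_score);
            failures++;      /* the new knowledge "bites" an old judgement   */
        }
    }
    return failures;
}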
>>>Steadily, as you build the knowledge database, you'll get to the point where you know as many "exceptions" as a (good) human player.

>>>In short - my plan is based on the view that the "exceptions" are the most important thing - and we must have a method of getting to them!

>>>>Do you really think programmers don't have a system such as you describe? Of course they have - only they have other names for it (data structures, (pre-computed) tables, structures).

>>>Unfortunately, there tends to be too much discussion of searching methods in CCC, and not enough (especially at a high level) about how to manage the knowledge in the evaluation. However, I suspect that my idea goes much further both in easing the burden of building knowledge in a methodical way, and in terms of maintaining it - both the old and the new.

>>>Also, as far as I know, I'm the first person to suggest using a third-party database product with good APIs for storing evaluation knowledge, so that the programmer can have a substantial chunk of the work of building this system (and a lot of good, built-in features like searching, viewing and sorting) already done for them.

>>We all do this in our programs, and we don't need a third-party database system. All of the knowledge is in fast memory and therefore is accessed as fast as possible. A third-party program would slow down the access by a factor of 20-30, and that is putting it mildly.

>Oh - I agree wholeheartedly!

>I could argue along the lines that you've exaggerated the cost in time in various ways (or that some of the time lost will be made up in other ways), and I might do that at a later date, but for now let's not go there.

>Instead, I would like to make it perfectly clear to everyone reading this that if you're going to do the kind of thing I'm talking about, you're going to take a big hit in time.

Not really. As said, many roads lead to Rome. I know that my setup is good. Others do it in another good way.

>Looking forward, I believe that there will be much more parallel processing on computers, so the cost of doing this kind of thing will be bearable.

>Unless we try something like this, we'll never know how capable a computer is of "intelligently" understanding a chess position.

>The system we're up against has awesome parallel processing power - 10^11 nerve cells, each connected, on average, to 10^4 other nerve cells.

>Also, my system, unlike other tools out there, has the potential to be part of a high-level training system. Why shouldn't a computer, like a human, be able to look at a chess position and quickly say, in order of importance, what the important factors are?

As I see it there are 2 ways: a) continue in the way we are doing now, speeding up search (hardware & software) + adding new knowledge, or b) create a real artificial model that automatically learns. Some have tried (b) with poor results, which is no surprise as this is not a one- or two-man job. Estimates for (b) vary from 15 to 100 man-years of work before you might get good results. (b) certainly sounds more convincing to me than (a), but nobody has the power to invest here.

>>More... a good chess program has all the knowledge implemented in a very accurate way, easy to maintain. All evaluations are table-driven, so to modify weights you only have to change some data (tables) and not a single piece of code. No third-party program can compete with this.

>Now we're talking!!!

>Now, I don't know how much you're willing to reveal about how you organise your eval knowledge, so I am clearly at a disadvantage here, but in the interests of keeping an excellent discussion going, I'm going to carry on regardless.

>I apologise if the following section comes across as a "smart alec" who's never written a chess program telling an experienced old hand how it should be done. I'm going this way because you have introduced your methodology into the discussion.

>Having a table of knowledge is clearly an improvement on just having line after line of C code (although I would guess that even by doing this, you're already losing some speed).

>This raises some questions, however:

>* do you run each line of the table in every position?

No, only one. Tables are accessed by an index that points to one value of the table. Tables may contain different kinds of values (a positional score, a pointer to another table or even a routine, a weight value, etc.). But in computer chess you are not going to read a whole table, as - as you say yourself - this guarantees a giant speed loss. So you work with indexes that read only one element of a table.

Ed
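A sketch of the kind of index-driven table Ed describes: the evaluation never scans a table, it computes an index and touches exactly one element, and an element may be a plain score, a pointer to another table, or a routine. The layout and names below are illustrative guesses, not Rebel's actual data structures.

typedef struct Position Position;            /* engine-specific, not shown   */

typedef struct TableEntry TableEntry;
struct TableEntry {
    enum { SCORE, SUBTABLE, ROUTINE } kind;
    union {
        int               score;            /* a positional value / weight   */
        const TableEntry *subtable;         /* pointer to another table      */
        int             (*routine)(const Position *);  /* or a routine       */
    } u;
};

/* One lookup: the caller has already computed an index (e.g. from the king
   square or a pawn-shelter pattern), so only one element is touched.        */
static int lookup(const TableEntry *table, int index, int sub_index,
                  const Position *pos)
{
    const TableEntry *e = &table[index];

    switch (e->kind) {
    case SCORE:    return e->u.score;
    case SUBTABLE: return e->u.subtable[sub_index].u.score;  /* assumes the
                                                                sub-entry is
                                                                a plain score */
    case ROUTINE:  return e->u.routine(pos);
    }
    return 0;
}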
>This would become awfully slow once you've got a few thousand lines in the table - and would certainly explain why your comp v comp score goes down when you turn the "knowledge level" setting up.

>In the case of my model program, you would have to calculate all the indexes for the general position type, and then match them against the values for these indexes in those db records for which these indexes existed. Then you would, from this matching, pick the best eval fns and just run those (the "matching" process would also give you your weightings).

>Once you've got several thousand eval fns, my method could be faster than yours (I assume that your "eval table rows" number in the hundreds, not the thousands).

>Moreover, if your eval knowledge is in the form of a table of values and weightings, I further assume that your evaluations are done in some subroutines (or functions), for which the data is provided in these tables.

>This is fine (a good idea, even!), but if you find that you want to get some knowledge that your subroutines don't provide, you have to write more subroutines - it all becomes a bit unwieldy to manage.

>My proposal (I admit I haven't worked out the details of how yet...) is to somehow store the evaluation function itself in the eval fn record, making it all wonderfully well organised!

>-g

>>Ed

>>>>I think you really overestimate the possibilities of a computer. A computer is a stupid device that knows nothing; it cannot learn (like we do) unless you teach it; it can't recognize even the most simple pattern (like we can); all it is able to do is add / subtract / move data / compare data, and do that 100-500 million times a second without making a mistake. That's all you have as a starting point.

>>>I disagree. You can buy good database products, for example. What you've said in the above paragraph is like saying that when you teach a child, all you've got is 10^11 neurons as a starting point. Fortunately, humans have a significant amount of pre-programmed ability to learn.

>>>If you want a computer to be able to learn without having to program it yourself, there are products you can buy to do that as well.

>>>This is the 21st century. You don't have to do it all yourself anymore! Collaboration will enable us all to achieve more...

>>>-g

>>>>Ed

>>>>>For every piece of knowledge, because it is in a database, you could search for it in many ways. Without losing anything in the game, you could even categorise the db records (e.g. category level 1 = pawn structure, king safety, etc.; category level 2 would be a list of records pertaining to king safety, etc.).

>>>>>If I did build this system, I would certainly include a good text description for each knowledge record in the db, and a short, concise title.

>>>>>Best of all, when you want to know what's happening with your eval in a particular position, you've got a ready-made function which will list the knowledge records the system is going to use in the eval, ordered by their weightings!

>>>>>You'd also be able to see, also in order of weighting, those knowledge records that didn't quite make it into the eval.

>>>>>What could be better than that?

>>>>>Do you really believe that my ideas have nothing to contribute to the knowledge management problem?

>>>>>-g