Author: Dann Corbit
Date: 14:42:53 06/03/02
On June 01, 2002 at 15:49:24, Rolf Tueschen wrote:

>On May 31, 2002 at 21:07:17, Dann Corbit wrote:

>>On May 31, 2002 at 21:00:44, Rolf Tueschen wrote:

>>>On May 31, 2002 at 20:35:38, Dann Corbit wrote:

>>>>On May 31, 2002 at 20:24:35, Rolf Tueschen wrote:

>>>>>On May 31, 2002 at 20:02:37, Dann Corbit wrote:

>>>>>>On May 31, 2002 at 19:22:27, Rolf Tueschen wrote:

>>>>>>>On May 31, 2002 at 19:01:53, Dann Corbit wrote:

>>>>>>>>Since people are so often confused about it, it seems a good idea to write a FAQ. Rolf's questions could be added, and a search through the CCC archives could find some more.

>>>>>>>>Certainly the games against the old opponents are always a puzzle to newcomers who do not understand why calibration against an opponent of precisely known strength is of great value.

>>>>>>>No pun intended, but excuse me, you can't mean it this way! Are we caught in a new circle? How can the older program be precisely known in its strength? Of course it isn't! Because it had the same status the new ones have today...

>>>>>>>And all the answers from Bertil follow that same fallacious line. It's a pity!

>>>>>>>Also, what is calibration in SSDF? Comparing the new unknown with the old unknown? No pun intended.

>>>>>>>Before making such a FAQ let's please find some practical solutions for SSDF.

>>>>>>The older programs have been carefully calibrated by playing many hundreds of games. Hence, their strength in relation to each other and to the other members of the pool is very precisely known.

>>>>>>The best possible test you can make is to play an unknown program against the best known programs. This will arrive at an accurate ELO score faster than any other way. Programs that are evenly matched are not as good as programs that are somewhat mismatched. Programs that are terribly mismatched are not as good as programs that are somewhat mismatched.

>>>>>>If I have two programs of exactly equal ability, it will take a huge number of games to get a good reading on their strength in relation to one another. On the other hand, if one program is 1000 ELO better than another, then one or two fluke wins will drastically skew the score. An ELO difference of 100 to 150 is probably just about ideal.

>>>>>I don't follow that at all. Perhaps it's too difficult, but I fear that you are mixing things up. You're arguing as if you _knew_ already that the one program is 1000 points better. Therefore 2 games are OK for you. But how could you know this in SSDF? And also, why do you test at all, if it's that simple?

>>>>No. You have a group of programs of very well known strength. The ones that have played the most games are the ones where the strength is precisely known.

>>>I can't accept that.

>>Mathematics cares nothing about your feelings.

>Dann Corbit, will you please realize that maths won't help with the validity hole! ;-)
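To put a rough number on the calibration point made a few paragraphs above: if an opponent's rating is already well established, a newcomer's rating can be estimated directly from its score against that opponent, assuming the standard logistic Elo relation. A minimal C sketch with invented numbers (the 2400 opponent and the 31/22/47 results are purely illustrative, not SSDF data):

    /* Sketch only: estimating a newcomer's rating from its score against one
       well-calibrated opponent, assuming the usual logistic Elo relation.
       The opponent rating and the match results are invented. */
    #include <stdio.h>
    #include <math.h>

    /* Implied rating difference (newcomer minus opponent) for a score
       fraction s, with 0 < s < 1. */
    double diff_from_score(double s)
    {
        return 400.0 * log10(s / (1.0 - s));
    }

    int main(void)
    {
        double opponent_rating = 2400.0;                  /* assumed, well established */
        double wins = 31.0, draws = 22.0, losses = 47.0;  /* invented results          */
        double games = wins + draws + losses;
        double score = (wins + 0.5 * draws) / games;      /* points per game           */

        printf("score %.3f -> estimated rating %.0f\n",
               score, opponent_rating + diff_from_score(score));
        /* With the numbers above this prints roughly 2344. */
        return 0;
    }

The better established the opponent's 2400 is, the more firmly the newcomer's estimate is anchored by it; that is the sense in which opponents of precisely known strength are valuable.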
>>>>Here is a little table:

>>>>Win expectancy for a difference of 0 points is 0.5
>>>>Win expectancy for a difference of 100 points is 0.359935
>>>>Win expectancy for a difference of 200 points is 0.240253
>>>>Win expectancy for a difference of 300 points is 0.15098
>>>>Win expectancy for a difference of 400 points is 0.0909091
>>>>Win expectancy for a difference of 500 points is 0.0532402
>>>>Win expectancy for a difference of 600 points is 0.0306534
>>>>Win expectancy for a difference of 700 points is 0.0174721
>>>>Win expectancy for a difference of 800 points is 0.00990099
>>>>Win expectancy for a difference of 900 points is 0.00559197
>>>>Win expectancy for a difference of 1000 points is 0.00315231

>>>>Notice that for a 1000 ELO difference the win expectancy is only 0.3%.

>>>I see. So, that is the Elo calculation for human chess, right? What is giving you the confidence that it works for computers the same way?

>>The math does not care at all about the players. Human, machine, hybrid, monkey.

>Sure? And what if strength is _not_ following the so-called normal distribution for machine, hybrid and an awful lot of monkeys? Maths is one thing and reflection another (if and how maths should be started).

>>>>Therefore, if one thousand games are played between two engines with a 1000 ELO difference, any tiny discrepancy will be multiplied. So if in 1000 games, instead of winning 3 points (as would be expected, against 997 for the better program), 5 points or no points were won, it would be a 100 ELO error!

>>>>Hence, if the program we are testing against is exactly 1000 ELO worse than the one of known strength, we will have problems with accuracy. The upshot is that it is a tremendous waste of time to play them against each other because very little information is gleaned.

>>>This is all OK.

>>>>On the other hand, when the programs are exactly matched, then the win expectancy is that they will be exactly even. However, because of randomness this is another area of great trouble. Imagine a coin toss. It is unlikely that you will get ten heads in a row, but sometimes it happens. So with exactly matched programs, random walks can cause big inconsistencies. Therefore, with evenly matched engines it is hard to get an excellent figure for strengths.

>>>I see. Here it goes again. You want to get validity through matching tricks. But excuse me another time, that won't function! I never heard of such tricky magic. You don't have any numbers for strength out of SSDF until now. Just a little thought experiment from my side. I would suppose that the better program in computer chess will always have a 100% winning chance or expectancy. The only thing that will disturb this is chess itself. The better prog could have bad luck. But it is the better prog nevertheless.

>>Kasparov can lose to a much weaker player. But against dozens of weaker players with hundreds of games it is not going to happen. Similarly for computers.

>Sure? And what if the behaviour of machines is deterministic?

>21, 22, 23... Bingo!

>That is why I'm talking about the fallacies in SSDF! It's uninteresting. In human chess, however, we could still wait for the exceptions. But Kasparov still won't lose against a _much_ weaker human player. Against a machine, yes, perhaps. :)
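A side note on the table quoted above: those win-expectancy figures appear to follow the standard logistic Elo formula, E = 1 / (1 + 10^(D/400)), where D is the rating difference in favour of the opponent. A minimal C sketch that produces essentially the quoted figures (illustration only):

    /* Sketch: reproducing the quoted win-expectancy table, assuming the
       standard logistic Elo formula.  D is the rating difference in favour
       of the opponent, so the result is the weaker side's expectancy. */
    #include <stdio.h>
    #include <math.h>

    double win_expectancy(double d)
    {
        return 1.0 / (1.0 + pow(10.0, d / 400.0));
    }

    int main(void)
    {
        int d;
        for (d = 0; d <= 1000; d += 100)
            printf("Win expectancy for a difference of %4d points is %g\n",
                   d, win_expectancy((double)d));
        /* At d = 1000 this is about 0.00315, i.e. roughly 3 points expected
           from 1000 games, so a couple of fluke results move the implied
           rating difference by a large amount. */
        return 0;
    }

At a difference of 100 points the weaker side gets about 36%, so the stronger side gets about 64%, which is the figure that appears further down in this post.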
If the behaviour of machines were perfectly deterministic, then they would always play the same games, over and over. Program 'A' (if it won a single time) would win *every* game against program 'B', because they would be in a lock-step identical dance in every game.

To solve this problem, computer programmers introduce randomness. Usually, it starts like this:

    ...
    srand((unsigned)time(NULL));
    ...

The above call uses the system clock to get a new seed for the random number generator. This means that a different sequence of random numbers will be used each time someone runs the program (actually, probably only 4 billion different sequences, but the hard drives will run out long before they are exhausted). Then, as the game is played, decisions will be made with calls to the rand() function, with probability weights assigned as a function of goodness. Hence, from a group of (perhaps) the top 5 moves, the one that looks best will happen 75% of the time, but the others will happen sometimes too. Now, as we go from move to move, if the other moves have even a 5% chance of happening, then the chance of repeating a long game is basically zero.

In addition, computer chess programs often learn as they play. Not smart learning of principles like humans do, but rote memorization of mistakes. So they won't make the same mistakes over and over. That is one reason why GMs' access to computers won't mean instant destruction for the computers.

>>>Now please take my experiment for a moment into your reflections. You see what I mean with the nonsense in SSDF? In SSDF you can't differentiate what is strength and what is chess, because you have no validity. Know what I mean? If yes, please will you explain it to Bertil and SSDF?

>>I am afraid that you aren't going to get it. I would suggest an elementary statistics text.

>OK, we can stop it this instant. Just say the word, please. You have the power! But until then I'll claim that you haven't got what I'm talking about when I speak about validity! Either tell me, please, where you see validity in SSDF, or please stop the continual applause for SSDF. It's not justified.

The model is mathematically valid. You don't understand that. Fine. A model is valid if it can predict outcomes. In fact, the model accurately predicts outcomes on a broad basis. In other words, if one program is 100 ELO above some other programs, it will win about 64% of the points in a very long match against the weaker group. For this purpose, the SSDF results are (in fact) most excellent.

The SSDF results do not predict how the machines will do against people. On the other hand, we can deduce that there will be *some* sort of correlation between computer/computer strength and computer/human strength. Unfortunately, we cannot say what that correlation is without experimentation and data. We can only guess.

The SSDF list produces this: if you play a sequence of computer programs taken from the SSDF list under the exact conditions of the SSDF matches, you will get similar behavior. If anyone thinks it produces more than that, they are mistaken. We can suppose that correlations of computer strength translate to games against humans, but that is an untested hypothesis.
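A minimal sketch of the move randomization described near the top of this post: seed once from the clock, then choose among the best few candidate moves with weights tied to how good they look. The moves and percentages below are invented for illustration (only the "75% for the best of the top 5" figure is taken from the text above); a real engine would derive the weights from its evaluation.

    /* Sketch: clock-seeded randomness plus a weighted choice among the top
       candidate moves.  The moves and weights are invented. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    struct candidate {
        const char *move;
        int weight;                    /* selection chance out of 100 */
    };

    int main(void)
    {
        struct candidate top5[] = {
            { "e2e4", 75 },            /* the move that looks best: 75% of the time */
            { "d2d4", 10 },
            { "c2c4",  8 },
            { "g1f3",  5 },
            { "b1c3",  2 }
        };
        int n = sizeof top5 / sizeof top5[0];
        int i, total = 0, r;

        srand((unsigned)time(NULL));   /* different seed on (nearly) every run */

        for (i = 0; i < n; i++)
            total += top5[i].weight;   /* 100 with the numbers above */

        r = rand() % total;            /* pick a point in the cumulative range */
        for (i = 0; i < n; i++) {
            if (r < top5[i].weight) {
                printf("playing %s\n", top5[i].move);
                break;
            }
            r -= top5[i].weight;
        }
        return 0;
    }

Because the non-best moves always keep some probability, two runs of the same pairing diverge within a few moves, which is the point being made above.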
>>>>On the other, other hand, if the strength differs by 100 ELO or so, a pattern will quickly form. This is an excellent way to rapidly gain information.

>>>>>That has nothing to do with correct testing. At first we must secure that everyone is treated equally, with equal chances. Each program must have the same chances to play _exactly_ the same other programs, under the same conditions, etc.

>>>>I agree that this is the ideal experimental design. But we must be very careful not to move the knobs on our experiment or the data becomes worthless.

>>>Apart from the game scores, I'm afraid, the whole SSDF rankings have no meaning at all, Dann!

>>They have meaning for the data set involved under the precise conditions of the tests. If they lack meaning for you, that simply means that you do not understand it.

>Again, this is not _my_ invention. Without validity you have nothing at all, just fine rankings; yes, it looks nice.

You do not understand what you are saying. Or the mathematics escapes you. In any case, there are no difficulties with the validity of the SSDF data beyond what is normally seen in experimental setups.

>>>>For instance, you mentioned that the books have been updated, so why don't we use the new books with the old programs? The reason is that it completely invalidates our old data! We have changed a significant variable, and now we can draw no conclusions whatsoever about the strength of our new combination. We would have to recalibrate from scratch. In addition, it would be necessary for an end-user to have both the new and the old version of the program in order to replicate the tests. Furthermore, there are a huge number of possible hybrid combinations. Who is going to spend the centuries to test them all?

>>>Hopefully nobody! It doesn't make sense. The only calibration is the one with human chess. Otherwise your Elo has no meaning in SSDF. The actual calibration is more like homeopathy.

>>Are you going to pay the millions of dollars it would take for GMs to play tens of thousands of games at 40/2 against computers?

>This is, excuse me, nonsense. BTW the companies will do what they can, but they won't pay for the revelation that their progs are just average masters, neither IM nor GM. (Please read my contribution to Andrew Dados' joke. I'm talking about a real group-like fight of GMs vs. comps to develop real anti-computer chess, not just a few cooked lines.)

This is an interesting hypothesis and it certainly has merit. However, there is no connection whatever to my statement. If they *DID* run the experiments, then we would get good data. I make no statements, guesses or extrapolations as to whether anyone ever will actually do it.

>But here is how I would advise SSDF to proceed. Invite a good human expert with computer experience. He has a defined Elo. Let him play the new progs, but I'm talking about the "hard" version of play.

Why only anti-comp experts and not a broad field? Don't you think this will skew the result?

>Not PR bogus! Then they have, +/- something, their Elo. So, all players all over Sweden are invited to take part. Comp vs comp is bogus and won't produce valid Elo numbers, BTW, no matter how much genius you might put into your broadness debate about pools...

They produce perfectly valid comp/comp numbers. They do not produce comp/human correlations.

>>>>>But I must apologize if this sounded as if I wanted to teach you stats. You know that yourself. No?

>>>>I'm afraid I don't understand this last sentence at all.

>>>I thought that you studied statistics. So I don't like teaching you.

>>My degree is in Numerical Analysis. If you can teach me something about statistics, I will be only too happy to learn it.

>I already tried. I tried to explain how important the reflections are prior to mere calculation in stats. I'm talking about statistical design.
>If you begin there with nonsense, then the best maths (and maths itself is always GOOD!) won't help you out of the mess. That is the important thing to understand if you want to learn something about stats. The collection of hundreds of formulas is not really important. But most people confuse the topics. Once someone wrote to me in a posting: but maths is maths, and if it functions in human chess then it also functions in computer chess. Well, this is simply wrong. Of course it functions, but what is the meaning of the results? That is the question.

>Please do not take me for arrogant if it may sound so. For me this is so trivial that I can't explain it in the didactically best way.

I find the discussions with you both interesting and fruitful.