Computer Chess Club Archives



Subject: Re: note

Author: Ralph Stoesser

Date: 10:56:15 07/07/03



On July 07, 2003 at 09:24:58, Vincent Diepeveen wrote:

>On July 07, 2003 at 07:14:39, Ralph Stoesser wrote:
>
>I remember a time that in computerchess some very scientific university programs
>won the world title.
>
>That was when the programs could search like up to 9 ply or so.
>
>Since then Rebel won the open world title, and it has been a PC program from then
>on. This year the only scientific program competing with more than 1 processor will
>be DIEP. However, full-time work has been put into the engine for quite some time,
>and I doubt whether you can see it as only scientific.
>
>Going not by the money-generating ICGA definition but by the time invested, the
>world title will be won by a pro for sure this year. It has been like that for the
>last 13 years too.
>
>That is because they can earn their living creating a chess engine.
>
>In short, when search depths get a bit deeper than they are now, and when it
>becomes possible to make a living selling a backgammon engine, then all the
>amateurish ANN crap will be gone for sure.

One problem for backgammon programming is that much less theory about what good
play means is available than for chess. For chess we have many patterns, rules,
books, and databases on how to play a decent game. For backgammon such material is
harder to find. Especially for a position type called 'backgame', which is very
complicated to handle accurately, there are no good theory books available, if I'm
not mistaken. I think that's the main reason why ANNs are so successful there: to
be able to hand-tune a backgammon evaluation you would need that kind of concrete
information. But nevertheless I can also see your point. In backgammon programming
there is not as much commercial competition as there is in chess programming,
that's true.

>
>Note that sometimes you can only sell your stuff by saying you do the same as the
>competitor. I do not know to what extent that is the case in the wording of the
>backgammon people.
>
>Just like Ed Schroder one day sold Rebel using an 'anti-GM' feature. To this day
>we do not know what it is, except that he denied it being a simple thing like
>opening the position a bit more. It is possibly a commercial vehicle to sell
>something already existing, that's all.
>
>Note that people from other sports claim that search depth alone solved chess,
>referring to Deep Blue. So I get the impression you are the same type of guy in
>this case.

I don't see the point of your last paragraph. I've never claimed anything like
that, but to answer your implicit question:
no, I wouldn't claim such a thing, though I think it's basically a question of
belief, since we don't have enough games from Deep Blue.
By the way, before I wrote my first message here I read a great many threads.
I've surely read some hundreds of the posts referring to Deep Blue, and they were
a lot of fun to read.

greetings,
Ralph


>
>>On July 07, 2003 at 05:50:43, Vincent Diepeveen wrote:
>>
>>>On July 07, 2003 at 01:49:50, Ralph Stoesser wrote:
>>>
>>>Go to an average backgammon tournament and you'll see many chess players at the
>>>top. Of course, don't try that in Greece; there are not enough chess players
>>>there. But even there the few chess players will nearly always win the
>>>tournament.
>>
>>Is this true for the very best bg players in the world, let's say for the top
>>20?
>>
>>>
>>>The doubling cube action is something that hand tuning will completely outgun of
>>>course.
>>
>>It gets a bit off-topic since we are talking about backgammon all the time, but
>>to clarify one thing about the cube handling: accurate doubling cube action
>>depends strictly on an accurate evaluation of the position in question. The math
>>for the cube action _behind_ the board evaluation is relatively simple and
>>certainly nothing for NN-based tuning, but as a precondition for accurate cube
>>handling you need an accurate evaluation number for the current position, where
>>the evaluation number means the average outcome in winning points when playing
>>the position in question infinitely many times. And in the field of evaluating
>>backgammon positions it has been found (so far) that NN-based evaluation tuning
>>does better than hand-tuned evaluation. This is obvious, because otherwise the
>>best bg programs would not be NN-based.
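The split described above, simple cube math on top of an accurate equity estimate, can be sketched with the textbook "dead cube" money-game model. This is an illustrative simplification assuming a cubeless win probability with no gammons and no recube value; real programs such as Snowie use more refined formulas.

```python
def cube_action(p_win):
    """Doubling decision in the simplified 'dead cube' money game.

    p_win: cubeless probability that the player on roll wins
    (no gammons, no recubes assumed -- a deliberate simplification).

    Taking a double risks 2 points against winning 2, while passing
    costs 1, so the classical take point is a 25% winning chance:
    the opponent should pass only once p_win exceeds 0.75.
    """
    if p_win < 0.5:
        return ("no double", "take")  # doubling this early gives equity away
    response = "take" if p_win <= 0.75 else "pass"
    return ("double", response)
```

With the evaluation number in hand, the decision itself is a couple of comparisons, which matches the point that the hard part is the evaluation, not the cube math.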
>>
>>greetings,
>>Ralph
>>
>>>
>>>You must compare it with casino games. As soon as there is a lot of money
>>>involved all the chances there get written down by *hand* even. Also when it's
>>>thousands of possibilities.
>>>
>>>Now in games of backgammon, a lot of money at the topboards is involved, but
>>>that's because of people betting at their games, not getting paid for an engine.
>>>
>>>Cheers,
>>>Vincent
>>>
>>>>On July 06, 2003 at 21:26:31, Vincent Diepeveen wrote:
>>>>
>>>>>On July 06, 2003 at 17:51:53, Ralph Stoesser wrote:
>>>>>
>>>>>>On July 06, 2003 at 17:38:01, Vincent Diepeveen wrote:
>>>>>>
>>>>>>>On July 06, 2003 at 16:21:05, Uri Blass wrote:
>>>>>>>
>>>>>>>>On July 06, 2003 at 15:42:25, Vincent Diepeveen wrote:
>>>>>>>>
>>>>>>>>>On July 06, 2003 at 08:00:48, Uri Blass wrote:
>>>>>>>>>
>>>>>>>>>>On July 06, 2003 at 03:04:07, Christophe Theron wrote:
>>>>>>>>>>
>>>>>>>>>>>On July 06, 2003 at 01:15:41, Uri Blass wrote:
>>>>>>>>>>>
>>>>>>>>>>>>On July 06, 2003 at 00:25:49, Uri Blass wrote:
>>>>>>>>>>>><snipped>
>>>>>>>>>>>>>>Maybe using it for the evaluation is not the most efficient use of a neural
>>>>>>>>>>>>>>network in a chess program. It seems that the way human players manage to search
>>>>>>>>>>>>>>the tree is vastly underestimated.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>    Christophe
>>>>>>>>>>>>>
>>>>>>>>>>>>>I agree with you that search is underestimated in chess, but I also believe
>>>>>>>>>>>>>that search and evaluation are connected, because a lot of search decisions are
>>>>>>>>>>>>>based on evaluations of positions that are not leaf positions, so you cannot
>>>>>>>>>>>>>separate them and say search improvement gives x Elo and evaluation improvement
>>>>>>>>>>>>>gives y Elo.
>>>>>>>>>>>>>
>>>>>>>>>>>>>Uri
>>>>>>>>>>>>
>>>>>>>>>>>>I know that you did not try to separate them, but my point is that if you
>>>>>>>>>>>>want to do the same as humans in the search, then changing the search is not
>>>>>>>>>>>>enough.
>>>>>>>>>>>>
>>>>>>>>>>>>Humans may search a position for some seconds and decide that it is not
>>>>>>>>>>>>good, and later search the same position but decide that it is good for them,
>>>>>>>>>>>>not because they search deeper but because they have learned to change their
>>>>>>>>>>>>evaluation based on searching other lines that led to a similar position.
>>>>>>>>>>>>
>>>>>>>>>>>>Uri
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>Well my point is just that when people talk about an application of ANN in chess
>>>>>>>>>>>they always talk about implementing the evaluation with an ANN, or tuning the
>>>>>>>>>>>evaluation with them.
>>>>>>>>>>>
>>>>>>>>>>>I think it tends to show that the application of ANN to chess has never been
>>>>>>>>>>>done by a "real" chess programmer. Because evaluation is only a part of a chess
>>>>>>>>>>>program. And maybe not the one that can be improved dramatically, or that needs
>>>>>>>>>>>them in order to be improved. Personally I would not use ANNs in the evaluation
>>>>>>>>>>>first, because I think they would be much more efficient somewhere else.
>>>>>>>>>>>
>>>>>>>>>>>On the other hand, you are right. If one could design an ANN to perform the
>>>>>>>>>>>evaluation, it would be wise to use the same ANN (or an extension of it) to
>>>>>>>>>>>guide the search.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>    Christophe
>>>>>>>>>>
>>>>>>>>>>I believe that the biggest advantage that can be achieved in evaluation is not
>>>>>>>>>>in changing the initial static evaluation but in learning to change the
>>>>>>>>>>evaluation during the game based on the results of the search.
>>>>>>>>>>
>>>>>>>>>>I also do not believe that what humans know is the target and the target should
>>>>>>>>>>be better than what humans know.
>>>>>>>>>>
>>>>>>>>>>Programs found better evaluations than humans in backgammon, and programs may
>>>>>>>>>>find better search rules than humans in chess, not because programs are smarter
>>>>>>>>>>but because programs can do trillions of calculations to learn and humans
>>>>>>>>>>cannot.
>>>>>>>>>>
>>>>>>>>>>Uri
>>>>>>>>>
>>>>>>>>>This is the same utter nonsense crap that I keep seeing AI people write. Yet on
>>>>>>>>>average they have even less experience than you and keep believing in something
>>>>>>>>>they can never prove. If they had even *toyed* with ANNs a bit,
>>>>>>>>>they would understand more about the impossibilities involved.
>>>>>>>>
>>>>>>>>I only say that I believe that it can be done.
>>>>>>>>It does not mean that I know how to do it.
>>>>>>>>
>>>>>>>>>
>>>>>>>>>Show me a backgammon program with an ANN that beats a 5-turn full-width
>>>>>>>>>searching backgammon program :)
>>>>>>>>>
>>>>>>>>>Of course show it at a machine that you and i have at home.
>>>>>>>>
>>>>>>>>Very easy:
>>>>>>>>the 5-turn full-width searching backgammon program is going to lose on time
>>>>>>>>every game.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>>The average ANN expert assumes he has at his disposal something that does
>>>>>>>>>10^1000 calculations.
>>>>>>>>
>>>>>>>>I am not ANN expert and I did not suggest ideas how to do it.
>>>>>>>>
>>>>>>>>>
>>>>>>>>>That is the major problem when talking to these guys.
>>>>>>>>>
>>>>>>>>>Of course you can optimize an ANN for chess in 10^1000 calculations.
>>>>>>>>>
>>>>>>>>>But you will then be beaten by a database of just 10^43.
>>>>>>>>>
>>>>>>>>>I am however sure that 99% of all those interested in ANNs will not understand
>>>>>>>>>what I wrote above, simply because they do not know the running time of the
>>>>>>>>>learning methods applied. If they read up on that, then less crap would leave
>>>>>>>>>their mouths.
>>>>>>>>
>>>>>>>>I did not say that the learning methods that are used in backgammon can work in
>>>>>>>>chess and it is possible that people need to invent different learning methods.
>>>>>>>>Uri
>>>>>>>
>>>>>>>If there were money to be earned by programming a backgammon engine, I am sure
>>>>>>>some guys who are good at forward-pruning algorithms, like Johan de Koning, would
>>>>>>>win every event there. It's like making a tic-tac-toe program and then claiming
>>>>>>>that an ANN is going to work.
>>>>>>
>>>>>>Version 4 Professional edition, full version USD 380
>>>>>>from http://www.snowie4.com/
>>>>>>
>>>>>>Do you know the rules of backgammon? Remember, you have to consider two dice in
>>>>>>your search tree. If it's so easy to do better without a NN, do it and you will
>>>>>>earn a lot of USD. Usually backgammon players have more money in their pockets
>>>>>>than chess players ;)
>>>>>
>>>>>There are so few backgammon players, however.
>>>>
>>>>
>>>>
>>>>>If you go to a backgammon
>>>>>tournament I pay something like a 250 euro entry fee. It is sick. Every good
>>>>>chess player can trivially play backgammon very well.
>>>>>
>>>>>It is a matter of good % calculation and chances. This is trivial stuff.
>>>>
It isn't trivial. How do you explain that all top backgammon programs use NNs?
Shouldn't some trivial statistical calculation be enough? In backgammon you not
only have the problem of finding the best move (which is also not trivial), but
also of finding the right cube action for the doubling cube, and that is very,
very far from being trivial. And why should a good chess player trivially be able
to play backgammon very well? I would agree that being a good chess player can
help in learning backgammon, but there is nothing like the implication you made.
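To make the chance nodes in that search tree concrete: each ply of a backgammon search averages over the 21 distinct rolls of the two dice, doubles with weight 1/36 and non-doubles with weight 2/36. A minimal enumeration (illustrative only, not taken from any particular engine):

```python
from fractions import Fraction

def dice_rolls():
    """Yield the 21 distinct backgammon rolls with their probabilities.
    A non-double like 6-5 comes up two ways out of 36; a double like
    6-6 comes up only one way."""
    for hi in range(1, 7):
        for lo in range(1, hi + 1):
            yield (hi, lo), Fraction(1 if hi == lo else 2, 36)
```

An expectimax search weights each successor position by these probabilities, which is why the effective branching factor (21 rolls times the legal move sets for each) is so much larger than in chess.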
>>>>
>>>>
>>>>>If there were big bucks to be earned with just the ENGINE (so I do not mean the
>>>>>interface), then many chess programmers would be writing such an engine ;)
>>>>
Btw:
Are there big bucks to be earned with a chess engine?
>>>>
>>>>>
>>>>>>Ralph
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>As we have a saying here: "In the land of the blind, one eyed is King".
>>>>>>>
>>>>>>>That's why i focus upon chess.
>>>>>>>
>>>>>>>In contrast to you, I know how to do it with ANNs (just like many others do);
>>>>>>>I just don't have 10^1000 units of system time to actually let the learning
>>>>>>>algorithm finish ;)
>>>>>>>
>>>>>>>Any approximation in the meantime will be playing very lousy chess...
>>>>>>>
>>>>>>>Hell, with 10^1000 runs, even TD learning might be correctly finding the right
>>>>>>>parameter optimization :)
>>>>>>>
>>>>>>>TD learning is randomly flipping a few parameters each time. It's pretty close
>>>>>>>to GA's in that respect.
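For reference, the canonical TD(0) update (the family TD-Gammon used, with a neural network rather than the linear model assumed here) adjusts the weights along the value gradient, scaled by the temporal-difference error. A minimal sketch; all names are illustrative:

```python
def td0_update(weights, features_t, features_t1, reward, alpha=0.01, gamma=1.0):
    """One TD(0) step for a linear value function V(s) = w . f(s).

    The whole weight vector moves along the gradient of V at the
    earlier state, scaled by the temporal-difference error delta:
    a deterministic gradient step.
    """
    v_t = sum(w * f for w, f in zip(weights, features_t))
    v_t1 = sum(w * f for w, f in zip(weights, features_t1))
    delta = reward + gamma * v_t1 - v_t          # TD error
    return [w + alpha * delta * f for w, f in zip(weights, features_t)]
```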




Last modified: Thu, 07 Jul 11 08:48:38 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.