Author: Vincent Diepeveen
Date: 18:26:31 07/06/03
On July 06, 2003 at 17:51:53, Ralph Stoesser wrote:

>On July 06, 2003 at 17:38:01, Vincent Diepeveen wrote:
>
>>On July 06, 2003 at 16:21:05, Uri Blass wrote:
>>
>>>On July 06, 2003 at 15:42:25, Vincent Diepeveen wrote:
>>>
>>>>On July 06, 2003 at 08:00:48, Uri Blass wrote:
>>>>
>>>>>On July 06, 2003 at 03:04:07, Christophe Theron wrote:
>>>>>
>>>>>>On July 06, 2003 at 01:15:41, Uri Blass wrote:
>>>>>>
>>>>>>>On July 06, 2003 at 00:25:49, Uri Blass wrote:
>>>>>>><snipped>
>>>>>>>>>Maybe using it for the evaluation is not the most efficient use of a neural
>>>>>>>>>network in a chess program. It seems that the way human players manage to
>>>>>>>>>search the tree is vastly underestimated.
>>>>>>>>>
>>>>>>>>>    Christophe
>>>>>>>>
>>>>>>>>I agree with you that search is underestimated in chess, but I also believe
>>>>>>>>that search and evaluation are connected, because many search decisions are
>>>>>>>>based on evaluations of positions that are not leaf positions, so you cannot
>>>>>>>>separate them and say that a search improvement gives x Elo and an evaluation
>>>>>>>>improvement gives y Elo.
>>>>>>>>
>>>>>>>>Uri
>>>>>>>
>>>>>>>I know that you did not try to separate them, but my point is that if you
>>>>>>>want to do the same as humans in the search, then changing the search is not
>>>>>>>enough.
>>>>>>>
>>>>>>>Humans may search a position for a few seconds and decide that it is not
>>>>>>>good, and later search the same position and decide that it is good for them,
>>>>>>>not because they search deeper but because they have learned to change their
>>>>>>>evaluation based on searching other lines that led to a similar position.
>>>>>>>
>>>>>>>Uri
>>>>>>
>>>>>>Well, my point is just that when people talk about an application of ANNs in
>>>>>>chess, they always talk about implementing the evaluation with an ANN, or
>>>>>>tuning the evaluation with one.
>>>>>>
>>>>>>I think that tends to show that the application of ANNs to chess has never
>>>>>>been done by a "real" chess programmer, because evaluation is only one part of
>>>>>>a chess program, and maybe not the part that can be improved dramatically, or
>>>>>>that needs ANNs in order to be improved. Personally I would not use ANNs in
>>>>>>the evaluation first, because I think they would be much more useful somewhere
>>>>>>else.
>>>>>>
>>>>>>On the other hand, you are right: if one could design an ANN to perform the
>>>>>>evaluation, it would be wise to use the same ANN (or an extension of it) to
>>>>>>guide the search.
>>>>>>
>>>>>>    Christophe
>>>>>
>>>>>I believe that the biggest advantage that can be achieved in evaluation is not
>>>>>in changing the initial static evaluation but in learning to change the
>>>>>evaluation during the game based on the results of the search.
>>>>>
>>>>>I also do not believe that what humans know is the target; the target should
>>>>>be better than what humans know.
>>>>>
>>>>>Programs found a better evaluation than humans in backgammon, and programs may
>>>>>find better search rules than humans in chess, not because programs are
>>>>>smarter but because programs can do trillions of calculations to learn and
>>>>>humans cannot.
>>>>>
>>>>>Uri
>>>>
>>>>This is the same utter nonsense that I keep seeing AI people write. Yet on
>>>>average they have even less experience than you, and they keep believing in
>>>>something they can never prove can be built. If they had even *toyed* with
>>>>ANNs a bit, they would understand more about the impossibilities involved.
>>>
>>>I only say that I believe it can be done.
>>>It does not mean that I know how to do it.
>>>
>>>>Show me a backgammon program with an ANN that beats a 5-turn fullwidth
>>>>searching backgammon program :)
>>>>
>>>>Of course, show it on a machine that you and I have at home.
>>>
>>>Very easy:
>>>the 5-turn fullwidth searching backgammon program is going to lose on time
>>>every game.
>>>
>>>>The average ANN expert assumes he has at his disposal something doing
>>>>10^1000 calculations.
>>>
>>>I am not an ANN expert and I did not suggest ideas for how to do it.
>>>
>>>>That is the major problem when talking to these guys.
>>>>
>>>>Of course you can optimize an ANN for chess in 10^1000 calculations.
>>>>
>>>>But you will then be beaten by a database of just 10^43.
>>>>
>>>>I am, however, sure that 99% of all those interested in ANNs will not
>>>>understand what I write above, simply because they do not know the running
>>>>time of the learning methods applied. If they read up on that, less nonsense
>>>>would leave their mouths.
>>>
>>>I did not say that the learning methods that are used in backgammon can work
>>>in chess, and it is possible that people need to invent different learning
>>>methods.
>>>Uri
>>
>>If there were money to be earned by programming a backgammon engine, I am sure
>>some guys who are good at forward-pruning algorithms, like Johan de Koning,
>>would win every event there. It's like making a tic-tac-toe program and then
>>claiming that an ANN is going to work.
>
>Version 4 Professional edition, full version USD 380
>from http://www.snowie4.com/
>
>Do you know the rules of backgammon? Remember, you have to consider two dice in
>your search tree. If it's so easy to do better without an NN, do it and you
>will earn a lot of USD. Usually backgammon players have more money in their
>pockets than chess players ;)

There are so few backgammon players, however. If you go to a backgammon
tournament you pay something like a 250 euro entry fee. It is sick. Every good
chess player can play backgammon very well, trivially. It is a matter of good
percentage calculation and chances; this is trivial stuff. If there were big
bucks to earn with just an ENGINE (so I do not mean the interface), then many
chess programmers would be writing such an engine ;)

>Ralph
>
>>As we have a saying here: "In the land of the blind, the one-eyed man is king."
>>
>>That's why I focus on chess.
>>
>>In contrast to you, I know how to do it with ANNs (just like many others do);
>>I just don't have 10^1000 of system time to actually let the learning
>>algorithm finish ;)
>>
>>Any approximation in the meantime will be playing very lousy chess...
>>
>>Hell, with 10^1000 runs, even TD learning might correctly find the right
>>parameter optimization :)
>>
>>TD learning is randomly flipping a few parameters each time. It's pretty close
>>to GAs in that respect.
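For readers unfamiliar with the "two dice" remark and the "5-turn fullwidth" challenge above: a brute-force backgammon search has to expand a chance node over the 21 distinct dice rolls before it branches over the legal moves, which is what makes deep fullwidth search so much more expensive than in chess. Below is a minimal expectiminimax sketch in C; the board/move types and the helper functions (generate_moves, apply_move, evaluate, roll_probability) are assumed placeholders, not code from any real engine.

#include <float.h>

#define MAX_MOVES          64   /* assumed upper bound on moves for one roll */
#define NUM_DISTINCT_ROLLS 21   /* 6 doubles + 15 non-double combinations    */

/* Placeholder types -- a real engine has its own board and move encoding. */
typedef struct { int point[26]; int side_to_move; } Position;
typedef struct { int from, to; } Move;

/* Assumed helpers (declarations only): */
int      generate_moves(const Position *p, int roll, Move out[]); /* returns count   */
Position apply_move(const Position *p, const Move *m);            /* flips the side  */
double   evaluate(const Position *p);                             /* side-to-move POV */
double   roll_probability(int roll);                              /* 21 probs sum to 1 */

/* Expected value of the position for the side to move, searching `depth`
 * plies fullwidth: average over every dice roll (chance node), maximize
 * over every legal move (decision node), negamax convention. */
double expectiminimax(const Position *p, int depth)
{
    if (depth == 0)
        return evaluate(p);

    double expected = 0.0;
    for (int roll = 0; roll < NUM_DISTINCT_ROLLS; roll++) {
        Move moves[MAX_MOVES];
        int n = generate_moves(p, roll, moves);

        double best = -DBL_MAX;
        for (int i = 0; i < n; i++) {
            Position child = apply_move(p, &moves[i]);
            double v = -expectiminimax(&child, depth - 1);
            if (v > best)
                best = v;
        }
        if (n == 0)                 /* no legal move for this roll: simplified */
            best = evaluate(p);

        expected += roll_probability(roll) * best;
    }
    return expected;
}

With roughly 20 legal ways to play an average roll, each fullwidth ply multiplies the tree by about 21 x 20, which is why the nominal depths quoted for backgammon engines stay small.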
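For reference on the TD learning mentioned above: the textbook TD(0) update, as popularized for backgammon by TD-Gammon, adjusts the evaluation weights in proportion to the error between successive predictions during self-play. A minimal self-contained sketch for a linear evaluation follows; the feature values and weights are made up for illustration and do not come from any engine.

#include <stdio.h>

#define NUM_FEATURES 4

/* Illustrative weights for a linear evaluation V(s) = w . features(s). */
static double weights[NUM_FEATURES] = { 0.10, -0.20, 0.05, 0.30 };

static double evaluate(const double features[NUM_FEATURES])
{
    double v = 0.0;
    for (int i = 0; i < NUM_FEATURES; i++)
        v += weights[i] * features[i];
    return v;
}

/* One TD(0) step: nudge the weights so that the evaluation of the current
 * position moves toward the evaluation of the successor position.
 * alpha is the learning rate; cur[i] is the gradient of a linear eval. */
static void td_update(const double cur[NUM_FEATURES],
                      const double next[NUM_FEATURES],
                      double alpha)
{
    double delta = evaluate(next) - evaluate(cur);   /* TD error */
    for (int i = 0; i < NUM_FEATURES; i++)
        weights[i] += alpha * delta * cur[i];
}

int main(void)
{
    /* Two made-up consecutive positions from a self-play game. */
    double cur[NUM_FEATURES]  = { 1.0, 0.0, 2.0, -1.0 };
    double next[NUM_FEATURES] = { 1.0, 1.0, 2.0, -1.0 };

    td_update(cur, next, 0.01);

    for (int i = 0; i < NUM_FEATURES; i++)
        printf("w[%d] = %f\n", i, weights[i]);
    return 0;
}

In TD-Gammon the same idea is applied through the network's gradients after every self-play move, with the game result supplying the target at the terminal position.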