Computer Chess Club Archives


Subject: Re: Artificial Intelligence in Computer Chess - *DETAILS* as promised

Author: Anthony Cozzie

Date: 08:14:17 03/29/04


On March 29, 2004 at 11:05:17, martin fierz wrote:

>On March 29, 2004 at 10:27:19, Anthony Cozzie wrote:
>
>>On March 29, 2004 at 10:01:55, martin fierz wrote:
>>
>>>On March 29, 2004 at 09:46:48, Uri Blass wrote:
>>>
>>>>On March 29, 2004 at 08:54:09, martin fierz wrote:
>>>>
>>>>>On March 29, 2004 at 07:04:13, Uri Blass wrote:
>>>>>
>>>>>>On March 29, 2004 at 06:25:28, martin fierz wrote:
>>>>>>
>>>>>>>On March 28, 2004 at 18:55:53, Artem Pyatakov wrote:
>>>>>>>
>>>>>>>hi artem,
>>>>>>>
>>>>>>>[snip]
>>>>>>>
>>>>>>>>>>On the other hand, I think a lot of researchers have been overly ambitious and
>>>>>>>>>>have tried to replace Alpha-Beta & tricks with a neural network or some totally
>>>>>>>>>>different approach. I think that with the current state of AI tools, these
>>>>>>>>>>efforts are bound to fail.
>>>>>>>>>
>>>>>>>>>A lot of researchers?  Other than myself, I don't know of any other workers who
>>>>>>>>>are attempting a complete and competitive chess playing program that doesn't
>>>>>>>>>tread the oft traveled A/B bag-of-tricks road.
>>>>>>>>
>>>>>>>>That's a good point (will probably include in the paper) - not many are working on a
>>>>>>>>complete chess program, since most just address a particular section, like the
>>>>>>>>evaluation function. But one old effort - MORPH - comes to mind, and I think I
>>>>>>>>noticed a couple of other failed Neural Net efforts (don't have references
>>>>>>>>handy, but I can look).
>>>>>>>
>>>>>>>steven is doing a completely radical departure from A/B. but many others are
>>>>>>>applying more gradual changes to the A/B-model, with reasonable success
>>>>>>>(smarthink, kaissa, gothmog come to mind, and probably most commercial engines
>>>>>>>too). at least these engines make considerably larger attempts to shape their
>>>>>>>game tree than a "classic" A/B searcher like crafty does.
>>>>>>>
>>>>>>>[snip]
>>>>>>>
>>>>>>>>As I mentioned before, I decided to concentrate my research on move ordering for
>>>>>>>>a couple of reasons:
>>>>>>>>1) Move ordering promises big payoffs if done right.
>>>>>>>
>>>>>>>how much deeper can you search if you reach 95% move ordering success (defined
>>>>>>>as cutoff @ 1st / all cutoffs) instead of ~90% that most get today? how much if
>>>>>>>you reach 100%?
>>>>>>
>>>>>>I cannot answer the 95% question but I can answer the 100%.
>>>>>>
>>>>>>If you reach 100% then always choosing the first move is enough because if the
>>>>>>first move does not generate a cutoff no move is going to generate a cutoff so
>>>>>>you can finish the search to the end of the game very quickly because you search
>>>>>>only one move in every ply.
>>>>>>
>>>>>>Uri
>>>>>
>>>>>you do not answer the question here - and what you answer is wrong too :-)
>>>>>always choosing the best move to search first means you search sqrt(N) moves per
>>>>>ply on average, not 1 move per ply.
>>>>
>>>>No
>>>>
>>>>In that case you do not need to search the other moves because you know that
>>>>they are worse.
>>>>
>>>>If you always start from the best move then the pruning strategy of not
>>>>searching moves except the first move is simply correct and I see no reason not
>>>>to use it.
>>>>
>>>>Uri
>>>
>>>come on uri, this is ridiculous. of course you can use that shortcut IF you know
>>>that your move ordering is 100% perfect. in that case you need no search at all
>>>either. obviously this is an academic question and has nothing to do with
>>>reality, and is not what i'm asking... the question remains: in the extremely
>>>unlikely case that you have 100% perfect move ordering (by accident), what do
>>>you gain compared to 90% perfect (in the usual sense of move ordering %age)?
>>>
>>>cheers
>>>  martin
>>
>>It would be possible to determine this experimentally - just keep track of each
>>position and the best move at each ply on the hard disk.  It wouldn't be
>>practical for use but it would be pretty interesting.
>>
>>anthony
>
>if i had to do this i would produce a mock A/B program which first randomly
>decides whether the node is a fail-high node (~50% of all cases i guess), and
>then again randomly decides whether it fails high on the first move (~90% of all
>cases), and finally decides to fail high somewhere in between move 2...N if not
>on the first move (perhaps with equal probability, or with decreasing
>probability from move 2...N). by playing around with the probabilities of these
>random decisions, you can easily determine the numbers i'm looking for, and see
>how they depend on the quality of your move ordering.
>
>i disagree though that this is not practical: for example, if you find that
>going from 90% to 91% move ordering (= fail high at first/total fail highs) means
>you need to search 50% fewer nodes at a typical search depth your engine reaches,
>then you definitely know you should work on your move ordering. if you find that
>you need to search 0.1% fewer nodes, you know that working on MO is a waste of
>time from now on. the true value will probably lie somewhere in between, but in
>any case, it gives you an idea of what you might gain with more expensive MO
>schemes.
>
>i can imagine some kind of formula just by making some assumptions, which may be
>completely wrong...:
>
>the totally wrong-ordered case goes as
>nodes(depth) ~ (N*N)^(D/2)
>
>the totally ordered case goes as
>nodes(depth) ~ (1*N)^(D/2)
>
>this formula stems from the fact that you search exactly 1 move at player A's
>moves, and all N at player B's moves AFAIK, so for going 2 ply deeper i need 1*N
>nodes. so let me assume random move ordering for player A, meaning i get the
>best move right on average at move N/2, which gives me
>
>nodes(depth) ~ (N*(N/2))^(D/2).
>
>perhaps that should read (N*sqrt(N))^(D/2), because it might be the geometric
>instead of the arithmetic mean?!
>
>in any case, you can now assume that you find the best move first in a fraction
>F (~0.9) of the cases, and at a random position in the remaining cases; this leads to
>
>nodes(depth) ~ (N*(1*F + (N/2)*(1-F)))^(D/2).
>
>plugging in some numbers like N=20 (reduced from 40, trying to account for
>nullmove?!), D=10, F=0.90 and 0.91 i get
>
>nodes(10, F=0.90) = 79 million
>nodes(10, F=0.91) = 62 million
>
>so for 1% better move ordering i get 22% fewer nodes?! seems rather excessive.
>let me try again using sqrt(N) instead of N/2 - that still gives a 12% reduction.
>
>hmm, either my formulas are crap or i really have to work on my move
>ordering ;-)
>
>cheers
>  martin


OK, we did not communicate here.

What I am proposing is this:

Create some code that stores, for each transposition table key, the best move at
each ply.  This requires maybe 1/2 GB of memory for a reasonable-depth search (my
back-of-the-envelope, unoptimized estimate).

E.G.:

Key 0x8432762183
Q-Nxd5
1-Nf3
2-Nf3
3-Qc3
...
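
In C, that record might look something like this - just a rough sketch to make
the idea concrete; the names, the hashing scheme, and the sizes are all invented,
but a 2^23-entry table at 32 plies works out to roughly 600 MB, which is in the
ballpark of the 1/2 GB above:

/* Rough sketch only - names and sizes are invented, not from any real engine.
   One record per position, holding the best move found at each depth
   (index 0 = quiescence). */

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_PLY     32
#define TABLE_BITS  23                      /* 2^23 entries ~ 600 MB        */
#define TABLE_SIZE  (1u << TABLE_BITS)

typedef uint16_t move_t;                    /* however you encode moves     */

typedef struct {
    uint64_t key;                           /* full zobrist key             */
    move_t   best[MAX_PLY];                 /* best[d] = best move at depth d */
} best_record;

static best_record *bm_table;

static void bm_init(void)
{
    bm_table = calloc(TABLE_SIZE, sizeof *bm_table);
}

static best_record *bm_probe(uint64_t key)
{
    best_record *e = &bm_table[key & (TABLE_SIZE - 1)];
    if (e->key != key) {                    /* always-replace on collision  */
        e->key = key;
        memset(e->best, 0, sizeof e->best);
    }
    return e;
}

/* call this whenever the search settles on a best move for a node */
static void bm_record(uint64_t key, int depth, move_t move)
{
    bm_probe(key)->best[depth] = move;
}

Collisions just overwrite the old entry, which is fine for an experiment like
this.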

Run the search and record node counts.  Keep track of every time the move
ordering was wrong (at both PV and CUT nodes).

Flush the transposition table and run the search again, iterating until the move
ordering is perfect; you can use the new table to simply always choose the best
move first.  This might require a few runs, since the search might visit
different nodes as the move ordering gets better.
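
The driver loop I have in mind is something like this (again just a sketch -
clear_hash(), search_root() and the mistake counter stand for whatever your
engine actually provides; the names are made up):

/* Sketch of the iterate-until-perfect loop. */

extern void      clear_hash(void);          /* flush the normal TT            */
extern long long search_root(int depth);    /* run a search, return node count */
extern long long ordering_mistakes;         /* incremented whenever the move
                                               that finally cut off (or became
                                               the PV move) was not tried first */

long long nodes_with_perfect_ordering(int depth)
{
    long long nodes;
    do {
        clear_hash();                       /* the best-move table persists   */
        ordering_mistakes = 0;
        nodes = search_root(depth);         /* search uses bm_probe() to try
                                               the recorded move first        */
    } while (ordering_mistakes > 0);        /* stop when ordering is perfect  */
    return nodes;                           /* compare to a normal search     */
}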

At that point you will have a search of Muse with perfect move ordering.  Record
the number of nodes visited versus a regular search.  Of course, this is not
practical for a tournament game (for obvious reasons).

Some interesting sub-experiments: move ordering differences by ply - 3% at 5 ply
vs 100% at 8 ply (?).  Also, move ordering at PV nodes vs fail-high nodes.
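
BTW, your mock-search idea above is easy enough to prototype.  A minimal sketch
(untested, pure random numbers, no chess in it at all, and the 50% / 90% figures
are just your guesses):

/* Mock A/B tree along the lines martin describes: every node randomly decides
   whether it is a fail-high node, and if so, on which move it fails high. */

#include <stdio.h>
#include <stdlib.h>

#define N      20                  /* branching factor                    */
#define DEPTH   8                  /* keep small - node counts explode    */

static double P_CUT = 0.5;         /* chance a node is a fail-high node   */
static double F     = 0.90;        /* chance the cutoff comes on move 1   */

static double rnd(void) { return rand() / (RAND_MAX + 1.0); }

static long long mock_search(int depth)
{
    if (depth == 0)
        return 1;

    int moves;
    if (rnd() < P_CUT)             /* fail-high node                      */
        moves = (rnd() < F) ? 1 : 2 + (int)(rnd() * (N - 1));
    else                           /* all-node: every move gets searched  */
        moves = N;

    long long nodes = 1;
    for (int i = 0; i < moves; i++)
        nodes += mock_search(depth - 1);
    return nodes;
}

int main(void)
{
    for (int f = 90; f <= 95; f++) {
        F = f / 100.0;
        printf("F = 0.%d  ->  %lld nodes\n", f, mock_search(DEPTH));
    }
    return 0;
}

A single run per F is pretty noisy, so you would want to average over a bunch of
runs (and play with the 2...N distribution) before trusting the differences.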
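
And your closed-form estimate is quick to tabulate - this just evaluates
nodes ~ (N*(F + (N/2)*(1-F)))^(D/2) with your N=20, D=10, which should come out
at the same ~79 and ~62 million you quote for F=0.90 and 0.91:

/* Evaluate martin's estimate nodes ~ (N*(F + (N/2)*(1-F)))^(D/2). */

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double N = 20.0, D = 10.0;
    for (int f = 90; f <= 100; f++) {
        double F     = f / 100.0;
        double nodes = pow(N * (F + (N / 2.0) * (1.0 - F)), D / 2.0);
        printf("F = %.2f  ->  %5.1f million nodes\n", F, nodes / 1e6);
    }
    return 0;
}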

anthony


