Computer Chess Club Archives



Subject: Re: MTD is a big disaster

Author: Vincent Diepeveen

Date: 15:58:05 07/20/99



On July 20, 1999 at 14:07:18, Don Dailey wrote:

>Hi Vincent,
>
>First, let me clarify a few points.  I didn't have anyone in mind when
>I made  that post,  I was talking  to the  general audience and  I was
>trying to help people understand one  of the hairy issues that you can
>encounter with  MTD.  Also, I posted that  it was a big  win because I
>believe that it is, not as  a political statement or to refute anyone.
>I didn't know you were religiously  dead set against it or that anyone
>really was.  I know that people have tried it and some use it now, and
>others have decided the problems  they had were not worth the trouble.
>My post was not to criticize any one of them either.

Thanks.

>You took offense at this statement:  "All of the problems are based on
>not completely  understanding what  is going on  and not  bothering to
>stick with it  until you figure it  out."  I would like to  say that I
>was not  thinking of you at all  when I made this  statement.  I can't
>even figure  out why you thought  this.  However I am  sorry about the
>unfortunate wording I used, it  sounds like a criticism when it wasn't
>meant to  be.  I DO  feel that a  lot of people "haven't  bothered" to
>stick with it long enough to solve all the problems, but I don't fault
>them for it  or consider it a personality flaw  or anything like this!
>There  are a  million problems  and issues  in my  own chess  program I
>haven't taken the  time yet to fix or look at,  instead using the time
>for other  things.  So I  haven't bothered to  do a lot of  things yet
>that need to be done.

>I do  however still encourage  people to give  this a good  hard look.
>Especially if you have a well developed fine tuned program and are now
>looking for hard to get incremental improvements.  It's my belief that
>for  these people, it's  probably the  biggest single  improvement you
>will find.   I will make  another post soon describing  my experiences
>with it for those who are willing to listen and why I can't give it up
>even though I would like to.

>I have some specific answers to some of the points you bring up below:

>On July 19, 1999 at 23:35:50, Don Dailey wrote:

>> >I have noticed a lot of posts lately on the subject of MTD and thought
>> >I would give my observations and experiences.  First of all, I would say
>> >that MTD is simply a big win.  A lot of people have reported various problems
>> >with it including myself.  But all the problems are correctable and you
>> >will be rewarded with the fastest possible aspiration search on the planet.
>> >All of the problems are based on not completely understanding what is going
>> >on and not bothering to stick with it until you figure it out.  Even regular
>> >alpha beta searching is full of gotcha's and a lot of people don't fully
>> >understand the proper way of doing aspiration searching.   This is forgivable,
>> >though, it's complicated and very easy to overlook some of the hairy issues.
>> >It's one of those things that seems ridiculously simple once you understand
>> >it, but until then is not so simple.
>>
>> Note that i perfectly understand both algorithms. I don't mind you saying:
>> "Even regular alpha beta searching is full of gotcha's and a lot of
>>  people don't fully understand the proper way of doing aspiration searching."
>>
>> If however that line is meant for me, then i'm insulted till my bones.
>
>Don't be, it was meant for everyone.  I don't think I fully understand
>all the various intricacies of game  tree searching and yet I believe I
>am an expert on it.


>
>> In DIEP i don't have a lazy eval problem, as i don't use lazy eval.
>> I'm completely against it. Let's say i know too much about lazy eval
>> to let my hashtables be spoiled with them.
>
>I  no  longer  use  lazy  evaluation,  probably a  loss  for  me.   My
>evaluation has gotten so complicated  and aggressive that it is really
>hard to do this  well.  You can do it in a  perfectly correct way with
>no  problems whatsoever  or  you can  use  the approach  where you  are
>willing  to accept  some  error in  the  worst cases  (or anywhere  in
>between.)  I have chosen to  "not bother" with this problem any longer
>and I take  the slowdown because I have too  many terms with aggressive
>weights  and  I don't  want  to  accept the  errors.   If  I took  the
>practical  approach I  would probably  accept  the errors  and have  a
>stronger program for it.  If I  wanted to be really meticulous I would
>develop a "meta language" for creating evaluation terms and whenever I
>changed the weights or added a new term it would recompile the code in
>such a way as to make the lazy evaluation completely correct and doing
>only the minimal  computation needed.  I have no  illusions about this
>however,  I  think lazy  evaluation  makes sense  and  is  a big  win,
>especially if you have a big fat  evaluator like you and I do.  If you
>have  much simpler  evaluation you  can do  it with  much  less effort
>however.

I think there is one way to do lazy evaluation correctly, but that can
give at most a 50% speedup.

Lazy evaluations that just estimate the score of a position for both
sides, mainly using eval terms that don't scan around the board (or that
only estimate such terms quickly), are of course hundreds of times faster
than my evaluation function.

I've experimented a lot with this. I finally arrived at an estimate that
in 99.9% of positions was accurate to within 3 pawns. Because passed
pawns can receive a lot of bonus, I widened the window to 10 pawns in
those cases.
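
To make concrete what such a margin-based lazy eval looks like, here is a
minimal sketch in C. It is not Diep's code: quick_eval(), full_eval() and
the Position type are hypothetical placeholders, and the 300 corresponds
to the 3-pawn window mentioned above (with scores in 1/100 of a pawn).

  /* Minimal lazy-evaluation sketch; all names are placeholders. */
  typedef struct { int material; int positional; } Position;

  static int quick_eval(const Position *p) { return p->material; }
  static int full_eval(const Position *p)  { return p->material + p->positional; }

  #define LAZY_MARGIN 300   /* 3 pawns, with scores in 1/100 of a pawn */

  int evaluate_lazy(const Position *p, int alpha, int beta)
  {
      int estimate = quick_eval(p);
      /* If even a LAZY_MARGIN swing cannot bring the score back inside
         the (alpha, beta) window, skip the expensive terms. */
      if (estimate - LAZY_MARGIN >= beta || estimate + LAZY_MARGIN <= alpha)
          return estimate;
      return full_eval(p);   /* otherwise do the full, exact evaluation */
  }

Widening the margin (say to 1000 for positions with far-advanced passed
pawns) makes the early return fire less often, which is exactly the
trade-off described above.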

This eval simply used a lot of easy-to-calculate terms, together with
some big terms like hung pieces, most of the king-safety code, and so on.
I really put a lot of work into experimenting with it, and I drew up a
lot of tables from it, which showed me for example how far off it was
after a test run.

This eval was however not that fast: only about 3 times faster than a
full eval would be.

I experimented a lot with it, and indeed it was faster. It did well on
test sets; at first it tore right through them.

What immediately caught my attention was that it used more nodes
everywhere, and the deeper it searched the more extra nodes it used. So
this was not an overhead issue of the kind "it uses 20% more nodes but
gives a 70% speedup in time"; it simply made my branching factor worse.

I didn't like that, to put it as an understatement. Then I started
testing it in games. Of course, in blitz it initially played way better
(if you search 7 or 8 ply in blitz, then any speed improvement has a big
impact, I bet).

Yet positionally it played a little weaker. I somehow *lost* information.
The few positions that were *misevaluated* turned out to be crucial
positions.

On more complicated test sets (over time I've collected positions where
my program either made a bad move in the past, or which are positionally
or tactically interesting anyway) it of course found the simple problems
way faster, but it no longer found the really difficult ones. For crucial
positional moves it needed extra plies, if it found them at all.

Compared to the Ferrets and Fritzes, my DIEP simply doesn't solve a lot
of problems by seeing the whole tactical solution. Instead it finds most
problems by getting near the tactical solution, at which point the
evaluation sees it.

A good example of this is Nolot position 10:
r1b2rk1/1p1nbppp/pq1p4/3B4/P2NP3/2N1p3/1PP3PP/R2Q1R1K w - - bm Rxf7;

No wonder most programs don't find this one. In the end position where
DIEP sees that white wins, the white king is in the quadrant of the
passed pawn and the bishop at c8 is fatally pinned. I'm sure most
programs reach that position quite fast as well; they just don't evaluate
it correctly.

Programs that don't evaluate that deadly pin and the king in the quadrant
correctly (it's a combination of both terms) can of course never solve
this, because otherwise it's something like a 20-ply trick.
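
For readers who don't know the term: the "quadrant" is the rule of the
square. A minimal sketch of that test, ignoring the side to move, the
pawn's double first step and any blockers (the function names are mine,
not Diep's):

  #include <stdlib.h>   /* abs() */

  /* Chebyshev distance: the number of king moves between two squares. */
  static int king_moves(int f1, int r1, int f2, int r2)
  {
      int df = abs(f1 - f2), dr = abs(r1 - r2);
      return df > dr ? df : dr;
  }

  /* Returns 1 if the white king is inside the "quadrant" of a black
     passed pawn, i.e. it can reach the promotion square before the
     pawn does.  Files and ranks run 0..7; the pawn promotes on rank 0. */
  static int king_in_quadrant(int king_file, int king_rank,
                              int pawn_file, int pawn_rank)
  {
      int pawn_dist = pawn_rank;   /* moves the pawn still needs */
      int king_dist = king_moves(king_file, king_rank, pawn_file, 0);
      return king_dist <= pawn_dist;
  }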

So, to make a list of pros and cons:

pro:
  - it sped up my program a lot (in time)

contra:
  - at analysis level (overnight) it didn't speed things up, as the
    branching factor was a lot worse
  - it misses crucial positions (the 0.01%)
  - 3 pawns is a lot;
    the more one prunes, the less one needs lazy evaluation,
    as you simply don't look at those positions then. A higher percentage
    of the positions you do see then don't need lazy evaluation. Part of
    what you computed in the lazy eval is of course still usable, but
    as soon as you also use code that estimates a lot of scores,
    it becomes a simple mathematical calculation. In my program this
    never outweighed a full eval, and together with the worse branching
    factor it meant that in the end not doing it was better.
  - for a lot of positional problems, where 1/100 of a pawn can turn the
    balance, it needed a few extra plies or simply didn't find the move

>> I get however a 60% speedup by using a simple hashtable that only
>> stores my eval. This goes down to 50% speedup when loading factor
>> becomes huge (way above 1).
>>
>> As i know most programs are not using a eval hashtable, i think
>> i may point to my eval table for this reason. What they achieve
>> with lazy eval, i achieve more or less with an eval table.
>
>Wow, this is impressive.  We do  this with Cilkchess too but don't get
>very much for  it.  I think Diep probably has  a bigger evaluator than
>almost any program.

Right. In my draughts program (getting 200k nodes a second on a PII-450)
I abandoned all thoughts of using an eval table.

That's an experience people should really know about:
the eval in my draughts program is a few thousand lines of C,
so it's not that small. It has mobility and some other things
that scan the board.

As one cannot use nullmove in draughts, it gets less profit out of the
table: about 15% of all evals came out of the eval table when I measured
it, but *still* it doesn't give a speedup!

It seems that reading a single line from SDRAM (entries stored in
8 bytes) is simply a lot slower than the L1 cache and the CPU :)
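
For completeness, the eval hashtable mentioned here is conceptually just
a small direct-mapped cache keyed on the position's hash. A minimal
sketch, assuming a 64-bit Zobrist key; the names and the dummy
full_eval() are mine, not Diep's:

  #include <stdint.h>

  #define EVAL_TABLE_SIZE (1u << 20)   /* a power of two, ~1M entries */

  typedef struct { uint64_t key; int score; } EvalEntry;
  static EvalEntry eval_table[EVAL_TABLE_SIZE];

  static int full_eval(uint64_t key) { (void)key; return 0; }  /* placeholder */

  int cached_eval(uint64_t zobrist_key)
  {
      EvalEntry *e = &eval_table[zobrist_key & (EVAL_TABLE_SIZE - 1)];
      if (e->key == zobrist_key)
          return e->score;              /* hit: reuse the stored score    */
      e->key   = zobrist_key;           /* miss: evaluate, then overwrite */
      e->score = full_eval(zobrist_key);
      return e->score;
  }

Whether this wins depends entirely on the ratio between the cost of the
real evaluation and the cost of the memory access, which is exactly the
SDRAM-versus-L1 point above.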

>>I would like to mention that I was forced to use MTD and solve these
>>problems (also problems like bad PV's) because it was simply faster,
>>and if the speedup was trivial I would have gladly just avoided the
>>issue!
>>- Don
>
>> Don, i don't doubt MTD is faster for your programs.
>>
>> Let's however write down some facts why my prog is unhappy with MTD.
>> It's up to others to generalize it to their progs:
>>
>>
>>   - the huge number of researches needed. In DIEP my evaluation is nowadays
>>     in 1/1000 of a pawn. For a long time i had 1/200 of a pawn (in the time
>>     i experimented with MTD), but now i have 1/1000 of a pawn. So a drop
>>     of 0.20 pawn, which is a normal drop in DIEP, is in fact a drop of 200
>>     points. Happy researching!
>
>I  think you miss  the whole  point of  MTD.  It's  not the  number of
>researches needed,  it's the number of  nodes you look  at.  The whole
>idea of MTD is that the search tree is basically kept in memory.  When
>you research with a different  beta value, you won't traverse the same
>tree, so you can think of it as not doing ANY extra work.  DON'T think
>of it  as doing a bunch of  searches over and over  again wasting your
>time, that's NOT what's going on.

It is the major point, Don.
The number of flips in DIEP is less than 1%. That is not much.
To put it in other words: every iteration my program is
re-searching nearly the *same* tree.

So every re-search walks *nearly* the same tree.
Searching it with different bounds then means suffering a lot,
as you don't get cheap cutoffs from the hashtable.
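
For reference, the re-search loop we are arguing about is roughly the
following, a minimal MTD(f) driver sketch. The zero-window search()
backed by a transposition table is assumed (the stub is only there so the
sketch compiles); this is not Don's or Aske's exact code.

  #define INF 1000000

  /* Stand-in for a zero-window alpha-beta search with a transposition
     table behind it (placeholder so the sketch compiles). */
  static int search(int alpha, int beta, int depth)
  { (void)alpha; (void)beta; (void)depth; return 0; }

  /* Minimal MTD(f) driver: a sequence of zero-window probes that
     converges on the minimax score. */
  int mtdf(int first_guess, int depth)
  {
      int g = first_guess;
      int lower = -INF, upper = +INF;

      while (lower < upper) {
          int beta = (g > lower) ? g : lower + 1;
          g = search(beta - 1, beta, depth);   /* zero-window probe at beta */
          if (g < beta)
              upper = g;                       /* fail low:  score < beta   */
          else
              lower = g;                       /* fail high: score >= beta  */
      }
      return g;
  }

Every pass through this loop walks nearly the same tree; whether that is
cheap or expensive depends on how much of it comes back out of the
transposition table, which is exactly where we disagree.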

>Also, you can choose a different MTD increment.  You don't have to use
>a single point, it might take some experimentation.  However I use 100
>for a pawn and a single point increment works fine for me.
>
>
>>   - My evaluation has a lot of knowledge, so i can expect every iteration
>>     a different behaviour of it, although the ups and downs in evaluation
>>     every ply i've figured out are mainly caused by the mobility component,
>>     this means nevertheless that i'm sure that i need a lot of researches.
>
>I'm not sure  whether this matters a lot  or not.  If you do  a lot of
>strange window based extensions, that could be an issue though.  But I
>can't guarantee that MTD would  work for every program, I only suggest
>that it's  worth a try.  But  you do use aspiration  windows don't you
>like PVS?   If these things  work I can't  see why MTD  wouldn't work,
>it's just a variation on a theme and PVS requires researches too.

We're getting onto difficult terrain here.

What I do in DIEP is: I start with a window of (-infinite, +infinite),
and after the first move has been searched I use a window of
(alpha, alpha+1).

I don't start with a window based upon the previous iteration. That
doesn't help me.

PVS helps me a lot though. Plain alpha-beta is simply a branching factor
worse.
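
In code, the root scheme I just described looks roughly like this PVS
sketch. The helpers (alphabeta(), make_move(), unmake_move()) and the
types are placeholders so it compiles on its own; this is not Diep's
actual source.

  #define INF 1000000

  typedef int Move;
  typedef struct { int dummy; } Position;

  /* Placeholder stubs for the real search and move code. */
  static int  alphabeta(Position *p, int alpha, int beta, int depth)
  { (void)p; (void)alpha; (void)beta; (void)depth; return 0; }
  static void make_move(Position *p, Move m)   { (void)p; (void)m; }
  static void unmake_move(Position *p, Move m) { (void)p; (void)m; }

  /* Root PVS: the first move gets the full window, every later move a
     zero-width (alpha, alpha+1) probe with a re-search on a fail high. */
  int root_pvs(Position *pos, const Move *moves, int nmoves, int depth)
  {
      int alpha = -INF;
      for (int i = 0; i < nmoves; i++) {
          int score;
          make_move(pos, moves[i]);
          if (i == 0) {
              score = -alphabeta(pos, -INF, -alpha, depth - 1);
          } else {
              score = -alphabeta(pos, -(alpha + 1), -alpha, depth - 1);
              if (score > alpha)   /* fail high: re-search with a full window */
                  score = -alphabeta(pos, -INF, -alpha, depth - 1);
          }
          unmake_move(pos, moves[i]);
          if (score > alpha)
              alpha = score;
      }
      return alpha;
  }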

Because of the 1% flip rate my DIEP *immediately* gets a decent bound,
which might explain why even a crude window (for example a pawn) doesn't
reduce the node count.

My threat extensions are of course dependent upon alpha and beta, and the
same goes for some other extensions.

However, currently all those extensions are turned off in DIEP, except
for the check extension.

Secondly, *if* I run a test, remember that mine is not a weak program.
My extensions are limited in the sense that an extension is not going to
eat up 100% extra nodes. That's something which won't happen.

Even stronger: in most positions running with check extensions doesn't
cost my program many more nodes, because the checks produce scores that
directly cut off huge trees which would otherwise have been searched.

So I clearly think we can't blame the problems on window-related things.

I've clearly indicated in a previous post what the major problem is for
DIEP when using MTD.

>
>
>>   - As my DIEP needs very little nodes a ply (must see the first prog
>>     needing less when also turning off the forward pruning, except for
>>     nullmove), although i don't dare to say that my sorting is optimal,
>>     *anything* but near optimal i prefer to say, this means however
>>     that all overhead i have to search is too much overhead for it.
>>
>>     Secondly it simply happens to be the case that
>>     when searching using a bound of 0.10 that all scores that get returned
>>     are very near to it (nullmove). This has the nasty habit that a lot
>>     of bounds stored
>>     in the transpositiontable will not work next research, and as we know we
>>     have a lot of researches to do, then it's easy to realize that this
>>     gives a lot of overhead which is simply unnecessary.
>>   - Chess is a worst case game. That means that if you have one big weak spot,
>>     that you are complete history. Even at nowadays hardware a weak spot of
>>     DIEP is its search speed. It's getting 15k nodes a second at a PII450.
>>     Now i can have a branching factor like heaven, but still it won't search
>>     very deep, unless i give it a lot of time. At very slow levels like 3
>>     minutes a move it usual will even search near the opponents depth, or
>>     sometimes even deeper, because of this branching factor, but up to the
>>     first 90 seconds or so, for sure a program that's 20 times slower than
>>     its opponents is not likely to search deeper.
>>
>>     So it's important to realize that it already doesn't search deeply,
>>     when getting to my point:
>>     in very complicated positions where the score goes up and down a lot,
>>     or where a few fail lows occur, that usual are very important points in
>>     a game. Now we all give our programs more time, but still those positions
>>     it not seldom searches 2 or 3 ply less or so than in other middlegame
>>     positions. In those complicated positions a lot of games get decided.
>>     It is in those positions that MTD has a major weak spot.
>>     MTD can't stand a lot of researches. So there where one needs its search
>>     algorithm the most, there MTD has a major shortcoming. There MTD shows its
>>     worst case and simply abandons you, needing up to 6 times more nodes.
>>
>>
>>     Now you can do the math. That's a laughable DEPTH i get then. If i
>>     search a few moves at a depth of 9 ply, and all others at like 13
>>     or 14 ply, then i'm completely smoked.
>>
>>     Even the tactical barrier where we talked about some years ago,
>>     and was estimated by me at 12 ply, is 3 plies off that.
>>
>>     Doing 9 ply searches is back to the old days!
>>     Now 9 versus 13/14 ply is an example, but it clearly shows my point.
>>
>>     MTD is simply a worst case/ best case algorithm. If scores are
>>     very close to each other (usually when there is only 1 obvious move
>>     to make), then it will not perform bad, but quickly get to the next ply.
>>     If MTD scores are not close, and a lot of different PVs follow each other
>>     up, then MTD has a major problem.
>>
>> It's obvious that MTD is a dead wrong algorithm for computerchess!
>
>It sounds like your program is more complicated than mine.  My program
>has the behavior that it will  virtually always try to return the same
>exact score no  matter what the search window, at  least to the extent
>that the  aspiration window  allows it.  In  other words  two searches
>won't contradict  each other.  If you  have this behavior  I can't see
>why any  kind of aspiration  search would be  a problem.  The  way you
>make it sound you shouldn't be  using PVS or any variant that does any
>kind of speculation because the research will kill you.

Of course my program also nearly always returns the same exact score no
matter what the search window is; I'm surprised to hear that yours
doesn't when searching in parallel.
Are you sure you debugged the parallel version well?

I mostly get the same scores when searching in parallel.

I don't have extensions or nullmove restrictions or any other limitation
in my program which depend, for example, upon the number of times
something has been done, or upon a certain window.

This is of course arguable, because just pruning at beta in the qsearch
already means you're depending upon the window.

>For instance it would be unusual to see my program fail hi and then on
>a research fail low with a contradiction.  I don't think this behavior

After failing high, DIEP of course hardly ever fails low when I start
with a cleared hashtable at ply=1.

Of course, in actual game play one fails low on a lot of moves, for the
simple reason that the hashtable holds a lot of true scores which were
investigated to a great depth during the previous move. A move which is
good at depth=n is, after investigating some weird line, suddenly not
good at depth=n-x. Of course, when getting near depth=n again, this move
becomes the new PV once more...

>is guaranteed  with any  chess program unless  no selectivity  or hash
>tables are used, but it rarely happens with my program.  With parallel
>versions it  of course can  happen a lot  more, but it  still responds
>very well  to parallelization and  I haven't noticed a  problem.  This
>bad behavior  can also result with  lazy evaluation if  it's not coded
>exactly right.   But I'm not  even sure that  would make MTD  wrong to
>use, but perhaps it would.

You should take a look at some of the extensions and nullmove
implementations in your program if it behaves differently in parallel
than when not running in parallel. Of course you can't see more when
running in parallel. You can get a quicker cutoff by investigating a
certain move earlier, but there is no way you can get a different root
score unless something is really wrong in your program, such as lazy
evaluation or pruning at alpha in the qsearch (pruning at alpha can be
proven incorrect in a rather simple way, but most use it because it
speeds their program up).
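
To make the distinction explicit, here is a minimal quiescence-search
sketch (the types and helpers are placeholders, not Diep's code). The
stand-pat cutoff at beta is the "pruning at beta" I mean; pruning at
alpha would additionally throw away captures whose optimistic score
cannot reach alpha, and that is the shortcut I call incorrect.

  typedef int Move;
  typedef struct { int dummy; } Position;

  /* Placeholder stubs so the sketch compiles. */
  static int  evaluate(Position *p)                    { (void)p; return 0; }
  static int  generate_captures(Position *p, Move *m)  { (void)p; (void)m; return 0; }
  static void make_move(Position *p, Move m)           { (void)p; (void)m; }
  static void unmake_move(Position *p, Move m)         { (void)p; (void)m; }

  int qsearch(Position *pos, int alpha, int beta)
  {
      int stand_pat = evaluate(pos);
      if (stand_pat >= beta)
          return stand_pat;          /* "pruning at beta": stand-pat cutoff */
      if (stand_pat > alpha)
          alpha = stand_pat;

      Move moves[256];
      int n = generate_captures(pos, moves);
      for (int i = 0; i < n; i++) {
          make_move(pos, moves[i]);
          int score = -qsearch(pos, -beta, -alpha);
          unmake_move(pos, moves[i]);
          if (score >= beta)
              return score;
          if (score > alpha)
              alpha = score;
      }
      return alpha;
  }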

>
>> If Don would say: "average it needs less nodes in Cilkchess", then considering
>> how Cilkchess has been made and how all progs of Don are, i directly
>> believe him on his word.
>>
>> What will remain of those results when you get a better eval in the future?
>> How about a worst case analysis of MTD versus PVS?
>
>I believe  one characteristic  of a very  good evaluation is  that you
>will get less  variation from iteration to iteration.   The whole idea
>is to  get the most  accurate possible evaluation  so in a  sense your
>scores should "smooth out"  with better evaluation, not fluctuate even
>more.  For instance,  if your evaluation is only  material, your score
>doesn't gradually get  lower as a backward pawn  becomes impossible to
>defend, it simply jumps to a pawn down when the tactics see it.

I tend to agree that from iteration to iteration you get less variation,
but how do you ever get to a big iteration at a crucial position with
MTD? In DIEP there is still this huge gap: even if I have a score
difference of, say, 0.05 from ply n to n+1, that is still 50 points with
the evaluation in 1/1000 of a pawn, which is a huge gap to re-search!
With a one-point increment that could mean dozens of zero-window passes
in the worst case.

I completely *miss* your comment on the worst-case behaviour of MTD.
Have you *ever* investigated it?

Another thing: I see a *clear* difference between MTD and a search with
alpha and beta.

When searching with alpha and beta there are of course different methods,
but of all of them, PVS worked best.

Searching with a window based upon the previous root score (say 0.25
pawns either side of it) works *considerably* worse for me than PVS, in
nearly every position. Especially in positions where my DIEP fails low on
the first move out of book, I can get completely crushed, because it runs
out of time too soon.
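
For comparison, such an aspiration window at the root is roughly the
following loop (again a sketch with a placeholder search(), not Diep's
code): start a quarter of a pawn either side of the previous root score
and widen the bound that fails.

  #define INF    1000000
  #define WINDOW 250    /* 0.25 pawn, with scores in 1/1000 of a pawn */

  /* Placeholder full-width alpha-beta search so the sketch compiles. */
  static int search(int alpha, int beta, int depth)
  { (void)alpha; (void)beta; (void)depth; return 0; }

  /* Root aspiration: search a narrow window around the previous score,
     re-searching with an opened bound after a fail low or fail high. */
  int aspiration_root(int prev_score, int depth)
  {
      int alpha = prev_score - WINDOW;
      int beta  = prev_score + WINDOW;

      for (;;) {
          int score = search(alpha, beta, depth);
          if (score <= alpha)          /* fail low:  open the lower bound */
              alpha = -INF;
          else if (score >= beta)      /* fail high: open the upper bound */
              beta = +INF;
          else
              return score;            /* inside the window: done         */
      }
  }

It is exactly the fail-low case, first move out of book, where this costs
me the most time.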

What we must consider here is actual game play. Against those commercial
programs you come out of book completely worse off, usually at -0.40 to
-0.80, with easy play for the opponent and tough play for you. You *can*
usually survive then by playing a single non-forced line which only a GM
has found before.

So I give my program a lot more time on the first move out of book. If it
misses that difficult move, then against those commercial books you can
usually already resign.

The first, say, 10 moves are completely *crucial*. Coming out of book
worse is simply something I *assume* will happen.

The endgame I simply don't care about. In endgames on the quad Xeon from
Bob I already searched something like 20 ply. Against Eugen my program
simply got 20 ply where Eugen got hardly 10 or 11 ply; roughly 10 plies
more.

Compare that to a 9-ply search on a small micro using MTD in a crucial
middlegame/opening position. You're completely smoked if you do that!

Nearly *all* my testing of algorithms is therefore done on *those*
middlegame/opening positions.

Let's take a look at the next run:

It's a position from Deep Blue-Kasparov (game 2). The two crucial moves
here are Ra2 and Ra3, simply doubling on the a-file. axb5 is definitely a
bad move here.

I'm rewriting my eval at the moment, so there are a lot of bugs in it
now. Still, on a quad Xeon DIEP would find it. After 26:30 (with only
60 MB of hash, on a PII-450) it gets the crucial fail low.

With this version I joined the WCCC. At 3 minutes a move I would have
found it on Bob's hardware. For the first few moves DIEP thinks for up to
10 minutes, and in this position I would be just a few moves out of book
(just like Deep Blue, I assume). With 4 processors: 26:30 / 4 = under 7
minutes to get the fail low. It would have easily made it.

I'm not convinced I would have made it here with MTD. If an algorithm
prevents my program from finding good moves in positions like this, then
you had better not give me a gun at such a moment: I could fire bullets
through the book in which someone wrote the algorithm down, until nothing
is left of the book and until I've forgotten how much time I spent
figuring out the umpteenth algorithm without success.

I've wasted a lot of time on:
  FHR
  singular extensions
  countermove heuristic
  history heuristic (although in the first few versions of
                      DIEP it worked)
  probcut
  alibaba
  MTD

Things I only laughed at, because it's so obvious that just describing
them is a waste of time:
  opponent modelling

 r = - = r b k =   ...       1    ...
 = - q b = o o n   ...       2    ...
 o = - o - = - o   ...       3    ...
 = o = O o - = -   ...       4    ...
 O O o = O = - =   ...       5    ...
 = - O - B - N O   ...       6    ...
 - = B = Q O O =   ...       7    ...
 R - R - = - K -   ...       8    ...
white timeleft=27:46.40.00
white to move

clean
Clearing all Hashtables...
anal
Analysis mode is on
process 0: engineflags = 0 denktime=10000000 maxtime=10000000
00:00 7 (0) 1 0.87 a4xb5 a6xb5
00:00 20 (0) 1 1.21 Ng3-f5
00:00 92 (0) 2 0.63 Ng3-f5 Bd7xf5 e4xf5
++ a4-b5
00:00 167 (0) 2 0.87 a4xb5 a6xb5
00:00 316 (0) 3 1.29 a4xb5 a6xb5 Ng3-f5
00:00 1805 (0) 4 0.93 a4xb5 a6xb5 f2-f4 f7-f6
00:00 3682 (0) 5 0.94 a4xb5 a6xb5 f2-f4 Nh7-f6 f4xe5 Ra8xa1 Rc1xa1 Re8xe5
00:00 10305 (0) 6 0.94 a4xb5 a6xb5 f2-f4 Nh7-f6 f4xe5 Ra8xa1 Rc1xa1 Re8xe5
++ a1-a2
00:02 47870 (0) 6 1.00 Ra1-a2 b5xa4 Rc1-a1 Bf8-e7 Bc2xa4 Bd7xa4 Ra2xa4
00:03 65425 (0) 7 1.03 Ra1-a2 Nh7-f6 a4xb5 a6xb5 Rc1-a1 Ra8xa2 Ra1xa2
00:08 148221 (0) 8 0.77 Ra1-a2 Bf8-e7 a4xb5 a6xb5 Rc1-a1 Ra8xa2 Ra1xa2 Be7-h4
++ a4-b5
++ f2-f4
00:16 276743 (0) 8 0.78 f2-f4 e5xf4 Be3xf4 f7-f6 a4xb5 a6xb5 Rc1-f1 Nh7-g5 Ra1xa8 Re8xa8
++ c1-d1
00:50 828674 (0) 9 0.82 f2-f4 e5xf4 Be3xf4 b5xa4 Qe2-f2 Bf8-e7 Bc2xa4 Bd7xa4 Ra1xa4
++ a1-a2
00:58 981597 (0) 9 0.89 Ra1-a2 Bf8-e7 Rc1-a1 Be7-g5 a4xb5 Bd7xb5 Ng3-f5 Bg5xe3 Qe2xe3
++ a4-b5
01:01 1039445 (0) 9 1.03 a4xb5 a6xb5 f2-f4 Ra8xa1 Rc1xa1 e5xf4 Be3xf4 Nh7-f6 Qe2-f3
01:14 1275076 (0) 10 0.80 a4xb5 a6xb5 f2-f4 e5xf4 Be3xf4 Nh7-f6 Ra1xa8 Re8xa8 Ng3-h5 Bf8-e7
++ a1-a2
01:23 1439123 (0) 10 0.81 Ra1-a2 Bf8-e7 a4xb5 a6xb5 Rc1-a1 Ra8xa2 Ra1xa2 Be7-h4 Ng3-h5 g7-g6 Nh5-g3
03:38 3778580 (0) 11 0.80 Ra1-a2 Bf8-e7 a4xb5 Bd7xb5 Ng3-h5 Be7-g5 f2-f4 e5xf4 Nh5xf4 Nh7-f6 Rc1-a1
++ a4-b5
03:53 4046967 (0) 11 0.86 a4xb5 a6xb5 f2-f4 e5xf4 Be3xf4 Nh7-f6 Ra1xa8 Re8xa8 Ng3-h5 Bf8-e7 Rc1-f1 Nf6xh5 Qe2xh5
05:23 5733524 (0) 12 0.84 a4xb5 a6xb5 f2-f4 e5xf4 Be3xf4 Nh7-f6 Ra1xa8 Re8xa8 Rc1-d1 g7-g5 Bf4-e3 Bf8-g7
11:51 12490703 (0) 13 1.03 a4xb5 a6xb5 f2-f4 e5xf4 Be3xf4 Nh7-f6 Qe2-f2 Qc7-b7 Qf2-f3 Ra8xa1 Rc1xa1 Re8-a8
26:30 28000771 (0) 14 0.65 a4xb5 a6xb5 f2-f4 e5xf4 Be3xf4 Bf8-e7 Ra1xa8 Re8xa8 e4-e5 d6xe5 Bf4xe5 Be7-d6 Be5xd6
++ a1-a2
34:17 35887650 (0) 14 0.97 Ra1-a2 g7-g6 a4xb5 Bd7xb5 Rc1-a1 h6-h5 Bc2-a4 Qc7-b7 Ng3-f1 Bf8-g7 Nf1-d2 Nh7-f6
++ a1-a3
49:44 51057764 (0) 14 1.04 Ra1-a3 Bf8-e7 Rc1-a1 Be7-g5 a4xb5 Bg5xe3 Qe2xe3 Bd7xb5 Bc2-a4 Bb5xa4 Ra3xa4 Qc7-b7 Ng3-f5 Re8-d8
01:10:56 72354865 (0) 15 1.20 Ra1-a3 g7-g6 Rc1-a1 Bf8-g7 a4xb5 Bd7xb5 Bc2-a4 Qc7-b7 Qe2-a2 Ra8-c8 Ba4xb5 a6xb5
01:36:36 97926104 (0) 16 1.02 Ra1-a3 Bf8-e7 Rc1-a1 Qc7-b7 a4xb5 a6xb5 Ra3-a7 Ra8xa7 Ra1xa7 Qb7-c8 f2-f4 Be7-f6 Qe2-d2 e5xf4
02:40:41 161738324 (0) 17 1.15 Ra1-a3 g7-g6 Rc1-a1 Bf8-g7 a4xb5 Bd7xb5 Bc2-a4 Qc7-b7 Qe2-c2
04:53:08 294001588 (0) 18 0.88 Ra1-a3 Bf8-e7 Rc1-a1 Qc7-b7 a4xb5 a6xb5 Ra3-a7 Ra8xa7 Ra1xa7 Qb7-c8 Qe2-d2 Be7-g5 Be3xg5 Nh7xg5
13:01:42 763686793 (0) 19 0.98 Ra1-a3 Bf8-e7 Rc1-a1 Qc7-b7 a4xb5 a6xb5 Ra3-a7 Ra8xa7 Ra1xa7 Qb7-c8 f2-f4 Be7-f6 Qe2-f3
24:16:28 1427958548 (0) 20 0.86 Ra1-a3 Bf8-e7 Rc1-a1 Qc7-b7 a4xb5 a6xb5 Ra3-a7 Ra8xa7 Ra1xa7 Qb7-c8 Bc2-d1 Be7-g5 Be3xg5



>> Vincent Diepeveen
>> diep@xs4all.nl

I've given my viewpoint. You've given your viewpoint.
It's interesting to hear how occam goes.

I would still like to see your comment on the worst-case behaviour of
MTD, as that was the thing that completely turned me against MTD;
otherwise it would be just another algorithm that works for some and
doesn't for others.

Of course none of this takes away that Aske did a great job in
discovering MTD. I would like to see how you choose your window in MTD
and how you handle your re-searches, as my stupid draughts program is
still waiting for an alternative. And hey, I already outsearch the rest
of the world with that program by a large number of plies (up to 20 more
in the endgame), so I can take that worst-case risk, which usually
doesn't happen anyway, as it has a prehistoric evaluation (although to my
great luck all other draughts programs except Flits aren't doing much
better), and I haven't even had the guts so far to measure its bad flip
rate, so some nodes wasted on re-searches are well spent there!

Greetings,
Vincent







