Author: Alessandro Damiani
Date: 14:38:01 09/11/99
On September 11, 1999 at 15:39:05, Robert Hyatt wrote:
>On September 11, 1999 at 12:36:54, Alessandro Damiani wrote:
>
>>On September 11, 1999 at 09:48:57, Robert Hyatt wrote:
>>
>>>On September 11, 1999 at 05:27:21, Alessandro Damiani wrote:
>>>
>>>>On September 11, 1999 at 00:34:46, Robert Hyatt wrote:
>>>>
>>>>>On September 10, 1999 at 17:57:38, Alessandro Damiani wrote:
>>>>>
>>>>>>On September 10, 1999 at 17:38:31, Robert Hyatt wrote:
>>>>>>
>>>>>>>On September 10, 1999 at 17:05:33, Alessandro Damiani wrote:
>>>>>>>
>>>>>>>>[..]
>>>>>>>>>>>For one moment forget about alpha and beta; you are on the wrong track, as
>>>>>>>>>>>alpha and beta are not a part of the code at all. You need an extra stack
>>>>>>>>>>>that is set to -INF at each ply. Then before you do A/B you do the bestmove
>>>>>>>>>>>calculation for that ply. Involved variables: SCORE and STACK, no alpha beta.
>>>>>>>>>>>
>>>>>>>>>>>Ed
>>>>>>>>>>
>>>>>>>>>>I think the best way to explain is to write a small piece of code in pseudo C,
>>>>>>>>>>otherwise we just talk around the point.
>>>>>>>>>>
>>>>>>>>>>Alessandro
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>OK... here is what I did:
>>>>>>>>>
>>>>>>>>>Normal alpha/beta first:
>>>>>>>>>
>>>>>>>>>int Search(int alpha, int beta, etc...) {
>>>>>>>>>  best=-infinity;
>>>>>>>>>  bestmove=0;
>>>>>>>>>  foreach (move in movelist) {
>>>>>>>>>    MakeMove(move);
>>>>>>>>>    value=-Search(-beta,-alpha,etc.);
>>>>>>>>>    UnmakeMove(move);
>>>>>>>>>    if (value > best) {     /* remember the highest score seen, even if <= alpha */
>>>>>>>>>      best=value;
>>>>>>>>>      bestmove=move;
>>>>>>>>>    }
>>>>>>>>>    if (value > alpha) {
>>>>>>>>>      if (value >= beta) {  /* fail high: cut off immediately */
>>>>>>>>>        return(value);
>>>>>>>>>      }
>>>>>>>>>      alpha=value;
>>>>>>>>>    }
>>>>>>>>>  }
>>>>>>>>>  HashStore(bestmove,alpha, etc...);
>>>>>>>>>  return(best);             /* fail soft: may be <= alpha at a fail-low node */
>>>>>>>>>}
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>So what I did was to simply take the score for each search made after trying
>>>>>>>>>one of the moves at this ply, and remember the 'best' score and associated move.
>>>>>>>>>
>>>>>>>>>All I am saying is "this does not work". It is a characteristic of the alpha/
>>>>>>>>>beta search. It isn't a case of "it might work if ..."; it simply won't work.
>>>>>>>>>The searches below this node (again, assuming this is a fail-low node where
>>>>>>>>>_no_ move produces a score > alpha, which is the case where I claim there is
>>>>>>>>>never a best move to try here) only return a bound on the value of each move.
>>>>>>>>>And I have no idea how to choose between a move with a bound <= 200 and
>>>>>>>>>another move with a bound <= 150, because the first could have a true score
>>>>>>>>>well below 150. I simply don't know, as I told the search below this node to
>>>>>>>>>stop whenever it finds a score > X, where X is my negated alpha bound.
>>>>>>>>>
>>>>>>>>>Now, we have code. Did I misunderstand what you are saying? If not, then I
>>>>>>>>>can certainly explain further why that 'best' and 'bestmove' above are no good
>>>>>>>>>in this context. You can think of "best" as a random number that is <= alpha,
>>>>>>>>>nothing more. Which means "bestmove" is a random move chosen from the moves
>>>>>>>>>searched at this ply. And that is _not_ the move we want to try first when we
>>>>>>>>>get back to this position and it is suddenly not a fail-low position where all
>>>>>>>>>moves have to be tried, but rather it is a fail high position where the best
>>>>>>>>>move will let us cut off quickly...
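>>>>>>>>>
>>>>>>>>>To make that concrete (this is only an illustration, not code from any program):
>>>>>>>>>
>>>>>>>>>  /* fail-soft: a move that fails low only promises "true score <= returned value" */
>>>>>>>>>  MakeMove(move1);  v1=-Search(-beta,-alpha,etc.);  UnmakeMove(move1);  /* say 200 */
>>>>>>>>>  MakeMove(move2);  v2=-Search(-beta,-alpha,etc.);  UnmakeMove(move2);  /* say 150 */
>>>>>>>>>  /* v1 > v2 does _not_ mean move 1 is better.  Move 1 might really be worth
>>>>>>>>>     -300 while move 2 is really worth 150.  Picking the larger bound as "best"
>>>>>>>>>     is essentially picking a random move. */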
>>>>>>>>
>>>>>>>>I never said anything against that. When I read Ed's answers I see what you
>>>>>>>>describe: fail-soft AlphaBeta. What you say is what I have thought all along:
>>>>>>>>
>>>>>>>>best: score of the position after all moves are searched.
>>>>>>>>
>>>>>>>>best<=alpha => the real score of the position is <=best. There is no information
>>>>>>>>about the best move.
>>>>>>>>
>>>>>>>>bestmove==0 => best==-INF (after searching all moves)
>>>>>>>>
>>>>>>>>I think Ed should write down a small piece of code.
>>>>>>>>
>>>>>>>>Alessandro
>>>>>>>
>>>>>>>
>>>>>>>There are only three choices here.
>>>>>>>
>>>>>>>(1) best <= alpha. We don't know anything about any of the moves. Yet I
>>>>>>>believe Ed (and I thought you as well) said that if you take this 'best'
>>>>>>>move and store it in the hash, it works well. In my test, it didn't.
>>>>>>>
>>>>>>>(2) alpha < best < beta. This is a move with an _exact_ score that is
>>>>>>>correct. It is normally known as a PV-candidate move.
>>>>>>>
>>>>>>>(3) alpha < beta <= best. This is a fail high move. It is better than the
>>>>>>>best that is allowable. It is not necessarily the _best_ move at this level,
>>>>>>>however; it is just good enough to produce a score >= beta and terminate the
>>>>>>>search early. This is the case that feeds (1) above at the previous ply. If
>>>>>>>we tried _another_ move here we might get an even _bigger_ score, and back up
>>>>>>>an even worse score to the previous ply. But alpha/beta makes us stop _now_.
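>>>>>>>
>>>>>>>In code, storing these three cases usually comes down to something like this
>>>>>>>(just a sketch, the flag names are arbitrary):
>>>>>>>
>>>>>>>  if (best <= alpha)      HashStore(UPPER, best, bestmove);  /* (1) bestmove is meaningless */
>>>>>>>  else if (best >= beta)  HashStore(LOWER, best, bestmove);  /* (3) fail high */
>>>>>>>  else                    HashStore(EXACT, best, bestmove);  /* (2) PV candidate */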
>>>>>>
>>>>>>This is the postcondition of AlphaBeta. In all three cases I store the best move
>>>>>>in the transposition table. BUT (I wrote that somewhere in an answer!) I don't
>>>>>>use that move in the case where its score was only an upper bound.
>>>>>>The code in Fortress:
>>>>>>
>>>>>>[[
>>>>>>precondition: depth>0 && (height, bound, wasSingular) is the information from
>>>>>> the transposition table.
>>>>>>
>>>>>> // There is no best move when the score was an upper bound,
>>>>>> // but we still need the move for Singular Extensions if it was singular.
>>>>>> if (height>0 && bound==UPPER && !wasSingular) HashMove[ply] = 0;
>>>>>>
>>>>>> here: code for move ordering.
>>>>>>]]
>>>>>>
>>>>>>Conclusion: I do the same thing as you do, Bob.
>>>>>>
>>>>>>I store the best move to have an additional check against hash errors when the
>>>>>>hashtable is looked up (the move must be pseudo-legal).
>>>>>>
>>>>>>I don't understand what Ed says.
>>>>>>
>>>>>>Alessandro
>>>>>
>>>>>
>>>>>I assume this means that for UPPER positions, you are just storing a random
>>>>>move that you searched? Or are you using that "bestmove" trick to pick the
>>>>>one with the highest value (even though it is still <= alpha)?
>>>>>
>>>>>Validity checking isn't unreasonable, although if you are using 64 bits, I
>>>>>don't think it is worth the effort. I _never_ get errors reported on such
>>>>>moves unless I introduce a bug... That is in fact the only reason I leave
>>>>>the "ValidateMove()" test in the code...
>>>>
>>>>
>>>>Yes, I store a random move at UPPER positions, but I don't use this move for
>>>>move ordering. I need it for two reasons:
>>>>
>>>>1. Singular Extensions. This is very important!
>>>>2. To check for hash errors (I am able to store only 52 bits of the hashcode, so
>>>>   I use the check for legality of the move to "add some bits"; see the sketch
>>>>   below)
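>>>>
>>>>The sketch of reason 2 (the names here are mine, not the actual Fortress code):
>>>>
>>>>  entry=HashProbe(hashKey);
>>>>  if (entry!=0
>>>>      && entry->key52==(hashKey>>12)             /* only 52 bits of the key are stored */
>>>>      && PseudoLegal(position, entry->move)) {   /* the move check "adds some bits" */
>>>>    /* only now trust the bound/score/depth/move in the entry; an entry whose
>>>>       move is not even pseudo-legal here is treated as a collision */
>>>>  }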
>>>>
>>>>Alessandro
>>>
>>>
>>>How do you use a 'random move' in singular extensions? I have played around
>>>with this from time to time, but haven't yet decided to try it 'in production'
>>>as I don't like how it behaves so far.
>>>
>>>However, I don't quite see how a random move would be very important (your
>>>words) for SE. If you aren't ready to discuss what you are doing, however,
>>>that is perfectly ok. I'll explain what I have been twiddling around with once
>>>I consider it 'working'...
>>
>>The problem with SE is the definition of singularity: it is defined as a
>>function of search depth. A move can be singular at search depth 5, not
>>singular at depth 6, and singular again at depth 7.
>>
>>What happens:
>>We are on a PV-node and the score of move x is in (alpha, beta). You test move x
>>for singularity with search depth d, and the test is positive. You search it one
>>ply deeper, but now the score is <= alpha.
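>>
>>For concreteness, such a singularity test usually looks roughly like the sketch
>>below; the names are made up, and this is only a sketch, not the Fortress code.
>>
>>  /* x scored v in (alpha, beta); x is singular at depth d if every other move
>>     stays below v-MARGIN when searched with reduced depth */
>>  singular=TRUE;
>>  foreach (m in movelist, m != x) {
>>    MakeMove(m);
>>    value=-Search(-(v-MARGIN), -(v-MARGIN-1), reducedDepth);
>>    UnmakeMove(m);
>>    if (value >= v-MARGIN) { singular=FALSE; break; }  /* some alternative comes close */
>>  }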
>>
>>Two cases:
>>
>>1. another move m has a score > alpha:
>>What information do we have? The singular move was losing. Currently I accept
>>the new best move and store it in the hashtable as usual. Although I am not
>>happy with that, since:
>>
>>the singular move x is losing => all moves are losing (since x is singular)
>>
>>But the problem is that the definition depends on search depth. :(
>>
>>2. all other moves have scores <= alpha:
>>I keep the singular move as best and store it in the hashtable (as you have been
>>trying to explain all along, there is no best move at a fail-low, so we can put
>>whatever move we want in the table). The idea is to avoid all the problems the
>>Deep Thought team had with search anomalies. In the next iteration the move from
>>the hashtable is the one that was singular, but with an upper bound as its score.
>>The important information is that it was singular, so it will be extended again.
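>>
>>The storing side of case 2 then looks roughly like this (my sketch, not literally
>>the Fortress source):
>>
>>  if (best<=alpha && singularMove!=0) {
>>    /* fail-low: no move is known to be best, so keep the singular move as the
>>       hash move and mark it as singular, so that the next iteration can extend
>>       it again even though the stored score is only an upper bound */
>>    HashStore(depth, best, UPPER, singularMove, TRUE /* wasSingular */);
>>  }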
>>
>>With this method the current Fortress finds the mate in the following WAC
>>position at iteration number 6 (very quickly!):
>>
>>r2r2k1/pb3ppp/1p1bp3/7q/3n2nP/PP1B2P1/1B1N1P2/RQ2NRK1 b - - 0 1
>>
>>Alessandro
>
>
>AHA. :)
>
>That makes sense. In fact, another idea to try: if you go to store a fail-low
>hash entry and you find the exact same hash signature already in the table,
>"steal" the best move before replacing the entry and store that best move with
>the current bound... that was the best move at some point in the same position,
>so it makes sense to keep it as best when we don't know what is going on with
>any of the moves...
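
If I understand the idea correctly, it amounts to something like this (just my
sketch of it):

  /* about to store a fail-low (UPPER bound) entry without a trustworthy best move */
  old=HashProbe(hashKey);
  move=0;
  if (old!=0 && old->signature==signature)
    move=old->move;                    /* "steal" the old entry's best move */
  HashStore(depth, best, UPPER, move);
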
I will try that. Thank you!
Alessandro