Author: Uri Blass
Date: 17:26:29 04/04/04
You can see that most of the functions that I give here are empty.
I may also have sign bugs in alpha and beta.
I added some code and also reread my previous code to fix some errors, like
functions that did not get enough variables.
Please check the code and help to improve it.
/*
Code by Uri Blass

I want to have some general search code (not only for chess) in which it will
be possible to implement fail-soft, fail-hard, mtd and many other search ideas
by the right defines or by templates (thanks to Jay Scott for the idea, but
unfortunately I know only C and not C++, so maybe other people can improve
this code to use templates).

This code does not use templates or defines now; it is only a plan for the
future.

The following heavily commented code is based on ideas that I thought about
search, but most of the ideas in this code are not used in Movei.

I hope that other people will help me to make this code more general, to allow
ideas like mtd or fail-hard and fail-soft, and to add general code for the
iterate function and the hash functions.

The idea is that there is going to be some general search code, not only for
chess, that every programmer will be able to use as a basis.

Part of the procedures or functions may be empty in the first versions or may
return a constant (for example, if the program does not use iid, the function
that calculates whether to use iid may simply return infinite).

Dann Corbit made 2 corrections to the original code:
one is that I forgot to have i++ in one of the loops, and the other is that my
code did not probe the hash table before trying iid.
*/
int probability_to_fail_high_too_small(int depth,int alpha,int i)
{
return 0;
}
int chance_fail_high_this_move(int depth,int alpha,int i)
{
return 1;
}
int extensions()
{
return 0;
}
int sortmove(int i,int depth)
{
return 0;
}
int TryDraw()
{
return 0;
}
int TryHashTables(int depth)
{
return 0;
}
int Reduction_Conditions()
{
return 0;
}
int Ply_Reduction()
{
return 0;
}
void evalprobability(int depth,int alpha)
{
}
void replace_move_data(int i,int best)
{
/* swap the best move found so far into place i of the move list */
gen_t g;
g=gen_dat[i];
gen_dat[i]=gen_dat[best];
gen_dat[best]=g;
}
#define small_depth 3
#define ETC_depth 1000
#define starting_q_search_depth 7
#define min_reduction 1
int iid_reduction(int depth,int alpha)
{
return 0;
}
int probe_hash_depth_upper(int beta)
{
return 0;
}
BitBoard calc_hash(int i)
{
return 0;
}
int maybe_hash_cutoff(BitBoard zobkey,int beta)
{
return 0;
}
int Search(int depth, int alpha, int beta, int verify)
{
int last;/* last is the place of the last move in the move list;
          * it is calculated later, after generating the list of
          * legal moves */
int Newdepth, hash_depth, best_place_move, val;
int first_alpha = alpha;
int iidfirst = first_move[ply];
int valmax = alpha;
/* there is a move stack similar to a lot of programs: we always have
 * first_move[0]=0, first_move[1]=number of legal moves in the first ply
 * of the search, and first_move[x+1] is the sum of first_move[x] and
 * the number of legal moves in the position after x plies */
int i = first_move[ply];
int reduction;
BitBoard zobkey;/*BitBoard is a 64-bit integer*/
/* the function gen generates legal moves */
gen();
last = first_move[ply + 1];
/* we need to check the hash tables before checking whether to do internal
 * iterative deepening: the hash did not help to prune (and the same for
 * null move), but it still may be productive for avoiding internal
 * iterative deepening */
hash_depth=probe_hash_depth_upper(beta);
if (hash_depth<depth-min_reduction) {
/* we cannot know, based on the hash alone, that we should avoid iid,
 * so now it is time to calculate the real reduction that we would use
 * if we do iid */
reduction = iid_reduction(depth, alpha);
/* we are still not sure that we cannot avoid iid based on the hash,
 * and the first step is to check if we can */
if (depth > reduction+hash_depth) {
/* we search every move with reduced depth (depth-reduction),
 * stopping when a move generates a cutoff, so that we have a good
 * move to try first */
while (i < last) {
makemove(gen_dat[i].m);
val = -Search(depth - reduction, -beta, -valmax, verify);
undomove();
if (val > valmax) {
valmax = val;
iidfirst = i;
}
if (val >= beta)
break;
i++;
}
replace_move_data(first_move[ply], iidfirst);
i = first_move[ply];
}
}
while (i < last) {
/* probability_to_fail_high_too_small is a function that also considers
 * the previous moves to decide that there is no chance to fail high.
 * For example, when it is clear that the obvious good moves were
 * already searched, I may decide based on evaluation and depth that
 * the probability to fail high is too small and return alpha, but at
 * the previous move I did not know it because I still hoped that the
 * score would be above alpha.
 * Both probability_to_fail_high_too_small and
 * chance_fail_high_this_move (used later) may use a function that
 * evaluates probability, but I do not suggest calling a function that
 * evaluates the exact probability, because it may be cheaper to do
 * some lazy evaluation and calculate the exact probability only when
 * the lazy evaluation is not enough to decide that the probability is
 * too small */
if (probability_to_fail_high_too_small(depth, alpha,i))
return alpha;
/* the function sortmove finds the next move to search and puts it in
 * place i in the global move stack. It is logical to spend more time
 * when the depth is bigger, so the function gets not only the index
 * of the move in the move stack but also the depth */
sortmove(i, depth);
/* it is possible that I cannot decide for all moves that there is no
 * chance that they will fail high, but I may decide it for the next
 * move. I expect the function to say yes very fast in most cases, and
 * the only case when I expect no chance for the move to fail high is
 * when the move is a quiet move and the evaluation is enough below
 * alpha */
if (chance_fail_high_this_move(depth, alpha, i)) {
makemove(gen_dat[i].m);
/* first we check for a draw by repetition or by endgame */
if (TryDraw())
{
undomove();
return 0;
}
/* now we calculate the extensions so we can use later the hash
* tables to decide if to prune the move */
Newdepth = depth + extensions();
if (TryHashTables(Newdepth) <= -beta)/* TryHashTables()<=-beta means that
                                      * the score in the hash is at most
                                      * minus beta */
{
undomove();
return beta;
}
if (depth>ETC_depth)
{
/* here there should be code to use ETC; note that the inner loop needs
 * its own index so it does not clobber the outer loop's i */
int j;
gen();
for (j=first_move[ply];j<first_move[ply+1];j++)
{
zobkey=calc_hash(j);/* first we calculate the hash key after move number j,
                     * based on j and on the hash key of the position on
                     * the board (a global variable kept in an array,
                     * because we need it to detect repetition) */
if (maybe_hash_cutoff(zobkey,beta))
{
/* in most cases it is easy to find, even without making the moves,
 * that there is no hash cutoff, and I assume that it is not possible
 * to calculate the extensions before making the move, so there are
 * cases when we do not know if ETC generates a cutoff */
makemove(gen_dat[j].m);
if (ProbeHash(depth,alpha,beta)>=beta)
{
/* we need to calculate extensions here */
undomove();
undomove();
return beta;
}
undomove();
}
}
}
if (nullconditions(depth)) {
/* nullconditions may return 0 if there is a big danger of zugzwang or
 * if searching even to a reduced depth discovers a zugzwang */
reduction = calculate_null_reduction();
donullmove();
gen();
/* starting_q_search_depth is simply a parameter that I give
 * qsearch in the beginning; I need it because I search
 * differently in different plies of the qsearch */
if (reduction < Newdepth)
val = -Search(Newdepth - reduction, -beta, -beta + 1,
verify);
else
val = -Quies(-beta, -beta + 1, starting_q_search_depth);
undonullmove();
if (val >= beta) {
if (verify == 1) {
depth--;
verify = 0;
} else
return val;
}
}
if (depth <= 0) {
gen();
/* assign to val instead of returning, so that the move made above
 * is undone below before we leave Search */
val = -Quies(-beta, -alpha, starting_q_search_depth);
} else {
if (depth > small_depth) {
/* evalprobability may search every legal move to depth-small_depth
 * in order to give an exact evaluation of probabilities that is
 * going to be used for evaluating, for every move, the probability
 * of win, draw and loss, when the probabilities are going to be
 * stored in the hash and used later to decide about reductions or
 * extensions (singular extension may be a private case).
 * evalprobability gets the parameter alpha because I am also
 * interested in storing in the hash, for every move, the
 * probability of being the best move and above alpha, and this
 * information may help me later for a better order of moves.
 * I also think about having an updateprobability function that
 * updates the probabilities for all moves, which I call only when
 * the remaining depth is big enough.
 * I also think about storing in the hash tables a relatively small
 * tree with probabilities, updating the probabilities in all the
 * nodes of that tree every time that I get new information, like a
 * change in the root score or a fail high or fail low, when the
 * depth is big enough. */
evalprobability(depth,alpha);
}
/* in case of a reduction I want to re-search later, but only in case
 * of an unexpected fail high; Reduction_Conditions is true when the
 * probability was not small enough to prune but was still small */
if (Reduction_Conditions()) {
val = -Search(Newdepth - PLY - Ply_Reduction(), -beta,
-alpha, verify);
if (val > alpha)
val = -Search(Newdepth - PLY, -beta, -alpha, verify);
} else
val = -Search(Newdepth - PLY, -beta, -alpha, verify);
}
undomove();
if (val >= beta)
{
RecordHash(depth, beta, hashfBETA);
return beta;
}
if (val > alpha)
{
alpha = val;
best_place_move=i;
}
}
i++;
}
if (alpha!=first_alpha)
{
/* the exact score is alpha, so that is what should be stored */
RecordHash(depth, alpha, hashfEXACT);
updatepv(best_place_move);
}
return alpha;
}
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.