Author: Uri Blass
Date: 10:58:24 04/03/04
Here is my latest code so far, after I decided basically to leave Dann's formatting. I changed some functions slightly by adding a parameter that I think is needed. I looked again at the comments and fixed them or added information. No big change in the code. I hope that people will help with improving this code. (Two small sketches of the fail-soft/fail-hard and mtd ideas appear after the code.)

/* Code by Uri Blass
 *
 * I want to have some general search code (not only for chess) in which it
 * will be possible to implement fail-soft, fail-hard, mtd and many other
 * search ideas by the right defines or by templates (thanks to Jay Scott for
 * the idea, but unfortunately I only know C and not C++, so maybe other
 * people can improve this code to use templates).
 *
 * This code does not use templates or defines yet; it is only a plan for the
 * future.
 *
 * The following heavily commented code is based on ideas that I thought
 * about for search, but most of the ideas in this code are not used in
 * Movei. I hope that other people will help me to make this code more
 * general, to allow also ideas like mtd or fail-hard and fail-soft, and to
 * add general code for the iterate function and the hash functions.
 *
 * The idea is that there is going to be some general search code, not only
 * for chess, that every programmer will be able to use as a basis. Part of
 * the procedures or functions may be empty in the first versions or return a
 * constant (for example, if the program does not use iid, the function that
 * decides whether to use iid may simply return infinity).
 *
 * Dann Corbit made 2 corrections to the original code: one is that I forgot
 * to have i++ in one of the loops, and the other is that my code did not
 * probe the hash table before trying iid.
 */

int Search(int depth, int alpha, int beta, int verify)
{
    int last;              /* last is the place of the last move in the move
                            * list; it is calculated later, after generating
                            * the list of legal moves */
    int Newdepth, hash_depth, val;
    int first_alpha = alpha;
    int iidfirst = first_move[ply];
    int valmax = alpha;

    /* there is a move stack similar to a lot of programs: we always have
     * first_move[0] = 0, and first_move[i] is the sum of perft 1 of the
     * positions in the line that led to the position. */
    int i = first_move[ply];
    int reduction;

    /* the function gen generates the legal moves */
    gen();
    last = first_move[ply + 1];

    /* we need to check the hash tables before deciding whether to do
     * internal iterative deepening: the hash did not help to prune, and the
     * same for null move, but the hash may still be productive for avoiding
     * internal iterative deepening */
    hash_depth = probe_hash_depth_upper(beta);
    if (hash_depth < depth - min_reduction) {
        /* we cannot know, based only on the hash, that we should avoid null
         * move pruning, so now it is time to calculate the real reduction
         * that we use if we do iid */
        reduction = iid_reduction(depth, alpha);
        /* we are still not sure that we cannot avoid iid based on the hash,
         * and the first step is to check whether we can */
        if (depth > reduction + hash_depth) {
            /* we look for a move to search with reduced depth
             * (depth - reduction), stopping when a move generates a cutoff,
             * so that we have a good move to search first */
            while (i < last) {
                makemove(gen_dat[i].m);
                val = -Search(depth - reduction, -beta, -valmax, verify);
                undomove();
                if (val > valmax) {
                    valmax = val;
                    iidfirst = i;
                }
                if (val > beta)
                    break;
                i++;
            }
            replace_move_data(i, valmax);
            i = first_move[ply];
        }
    }

    while (i < last) {
        /* probability_to_fail_high_too_small also considers the previous
         * moves: if, for example, good captures were already searched, then
         * I may decide based on evaluation and depth that the probability to
         * fail high is too small and return alpha, while for a previous move
         * I did not know it because I still hoped that the score would be
         * above alpha. Both good_chance_to_fail_high and
         * probability_to_fail_high_too_small may use a function that
         * evaluates the probability, but I do not suggest calling a function
         * that evaluates the exact probability, because it may be cheaper to
         * do some lazy evaluation and not evaluate the exact probability in
         * order to know whether it is too small, and to calculate the exact
         * probability only if the lazy evaluation fails to decide. */
        if (probability_to_fail_high_too_small(depth, alpha, i))
            return alpha;

        /* the function sortmove finds the next move to search and puts it in
         * place i in the global move stack; it is logical to spend more time
         * when the depth is bigger, so the function gets not only the index
         * of the move in the move stack but also the depth */
        sortmove(i, depth);

        /* it is possible that I cannot decide for all moves that there is no
         * chance that they will fail high, but I may decide for the next
         * move that it is not going to fail high.
         * I expect the function to say yes very fast in most cases; the only
         * case when I expect no chance for the move to fail high is when the
         * move is a quiet move and the evaluation is far enough below
         * alpha. */
        if (chance_fail_high_this_move(depth, alpha, i)) {
            makemove(gen_dat[i].m);

            /* first we check for a draw by repetition or by the endgame */
            if (TryDraw()) {
                undomove();        /* take the move back before returning */
                return 0;
            }

            /* now we calculate the extensions, so that we can later use the
             * hash tables to decide whether to prune the move */
            Newdepth = depth + extensions();
            if (TryHashTables(Newdepth) >= beta) {
                undomove();
                return beta;
            }

            /* I will also try ETC here, in case the depth is big enough, by
             * generating all moves; I still need to write code for it */
            if (nullconditions(depth)) {
                /* nullconditions may return 0 if there is a big danger of
                 * zugzwang or if searching even to a reduced depth discovers
                 * a zugzwang */
                reduction = calculate_null_reduction();
                donullmove();
                gen();
                /* starting_q_search_depth is simply a parameter that I give
                 * qsearch at the beginning, and I need it because I search
                 * differently in different plies of the qsearch */
                if (reduction < Newdepth)
                    val = -Search(Newdepth - reduction, -beta, -beta + 1, verify);
                else
                    val = -Quies(-beta, -beta + 1, starting_q_search_depth);
                undonullmove();
                if (val >= beta) {
                    if (verify == 1) {
                        depth--;
                        verify = 0;
                    } else {
                        undomove();
                        return val;
                    }
                }
            }

            if (depth <= 0) {
                gen();
                val = -Quies(-beta, -alpha, starting_q_search_depth);
                undomove();
                return val;
            } else {
                if (depth > small_depth) {
                    /* evalprobability may search every legal move to
                     * depth - small_depth in order to give an exact
                     * evaluation of the probabilities of win, draw and loss
                     * for every move, where the probabilities are going to
                     * be stored in the hash and used later to decide about
                     * reductions or extensions; singular extension may be a
                     * private case. evalprobability gets the parameter alpha
                     * because I am also interested in storing in the hash,
                     * for every move, the probability of being the best move
                     * and above alpha, and this information may help me
                     * later to order the moves better. I also think about
                     * having an updateprobability function that updates the
                     * probabilities for all moves, called only when the
                     * remaining depth is big enough. I also think about
                     * storing in the hash tables a relatively small tree
                     * with probabilities, where I update the probabilities
                     * in all the nodes of that tree every time that I get
                     * new information, like a change of the root score or a
                     * fail high or fail low, when the depth is big enough. */
                    evalprobability(depth, alpha);
                }
                /* in case of a reduction I want to investigate later only in
                 * case of an unexpected fail high; Reduction_Conditions
                 * holds when the probability was not small enough to prune
                 * but is still small */
                if (Reduction_Conditions()) {
                    val = -Search(Newdepth - PLY - Ply_Reduction(), -beta, -alpha, verify);
                    if (val > alpha)
                        val = -Search(Newdepth - PLY, -beta, -alpha, verify);
                } else
                    val = -Search(Newdepth - PLY, -beta, -alpha, verify);
            }

            undomove();
            if (val >= beta)
                return beta;
            if (val > alpha)
                alpha = val;
        }
        i++;
    }
    return alpha;
}
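The comment at the top talks about switching between fail-soft and fail-hard with a define. A minimal sketch of how that could look in plain C, using a hypothetical FAIL_SOFT define and a helper return_value() that are not part of the code above:

#define FAIL_SOFT 1    /* set to 0 for a fail-hard search */

/* value handed back to the caller at a cutoff or at the end of the move
 * loop: fail-soft returns the real score found by the search, fail-hard
 * clamps it into the [alpha, beta] window */
static int return_value(int val, int alpha, int beta)
{
#if FAIL_SOFT
    return val;                     /* may lie outside [alpha, beta] */
#else
    if (val <= alpha)
        return alpha;
    if (val >= beta)
        return beta;
    return val;
#endif
}

With something like this, the places above that do "return beta;" or "return alpha;" would instead return return_value(val, alpha, beta), and only the define would decide which flavour of search is compiled; for fail-soft the node would also have to keep track of the best value seen, the way the iid loop above already does with valmax.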
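For the mtd idea mentioned in the comment, a minimal MTD(f)-style driver around the same Search() could look like the sketch below; it assumes the fail-soft flavour (Search returning a real bound rather than a clamped one), and MAX_SCORE, mtdf and first_guess are made-up names used only for illustration:

int Search(int depth, int alpha, int beta, int verify);

#define MAX_SCORE 32000

/* MTD(f): repeated zero-window calls that converge on the minimax value;
 * g is the running guess, lower and upper are the proven bounds */
int mtdf(int first_guess, int depth)
{
    int g = first_guess;
    int lower = -MAX_SCORE;
    int upper = MAX_SCORE;

    while (lower < upper) {
        int beta = (g == lower) ? g + 1 : g;
        /* zero-window search around the current guess; verify = 1 keeps
         * the verified null-move behaviour of Search */
        g = Search(depth, beta - 1, beta, 1);
        if (g < beta)
            upper = g;      /* failed low: the value is at most g */
        else
            lower = g;      /* failed high: the value is at least g */
    }
    return g;
}

The iterate function mentioned in the comment could then call mtdf() once per iteration, feeding the previous iteration's score back in as first_guess.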