Computer Chess Club Archives



Subject: Experiment: Search depth and ratings

Author: Russell Reagan

Date: 23:14:50 07/30/02


Has anyone done any experiments where they took a simple program and computed
ratings for different levels of search? I was thinking it might be interesting
to do this experiment, but I figured if someone has done this before it would
save me the trouble.

I was thinking about creating a simple alpha-beta engine with mainly material
evaluation, plus perhaps some other small things like a piece-square table and
maybe a bonus for castling. Those would seem to help the program do things that
even beginners do, if for no other reason than that beginners are taught to
"control the center" and "castle early". The main reason for adding those two
things would be so the program wouldn't decide on 1. h3 just because it was the
first move searched and the material evaluations all came up even.
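To make the idea concrete, here is a minimal sketch of the kind of evaluation described above: material count, a small centralization bonus standing in for a piece-square table, and a castling bonus. The board representation, function names, and bonus sizes are all my own illustrative assumptions, not anything from the post.

```python
# Hypothetical sketch of the evaluation described above. The board is a
# dict mapping squares 0..63 (square = rank*8 + file) to (piece, color)
# tuples; all names and constants here are illustrative choices.

PIECE_VALUE = {'P': 100, 'N': 320, 'B': 330, 'R': 500, 'Q': 900, 'K': 0}

def center_bonus(square):
    """Tiny stand-in for a piece-square table: reward central squares so
    the engine prefers developing moves over 1. h3 when material is even."""
    file, rank = square % 8, square // 8
    dist = max(abs(file - 3.5), abs(rank - 3.5))  # distance from the centre
    return int((3.5 - dist) * 5)                  # 0..15 centipawns

def evaluate(board, side_to_move, castled):
    """Score in centipawns from side_to_move's point of view.
    board: {square: (piece_letter, color)}; castled: {color: bool}."""
    score = 0
    for square, (piece, color) in board.items():
        value = PIECE_VALUE[piece] + center_bonus(square)
        score += value if color == side_to_move else -value
    for color, has_castled in castled.items():
        bonus = 30 if has_castled else 0  # small, arbitrary castling bonus
        score += bonus if color == side_to_move else -bonus
    return score
```

A function like this would sit at the leaves of the alpha-beta search; a knight on e4 scores a little higher than a knight on a1, which is exactly the kind of tie-breaking the paragraph above asks for.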

My goal here is to compute ratings for various depths of search when using
(basically) material only evaluation. I would like to know, for example, how far
you could expect to get as a human player if you were able to catch all tactics
and combinations at a depth of 1, 2, 3, 4, and so on. I have heard people say
that a human can get to expert level (2000) by mastering tactics, and I would
guess such players also pick up enough positional knowledge to get by.
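One standard way to attach numbers to such an experiment (not something from this post, just the usual logistic rating model) is to play a long match between, say, the depth-d and depth-(d+1) versions and convert the match score into an implied Elo gap:

```python
import math

def elo_difference(score):
    """Elo gap implied by a match score, using the standard logistic model.
    score: fraction of points won by one side, strictly between 0 and 1."""
    return -400 * math.log10(1 / score - 1)
```

For example, a 75% score corresponds to a gap of roughly 190 Elo points. Chaining these pairwise gaps across depths 1, 2, 3, ... and pinning one version's rating against human opponents would give the depth-to-rating curve the experiment is after.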

If anyone has done anything like this before, I'd love to hear the results. It
would save me the time of doing it myself, although I might do it anyway just
for kicks.

Thanks,
Russell





Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.