Saturday, March 9, 2013

Chess evaluation evolution

My little chess engine has been playing correct chess for some time now and has also reached a decent strength level. It is not yet among the world's top 100 chess engines, but the goal is at least in sight (place 117).

So far I haven't spent any time on tuning the evaluation. That does not mean its evaluation is simple. It already possesses a lot of chess knowledge, but it is not able to use that knowledge to its full extent. Besides basic material knowledge it also understands the concepts of mobility, center control, king safety, basic pawn structure properties, passed pawns and development.

So ice knows that a doubled pawn is considered a liability and that control of the center is usually an advantage. It will try to avoid unnecessarily doubling its pawns and will play moves that improve its influence in the center. But a game of chess usually consists of trade-offs; unless your opponent makes a mistake, you don't get anything for free. In one position it might be a good deal to double the pawns if that also gives more center control and a half-open file for the rook, in another it might still be a bad deal.

Such decisions are very difficult for an engine. A human will use intuition that comes from experience, but an engine does not have this. It has to use a mathematical approach to decide whether certain positional imbalances are good or bad for it.

And this is where tuning the evaluation features comes into play. With well-tuned features an engine will be better at estimating whether the positional concessions in a line of play are worth it or whether that line should be avoided. Unfortunately, picking the correct values is very tough, and most of the values are somehow connected to each other.

A simple example: most chess players learn very early that a pawn is worth 1 point, knight and bishop 3, the rook 5 and the queen 9. Those values can be used as material values for the pieces. Now add a bonus for a rook on an open file; this increases the overall value of the rook, so to keep everything in balance its material value must be lowered. As my evaluation consists of more than 100 terms that are all somehow connected, the task of finding an optimally balanced setting makes the traveling salesman problem look like an easy job.
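
To make that coupling concrete, here is a minimal sketch of such a linear evaluation. It is not the actual code of ice; the feature names and numbers are made up for illustration. The rook's worth is spread across its material value and the open-file bonus, so changing one weight shifts the balance of the other.

```cpp
#include <array>

// Minimal sketch of a linear evaluation; weights are in centipawns and
// purely illustrative, they are not the values used by ice.
struct EvalWeights {
    int pawn   = 100;
    int knight = 300;
    int bishop = 300;
    int rook   = 500;
    int queen  = 900;
    int rookOnOpenFile = 25;   // bonus per rook on an open file
    int doubledPawn    = -15;  // penalty per doubled pawn
};

// featureCounts holds net counts (own minus opponent's) for each term.
// Because the score is a plain weighted sum, raising rookOnOpenFile raises
// the average worth of a rook, so the rook's material weight may have to
// come down to keep the overall balance.
int evaluate(const EvalWeights& w, const std::array<int, 7>& featureCounts) {
    return w.pawn           * featureCounts[0]
         + w.knight         * featureCounts[1]
         + w.bishop         * featureCounts[2]
         + w.rook           * featureCounts[3]
         + w.queen          * featureCounts[4]
         + w.rookOnOpenFile * featureCounts[5]
         + w.doubledPawn    * featureCounts[6];
}
```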

As I have always been fascinated by genetic algorithms for solving optimization problems, I decided to give them a try for optimizing my evaluation. The prevailing opinion in the chess community is that this does not work. Of course, this can also mean that it does work, but the people who were successful with it don't talk about it. An effective, working automated evaluation tuner gives them a big competitive advantage over the manual weight pickers, so why give it away? I wanted to try it anyway, because I will at least learn something from it, even if it fails. And it is much more interesting than boring manual tuning.

I decided not to use an existing GA library for C but to program the algorithm myself. Over the past month I dedicated a bit of my time to it, and slowly an advanced tuning framework evolved. My new i7 will provide the horsepower to run it.
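
To give an idea of the shape of the thing, here is a rough sketch of the kind of generation loop such a tuner runs. It is not my actual framework; the mutation rate, the selection scheme and the placeholder fitness are made up so that the sketch is self-contained.

```cpp
#include <algorithm>
#include <cstdlib>
#include <random>
#include <vector>

// A genome is simply one weight per evaluation term.
using Genome = std::vector<int>;

// Placeholder fitness: a real tuner would load the weights into the engine
// and score them by playing games. Here we just prefer small weights so the
// sketch compiles and runs on its own.
double fitness(const Genome& g) {
    double s = 0.0;
    for (int w : g) s -= std::abs(w);
    return s;
}

Genome crossover(const Genome& a, const Genome& b, std::mt19937& rng) {
    Genome child(a.size());
    std::bernoulli_distribution fromA(0.5);
    for (size_t i = 0; i < a.size(); ++i)
        child[i] = fromA(rng) ? a[i] : b[i];     // uniform crossover
    return child;
}

void mutate(Genome& g, std::mt19937& rng) {
    std::bernoulli_distribution hit(0.02);       // made-up mutation rate
    std::uniform_int_distribution<int> delta(-10, 10);
    for (int& w : g)
        if (hit(rng)) w += delta(rng);
}

Genome evolve(std::vector<Genome> pop, int generations, std::mt19937& rng) {
    auto fitter = [](const Genome& x, const Genome& y) {
        return fitness(x) > fitness(y);
    };
    for (int gen = 0; gen < generations; ++gen) {
        std::sort(pop.begin(), pop.end(), fitter);   // fittest first
        // keep the better half as parents, breed the rest from them
        size_t parents = pop.size() / 2;
        std::uniform_int_distribution<size_t> pick(0, parents - 1);
        for (size_t i = parents; i < pop.size(); ++i) {
            pop[i] = crossover(pop[pick(rng)], pop[pick(rng)], rng);
            mutate(pop[i], rng);
        }
    }
    std::sort(pop.begin(), pop.end(), fitter);
    return pop.front();                              // best genome found
}
```

In the real setup the fitness of course comes from playing games with the candidate weights, and that is where all the computing time goes.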

So far I have done a few runs with different GA parameter settings, and the results look promising. I might not be able to find the perfect weights, but I'm certainly able to find better weights than I could find manually. For me that is at least a limited form of success, and maybe more than that.

To find out, I have to further test the solutions evolved with different parameters and fitness functions against the former version of ice and also against other engines. This will take some more time. Luckily not mine: everything runs automated and unattended; only my electricity bill will probably take a hit.
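
As a yardstick for those matches, the usual logistic Elo model turns a score fraction into an estimated rating difference; a minimal sketch:

```cpp
#include <cmath>

// Standard logistic Elo model: converts a match score fraction
// (wins + 0.5 * draws) / games, with 0 < score < 1, into an Elo difference.
double eloDifference(double score) {
    return -400.0 * std::log10(1.0 / score - 1.0);
}

// For example, scoring 55% over a long match corresponds to roughly +35 Elo.
```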
