🏁 Checkers AI is a Python-based program that plays the game of Checkers (Draughts) as one of the players. The game is played on an 8×8 board with 12 pieces per player, following standard checkers rules. The goal is to eliminate all of the opponent's pieces or block all of their possible moves.
🤖 This project implements AI-based decision-making, using:
- Heuristic Evaluation – Determines the best move based on piece positioning and game state.
- Variable Depth Search Engine – Adjusts search depth dynamically for better performance.
- Minimax Algorithm – Computes the best possible move considering all future possibilities.
- Alpha-Beta Pruning – Optimizes Minimax by eliminating unnecessary calculations.
- Hash Map Optimization – Reduces redundant computations and speeds up move selection.
🎮 Game Modes:
- Standard Checkers Rules: Pieces move diagonally and can jump over opponent pieces.
- Two Play Modes: Choose whether capturing opponent pieces is mandatory or optional.
- King Promotion: When a piece reaches the opponent's back row, it becomes a "king" and can move both forward and backward.
- Multi-Jump Sequences: If a capture is possible, a piece can continue jumping over multiple opponent pieces in one turn.
- Console-Based Gameplay: The game is played through the terminal, with a graphical board representation and move selection.
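As a rough illustration of the king-promotion and movement rules above, here is a minimal sketch (an assumption for illustration, not the project's actual code) of how a piece's promotion and allowed move directions could be represented; the `Piece` class, the color names, and the board orientation are all made up for the example:

```python
BOARD_SIZE = 8

class Piece:
    def __init__(self, row, col, color):
        self.row, self.col, self.color = row, col, color
        self.is_king = False

    def promote_if_back_row(self):
        # Assumed orientation: "red" moves toward row 0, "black" toward row 7.
        if (self.color == "red" and self.row == 0) or \
           (self.color == "black" and self.row == BOARD_SIZE - 1):
            self.is_king = True

    def move_directions(self):
        # Kings move diagonally in all four directions; regular pieces only forward.
        if self.is_king:
            return [(-1, -1), (-1, 1), (1, -1), (1, 1)]
        return [(-1, -1), (-1, 1)] if self.color == "red" else [(1, -1), (1, 1)]
```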
⏳ Performance:
- The AI must decide its move within 5 seconds.
- For higher scores, the move must be computed in less than 3 seconds.
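A minimal sketch of how such a time budget could be measured with the `time` module; the `compute_move` callable and the constant names are placeholders, not the project's API:

```python
import time

TIME_LIMIT = 5.0  # hard limit: the AI must answer within 5 seconds
TARGET = 3.0      # softer target: moves under 3 seconds score higher

def timed_move(compute_move, board):
    """Run a move function and report how long it took against the two budgets."""
    start = time.perf_counter()
    move = compute_move(board)
    elapsed = time.perf_counter() - start
    if elapsed > TIME_LIMIT:
        print(f"Move took {elapsed:.2f}s (over the 5-second limit)")
    elif elapsed > TARGET:
        print(f"Move took {elapsed:.2f}s (within the limit, above the 3-second target)")
    return move, elapsed
```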
🎓 This project was developed as part of the "Algorithms and Data Structures" course (2023/2024).
- 🛠️ Technologies & Specifications
- 🔧 Installing and Running the Project
- 📖 How It Works
- 🤖 Game AI
- 📜 License
- 📬 Contact
The project relies on the following libraries and technologies:
Library | Purpose |
---|---|
`pygame` | Used for rendering the game board and handling user interactions |
`queue` | Utilized for internal AI processing and state management |
`time` | Used for AI move timing and performance measurement |
`json` | Handles game configuration and move storage |
`os` | File system operations and handling saved games |
Before running the project, install the required dependencies using:
`pip install pygame`
First, download the project by cloning the repository:
`git clone https://github.com/MilanSazdov/checkers-ai.git`
`cd checkers-ai`
Ensure you have all required dependencies installed:
`pip install pygame`
To start the checkers game, simply run the `board.py` script located inside the `checkers` folder:
`python checkers/board.py`
The Checkers AI game follows standard checkers rules, where players take turns moving pieces diagonally across the board. The AI calculates its best move using Minimax with Alpha-Beta Pruning and adjusts its search depth dynamically for performance optimization.
- The game starts with an 8ร8 checkers board, where black and red pieces are placed in their initial positions.
- The AI's thinking depth is displayed in the top-left corner of the screen.
- Pieces that can be moved are highlighted with a yellow glowing effect.
🖼 Game Start:
At the beginning of the game, the search depth is set to 3, and all pieces that can be moved are highlighted.
- When a player selects a piece, it turns green to indicate the selection.
- The possible move positions are displayed as gray dots on the board.
🖼 Piece Selection Example:
- If an opponent's piece can be captured, the game forces a jump.
- The game supports multi-jump captures, allowing a piece to continue jumping as long as legal moves exist.
🖼 Multi-Jump in Action:
- When a piece reaches the opponent's back row, it gets promoted to a king.
- Kings are visually distinct and can move both forward and backward.
🖼 King Pieces:
The AI in this checkers game is built on the Minimax algorithm with Alpha-Beta pruning to optimize decision-making. Additionally, the engine supports variable search depth, heuristic evaluation, and a hash map for board-state caching to improve performance.
The Minimax algorithm is used for making optimal decisions in a two-player turn-based game. The idea is to build a game tree, where each node represents a possible board state. The AI then evaluates these states recursively and picks the best possible move.
- The Maximizing player (AI) tries to maximize the score.
- The Minimizing player (human) tries to minimize the AI's score.
- The algorithm explores all possible moves, evaluates them, and chooses the best one.
- Generate all possible moves for the AI.
- Simulate each move and generate a game tree up to a predefined depth.
- Evaluate board positions using a heuristic function.
- Use Alpha-Beta pruning to eliminate unnecessary computations.
- Return the best move based on the computed scores.
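A minimal Minimax sketch following these steps (illustrative only; `get_all_moves`, `apply_move`, `evaluate`, and `board.is_game_over` are assumed helpers, not the project's actual functions):

```python
def minimax(board, depth, maximizing):
    # Stop at the depth limit or when the game is decided, and score the position.
    if depth == 0 or board.is_game_over():
        return evaluate(board), None

    best_move = None
    if maximizing:                      # AI's turn: pick the highest score
        best_score = float("-inf")
        for move in get_all_moves(board, player="ai"):
            score, _ = minimax(apply_move(board, move), depth - 1, False)
            if score > best_score:
                best_score, best_move = score, move
    else:                               # human's turn: pick the lowest score
        best_score = float("inf")
        for move in get_all_moves(board, player="human"):
            score, _ = minimax(apply_move(board, move), depth - 1, True)
            if score < best_score:
                best_score, best_move = score, move
    return best_score, best_move
```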
Since Minimax searches an exponential number of board states, we use Alpha-Beta Pruning to skip unpromising branches of the tree. This improves efficiency by eliminating unnecessary calculations.
- Alpha (α): The best score that the maximizing player can guarantee.
- Beta (β): The best score that the minimizing player can guarantee.
- If, at any node, the Maximizing Player finds a move whose value is at least Beta, the remaining moves at that node are skipped (a beta cutoff), since the Minimizing Player would never allow that position.
- If, at any node, the Minimizing Player finds a move whose value is at most Alpha, the remaining moves at that node are skipped (an alpha cutoff).
🚀 This results in a significant speedup of the Minimax algorithm!
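The same recursion with alpha and beta bounds added: when the window closes (beta ≤ alpha), the remaining sibling moves are skipped. This is a sketch under the same assumed helpers as above, not the project's code:

```python
def minimax_ab(board, depth, maximizing, alpha=float("-inf"), beta=float("inf")):
    if depth == 0 or board.is_game_over():
        return evaluate(board), None

    best_move = None
    if maximizing:
        best_score = float("-inf")
        for move in get_all_moves(board, player="ai"):
            score, _ = minimax_ab(apply_move(board, move), depth - 1, False, alpha, beta)
            if score > best_score:
                best_score, best_move = score, move
            alpha = max(alpha, best_score)
            if beta <= alpha:           # the opponent already has a better option elsewhere
                break                   # prune the remaining moves at this node
    else:
        best_score = float("inf")
        for move in get_all_moves(board, player="human"):
            score, _ = minimax_ab(apply_move(board, move), depth - 1, True, alpha, beta)
            if score < best_score:
                best_score, best_move = score, move
            beta = min(beta, best_score)
            if beta <= alpha:
                break
    return best_score, best_move
```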
Since checking all possible moves until the end of the game is computationally expensive, the AI evaluates board positions before reaching the final state. The evaluation function assigns a numerical score to each board state.
✔️ Material Advantage: Counts the number of normal pieces and kings.
✔️ Mobility: Prioritizes positions where more pieces have possible moves.
✔️ Safety: Pieces that cannot be captured are favored.
✔️ Center Control: Pieces in the center of the board are prioritized.
✔️ Defense Strategy: Pieces in the last two rows are valued higher.
✔️ Promotion Potential: More empty squares on the opponent's promotion row increase the score.
✔️ Multiple Captures: Moves leading to multiple jumps are highly favored.
📌 Final evaluation: the factors above are combined into a single numerical score for each board state (an illustrative sketch is given below).
🟢 The AI aims to maximize this score for itself and minimize it for the opponent.
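A hedged, illustrative sketch of such an evaluation function; the weights and the helper functions (`mobility`, `center_control`, `back_row_defense`, `promotion_potential`, `multi_capture_moves`) are assumptions chosen for the example, not the project's actual formula:

```python
def evaluate(board):
    score = 0.0
    # Material advantage: kings are assumed to be worth more than normal pieces.
    score += 1.0 * (board.ai_pieces - board.human_pieces)
    score += 2.0 * (board.ai_kings - board.human_kings)
    # Mobility: positions where more of the AI's pieces can move are preferred.
    score += 0.1 * (mobility(board, "ai") - mobility(board, "human"))
    # Positional factors from the list above (all helpers are hypothetical).
    score += 0.2 * center_control(board, "ai")
    score += 0.2 * back_row_defense(board, "ai")
    score += 0.3 * promotion_potential(board, "ai")
    score += 0.5 * multi_capture_moves(board, "ai")
    return score
```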
The AI dynamically adjusts the search depth based on the game state:
- Many pieces on the board → Lower depth (faster decision-making)
- Few pieces remaining → Higher depth (more precise evaluation)
- Multiple jumps available → Increased depth for better decision-making
Board State | Search Depth |
---|---|
🟢 More than 30 possible moves | 3 |
🔵 15-30 possible moves | 4 |
🔴 Less than 8 possible moves | 5 |
🏆 Multiple capture moves available | 5 |
This ensures that the AI plays efficiently in the early game but becomes highly precise in endgame situations.
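A small sketch of how the depth could be chosen from the table above; the thresholds are taken from the table, while the function itself and the fallback for 8-14 moves are assumptions:

```python
def choose_depth(num_possible_moves, capture_moves_available):
    if capture_moves_available:
        return 5          # multiple capture moves available
    if num_possible_moves > 30:
        return 3          # many options: keep the search shallow and fast
    if num_possible_moves >= 15:
        return 4          # 15-30 possible moves
    if num_possible_moves < 8:
        return 5          # endgame: search deeper for precision
    return 4              # 8-14 moves: assumed fallback, not specified in the table
```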
Since some board states repeat multiple times, recalculating Minimax for the same state is wasteful. To speed up the AI, we use a Hash Map that stores previously evaluated board states.
✔️ Key Idea: Each board state is converted into a unique string key (e.g., "wwbbbwwwbwwbww").
✔️ If a state has already been evaluated, its best move is instantly retrieved from the hash map instead of recalculating Minimax.
✔️ The evaluations are stored in `evaluations.txt` and updated dynamically.
This significantly improves performance, especially in longer games.
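A caching sketch along these lines: a dictionary keyed by a string encoding of the board, reloaded from and written back to `evaluations.txt`. The file name comes from the description above; the key format, `square_char`, the use of JSON, and the reuse of the earlier `minimax_ab` sketch are all assumptions:

```python
import json
import os

CACHE_FILE = "evaluations.txt"

def load_cache():
    # Reload previously computed evaluations, if any exist.
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            return json.load(f)
    return {}

def save_cache(cache):
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f)

def board_key(board):
    # One character per square, e.g. 'w', 'b', or '.' (assumed encoding).
    return "".join(square_char(sq) for sq in board.squares)

def cached_search(board, depth, cache):
    key = board_key(board)
    if key not in cache:
        score, move = minimax_ab(board, depth, maximizing=True)
        cache[key] = [score, move]  # assumes the move is a plain, JSON-serializable structure
        save_cache(cache)
    return cache[key]
```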
1️⃣ AI generates possible moves → filters out bad moves
2️⃣ Evaluates board states using the heuristic function
3️⃣ Prunes unpromising branches with Alpha-Beta pruning
4️⃣ Selects the best move and executes it
5️⃣ Stores the move in the Hash Map for faster decision-making in future turns
🚀 This results in an AI that plays efficiently, adapts to different game states, and continuously improves its decision-making!
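Putting these steps together, a tiny driver could look like the sketch below; it reuses the illustrative helpers from the earlier snippets, all of which are assumptions rather than the project's actual API:

```python
def get_ai_move(board, cache):
    moves = get_all_moves(board, player="ai")                 # 1. generate possible moves
    depth = choose_depth(len(moves), any(m.is_capture for m in moves))
    score, move = cached_search(board, depth, cache)          # 2-5. evaluate, prune, cache
    return move
```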
This project is licensed under the MIT License.
See the LICENSE file for more details.
- 📘 README
- ❤️ Code of Conduct
- 📜 MIT License
📧 Email: milansazdov@gmail.com
🌐 GitHub: MilanSazdov