- Introduction
- Background Literature
- Project Structure
- Installation
- Usage
- Configuration
- Training
- Evaluation
- Results
- Contributing
- License
- Acknowledgments
## Introduction

SNAKE_AI is an AI-driven implementation of the classic Snake game. The project uses the Deep Q-Network (DQN) algorithm, a reinforcement learning technique, to train an intelligent agent capable of mastering the game. The agent learns to navigate the game grid, collect food, grow in length, and avoid collisions with itself and obstacles, demonstrating strategic gameplay and achieving high scores.
## Background Literature

The Deep Q-Network (DQN) algorithm has revolutionized reinforcement learning by enabling agents to learn optimal policies in high-dimensional environments directly from pixel input. The seminal work by Mnih et al. (2015) demonstrated the capability of DQN to achieve human-level performance on various Atari 2600 games.
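Concretely, DQN minimizes the temporal-difference loss introduced in that work, regressing an online network $Q(s,a;\theta)$ toward targets computed with a periodically frozen target network $Q(s,a;\theta^-)$, over transitions $(s,a,r,s')$ sampled from a replay buffer $\mathcal{D}$:

$$
L(\theta) = \mathbb{E}_{(s,a,r,s') \sim \mathcal{D}}\left[\left(r + \gamma \max_{a'} Q(s',a';\theta^-) - Q(s,a;\theta)\right)^{2}\right]
$$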
- Volodymyr Mnih et al. (2015): Introduced DQN, which uses deep neural networks to approximate the optimal action-value function. The algorithm was demonstrated to achieve human-level performance on multiple games.
- Planning-Based DQN (2017): Combines planning approaches with model-free DQN agents to improve efficiency and scores in dynamic environments.
- Real-Time Fighting Games (2018): Applied DQN to a visual fighting game, highlighting its potential effectiveness in real-time scenarios.
- OpenAI Gym (2019): Investigated the effectiveness of DQN and other reinforcement learning techniques for video games using OpenAI Gym's Arcade Learning Environment.
- Lightweight CNNs for DRL (2020): Addressed the resource-intensive requirements of deep reinforcement learning by using lightweight CNNs, demonstrating reasonable performance on compressed image data.
Building on this literature, the project makes the following contributions:

- Designed the Snake game environment and trained an agent using the baseline DQN algorithm.
- Analyzed and compared the performance of the DQN algorithm with different hyperparameter settings, such as learning rate, batch size, and epsilon decay rate.
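Of these hyperparameters, the epsilon decay rate controls how quickly the agent shifts from exploration to exploitation. Below is a minimal sketch of a typical epsilon-greedy schedule; the parameter names and exponential decay scheme are illustrative, not necessarily those used in `config/config.yaml`:

```python
import random

def select_action(q_values, epsilon):
    """Epsilon-greedy: explore with probability epsilon, otherwise act greedily."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                    # random action
    return max(range(len(q_values)), key=lambda a: q_values[a])   # greedy action

# Hypothetical exponential schedule: decay epsilon once per episode.
epsilon, epsilon_min, epsilon_decay = 1.0, 0.01, 0.995
for episode in range(1_000):
    # ... play one episode, calling select_action(q_values, epsilon) each step ...
    epsilon = max(epsilon_min, epsilon * epsilon_decay)
```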
## Project Structure

The project repository is structured as follows:
```
SNAKE_AI/
├── data/
│   └── checkpoints/        # Saved model checkpoints
├── src/
│   ├── snake_game.py       # Core game implementation
│   ├── dqn_agent.py        # DQN agent implementation
│   └── train.py            # Training script
├── config/
│   └── config.yaml         # Configuration file
├── notebooks/
│   └── analysis.ipynb      # Jupyter notebook for analysis
├── README.md               # Project documentation
└── requirements.txt        # Python dependencies
```
## Installation

To install the project, follow these steps:

1. Clone the repository:

   ```bash
   git clone https://github.com/AIMSIIITA/SNAKE_AI.git
   cd SNAKE_AI
   ```

2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
   ```

3. Install the dependencies:

   ```bash
   pip install -r requirements.txt
   ```
## Usage

To run the Snake game with the trained DQN agent, use the following command:

```bash
python src/snake_game.py
```
## Configuration

The configuration for the project is stored in `config/config.yaml`. It includes settings for the game environment, DQN parameters, and training hyperparameters. Modify the configuration file to customize the agent's training and evaluation process.
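For example, the configuration can be loaded with PyYAML; the keys below are illustrative only, since the actual schema is defined in `config/config.yaml`:

```python
import yaml  # PyYAML

# Load the project configuration.
with open("config/config.yaml") as f:
    cfg = yaml.safe_load(f)

# Hypothetical keys -- consult config/config.yaml for the actual schema.
learning_rate = cfg["training"]["learning_rate"]
batch_size = cfg["training"]["batch_size"]
grid_size = cfg["game"]["grid_size"]
```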
## Training

To train the DQN agent, run the training script:

```bash
python src/train.py
```

The training script saves model checkpoints in the `data/checkpoints/` directory. The training process uses experience replay and periodic target network updates to stabilize learning and improve convergence.
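To make "experience replay" and "target network updates" concrete, here is a minimal PyTorch-style sketch of one DQN update step. The class and function names are illustrative and do not reflect the actual API of `src/dqn_agent.py`:

```python
import random
from collections import deque

import torch
import torch.nn as nn

class ReplayBuffer:
    """Fixed-size buffer of (state, action, reward, next_state, done) transitions."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, *transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return (torch.stack(states),
                torch.tensor(actions),
                torch.tensor(rewards, dtype=torch.float32),
                torch.stack(next_states),
                torch.tensor(dones, dtype=torch.float32))

def dqn_update(online_net, target_net, optimizer, buffer, batch_size=64, gamma=0.99):
    """One gradient step on the temporal-difference loss."""
    states, actions, rewards, next_states, dones = buffer.sample(batch_size)
    # Q-values of the actions actually taken.
    q = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bootstrapped targets from the frozen target network.
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * next_q
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Periodically sync the target network to stabilize training, e.g. every N steps:
# target_net.load_state_dict(online_net.state_dict())
```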
## Evaluation

To evaluate the performance of the trained DQN agent, run the evaluation script:

```bash
python src/evaluate.py
```

The script loads the saved model checkpoints and displays the agent's performance metrics, including average score and a strategic gameplay analysis.
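For reference, a greedy evaluation loop typically looks like the sketch below; `SnakeGame`, the environment's reset/step interface, and the checkpoint filename are placeholders, not the project's actual API:

```python
import torch

def evaluate(net, env, episodes=20):
    """Run greedy (epsilon = 0) episodes and return the average score."""
    scores = []
    for _ in range(episodes):
        state, done, score = env.reset(), False, 0
        while not done:
            with torch.no_grad():
                action = net(state.unsqueeze(0)).argmax(dim=1).item()
            state, reward, done = env.step(action)  # assumed env interface
            score += reward
        scores.append(score)
    return sum(scores) / len(scores)

# Hypothetical usage -- the actual checkpoint name and env API may differ:
# net.load_state_dict(torch.load("data/checkpoints/best.pt"))
# print("Average score:", evaluate(net, SnakeGame()))
```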
## Results

The results of the training and evaluation processes are documented in the `notebooks/analysis.ipynb` notebook, which provides detailed visualizations and analysis of the agent's performance, including:

- Learning curves
- Hyperparameter optimization
- Gameplay strategies
## Contributing

Contributions to the project are welcome. To contribute, follow these steps:

1. Fork the repository and clone your fork:

   ```bash
   git clone https://github.com/your-username/SNAKE_AI.git
   cd SNAKE_AI
   ```

2. Create a feature branch:

   ```bash
   git checkout -b feature/your-feature-name
   ```

3. Commit your changes:

   ```bash
   git commit -m 'Add some feature'
   ```

4. Push to the branch:

   ```bash
   git push origin feature/your-feature-name
   ```

5. Open a pull request: visit the repository on GitHub and open a pull request targeting the `main` branch.
## License

This project is licensed under the MIT License. See the LICENSE file for details.
## Acknowledgments

We would like to thank the authors of the following papers and resources that significantly contributed to this project:

- Volodymyr Mnih et al., "Human-level control through deep reinforcement learning," Nature, 2015.
- OpenAI Gym
- Pygame Community

For more detailed information, please refer to the `Deep_Q_Snake__An_Intelligent_Agent_Mastering_the_Snake_Game_with_Deep_Reinforcement_Learning.pdf` file included in this repository.