This project classifies chest X-ray images into two categories, Normal and Pneumonia, using Convolutional Neural Networks (CNNs) built with Keras: a custom CNN and a fine-tuned VGG16 model.
The dataset used for this project is sourced from Kaggle. It contains chest X-ray images categorized into two folders: `NORMAL` and `PNEUMONIA`. The `NORMAL` folder contains X-ray images of healthy patients, while the `PNEUMONIA` folder contains X-ray images of patients diagnosed with pneumonia.
The images above show a batch of 5 Normal X-rays.
The images above show a batch of 5 Pneumonia X-rays.
The custom CNN model consists of the following layers:
- Conv2D: 32 filters, kernel size (3, 3), activation 'relu', input shape (224, 224, 3)
- MaxPooling2D: pool size (2, 2)
- Conv2D: 64 filters, kernel size (3, 3), activation 'relu'
- MaxPooling2D: pool size (2, 2)
- Conv2D: 128 filters, kernel size (3, 3), activation 'relu'
- MaxPooling2D: pool size (2, 2)
- Flatten
- Dense: 512 units, activation 'relu'
- Dropout: rate 0.5
- Dense: 1 unit, activation 'sigmoid'
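The layer stack above can be sketched as a Keras `Sequential` model; the function name is illustrative, but the layers follow the list exactly.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_custom_cnn(input_shape=(224, 224, 3)):
    """Build the custom CNN described above: three Conv2D/MaxPooling2D
    blocks followed by a dense classifier head with dropout."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # probability of pneumonia
    ])
    return model
```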
- Data Preprocessing: Images are resized to 224x224 pixels and normalized.
- Compilation: The model is compiled with binary cross-entropy loss and the Adam optimizer.
- Training: The model is trained on the training dataset with a validation split, using data augmentation techniques to improve generalization.
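The preprocessing, compilation, and training steps above can be sketched as follows. The specific augmentation transforms and their ranges are assumptions for illustration; the project's exact augmentation settings may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers

def make_augmentation():
    """Normalization plus light augmentation, applied as preprocessing layers."""
    return tf.keras.Sequential([
        layers.Rescaling(1.0 / 255),      # scale pixels from [0, 255] to [0, 1]
        layers.RandomFlip("horizontal"),  # illustrative augmentation choices
        layers.RandomRotation(0.05),
        layers.RandomZoom(0.1),
    ])

def compile_and_train(model, train_ds, val_ds, epochs=10):
    """Compile with Adam + binary cross-entropy and fit with a validation set."""
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model.fit(train_ds, validation_data=val_ds, epochs=epochs)
```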
The VGG16 model is a deep convolutional network pre-trained on the ImageNet dataset. For this project, the following modifications are made:
- VGG16 Base: Pre-trained VGG16 model without the top layers.
- Flatten
- Dense: 256 units, activation 'relu'
- Dropout: rate 0.5
- Dense: 1 unit, activation 'sigmoid'
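The modified VGG16 architecture can be sketched like this: the pre-trained base (ImageNet weights, no top layers) is frozen for feature extraction, with the custom head listed above stacked on top. The function name and `freeze_base` flag are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_vgg16_classifier(input_shape=(224, 224, 3),
                           weights="imagenet", freeze_base=True):
    """VGG16 base without top layers, plus the custom classifier head."""
    base = tf.keras.applications.VGG16(
        weights=weights, include_top=False, input_shape=input_shape)
    base.trainable = not freeze_base  # frozen => pure feature extraction
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])
    return model
```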
- Data Preprocessing: Images are resized to 224x224 pixels and normalized.
- Feature Extraction: The pre-trained VGG16 model is used to extract features from the images.
- Fine-Tuning: The custom top layers are trained on the extracted features, while the lower layers of VGG16 are optionally fine-tuned.
- Compilation: The model is compiled with binary cross-entropy loss and the Adam optimizer.
- Training: The model is trained on the training dataset with a validation split, using data augmentation techniques to improve generalization.
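The optional fine-tuning step can be sketched by unfreezing only VGG16's last convolutional block and recompiling with a reduced learning rate (a common convention when fine-tuning; the specific rate here is an assumption, not the project's value).

```python
import tensorflow as tf

def unfreeze_top_block(model, base_index=0, prefix="block5"):
    """Unfreeze only VGG16's last conv block for fine-tuning.

    `model` is assumed to be a Sequential model with the VGG16 base at
    position `base_index`; VGG16's layers are named block1_* .. block5_*.
    """
    base = model.layers[base_index]
    base.trainable = True  # enable training, then re-freeze lower blocks
    for layer in base.layers:
        layer.trainable = layer.name.startswith(prefix)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```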
The user interface for this project is built using Streamlit. It allows users to upload chest X-ray images and get predictions from the trained models.
To run the Streamlit app, use the following command:

```bash
streamlit run app.py
```
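A minimal `app.py` along these lines could look like the sketch below. The model path `model.h5`, the page text, and the 0.5 decision threshold are assumptions for illustration, not taken from the project.

```python
import tensorflow as tf

def preprocess(image_array):
    """Resize an uploaded image to the model's input size and normalize it."""
    img = tf.image.resize(image_array, (224, 224))
    img = tf.cast(img, tf.float32) / 255.0
    return tf.expand_dims(img, axis=0)  # add a batch dimension

def main():
    # Streamlit is imported lazily so the helper above stays importable elsewhere.
    import streamlit as st
    st.title("Chest X-ray Pneumonia Classifier")
    model = tf.keras.models.load_model("model.h5")  # assumed model path
    uploaded = st.file_uploader("Upload a chest X-ray",
                                type=["png", "jpg", "jpeg"])
    if uploaded is not None:
        image = tf.io.decode_image(uploaded.read(), channels=3)
        st.image(image.numpy(), caption="Uploaded X-ray")
        prob = float(model.predict(preprocess(image))[0, 0])
        label = "PNEUMONIA" if prob >= 0.5 else "NORMAL"
        st.write(f"Prediction: {label} (probability {prob:.2f})")

if __name__ == "__main__":
    main()
```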