This repository contains different convolutional neural network (CNN) models used to train a system that recognizes facial expressions dynamically.
The dataset is taken from the Kaggle challenge:
https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data
new_model.py:
A new CNN model created for training the system. The layers used are:
INPUT -> [CONV64 + RELU] -> MAX-POOL -> [CONV128 + RELU] -> MAX-POOL -> FC1 + RELU -> FC2 + RELU -> Softmax Regression -> classification
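Below is a minimal sketch of that layer stack, written with tf.keras for brevity rather than the repo's own CNN_Layers.py helpers. The 48x48x1 input shape and 7 output classes follow the Kaggle FER2013 data; the kernel sizes and fully connected widths are assumptions, not the exact values in new_model.py.

```python
# Hedged sketch of the layer stack described above (tf.keras).
# Filter kernel sizes and FC widths are assumptions; CONV64/CONV128 counts
# and the overall ordering follow the README.
import tensorflow as tf

def build_model(input_shape=(48, 48, 1), num_classes=7):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu",
                               input_shape=input_shape),                     # [CONV64 + RELU]
        tf.keras.layers.MaxPooling2D(2),                                     # MAX-POOL
        tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu"),   # [CONV128 + RELU]
        tf.keras.layers.MaxPooling2D(2),                                     # MAX-POOL
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1024, activation="relu"),                      # FC1 + RELU (size assumed)
        tf.keras.layers.Dense(512, activation="relu"),                       # FC2 + RELU (size assumed)
        tf.keras.layers.Dense(num_classes, activation="softmax"),            # softmax classification
    ])
```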
CNN_Layers.py:
This file wraps CNN layer APIs from the TensorFlow library. Appropriate layers are defined with the required hyperparameters and parameters.
new_model.py uses layers from this file.
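As an illustration of the kind of wrappers such a file provides, here is a hedged sketch built on TensorFlow's tf.nn primitives; the actual function names, signatures, and hyperparameters in CNN_Layers.py may differ.

```python
# Illustrative layer helpers in the spirit of CNN_Layers.py (names assumed).
import tensorflow as tf

def conv_relu(x, in_channels, out_channels, kernel_size=3):
    """2-D convolution followed by ReLU."""
    w = tf.Variable(tf.random.truncated_normal(
        [kernel_size, kernel_size, in_channels, out_channels], stddev=0.1))
    b = tf.Variable(tf.zeros([out_channels]))
    conv = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")
    return tf.nn.relu(conv + b)

def max_pool(x, size=2):
    """Max pooling with a size x size window and matching stride."""
    return tf.nn.max_pool2d(x, ksize=[1, size, size, 1],
                            strides=[1, size, size, 1], padding="SAME")

def fully_connected_relu(x, in_units, out_units):
    """Fully connected layer followed by ReLU."""
    w = tf.Variable(tf.random.truncated_normal([in_units, out_units], stddev=0.1))
    b = tf.Variable(tf.zeros([out_units]))
    return tf.nn.relu(tf.matmul(x, w) + b)
```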
resnets_model.py:
Residual blocks are used as the basic building blocks of this CNN model.
TFLearn APIs are used to build the layers and residual blocks, with appropriate hyperparameters and parameters.
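The sketch below follows TFLearn's public ResNet example pattern (input_data, conv_2d, residual_block, global_avg_pool, fully_connected); the block counts, widths, and optimizer settings here are assumptions rather than the exact configuration in resnets_model.py.

```python
# Hedged sketch of a TFLearn residual network for 48x48 grayscale inputs
# and 7 expression classes; depths/widths are assumptions.
import tflearn

net = tflearn.input_data(shape=[None, 48, 48, 1])
net = tflearn.conv_2d(net, 16, 3, regularizer='L2', weight_decay=0.0001)
net = tflearn.residual_block(net, 3, 16)                     # stack of residual blocks
net = tflearn.residual_block(net, 1, 32, downsample=True)
net = tflearn.residual_block(net, 2, 32)
net = tflearn.residual_block(net, 1, 64, downsample=True)
net = tflearn.residual_block(net, 2, 64)
net = tflearn.batch_normalization(net)
net = tflearn.activation(net, 'relu')
net = tflearn.global_avg_pool(net)
net = tflearn.fully_connected(net, 7, activation='softmax')  # 7 expression classes
net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')
model = tflearn.DNN(net)
```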
save_model_android.py:
Freezes the trained graph so it can be deployed to Android in production.
The graph, its nodes, and its parameters (weights and biases) are frozen and stored in a .pb (Protocol Buffer) file.
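A typical TF1-style way to do this is to restore the latest checkpoint and convert variables to constants with graph_util.convert_variables_to_constants; the checkpoint path and output node name below are assumptions, not necessarily the values used in save_model_android.py.

```python
# Hedged sketch of freezing a trained TF1 graph into a .pb file.
import tensorflow as tf
from tensorflow.python.framework import graph_util

output_node_names = ["softmax"]                       # assumed output node name
checkpoint = tf.train.latest_checkpoint("./checkpoints")  # assumed checkpoint dir

with tf.Session() as sess:
    saver = tf.train.import_meta_graph(checkpoint + ".meta")
    saver.restore(sess, checkpoint)
    # Bake the variables (weights and biases) into the graph as constants
    frozen_graph_def = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)

with tf.gfile.GFile("frozen_model.pb", "wb") as f:
    f.write(frozen_graph_def.SerializeToString())
```

The resulting .pb file can then be bundled with the Android app and loaded for on-device inference.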