Text Generation with LSTM Neural Networks using Keras on top of Theano.
Using the Amazon Web Services infrastructure, we were able to access GPUs to speed up the training of our deep learning models. We used a community Amazon Machine Image (ami-125b2c72) built for the Stanford CS231n class. The GPU graphics instance had the following specifications (a quick Theano device check is sketched after the list):
g2.2xlarge:
- 15 GiB memory
- 1 x NVIDIA GRID GPU (Kepler GK104)
- 60 GB of local instance storage
- 64-bit platform
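For Keras to use the GPU, the Theano backend has to be pointed at it, typically via `THEANO_FLAGS` or a `.theanorc` file. A minimal sanity check (not from the original project; the script name and flag values are assumptions):

```python
# A minimal check that Theano sees the instance's GPU before a long
# training run. Run as:
#   THEANO_FLAGS='device=gpu,floatX=float32' python check_gpu.py
import theano

print(theano.config.device)  # 'gpu' if the flag took effect, otherwise 'cpu'
print(theano.config.floatX)  # 'float32' is required by the Theano GPU backend
```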
Each model trained for 50 epochs, which took between 4 and 9 hours on the GPU instance, depending on the size of the text files.
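The project's training code isn't reproduced in this section, but the setup described (a character-level LSTM trained per author on a plain-text corpus) matches the standard Keras text-generation recipe. A minimal sketch follows; the corpus filename, window settings, and layer size are illustrative assumptions, not the project's actual values.

```python
# Sketch of the training setup, following the standard Keras
# character-level LSTM recipe (Keras 1.x, Theano backend).
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation, LSTM

text = open('rowling.txt').read().lower()   # hypothetical corpus file
chars = sorted(set(text))
char_idx = {c: i for i, c in enumerate(chars)}

# Cut the corpus into overlapping windows of maxlen characters, each
# paired with the character that follows it.
maxlen, step = 40, 3
sentences = [text[i:i + maxlen] for i in range(0, len(text) - maxlen, step)]
next_chars = [text[i + maxlen] for i in range(0, len(text) - maxlen, step)]

# One-hot encode inputs and targets.
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool_)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool_)
for i, sentence in enumerate(sentences):
    for t, c in enumerate(sentence):
        X[i, t, char_idx[c]] = 1
    y[i, char_idx[next_chars[i]]] = 1

# Single LSTM layer feeding a softmax over the character vocabulary.
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

model.fit(X, y, batch_size=128, nb_epoch=50)  # 'epochs=50' in Keras 2
```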
The training loss for the models went as follows:

[Figure: per-epoch training loss for each author's model]
The different authors' models vary in performance: the Edgar Allan Poe model performs poorly, while the J.K. Rowling model shows the most promise. In addition to these two, we also trained models on the works of Dante Alighieri, J.R.R. Tolkien, and William Shakespeare. The models show both a base-level understanding of the English language, reliably forming proper words and sentences, and a higher-level understanding of character, plot, and rhythm, producing sequences of text whose dialogue and rhythmic patterns resemble those of the authors they were modeled on.
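The generation step itself isn't shown in this section; a common approach, used in the canonical Keras example this setup resembles, is temperature-scaled sampling from the model's softmax output. A hedged sketch, reusing `model`, `text`, `chars`, `char_idx`, and `maxlen` from the training sketch above; the temperature and output length are illustrative.

```python
import numpy as np

def sample(preds, temperature=1.0):
    # Reweight the softmax distribution and draw one character index;
    # lower temperatures make the output more conservative.
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds + 1e-8) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    return np.argmax(np.random.multinomial(1, preds, 1))

seed = text[:maxlen]
generated = seed
for _ in range(400):                          # generate 400 characters
    x = np.zeros((1, maxlen, len(chars)))
    for t, c in enumerate(seed):
        x[0, t, char_idx[c]] = 1
    preds = model.predict(x, verbose=0)[0]
    next_char = chars[sample(preds, temperature=0.5)]
    generated += next_char
    seed = seed[1:] + next_char               # slide the window forward
print(generated)
```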
The project is available at http://edgaralanturing.us-west-2.elasticbeanstalk.com/ (the backend is currently under construction).
Created for HackHarvard 2016.