Welcome to the final project of CSC249/449!
In this final project, you will build deep learning models for two tasks on the A2D dataset. Please read `final_project.pdf` for details of the project requirements.
Before starting work on a specific task, please complete the following preparation on your Google Cloud server.
## Clone the repository

Please use the following command to clone this repository (please do not download the zip file):

```
git clone --recursive https://github.com/rochesterxugroup/csc249_final_proj_2020.git
```

If there are any updates to the repository, please use the following commands to pull them in:

```
git submodule update --remote --merge
git pull --recurse-submodules
```

Then `cd` into the cloned repo:

```
cd csc249_final_proj_2020
```
## Environment Configuration

- Download and install Miniconda from https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh (you can skip this step if Miniconda is already installed on your Google Cloud server).

- Create the virtual environment and install its dependencies:

  ```
  conda env create -f env.yml
  ```

- Activate the virtual environment (please remember to activate it every time you log in to Google Cloud):

  ```
  conda activate pytorch_0_4_1
  ```

- Then, install PyTorch 0.4.1 and torchvision. (You can try a newer version of PyTorch, but we don't guarantee the code template will work in a different environment.)

  ```
  conda install pytorch=0.4.1 cuda92 -c pytorch
  conda install torchvision
  ```
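After installing, a quick sanity check (a suggestion, not part of the project template) can confirm the install worked, assuming the `pytorch_0_4_1` environment is active:

```shell
# Print the installed PyTorch version and whether a CUDA GPU is visible
python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"
```

On a properly configured GPU instance this should report version `0.4.1` and `True`; if it reports `False`, check that the CUDA drivers on the server match the `cuda92` build.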
- Install ffmpeg:

  ```
  sudo apt install ffmpeg
  ```
## Download A2D dataset

Please make sure you are in the `csc249_final_proj_2020` directory.

- Download the A2D dataset:

  ```
  wget http://www.cs.rochester.edu/~cxu22/t/249S19/A2D.tar.gz
  ```

  If wget is not installed on the server, please install it via:

  ```
  sudo apt install wget
  ```
- Decompress the tar ball and then remove it:

  ```
  tar xvzf A2D.tar.gz
  rm A2D.tar.gz
  ```
- Extract frames from the videos:

  ```
  python extract_frames.py
  ```

  (Tip: since extracting frames takes a long time, run the command inside `screen` or `tmux` in case your connection drops.)
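The `screen` tip above can be sketched as follows (the session name `extract` is arbitrary, chosen here for illustration):

```shell
# Start the extraction in a detached screen session named "extract",
# so it keeps running even if the SSH connection drops
screen -dmS extract python extract_frames.py

# List running sessions, then reattach to check on progress
screen -ls
screen -r extract   # detach again with Ctrl-A then D
```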
Please read `submission/README.md` for details of the submission format.