
# HAPZero


## 🌈 Model Architecture

*(Figure: HAPZero model architecture.)*

## 📚 Dependencies

- Python 3.6.7
- PyTorch 1.7.0
- All experiments are run on a single NVIDIA RTX 4090 GPU.
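As a quick sanity check (not part of the released code), the following snippet verifies the PyTorch installation and GPU visibility before training:

```python
import torch

# Confirm the installed PyTorch version matches the one listed above.
print("PyTorch version:", torch.__version__)

# Training expects a CUDA-capable GPU (the authors use one RTX 4090).
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device found; training would fall back to CPU.")
```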

## ⚡ Prerequisites

- Dataset: download the IP102 dataset to the dataset root path on your machine. The datasets can be downloaded from Xian et al. (CVPR 2017); place them in `./datasets/`.
- Data split: the dataset split files for the three groups are provided in `./data/xlsa19/split`.
- Attribute w2v: download the attribute word vectors from the IP102 Att link and place them in `./data/xlsa/w2v`.
- Vision encoder: download the pretrained Vision Transformer used as the vision encoder and place it in `./pretrain_model_vit`. A layout check is sketched after this list.
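A minimal sketch (assuming the directory layout described above; not part of the repository) to verify that everything is in place before training:

```python
from pathlib import Path

# Paths expected by the repository layout described above.
required = [
    Path("./datasets"),           # IP102 dataset root
    Path("./data/xlsa19/split"),  # split files for the three groups
    Path("./data/xlsa/w2v"),      # attribute word vectors
    Path("./pretrain_model_vit"), # pretrained ViT weights
]

for p in required:
    status = "ok" if p.exists() else "MISSING"
    print(f"{status:8s} {p}")
```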

## 🚀 Train & Eval

Before running the commands below, set the hyperparameters for each dataset in its config file:

```
config/ip102.yaml       # IP102
```
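For reference, here is a hedged sketch of how a training script might read this YAML config; the key names are illustrative, not taken from the repository:

```python
import yaml  # requires PyYAML

# Load the dataset-specific hyperparameters from the config file above.
with open("config/ip102.yaml") as f:
    cfg = yaml.safe_load(f)

# Hypothetical keys for illustration only; consult the actual file.
lr = cfg.get("lr", 1e-4)
batch_size = cfg.get("batch_size", 32)
print(f"lr={lr}, batch_size={batch_size}")
```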

Train:

```
python train.py
```

Eval:

```
python test.py
```

You can also test our trained models: GroupA, GroupB (link coming soon), GroupC (link coming soon). A hedged loading sketch is shown below.
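A minimal sketch for inspecting a downloaded checkpoint (the file path is an assumption, not the repository's actual naming):

```python
import torch

# Hypothetical path; point this at the checkpoint you downloaded (e.g. GroupA).
ckpt = torch.load("checkpoints/group_a.pth", map_location="cpu")

# Some checkpoints wrap the weights; unwrap if a 'state_dict' key is present.
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

# List a few parameter names and shapes to confirm the file loaded correctly.
for name, tensor in list(state.items())[:5]:
    print(name, tuple(tensor.shape))
```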

## 📕 Acknowledgements

We thank the following repository for providing helpful components used in our work: PSVMA.