Code for the paper: Learning Commonsense-aware Moment-Text Alignment for Fast Video Temporal Grounding.
The paper is available here.
Ziyue Wu, Junyu Gao, Shucheng Huang, Changsheng Xu
Grounding temporal video segments described in natural language queries effectively and efficiently is a crucial capability needed in vision-and-language fields. In this paper, we deal with the fast video temporal grounding (FVTG) task, aiming at localizing the target segment with high speed and favorable accuracy. Most existing approaches adopt elaborately designed cross-modal interaction modules to improve the grounding performance, which suffer from a test-time bottleneck. Although several common space-based methods enjoy the high-speed merit during inference, they can hardly capture the comprehensive and explicit relations between visual and textual modalities. In this paper, to tackle the dilemma of the speed-accuracy tradeoff, we propose a commonsense-aware cross-modal alignment (CCA) framework, which incorporates commonsense-guided visual and text representations into a complementary common space for fast video temporal grounding. Specifically, the commonsense concepts are explored and exploited by extracting the structural semantic information from a language corpus. Then, a commonsense-aware interaction module is designed to obtain bridged visual and text features by utilizing the learned commonsense concepts. Finally, to maintain the original semantic information of textual queries, a cross-modal complementary common space is optimized to obtain matching scores for performing FVTG. Extensive results on two challenging benchmarks show that our CCA method performs favorably against state-of-the-art methods while running at high speed.
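To illustrate why common space-based grounding is fast at test time, here is a minimal sketch (not the actual CCA implementation; module names and feature dimensions are placeholders): candidate moment features and the query feature are projected into a shared space, and matching scores reduce to a single similarity computation, so no heavy cross-modal interaction is needed during inference.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommonSpaceMatcher(nn.Module):
    """Toy illustration of common-space moment-text matching.

    This is NOT the CCA model; it only sketches the speed advantage of
    common-space methods: moment features can be projected offline, and a
    query is matched with one matrix multiplication at test time.
    """

    def __init__(self, vis_dim=500, txt_dim=300, joint_dim=256):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, joint_dim)   # visual -> common space
        self.txt_proj = nn.Linear(txt_dim, joint_dim)   # textual -> common space

    def forward(self, moment_feats, query_feat):
        # moment_feats: (num_moments, vis_dim), query_feat: (txt_dim,)
        v = F.normalize(self.vis_proj(moment_feats), dim=-1)
        q = F.normalize(self.txt_proj(query_feat), dim=-1)
        scores = v @ q                      # cosine similarity per candidate moment
        return scores.argmax(), scores      # best-matching moment and all scores

if __name__ == "__main__":
    matcher = CommonSpaceMatcher()
    moments = torch.randn(128, 500)         # 128 candidate moment features (dummy)
    query = torch.randn(300)                # one sentence embedding (dummy)
    best, scores = matcher(moments, query)
    print(best.item(), scores.shape)
```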
We recommend the following dependencies.
- CUDA >= 11.0
- Python 3.7
- torch 1.7.1
- torchvision 0.8.2
- torchtext
- numpy
- easydict
- terminaltables
- yacs
- h5py
- tqdm
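As a quick sanity check of the environment (a minimal snippet, independent of this repo), you can verify the installed torch/torchvision versions and CUDA availability:

```python
import torch
import torchvision

# Expect torch 1.7.1 / torchvision 0.8.2 with a CUDA >= 11.0 build available.
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
```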
Download the dataset files and pre-trained model files from BaiduPan (a small snippet for sanity-checking the downloaded feature files is given after the directory tree below).
The folder structure should be as follows:
```
.
├── checkpoints
│   ├── tacos
│   └── acnet
├── data
│   ├── TACoS
│   │   ├── tall_c3d_features.hdf5
│   │   └── ...
│   └── ActivityNet
│       ├── sub_activitynet_v1-3.c3d.hdf5
│       └── ...
├── scripts
│   ├── train.sh
│   ├── eval.sh
│   └── ...
├── configs
├── cca
│   ├── modeling
│   └── ...
├── train_net.py
└── test_net.py
```
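After downloading, you can quickly verify that the HDF5 feature files are readable (a minimal check using h5py; the path follows the tree above, and the printed keys are simply whatever the file contains):

```python
import h5py

# Hypothetical quick check of the downloaded C3D features for TACoS.
with h5py.File("data/TACoS/tall_c3d_features.hdf5", "r") as f:
    keys = list(f.keys())
    print(f"{len(keys)} entries, e.g. {keys[:3]}")
```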
We provide scripts to simplify training and inference. Please modify the corresponding file as needed and run it.
- Training on TACoS: scripts/train.sh
- Training on ActivityNet Captions: scripts/train_acnet.sh
- Evaluating on TACoS: scripts/eval.sh
- Evaluating on ActivityNet Captions: scripts/eval_acnet.sh
Efficiency and accuracy comparison on TACoS. TE and CML denote the text-encoding and cross-modal matching stages of per-query inference and ALL is their total (lower is better); ACC is R@1 at IoU=0.5, and sumACC is the sum of R@1 at IoU=0.3 and IoU=0.5.

Methods | TE | CML | ALL | ACC | sumACC |
---|---|---|---|---|---|
FVMR | 3.51 | 0.14 | 3.65 | 29.12 | 70.60 |
CCA (ours) | 2.33 | 0.29 | 2.62 | 32.83 | 78.13 |
Efficiency and accuracy comparison on ActivityNet Captions (same metrics as above).

Methods | TE | CML | ALL | ACC | sumACC |
---|---|---|---|---|---|
FVMR | 3.14 | 0.09 | 3.23 | 45.00 | 106.60 |
CCA (ours) | 2.80 | 0.30 | 3.10 | 46.19 | 106.77 |
Grounding results of our CCA on TACoS:

R@1,IoU=0.1 | R@1,IoU=0.3 | R@1,IoU=0.5 | R@1,IoU=0.7 | R@5,IoU=0.1 | R@5,IoU=0.3 | R@5,IoU=0.5 | R@5,IoU=0.7 |
---|---|---|---|---|---|---|---|
56.00 | 45.30 | 32.83 | 18.07 | 76.60 | 64.38 | 52.68 | 33.10 |
Grounding results of our CCA on ActivityNet Captions:

R@1,IoU=0.3 | R@1,IoU=0.5 | R@1,IoU=0.7 | R@5,IoU=0.3 | R@5,IoU=0.5 | R@5,IoU=0.7 |
---|---|---|---|---|---|
60.58 | 46.19 | 28.87 | 86.02 | 77.86 | 60.28 |
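For reference, R@k at IoU=m counts a query as correct if at least one of the top-k retrieved moments has a temporal IoU of at least m with the ground-truth segment. A minimal sketch of this standard metric (not code from this repo):

```python
def temporal_iou(pred, gt):
    """IoU between two temporal segments given as (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_k(topk_preds, gt, k=1, thresh=0.5):
    """1.0 if any of the top-k predicted segments reaches the IoU threshold."""
    return float(any(temporal_iou(p, gt) >= thresh for p in topk_preds[:k]))

# Example: top-5 predictions for one query and its ground-truth segment.
preds = [(10.0, 25.0), (8.0, 20.0), (30.0, 40.0), (5.0, 15.0), (12.0, 28.0)]
gt = (11.0, 24.0)
print(recall_at_k(preds, gt, k=1, thresh=0.5))  # 1.0, since IoU((10,25),(11,24)) ~ 0.87
```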
Please star this repo and cite the following papers if you find our CCA useful for your research:
```
@article{wu2022learning,
  title={Learning Commonsense-aware Moment-Text Alignment for Fast Video Temporal Grounding},
  author={Wu, Ziyue and Gao, Junyu and Huang, Shucheng and Xu, Changsheng},
  journal={arXiv preprint arXiv:2204.01450},
  year={2022}
}

@inproceedings{gao2021fast,
  title={Fast video moment retrieval},
  author={Gao, Junyu and Xu, Changsheng},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={1523--1532},
  year={2021}
}
```