
# MixGen: A New Multi-Modal Data Augmentation

This is the official PyTorch implementation of MixGen, a joint image-text data augmentation technique for vision-language representation learning that improves data efficiency.

Here are some example image-text pairs generated by MixGen.

## How to use

MixGen is an input-level data augmentation technique, so it can be plugged into existing vision-language learning methods with minimal code changes.

Here we use ALBEF (NeurIPS'21) as an illustrative example. We only need to add one line between the dataloader and the model's forward pass.

That is, change from

```python
for i, (image, text) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
    optimizer.zero_grad()
```

to

```python
import mixgen as mg
for i, (image, text) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
    image, text = mg.mixgen(image, text, num=16)
    optimizer.zero_grad()
```

And that's it! No further changes are needed. You can simply kick off training just as ALBEF does:

```shell
python -m torch.distributed.launch --nproc_per_node=8 --use_env Pretrain.py
```
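For intuition, here is a minimal sketch of what a `mixgen`-style function could look like, assuming the recipe described in the paper: images are linearly interpolated (mixup) while the corresponding raw caption strings are concatenated. The function name, `num`, and `lam` parameters mirror the usage above, but this is an illustrative sketch, not the repository's actual `mixgen.py`.

```python
import torch

def mixgen(image, text, num=16, lam=0.5):
    """Illustrative sketch of MixGen (assumed recipe, not the official code).

    image: (B, C, H, W) tensor of a batch of images
    text:  list of B raw caption strings
    num:   number of mixed pairs to generate in place
    lam:   mixup coefficient for image interpolation
    """
    for i in range(num):
        # Image mixup: blend sample i with sample i + num
        image[i] = lam * image[i] + (1 - lam) * image[i + num]
        # Text concatenation for the same pair of samples
        text[i] = text[i] + " " + text[i + num]
    return image, text
```

Because the augmentation happens purely at the input level (before tokenization and the forward pass), the model and loss need no modification, which is why a single extra line in the training loop suffices.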

## Security

See CONTRIBUTING for more information.

## License

This project is licensed under the Apache-2.0 License.
