
Evaluation is getting slower #491

Open
Yibo-0820 opened this issue Jan 7, 2025 · 2 comments
Comments
Yibo-0820 commented Jan 7, 2025

Hi,

This framework is awesome!

But I'm having a small problem with the evaluation. I tried to use the video_llava model to evaluate on two datasets, videomme and longvideobench. The evaluation itself works fine. However, I noticed that it gets slower and slower as it progresses: it starts out relatively fast and then gradually slows down. Why does this happen? Is it possible that I am missing something?
The following is the command I used, running in a Python==3.10 environment with only the lmms-eval package installed:

python3 -m accelerate.commands.launch --num_processes=2 -m lmms_eval --model video_llava --tasks longvideobench_val_v --batch_size 1 --log_samples --log_samples_suffix video_llava_lvb_v --output_path ./logs/

[screenshot attached]

Hope to get an answer, thank you!

pufanyi (Collaborator) commented Jan 17, 2025

@Yibo-0820 Hiii! I'm not entirely sure, but could it be due to the video length? I noticed that around indices 250-300 of the dataset, the videos are all very long: https://huggingface.co/datasets/longvideobench/LongVideoBench/viewer/default/validation?p=5
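One quick way to check is to look at the duration metadata for that range. A minimal sketch, assuming the validation split loads via `datasets.load_dataset` and exposes a duration-like column (neither assumption verified here):

```python
# Sketch: inspect video durations in the LongVideoBench validation split.
# Assumptions (unverified): the split loads with `datasets.load_dataset`
# and the metadata contains a numeric "duration" column.
from datasets import load_dataset

ds = load_dataset("longvideobench/LongVideoBench", split="validation")
print(ds.column_names)  # check which field actually holds the video duration

if "duration" in ds.column_names:
    # Summarize the items around indices 250-299, where the slowdown appears.
    window = ds.select(range(250, 300))
    durations = window["duration"]
    print("min/mean/max duration (s) for items 250-299:",
          min(durations), sum(durations) / len(durations), max(durations))
```

If the durations in that range really are much longer, the slowdown likely tracks video length rather than anything in lmms-eval itself.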

kcz358 (Collaborator) commented Jan 17, 2025

Hi, if the GPU usage is low, it is likely that video loading is the bottleneck, which I think is hard to solve.
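One rough way to confirm this is to time video decoding separately from the model forward pass. A minimal sketch using decord (just one common loader, not necessarily what video_llava uses inside lmms-eval; the paths are placeholders):

```python
# Sketch: measure per-video decode time, independent of the model.
# `decord` is one common video loader; the paths below are placeholders.
import time
from decord import VideoReader, cpu

video_paths = ["video_0001.mp4", "video_0002.mp4"]  # hypothetical paths

for path in video_paths:
    start = time.perf_counter()
    vr = VideoReader(path, ctx=cpu(0))
    # Sample 8 frames uniformly, similar to what many video LMMs do.
    num_frames = len(vr)
    indices = [int(i * num_frames / 8) for i in range(8)]
    frames = vr.get_batch(indices).asnumpy()
    elapsed = time.perf_counter() - start
    print(f"{path}: {num_frames} frames, decode time {elapsed:.2f}s")
```

If decode time grows sharply for the later, longer videos while GPU utilization stays low, the loading path is the bottleneck rather than the model.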
