Hi,
This framework is awesome!
But I'm having a small problem with the evaluation. I used the video_llava model to evaluate on two datasets, videomme and longvideobench. The evaluation itself works fine. However, I noticed that it gets slower and slower as it progresses: it starts out relatively fast, and then the per-sample speed keeps dropping. Why does this happen? Am I missing something?
The following is the command I used, running in a Python 3.10 environment with only the lmms-eval package installed:
```bash
python3 -m accelerate.commands.launch --num_processes=2 -m lmms_eval --model video_llava --tasks longvideobench_val_v --batch_size 1 --log_samples --log_samples_suffix video_llava_lvb_v --output_path ./logs/
```
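For reference, here is a minimal sketch for confirming whether per-sample latency and process memory grow together over a run, which would suggest something is accumulating across iterations. This is not part of lmms-eval; `psutil` is an assumed extra dependency, and `timed` is a hypothetical helper you would wrap around any iterable the evaluation loop consumes.

```python
# Minimal diagnostic sketch (not part of lmms-eval): log per-sample wall time
# and resident memory to see whether both grow as evaluation progresses.
# Assumes psutil is installed (pip install psutil).
import time
import psutil

proc = psutil.Process()

def timed(samples):
    """Yield samples unchanged while logging per-sample wall time and RSS."""
    for i, sample in enumerate(samples):
        start = time.perf_counter()
        yield sample  # the consumer processes the sample here
        elapsed = time.perf_counter() - start
        rss_mb = proc.memory_info().rss / 1e6
        print(f"sample {i}: {elapsed:.2f}s, RSS {rss_mb:.0f} MB")
```

If the logged RSS climbs steadily alongside the per-sample time, something (e.g. decoded video frames or logged samples) is likely being held in memory across iterations.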
I hope to get an answer, thank you!