Why use different judge models? #716

Closed
kydxh opened this issue Jan 9, 2025 · 4 comments
kydxh commented Jan 9, 2025

I notice that in run.py, the judge models differ across datasets. For example, MCQ datasets use 'chatgpt-0125', while MathVista uses 'gpt-4-turbo'. Why not use the same judge model for a fair comparison?
[screenshot: judge-model selection in run.py]
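For context, the selection shown in the screenshot boils down to logic of roughly the following shape. This is a minimal sketch only; the function name `select_judge`, the dataset lists, and the default branch are illustrative assumptions, not the actual run.py code:

```python
# Minimal sketch of per-dataset judge selection (illustrative; not the actual run.py code).

MCQ_DATASETS = ['MMBench_DEV_EN', 'SEEDBench_IMG']   # example MCQ-style benchmarks
OPEN_ENDED_DATASETS = ['MathVista_MINI', 'MMVet']    # example open-ended benchmarks

def select_judge(dataset_name: str) -> str:
    """Pick the judge model based on the dataset being evaluated."""
    if dataset_name in OPEN_ENDED_DATASETS:
        return 'gpt-4-turbo'       # free-form answers are graded by a stronger judge
    if dataset_name in MCQ_DATASETS:
        return 'chatgpt-0125'      # MCQ answers only need the chosen option extracted
    return 'chatgpt-0125'          # default for other benchmarks (assumption)

print(select_judge('MathVista_MINI'))   # -> gpt-4-turbo
print(select_judge('MMBench_DEV_EN'))   # -> chatgpt-0125
```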

PhoenixZ810 (Collaborator) commented:

Hi,

Thank you for your interest and feedback.

It's important to note that the judging model used for each benchmark is aligned with the configurations detailed in the original paper to ensure a fair and consistent comparison. Any changes to the judging model can lead to significantly different results.

We appreciate your understanding on this matter and are open to any further questions or discussions you might have.

PhoenixZ810 self-assigned this Jan 10, 2025

kydxh commented Jan 15, 2025

I got it. Thank you very much.
However, I still have some confusion. Does the choice of judging model align with the dataset's paper or the model's paper? And for different VLMs, is the judging model the same on the same dataset?
Additionally, does using different evaluation methods for different datasets raise concerns about fairness when comparing across datasets?

PhoenixZ810 (Collaborator) commented:

Thank you for your interest and observations.

  • The selection of the judging model strictly adheres to the configurations outlined in the original papers of the benchmarks.
  • For various VLMs, we maintain consistency by using the same judging model when evaluating performance on the same benchmark.
  • Different benchmarks indeed feature different question-and-answer formats, such as multiple-choice QA and open-ended QA. As a result, the evaluation method is tailored to the specific format of each benchmark; since the same protocol is applied to every model on a given benchmark, this does not introduce unfairness (see the sketch below).
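To make the second and third points concrete, here is a minimal sketch of the idea: the evaluation protocol is keyed by the benchmark alone, so any two VLMs compared on the same benchmark are scored by the same judge with the same method. The names and mapping below are illustrative assumptions, not VLMEvalKit's actual code:

```python
from typing import NamedTuple

class EvalProtocol(NamedTuple):
    answer_format: str   # 'mcq' or 'open_ended'
    judge_model: str     # judge specified by the benchmark's original paper

# Illustrative mapping; the judge follows the benchmark's original paper.
BENCHMARK_PROTOCOLS = {
    'MMBench':   EvalProtocol('mcq',        'chatgpt-0125'),
    'MathVista': EvalProtocol('open_ended', 'gpt-4-turbo'),
}

def protocol_for(benchmark: str, vlm_name: str) -> EvalProtocol:
    """The protocol depends only on the benchmark, never on the VLM being scored."""
    return BENCHMARK_PROTOCOLS[benchmark]

# Two different VLMs get the identical judge and scoring method on MathVista:
assert protocol_for('MathVista', 'model_A') == protocol_for('MathVista', 'model_B')
```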


kydxh commented Jan 20, 2025

I got it. Thank you very much.

kydxh closed this as completed Jan 20, 2025