Why use different judge models? #716
Comments
I notice that in run.py, the judge models differ across datasets. For example, MCQ datasets use 'chatgpt-0125', while MathVista uses 'gpt-4-turbo'. Why not use the same judge model for a fair comparison?

Hi, thank you for your interest and feedback. It's important to note that the judge model used for each benchmark follows the configuration detailed in that benchmark's original paper, to ensure a fair and consistent comparison with published results. Changing the judge model can lead to significantly different results. We appreciate your understanding on this matter and are open to any further questions or discussions you might have.

I got it. Thank you very much.
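For readers wondering what per-benchmark judge selection could look like, here is a minimal sketch. Only the two model names quoted above ('chatgpt-0125' and 'gpt-4-turbo') come from this thread; the mapping, the `pick_judge` function, the dataset keys, and the fallback are illustrative assumptions, not the actual run.py implementation:

```python
# Hypothetical sketch, not the actual run.py code: each benchmark is
# judged with the model named in its original paper, so scores stay
# comparable to published numbers.

JUDGE_BY_DATASET = {              # illustrative mapping; keys are assumptions
    "MMBench": "chatgpt-0125",    # MCQ-style benchmark (model from the thread)
    "MathVista": "gpt-4-turbo",   # MathVista (model from the thread)
}

DEFAULT_JUDGE = "chatgpt-0125"    # assumed fallback for unlisted datasets


def pick_judge(dataset_name: str) -> str:
    """Return the judge model configured for a given benchmark."""
    return JUDGE_BY_DATASET.get(dataset_name, DEFAULT_JUDGE)


if __name__ == "__main__":
    for ds in ("MMBench", "MathVista", "SomeNewDataset"):
        print(f"{ds} -> {pick_judge(ds)}")
```

Swapping the judge (e.g., grading MathVista with 'chatgpt-0125') would yield scores that are no longer comparable to the numbers reported in the benchmark's paper, which is why the per-benchmark configuration is kept fixed.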