Issues about evaluating on ChartQA #6

Open
zhangliang-04 opened this issue Jan 19, 2024 · 4 comments

@zhangliang-04

Thank you for open-sourcing this project! I want to reproduce the ChartQA performance of ChartAst-S. I noticed a yaml file named ./chart_multitask_mixed_othertypebasetype.yaml referenced in the inference script accessory/exps/finetune/mm/test.sh, but I cannot find it anywhere. What should its contents be if I want to run inference on ChartQA?
In addition, were the ChartQA evaluation results in the paper produced by this code, ./accessory/eval_mm/evaluate.py?

@FanqingM
Collaborator

FanqingM commented Jan 19, 2024

Sorry, it seems I made a mistake in the code. I have just updated it; can you try this version?
Use test.sh, which uses single_turn_eval_multitask.py.
For chartqa:test_all.json, you can refer to the dataset class in single_turn_eval_multitask.py to build this JSON; it is easy.
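(For anyone following along, here is a minimal conversion sketch. The output keys "image", "question", and "answer" are assumptions; check the dataset class in single_turn_eval_multitask.py for the exact keys it reads. The input keys "imgname", "query", and "label" are the ones used in the original ChartQA release.)

```python
import json

# Minimal sketch: convert original ChartQA test annotations into the
# JSON list the dataset class is assumed to consume.
# NOTE: the output keys ("image", "question", "answer") are assumptions --
# verify them against the dataset class in single_turn_eval_multitask.py.
def convert_chartqa(ann_path, image_dir, out_path):
    with open(ann_path) as f:
        qas = json.load(f)
    records = [
        {
            "image": f"{image_dir}/{qa['imgname']}",  # ChartQA image filename
            "question": qa["query"],                  # ChartQA question text
            "answer": qa["label"],                    # ChartQA gold answer
        }
        for qa in qas
    ]
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)

convert_chartqa("ChartQA/test/test_human.json", "ChartQA/test/png", "test_all.json")
```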

@FanqingM
Collaborator

You also need to fix some paths in single_turn_eval_multitask.py, including the images directory and the annotations (QA JSON).

@FanqingM
Collaborator

I just uploaded the test JSON for ChartQA, which is accessory/test_all1.json.
It is converted from the original ChartQA repo; I just merged the human and the augmented QAs.
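(A sketch of the merge described above, assuming the file layout of the original vis-nlp/ChartQA repo; adjust the paths to your local checkout.)

```python
import json

# Concatenate the human-written and machine-augmented test splits from
# the original ChartQA repo into a single annotation file.
# Paths assume the vis-nlp/ChartQA layout; adjust to your local checkout.
with open("ChartQA/test/test_human.json") as f:
    human = json.load(f)
with open("ChartQA/test/test_augmented.json") as f:
    augmented = json.load(f)

with open("accessory/test_all1.json", "w") as f:
    json.dump(human + augmented, f, indent=2)
```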

@zhangliang-04
Author

Many thanks! I will try it later.
