Thanks a lot for providing this valuable benchmark.

I notice that, according to Table 2 of the paper, 10.9% of the questions involve multiple documents. However, when I download the `test.jsonl` file from Hugging Face, the data format does not contain any multiple-document field. It only includes `pid`, which I assume is the paper ID on OpenReview.
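For reference, this is roughly how I checked which fields are present (a minimal sketch, assuming a local copy of `test.jsonl` downloaded from the HF dataset page):

```python
import json
from collections import Counter

# Load the downloaded test split; each line is one JSON record.
with open("test.jsonl", "r", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

# Count the union of keys across all records to see whether any
# multiple-document (or category) field is present.
keys = Counter(k for record in records for k in record)
print(keys)
print(records[0])  # e.g. shows the `pid` field
```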
By the way, if external documents are implicitly mentioned in a question and need to be consulted to answer it, is it possible to obtain their concrete sources or URLs? Also, how can I distinguish the different question categories defined in Table 2, so that I can run a more fine-grained evaluation on each split?
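Concretely, what I would like to do is something like the following (a hypothetical sketch; `category` is an invented field name that does not exist in the current `test.jsonl` release):

```python
import json
from collections import defaultdict

# Hypothetical sketch: group questions by an assumed `category` field
# reflecting the Table 2 splits, for per-split evaluation.
# NOTE: `category` is an invented name; no such field exists today.
with open("test.jsonl", "r", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

splits = defaultdict(list)
for record in records:
    splits[record.get("category", "unknown")].append(record)

for name, split in sorted(splits.items()):
    print(f"{name}: {len(split)} questions")
```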