original dataset and generation script output have different formats #14
The question generation script here on GitHub is mostly the same as the code used to generate CLEVR -- mostly I added documentation, tried to remove dead code that wasn't being used anymore, and renamed some of the JSON keys to clearer names. Here's the original generation code for you to compare against in case there are other differences that I can't remember: https://gist.github.com/jcjohnson/6fb119a0372166ec9f4f006a1242a7bc

In the original code (line 710), "template_idx" is also the index of a template within a file, much like "question_family_index" in the GitHub version of the file. There was another script that converted the output from the original generation script into the format that we released as CLEVR_v1.0, which changed the names of JSON keys ("text_question" -> "question", "structured_question" -> "program"). Unfortunately, after digging around today, I wasn't able to find this conversion script. However, I suspect that the conversion script also changed the semantics of "template_idx" / "question_family_index" to be an overall index of the template (between 0 and 89) rather than the index of the template within its file; in hindsight this was clearly a mistake, since it makes it tough to figure out which template was used to generate which question.

Thankfully, the templates originally used for question generation have exactly the same structure as the ones on GitHub, so the only source of nondeterminism is the order in which the JSON template files are loaded (since this order depends on os.listdir). To fix this issue, I manually matched up values of "question_family_index" from the released CLEVR_v1.0 data to the text templates from the JSON files, and found that you can recover the template for each question if you load them in this order:
Here's a little script that shows how to recover templates from the released questions: it loads templates in this order, randomly samples some questions, and prints out the text of each question as well as its recovered template: https://gist.github.com/jcjohnson/9f3173703f8578db787345d0ce61002a

In the process of figuring this out, I realized another slight inconsistency between the original code and the GitHub code: we changed the wording of the "same_relate" templates to be less ambiguous (in particular, adding "other" or "another"), but the semantics of these templates are exactly the same. Here are the old versions of those templates: https://gist.github.com/jcjohnson/09541f3bcb32e73e0ba47c57d09f3f6e
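As a rough illustration of the recovery procedure described above, here is a minimal sketch: load the template files in one fixed order, assign each template a global index by its position in that traversal, and use the global index to look up the template behind a released question's "question_family_index". Note the TEMPLATE_ORDER list below is only a placeholder (the authoritative order and full script are in the gist above), and the directory and file paths are assumptions based on the repository layout:

```python
import json
import os

# Placeholder ordering -- the actual order that reproduces the released
# "question_family_index" values is given in the gist linked above.
TEMPLATE_ORDER = [
    "compare_integer.json", "comparison.json", "three_hop.json",
    "single_and.json", "same_relate.json", "single_or.json",
    "one_hop.json", "two_hop.json", "zero_hop.json",
]

def build_template_index(template_dir):
    """Map a global question_family_index (0..89) to its template."""
    templates = []
    for fname in TEMPLATE_ORDER:  # fixed order instead of os.listdir
        with open(os.path.join(template_dir, fname)) as f:
            for i, template in enumerate(json.load(f)):
                # Global index = position in this traversal;
                # i is the per-file index the GitHub script emits.
                templates.append((fname, i, template))
    return templates

# Assumed paths; adjust to your local checkout and dataset location.
templates = build_template_index("question_generation/CLEVR_1.0_templates")
with open("CLEVR_v1.0/questions/CLEVR_val_questions.json") as f:
    questions = json.load(f)["questions"]

q = questions[0]
fname, local_idx, template = templates[q["question_family_index"]]
print(q["question"])
print(fname, local_idx, template["text"][0])
```

Hard-coding the file order this way also sidesteps the os.listdir nondeterminism mentioned above, since the traversal no longer depends on the filesystem.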
Thanks a lot, @jcjohnson! Regarding the inconsistency between the original code and the GitHub code: can you please clarify which code was used to generate the widely used CLEVR distribution? I just checked, and found that the released CLEVR_v1.0 questions use the old wording of the "same_relate" templates.
I found another small incompatibility. The original CLEVR key for the type of a functional-program node was called "function", while the GitHub key is "type". So I wrote a small conversion script:
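The script itself was not quoted in the thread, but here is a minimal sketch of that conversion, assuming the generator's output layout (a top-level "questions" list whose entries each carry a "program" list of nodes); the file names are placeholders:

```python
import json

def convert_questions(in_path, out_path):
    """Rename the "type" key emitted by the GitHub generator to the
    "function" key used by the released CLEVR_v1.0 programs."""
    with open(in_path) as f:
        data = json.load(f)
    for question in data["questions"]:
        for node in question["program"]:
            if "type" in node:
                node["function"] = node.pop("type")
    with open(out_path, "w") as f:
        json.dump(data, f)

# Placeholder file names:
convert_questions("CLEVR_questions.json", "CLEVR_questions_v1_format.json")
```

This is the same direction of conversion as the fix_questions.py mentioned in the commit below: new generator output is rewritten into the older released format.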
We downgrade same_relate.json to the CLEVR 1.0 version. The question generator should use the same question format as the CLEVR 1.0 dataset, on which the CLEVR IEP neural networks were trained. See facebookresearch#14 (comment). Also add fix_questions.py to convert the output questions JSON to the older format. See facebookresearch#14 (comment)
In the dataset, the "question_family_index" field takes values from 0 to 89. When I generate a new dataset with the generation script, "question_family_index" takes smaller values, since it refers to the index of the template within a single template file. In this regard, I have two questions: