
Is Your Image a Good Storyteller?

Abstract

Quantifying image complexity at the entity level is straightforward, but the assessment of semantic complexity has been largely overlooked. In fact, images differ in semantic complexity. Images with richer semantics can tell vivid and engaging stories and offer a wide range of application scenarios. For example, the Cookie Theft picture is one such image and, owing to its high semantic complexity, is widely used to assess human language and cognitive abilities. Additionally, semantically rich images can benefit the development of vision models, as images with limited semantics are becoming less challenging for them. However, such images are scarce, highlighting the need for more of them. For instance, more images like Cookie Theft are needed to cater to people from different cultural backgrounds and eras. Assessing semantic complexity requires human experts and empirical evidence. Automatically evaluating how semantically rich an image is would be the first step toward mining or generating more images with rich semantics, and would benefit human cognitive assessment, Artificial Intelligence, and various other applications. In response, we propose the Image Semantic Assessment (ISA) task to address this problem. We introduce the first ISA dataset and a novel method that leverages language to solve this vision problem. Experiments on our dataset demonstrate the effectiveness of our approach.

ISA Dataset

Dataset Introduction

For each image in the ISA dataset, we annotate it with two scores: an Entity Score and a Semantic Score. They correspond to the Entity Complexity Scoring task and the Semantic Complexity Scoring task, respectively.

Figure 1: Samples from the ISA dataset.

Data Access

To get access to the data, you must sign a Data Use Agreement (DUA). Please read the DUA carefully, then send an email to [email protected] with the message "I consent to the Data Usage Agreement of the ISA dataset." and attach the DUA bearing your handwritten signature.

You can use the following code to load the dataset.

```python
from datasets import load_dataset

# Replace the placeholder path with the local directory containing the ISA data files.
dataset = load_dataset("path/to/dataset/dir")
```
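Each record then carries the two annotations described above. The field names below (`image`, `entity_score`, `semantic_score`) and the score range are illustrative assumptions, not the dataset's confirmed schema; the sketch shows one typical use, selecting semantically rich images:

```python
# Hypothetical records mimicking the ISA annotation scheme: each image is
# annotated with an Entity Score and a Semantic Score (field names assumed).
records = [
    {"image": "img_001.jpg", "entity_score": 3, "semantic_score": 5},
    {"image": "img_002.jpg", "entity_score": 4, "semantic_score": 2},
    {"image": "img_003.jpg", "entity_score": 2, "semantic_score": 4},
]

# Keep only images whose semantic complexity meets a chosen threshold.
THRESHOLD = 4
rich = [r["image"] for r in records if r["semantic_score"] >= THRESHOLD]
print(rich)  # → ['img_001.jpg', 'img_003.jpg']
```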

VLISA

VLISA has two components: a Feature Extractor and a Discriminator. Specifically, we first use an LVLM (GPT-4o in this paper) as the Feature Extractor to extract semantic information in natural-language form as features from images. Then, we use a Discriminator model, such as BERT or ViLT, to rate the input image based on the extracted features, optionally together with the image itself.

Figure 2: VLISA.
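The two-stage flow can be sketched as plain functions. Both bodies below are stand-ins: the real Feature Extractor is an LVLM (GPT-4o) prompted with the image, and the real Discriminator is a trained model such as BERT or ViLT, not the toy heuristic used here.

```python
# Minimal sketch of the VLISA two-stage pipeline (stub implementations).

def extract_features(image_path: str) -> str:
    """Stage 1 (Feature Extractor): an LVLM describes the image's semantics.
    This stub stands in for a GPT-4o call."""
    return f"A description of the semantic content of {image_path}."

def discriminate(features: str) -> float:
    """Stage 2 (Discriminator): rate the image from the extracted text.
    A toy length-based heuristic stands in for a BERT/ViLT model."""
    return min(5.0, len(features.split()) / 10)

def vlisa_score(image_path: str) -> float:
    features = extract_features(image_path)  # natural-language features
    return discriminate(features)            # semantic complexity rating

print(vlisa_score("cookie_theft.jpg"))
```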

Citation

If you find our work useful, please cite our paper:

@article{song2024imagegoodstoryteller,
      title={Is Your Image a Good Storyteller?}, 
      author={Xiujie Song and Xiaoyi Pang and Haifeng Tang and Mengyue Wu and Kenny Q. Zhu},
      year={2024},
      eprint={2501.01982},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2501.01982}, 
}
