FMSS: Fusion Modality-Specific and Modality-Shared Features for Multi-Hop Reasoning over Multi-Modal Knowledge Graph

This repository provides the official PyTorch implementation of the paper FMSS: Fusion Modality-Specific and Modality-Shared Features for Multi-Hop Reasoning over Multi-Modal Knowledge Graph. The code will be updated soon; thanks for your attention.

Requirements

  • python >= 3.6
  • torch >= 1.8.1
  • torchvision >= 0.9.2
  • torch-geometric >= 2.0.3
  • torch-sparse >= 0.6.12
  • torch-scatter >= 2.0.6
  • dgl-cu111 >= 0.6.1
  • gensim >= 4.2.0
  • tqdm
  • pandas
  • rdflib

Dataset

# WN9-IMG-TXT
WN9IMG/train/valid/test
# FB-IMG-TXT
FBIMG/train/valid/test
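Each split file is assumed here to contain one tab-separated (head, relation, tail) triple per line, a common knowledge-graph convention; adjust the delimiter if the actual files differ. A minimal loader sketch:

```python
from pathlib import Path

def load_triples(path):
    """Read tab-separated (head, relation, tail) triples, one per line."""
    triples = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue  # skip blank lines
        head, relation, tail = line.split("\t")
        triples.append((head, relation, tail))
    return triples
```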

How to Run

Step 1: pre-process the dataset

./experiment.sh configs/<dataset>.sh --process_data <gpu-ID>

Step 2: compute PageRank scores

You can obtain the PageRank code from https://github.com/timothyasp/PageRank

python pageRank.py ./raw.csv directed > raw.pgrk
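For reference, the per-node score that script emits can be approximated with a short power-iteration sketch; the damping factor, iteration count, and integer node indexing below are illustrative assumptions, not the external script's exact settings:

```python
def pagerank(edges, num_nodes, damping=0.85, iters=100):
    """Power-iteration PageRank over a directed edge list of (src, dst) pairs."""
    out_deg = [0] * num_nodes
    for src, _ in edges:
        out_deg[src] += 1
    rank = [1.0 / num_nodes] * num_nodes
    for _ in range(iters):
        # Teleport term shared by every node
        new = [(1.0 - damping) / num_nodes] * num_nodes
        # Dangling nodes (no out-edges) spread their mass uniformly
        dangling = sum(rank[n] for n in range(num_nodes) if out_deg[n] == 0)
        for n in range(num_nodes):
            new[n] += damping * dangling / num_nodes
        # Each node passes its rank evenly along its out-edges
        for src, dst in edges:
            new[dst] += damping * rank[src] / out_deg[src]
        rank = new
    return rank
```

Nodes with more incoming links receive higher scores, and the scores sum to 1.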

Step 3: get pretrained embeddings

The pretrained embeddings of the multi-modal knowledge can be downloaded from the following link:
Link: https://share.weiyun.com/6I3sTANu
Code: q78wm7

Step 4: train the MADC model

./experiment-emb.sh configs/<dataset>-<model>.sh --train <gpu-ID>

Step 5: train the model

./experiment-rs.sh configs/<dataset>-rs.sh --train <gpu-ID> 

Step 6: test the model

./experiment-rs.sh configs/<dataset>-rs.sh --inference <gpu-ID> 
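Multi-hop reasoning over knowledge graphs is conventionally evaluated with MRR and Hits@K over the ranks of the gold answers. The helper below is a generic sketch of those metrics, not code from this repository:

```python
def mrr_hits(ranks, k=10):
    """Mean reciprocal rank and Hits@k from a list of 1-based gold ranks."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits_at_k = sum(1 for r in ranks if r <= k) / len(ranks)
    return mrr, hits_at_k
```

For example, gold answers ranked 1, 2, 10, and 100 give MRR ≈ 0.40 and Hits@10 = 0.75.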

Cite
