YuChuang1205/RRL-Net


The official complete code for the paper "Relational Representation Learning Network for Cross-Spectral Image Patch Matching" [Paper/arXiv]

In this open-source project, we integrate multiple cross-spectral image patch matching networks (SCFDM, AFD-Net, MFD-Net, EFR-Net, FIL-Net, RRL-Net) and multiple cross-spectral image patch matching datasets (the VIS-NIR, OS, and SEN1-2 patch datasets). We hope this contributes to the development of the field. Everyone is welcome to use it.

Overview of RRL-Net

Existing methods focus on extracting diverse feature relations while ignoring individual intrinsic features. However, a sufficient representation of the individual intrinsic features is the basis for the subsequent mining of feature relations. Therefore, our relational representation learning focuses on fully mining two aspects: the intrinsic features of individual image patches and the relations between image patch features.
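As a rough illustration of these two aspects, here is a minimal sketch written against the pinned keras==2.2.5 API: a shared per-patch encoder mines intrinsic features, and a simple relation head combines the two representations. The patch shape, layer sizes, and the absolute-difference relation operator are illustrative assumptions only, not the actual RRL-Net architecture.

    # Conceptual sketch only -- NOT the RRL-Net architecture from the paper.
    from keras.layers import Input, Conv2D, GlobalAveragePooling2D, Dense, Lambda, concatenate
    from keras.models import Model
    import keras.backend as K

    # Shared encoder: mines the intrinsic features of a single patch
    # before any cross-patch interaction takes place.
    inp = Input(shape=(64, 64, 1))                     # assumed patch shape
    x = Conv2D(32, 3, padding='same', activation='relu')(inp)
    x = Conv2D(64, 3, padding='same', activation='relu')(x)
    encoder = Model(inp, GlobalAveragePooling2D()(x))

    patch_a = Input(shape=(64, 64, 1))                 # e.g. the VIS patch
    patch_b = Input(shape=(64, 64, 1))                 # e.g. the NIR patch
    feat_a, feat_b = encoder(patch_a), encoder(patch_b)

    # Relation mining: combine the two individual representations
    # (absolute difference is one simple relation operator).
    diff = Lambda(lambda t: K.abs(t[0] - t[1]))([feat_a, feat_b])
    relation = concatenate([feat_a, feat_b, diff])
    match_prob = Dense(1, activation='sigmoid')(relation)   # match / non-match

    model = Model([patch_a, patch_b], match_prob)
    model.compile(optimizer='adam', loss='binary_crossentropy')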

Datasets

  1. Original datasets
  2. The datasets we created from the original datasets (can be used directly in our demo)

How to use our code

  1. Download the dataset.

        Click (1.VIS-NIR; 2.OS; 3.SEN1-2)

        Unzip the downloaded archive into a newly created "data" folder under the root directory of the project. (A scripted equivalent is sketched below.)
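        If you prefer to script this step, the snippet below is a minimal equivalent; the archive file name is a placeholder for whichever dataset package you downloaded.

        # Hypothetical archive name -- substitute the file you actually downloaded.
        import os
        import zipfile

        archive = "vis_nir_patch_dataset.zip"
        data_dir = os.path.join(".", "data")     # "data" folder under the project root
        os.makedirs(data_dir, exist_ok=True)
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(data_dir)
        print(sorted(os.listdir(data_dir)))      # sanity-check the extracted layout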

  2. Create an Anaconda virtual environment.

     conda create -n RRL-Net python=3.6 
     conda activate RRL-Net 
    
  3. Configure the running environment. (Either of the two configurations can be used.)

    • Configuration 1: CUDA 10.0
     conda install cudatoolkit==10.0.130
     conda install cudnn
     pip install keras==2.2.5 -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install tensorflow-gpu==1.14.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install scikit-learn==0.24.1  -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install scikit-image==0.17.2  -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install matplotlib==3.3.4  -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install opencv-python==4.5.1.48 -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install opencv-python-headless==4.5.1.48 -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install imgaug==0.4.0  -i https://pypi.tuna.tsinghua.edu.cn/simple
    
    • Configuration 2: CUDA 11.0
     pip install keras==2.3.1 -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install nvidia-tensorflow==1.15.4+nv20.10  -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install imgaug==0.4.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install scikit-learn==0.24.2 -i https://pypi.tuna.tsinghua.edu.cn/simple
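
     After installing either configuration, a quick sanity check confirms that TensorFlow 1.x sees the GPU (tf.test.is_gpu_available exists in TF 1.14/1.15):

     # Environment sanity check for either configuration above.
     import tensorflow as tf
     import keras

     print("TensorFlow:", tf.__version__)
     print("Keras:", keras.__version__)
     print("GPU available:", tf.test.is_gpu_available())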
    
  4. Train the model.

    The default model and dataset are RRL-Net and the VIS-NIR patch dataset. You can modify the default settings directly in the code.

    python train_model.py
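
    Since the environment pins imgaug==0.4.0, training presumably augments patches on the fly. The pipeline below is a representative imgaug example, not necessarily the augmenters or parameters that train_model.py actually uses.

     # Illustrative patch augmentation with imgaug 0.4.0.
     import numpy as np
     import imgaug.augmenters as iaa

     seq = iaa.Sequential([
         iaa.Fliplr(0.5),               # horizontal flip with probability 0.5
         iaa.Affine(rotate=(-15, 15)),  # small random rotations
     ])

     # For matched pairs, apply the *same* random transform to both patches.
     det = seq.to_deterministic()
     batch_a = np.zeros((8, 64, 64, 1), dtype=np.uint8)   # dummy VIS patches
     batch_b = np.zeros((8, 64, 64, 1), dtype=np.uint8)   # dummy NIR patches
     aug_a = det(images=batch_a)
     aug_b = det(images=batch_b)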
    
  5. Test the model.

    The default model and dataset are RRL-Net and the VIS-NIR patch dataset. You can modify the default settings directly in the code.

    python test_model.py
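
    For reference, the metric most often reported on these patch-matching benchmarks is FPR95 (false positive rate at 95% true positive rate). The helper below computes it with the pinned scikit-learn; whether test_model.py prints exactly this value is an assumption.

     # FPR95: false positive rate at the point where TPR first reaches 95%.
     import numpy as np
     from sklearn.metrics import roc_curve

     def fpr95(labels, scores):
         # labels: 1 = matching pair, 0 = non-matching; scores: match probabilities.
         fpr, tpr, _ = roc_curve(labels, scores)
         return float(fpr[np.searchsorted(tpr, 0.95)])   # first point with TPR >= 0.95

     labels = np.array([1, 0, 1, 1, 0, 1, 0, 1])
     scores = np.array([0.9, 0.2, 0.8, 0.7, 0.4, 0.95, 0.1, 0.6])
     print("FPR95:", fpr95(labels, scores))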
    

Results

  • Quantitative Results on the VIS-NIR patch dataset:

Results on the VIS-NIR patch dataset

  • Quantitative Results on the OS patch dataset:

Results on the OS patch dataset

  • Quantitative Results on the SEN1-2 patch dataset:

Results on the SEN1-2 patch dataset

  • Qualitative Results on the VIS-NIR cross-spectral scenarios:

Visualization on the VIS-NIR patch dataset

From top to bottom, the four rows of samples are: pairs misjudged as non-matching, pairs misjudged as matching, pairs correctly judged as non-matching, and pairs correctly judged as matching.

  • Qualitative Results on the VIS-SAR cross-spectral scenarios:

Visualization on the OS and SEN1-2 patch dataset

The left and right sides of the arrows denote the true labels and predicted results, respectively.

  • Visualization of feature maps:

Visualization

For each pair of samples, the middle two columns denote the output of the MGLA module at each stage, and the left and right columns denote the private feature maps output by the FIL module at each stage. From top to bottom, they denote the stages from shallow to deep.
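
Feature-map figures like these can be reproduced from a trained Keras model by reading out intermediate activations. In the sketch below, the checkpoint path and the layer name "stage1_output" are placeholders; list model.layers to find the actual MGLA/FIL outputs you want to inspect.

    # Probe an intermediate layer of a trained model for visualization.
    import numpy as np
    import matplotlib.pyplot as plt
    from keras.models import Model, load_model

    # Hypothetical checkpoint; add custom_objects=... for any custom layers.
    model = load_model("rrl_net.h5")
    probe = Model(inputs=model.input,
                  outputs=model.get_layer("stage1_output").output)

    # Dummy patch pair, assuming two (64, 64, 1) inputs.
    patch_pair = [np.zeros((1, 64, 64, 1)), np.zeros((1, 64, 64, 1))]
    fmaps = probe.predict(patch_pair)                   # shape (1, H, W, C)

    plt.imshow(fmaps[0].mean(axis=-1), cmap='viridis')  # channel-averaged map
    plt.axis('off')
    plt.show()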

Citation

If you find this repo helpful, please give us a 🤩star🤩. Please consider citing RRL-Net if it benefits your project.

BibTeX reference is as follows:

@misc{yu2024relationalrepresentationlearningnetwork,
      title={Relational Representation Learning Network for Cross-Spectral Image Patch Matching}, 
      author={Chuang Yu and Yunpeng Liu and Jinmiao Zhao and Dou Quan and Zelin Shi},
      year={2024},
      eprint={2403.11751},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2403.11751}, 
}

A plain-text reference is as follows:

Chuang Yu, Yunpeng Liu, Jinmiao Zhao, Dou Quan and Zelin Shi. Relational Representation Learning Network for Cross-Spectral Image Patch Matching. arXiv preprint arXiv:2403.11751, 2024.

Other links

  1. My homepage: [YuChuang]
  2. KGL-Net: [demo]
