
Corrected the code #19

Open · wants to merge 1 commit into base: master
Conversation

@take2rohit commented Dec 31, 2021

The code wasn't running, so I debugged it and it now works (tested on torch version 1.7.1).

  • async=True is deprecated, therefore non_blocking=True is used instead (see the sketch below)
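
For context, a minimal sketch of the kind of change involved, assuming the usual pattern of copying a batch to the GPU inside the training loop (the tensors below are illustrative stand-ins, not the repo's actual variables):

import torch

# Stand-in batch; in the repo this would come from the VOC/COCO data loader.
images = torch.randn(4, 3, 448, 448)
target = torch.randint(0, 2, (4, 20)).float()

if torch.cuda.is_available():
    # Old call: images.cuda(async=True). 'async' became a reserved keyword in
    # Python 3.7, so that line no longer even parses; non_blocking=True is the
    # replacement and keeps the asynchronous host-to-device copy.
    images = images.cuda(non_blocking=True)
    target = target.cuda(non_blocking=True)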

@kprokofi

@take2rohit, hi! Have you managed to reproduce the results of the paper (or at least get close to them)? I tried several times with this repository and also tried to integrate this approach (the ML-GCN model) into my project, but the model does not train properly (VOC dataset, the metric is only about 50-60 mAP).

@take2rohit (Author) commented Jan 18, 2022

Hi @kprokofi. I haven't tried reproducing the results; I was just using this code as a boilerplate for other work.

@Akimoto-Cris

> @take2rohit, hi! Have you managed to reproduce the results of the paper (or at least get close to them)? I tried several times with this repository and also tried to integrate this approach (the ML-GCN model) into my project, but the model does not train properly (VOC dataset, the metric is only about 50-60 mAP).

Hi @kprokofi, did you manage to reproduce the results afterwards?
Every time I train, the mAP only rises to 10-20.

@tengxiao14

> @take2rohit, hi! Have you managed to reproduce the results of the paper (or at least get close to them)? I tried several times with this repository and also tried to integrate this approach (the ML-GCN model) into my project, but the model does not train properly (VOC dataset, the metric is only about 50-60 mAP).

I tried to reproduce the result, but the mAP is only about 10. Could you provide me with the training command?

@kprokofi

https://github.com/kprokofi/ML-GCN - I couldn't reproduce the author's exact result, but it got better: 93+ mAP.

@Akimoto-Cris

> https://github.com/kprokofi/ML-GCN - I couldn't reproduce the author's exact result, but it got better: 93+ mAP.

Hi @kprokofi,

Any advice apart from the standard configs you adopted? Any advice would be appreciated, thanks.

@kprokofi

> Hi @kprokofi,
> Any advice apart from the standard configs you adopted? Any advice would be appreciated, thanks.

Gradient clipping was the bottleneck. Also, you could play with it and the learning rate.
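
For anyone tuning this, a minimal sketch of where gradient clipping sits in a standard PyTorch training step; the model, loss, and max_norm value here are illustrative stand-ins to play with, not values taken from the paper or this repo:

import torch
import torch.nn as nn

# Toy stand-ins for the ML-GCN model and a multi-label batch.
model = nn.Linear(300, 20)
criterion = nn.MultiLabelSoftMarginLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

inputs = torch.randn(8, 300)
targets = torch.randint(0, 2, (8, 20)).float()

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
# Clip the global gradient norm before the optimizer step; tune max_norm
# together with the learning rate, as suggested above.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
optimizer.step()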

@tengxiao14

> Gradient clipping was the bottleneck. Also, you could play with it and the learning rate.

Could you provide the command for training? Thanks.

@cym-heu commented Mar 3, 2022

With pretrained=True and lr=0.01, I got 93.4 mAP.

@LingCoder1997

Hi, I tried to reproduce the VOC2007 experiment but found that if I train the model from scratch, the mAP rises to about 17 and stays there even after 50 epochs. However, if I use the pre-trained model with the command provided on GitHub, the mAP quickly reaches 90 in about 5 epochs.
So I wonder: how can I obtain that pretrained model, and is it possible to reproduce the author's result by training from scratch?

@changlulu123

> Hi, I tried to reproduce the VOC2007 experiment but found that if I train the model from scratch, the mAP rises to about 17 and stays there even after 50 epochs. However, if I use the pre-trained model with the command provided on GitHub, the mAP quickly reaches 90 in about 5 epochs. So I wonder: how can I obtain that pretrained model, and is it possible to reproduce the author's result by training from scratch?

Excuse me, I have tried to run this code recently, but there are some problems when I run it on my computer. Can you help me?

@changlulu123

> Hi, I tried to reproduce the VOC2007 experiment but found that if I train the model from scratch, the mAP rises to about 17 and stays there even after 50 epochs. However, if I use the pre-trained model with the command provided on GitHub, the mAP quickly reaches 90 in about 5 epochs. So I wonder: how can I obtain that pretrained model, and is it possible to reproduce the author's result by training from scratch?

Hi, I have tried to run this code recently, but there are some problems when I run it on my computer. Can you help me?

@LingCoderSonoscape

> Hi, I have tried to run this code recently, but there are some problems when I run it on my computer. Can you help me?

I was running this on a server. What kind of problem did you meet?

@sorrowyn commented Apr 2, 2022

> Hi, I tried to reproduce the VOC2007 experiment but found that if I train the model from scratch, the mAP rises to about 17 and stays there even after 50 epochs. However, if I use the pre-trained model with the command provided on GitHub, the mAP quickly reaches 90 in about 5 epochs. So I wonder: how can I obtain that pretrained model, and is it possible to reproduce the author's result by training from scratch?

pretrained=True

@LingCoderSonoscape

> pretrained=True

Much appreciated! Now the training process looks good!

@sorrowyn commented Apr 2, 2022

> Much appreciated! Now the training process looks good!

In VOC2007, ResNet101+GMP also achieves desirable results (93.x).
In MS-COCO 2014, the mAP is 83.0.
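
For reference, a rough sketch of what ResNet101+GMP could look like, assuming it simply means the ResNet-101 backbone with global max pooling instead of global average pooling before the classifier (a generic reconstruction, not code from this repo):

import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet101(pretrained=True)
num_classes = 20  # VOC2007

# Read the old classifier width, then swap average pooling for max pooling
# and attach a fresh multi-label classification head.
in_features = backbone.fc.in_features
backbone.avgpool = nn.AdaptiveMaxPool2d(1)
backbone.fc = nn.Linear(in_features, num_classes)

logits = backbone(torch.randn(2, 3, 448, 448))  # -> shape (2, 20)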

@Byronliang8

Hi,

How could I load my own dataset to test with the pre-trained model?

Thanks.

@sorrowyn

Refer to this repository for more information.
https://github.com/yu-gi-oh-leilei/ML-GCN_cvpr2019/blob/main/data/init.py

@Byronliang8

> Refer to this repository for more information. https://github.com/yu-gi-oh-leilei/ML-GCN_cvpr2019/blob/main/data/init.py

Thanks for your help.

@mjw123bs

Hello, I am very interested in this project and would like to know how to train and test on my own dataset and finally output visualized results.

@812130247

> pretrained=True

How do I set pretrained = True?

@sorrowyn

> How do I set pretrained = True?

import torchvision.models as models  # needed for models.resnet101

def gcn_resnet101(num_classes, t, pretrained=False, adj_file=None, in_channel=300):
    model = models.resnet101(pretrained=True)  # set pretrained=True to load the ImageNet weights
    return GCNResnet(model, num_classes, t=t, adj_file=adj_file, in_channel=in_channel)
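
For context, a rough example of how this factory is then called for VOC2007; the t value and file paths below are assumptions based on the usual data layout, so double-check them against the repo's demo script:

from models import gcn_resnet101  # the repo's models.py

# Assumed values for illustration: 20 VOC classes, 300-d GloVe inputs, and the
# label co-occurrence file shipped (or generated) under data/voc/.
model = gcn_resnet101(num_classes=20, t=0.4,
                      adj_file='data/voc/voc_adj.pkl', in_channel=300)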

@812130247

> def gcn_resnet101(num_classes, t, pretrained=False, adj_file=None, in_channel=300):
>     model = models.resnet101(pretrained=True)  # set pretrained=True to load the ImageNet weights
>     return GCNResnet(model, num_classes, t=t, adj_file=adj_file, in_channel=in_channel)

Thank you so much!

@sorrowyn commented Apr 23, 2023

> Thank you so much!
Refer to this repository for more information.
https://github.com/yu-gi-oh-leilei/ML-GCN_cvpr2019/blob/main/data/init.py
https://github.com/yu-gi-oh-leilei/Multi-label-Image-Recognition

@lxe32 commented Nov 27, 2024

@take2rohit I'd like to ask: the author's source files don't include a training script. Do you have one, and could you share it with me? I want to try to reproduce this paper! Thanks!

@Byronliang8 commented Nov 27, 2024

> @take2rohit I'd like to ask: the author's source files don't include a training script. Do you have one, and could you share it with me? I want to try to reproduce this paper! Thanks!

You can take a look at the trainer and main in https://github.com/yu-gi-oh-leilei/ML-GCN_cvpr2019/tree/main.

@yu-gi-oh-leilei

> You can take a look at the trainer and main in https://github.com/yu-gi-oh-leilei/ML-GCN_cvpr2019/tree/main.

If you have any questions about reproducing the code, you can ask me there, under that project.

@lxe32 commented Dec 22, 2024

> If you have any questions about reproducing the code, you can ask me there, under that project.

May I ask what the environment requirements are? I plan to try to reproduce it on a server.

@ChristineDewi

How can we create this file ourselves? (coco_glove_word2vec.pkl)

@sorrowyn

> How can we create this file ourselves? (coco_glove_word2vec.pkl)

# +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Created by: jasonseu
# Created on: 2022-3-22
# Email: [email protected]
#
# Copyright © 2022 - CPSS Group
# https://github.com/jasonseu/SALGL/blob/main/scripts/embedding.py
# +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
import os
import argparse
import numpy as np
import torch
from transformers import BertTokenizer, BertModel

model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
token_mapping = {
    'diningtable': 'dining table',
    'pottedplant': 'potted plant',
    'tvmonitor': 'tv monitor'
}

def bert_text_preparation(text):
    marked_text = "[CLS] " + text
    tokenized_text = tokenizer.tokenize(marked_text)
    indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
    segments_ids = [1] * len(indexed_tokens)

    tokens_tensor = torch.tensor([indexed_tokens])
    segments_tensors = torch.tensor([segments_ids])

    return tokenized_text, tokens_tensor, segments_tensors

def get_bert_embeddings(tokens_tensor, segments_tensors):
    """Get embeddings from an embedding model

    Args:
        tokens_tensor (obj): Torch tensor size [n_tokens]
            with token ids for each token in text
        segments_tensors (obj): Torch tensor size [n_tokens]
            with segment ids for each token in text
        model (obj): Embedding model to generate embeddings
            from token and segment ids

    Returns:
        list: List of list of floats of size
            [n_tokens, n_embedding_dimensions]
            containing embeddings for each token

    """

    # Gradient calculation is disabled
    # Model is in inference mode
    with torch.no_grad():
        outputs = model(tokens_tensor, segments_tensors)
        # Removing the first hidden state
        # The first state is the input state
        hidden_states = outputs[2][1:]

    # Getting embeddings from the final BERT layer
    token_embeddings = hidden_states[-1]
    # Collapsing the tensor into 1-dimension
    token_embeddings = torch.squeeze(token_embeddings, dim=0)
    # Converting torchtensors to lists
    list_token_embeddings = [token_embed.tolist() for token_embed in token_embeddings]

    return list_token_embeddings

def main(data):
    label_path = os.path.join('data', data, 'label.txt')
    labels = [t.strip() for t in open(label_path)]
    labels = [token_mapping[t] if t in token_mapping else t for t in labels]
    # labels = ['an image of {}'.format(t) for t in labels]
    bert_embeddings = []
    for t in labels:
        _, tokens_tensor, segments_tensors = bert_text_preparation(t)
        list_token_embeddings = get_bert_embeddings(tokens_tensor, segments_tensors)
        token_embedding = list_token_embeddings[1] if len(list_token_embeddings) == 2 else list_token_embeddings[0]
        bert_embeddings.append(token_embedding)
        # bert_embeddings.append(list_token_embeddings[0])
    bert_embeddings = np.array(bert_embeddings)
    np.save(os.path.join('data', data, 'bert.npy'), bert_embeddings)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--data', type=str, default='voc2007')
    args = parser.parse_args()
    main(args.data)

@sorrowyn

> May I ask what the environment requirements are? I plan to try to reproduce it on a server.

You can refer to this issue:
yu-gi-oh-leilei/ML-GCN_cvpr2019#2

@ChristineDewi

> How can we create this file ourselves? (coco_glove_word2vec.pkl)

Hi Dr., thank you for your reply, but what is label.txt?

Can you explain more about these two files,
coco_adj.pkl and coco_glove_word2vec.pkl?

How can we generate or make these files ourselves?

Thanks.

@sorrowyn commented Jan 2, 2025

> coco_adj.pkl and coco_glove_word2vec.pkl

A1: Use GloVe to convert each label name into a vector; that gives you coco_glove_word2vec.pkl.
You can refer to this link: https://github.com/jasonseu/TSFormer/blob/main/scripts/embedding.py
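
In case it helps, a minimal sketch of building such a pkl with GloVe vectors via gensim; label.txt is assumed to hold one class name per line (as in the embedding script above), and the pkl is assumed to be just a pickled (num_classes, 300) array, so verify both assumptions against how the repo loads the file:

import pickle
import numpy as np
import gensim.downloader as api

glove = api.load('glove-wiki-gigaword-300')  # 300-d GloVe vectors

with open('data/coco/label.txt') as f:  # one class name per line (assumed path)
    labels = [line.strip() for line in f]

vectors = []
for name in labels:
    # Multi-word names ('dining table', 'traffic light') are averaged word-wise.
    words = name.replace('_', ' ').split()
    vectors.append(np.mean([glove[w] for w in words if w in glove], axis=0))

embeddings = np.stack(vectors)  # shape: (num_classes, 300)
with open('data/coco/coco_glove_word2vec.pkl', 'wb') as f:
    pickle.dump(embeddings, f)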

A2: coco_adj.pkl is the co-occurrence matrix, obtained by counting the labels of the COCO training set.

I'm in the China time zone, where it's 2:30 PM. If you have any questions, please let me know.

Get the training labels:

import json
import os
import argparse
import numpy as np

pp = argparse.ArgumentParser(description='Format COCO2014 metadata.')
pp.add_argument('--load-path', type=str, default='.', help='Path to a directory containing a copy of the COCO2014 dataset.')
pp.add_argument('--save-path', type=str, default='.', help='Path to output directory.')
args = pp.parse_args()

def parse_categories(categories):
    category_list = []
    id_to_index = {}
    for i in range(len(categories)):
        category_list.append(categories[i]['name'])
        id_to_index[categories[i]['id']] = i

    return (category_list, id_to_index)

# initialize metadata dictionary:
meta = {}
meta['category_id_to_index'] = {}
meta['category_list'] = []

for split in ['train', 'val']:
    
    with open(os.path.join(args.load_path, 'annotations', 'instances_' + split + '2014.json'), 'r') as f:
        D = json.load(f)
    
    if len(meta['category_list']) == 0:
        # parse the category data:
        (meta['category_list'], meta['category_id_to_index']) = parse_categories(D['categories'])
    else:
        # check that category lists are consistent for train2014 and val2014:
        (category_list, id_to_index) = parse_categories(D['categories'])
        assert category_list == meta['category_list']
        assert id_to_index == meta['category_id_to_index']

    image_id_list = sorted(np.unique([str(D['annotations'][i]['image_id']) for i in range(len(D['annotations']))]))
    image_id_list = np.array(image_id_list, dtype=int)
    # sorting as strings for backwards compatibility 
    image_id_to_index = {image_id_list[i]: i for i in range(len(image_id_list))}
    
    num_categories = len(D['categories'])
    num_images = len(image_id_list)
    
    label_matrix = np.zeros((num_images,num_categories))
    image_ids = np.zeros(num_images)
    
    for i in range(len(D['annotations'])):
        
        image_id = int(D['annotations'][i]['image_id'])
        row_index = image_id_to_index[image_id]
    
        category_id = int(D['annotations'][i]['category_id'])
        category_index = int(meta['category_id_to_index'][category_id])
        
        label_matrix[row_index][category_index] = 1
        image_ids[row_index] = int(image_id)
    
    image_ids = np.array(['{}2014/COCO_{}2014_{}.jpg'.format(split, split, str(int(x)).zfill(12)) for x in image_ids])
    # save labels and corresponding image ids: 
    np.save(os.path.join(args.save_path, 'formatted_' + split + '_labels.npy'), label_matrix)
    np.save(os.path.join(args.save_path, 'formatted_' + split + '_images.npy'), image_ids)

Generate coco_adj.pkl:

import os
import pickle
import numpy as np


# _adj = np.array(result['adj'])
# _nums = np.array(result['nums'])

def getCoOccurrenceLabel(path, mode, data_name=None):
    object_dit = {}

    assert mode in ('train', 'val')
    if mode == 'train':
        label_path = os.path.join(path, 'formatted_train_labels.npy')
    else:
        label_path = os.path.join(path, 'formatted_val_labels.npy')

    labels = np.load(label_path).astype(np.float64)
    num_sample = labels.shape[0]
    num_label = labels.shape[1]
    # print(num_sample, num_label)

    nums_matrix = np.zeros(shape=(num_label), dtype=np.int64)
    coOccurrencegraph = np.zeros((labels.shape[1], labels.shape[1]), dtype=np.int64)

    for index in range(num_sample):
        data = labels[index]
        for i in range(num_label):
            if data[i] == 1:
                nums_matrix[i] += 1
                for j in range(num_label):
                    if j != i:
                        if data[j] == 1:
                            coOccurrencegraph[i][j] += 1



    object_dit.update({'nums': nums_matrix})
    object_dit.update({'adj': coOccurrencegraph})

    # print(object_dit['nums'])

    return object_dit
    

    # np.save('./data/coco/{}_co-occurrence_label_vectors.npy'.format(mode), coOccurrenceLabel)

def main():
    root_path = '/media/data2/maleilei/MLIC/DDP-VTPMOD/data'


    path = os.path.join(root_path, 'vg256')
    object_dit = getCoOccurrenceLabel(path=path, mode='train')
    with open('./pkl_and_json/vg256_adj.pkl', 'wb') as file:
        pickle.dump(object_dit, file)


    path = os.path.join(root_path, 'voc2007')
    object_dit = getCoOccurrenceLabel(path=path, mode='train')
    with open('./pkl_and_json/voc2007_adj.pkl', 'wb') as file:
        pickle.dump(object_dit, file)


    path = os.path.join(root_path, 'nus')
    object_dit = getCoOccurrenceLabel(path=path, mode='train')
    with open('./pkl_and_json/nus_adj.pkl', 'wb') as file:
        pickle.dump(object_dit, file)


    path = os.path.join(root_path, 'coco')
    object_dit = getCoOccurrenceLabel(path=path, mode='train')
    with open('./pkl_and_json/coco_adj.pkl', 'wb') as file:
        pickle.dump(object_dit, file)


    path = os.path.join(root_path, 'cub')
    object_dit = getCoOccurrenceLabel(path=path, mode='train')
    with open('./pkl_and_json/cub_adj.pkl', 'wb') as file:
        pickle.dump(object_dit, file)

    path = os.path.join(root_path, 'objects365')
    object_dit = getCoOccurrenceLabel(path=path, mode='train')
    with open('./pkl_and_json/objects356_adj.pkl', 'wb') as file:
        pickle.dump(object_dit, file)

if __name__ == '__main__':
    main()
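
As a quick sanity check on the generated file (assuming COCO's 80 categories), the pkl should unpickle to a dict whose 'adj' and 'nums' entries match the commented-out fields at the top of the script:

import pickle
import numpy as np

with open('./pkl_and_json/coco_adj.pkl', 'rb') as f:
    result = pickle.load(f)

adj = np.array(result['adj'])    # (80, 80) label co-occurrence counts
nums = np.array(result['nums'])  # (80,) per-label image counts

assert adj.shape == (80, 80) and nums.shape == (80,)
assert (adj == adj.T).all()      # pairwise counts are symmetric by construction
print(adj.sum(), nums.sum())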
