Hello, thank you for your work!
I want to use SSA on custom categories and datasets. I saw you mentioned that users can customize the segmentor's architecture and the categories of interest, but can I use my own dataset?
I've made some attempts, but I don't understand what "semantic_branch_processor" is. I tried using one of the existing ones directly, but an error is raised: ValueError: You have to specify the task_input. Found None. I suspect this is because I didn't set "semantic_branch_processor" correctly.
I would like to know how to set up "semantic_branch_processor" when using my own dataset.
Thank you very much for any reply.
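For reference, here is my current guess at how "semantic_branch_processor" is supposed to look. This is only a minimal sketch on my side, assuming the HuggingFace transformers SegFormer API; the checkpoint name and variable names are placeholders I picked, not necessarily what SSA actually uses:

```python
# A minimal sketch (my assumption, not the repo's actual setup): a SegFormer-style
# processor/model pair, whose processor only needs `images`. This matches the call
# in scripts/segformer.py shown in the traceback below:
#   inputs = processor(images=image, return_tensors="pt").to(rank)
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

ckpt = "nvidia/segformer-b5-finetuned-ade-640-640"  # placeholder checkpoint I chose
semantic_branch_processor = SegformerImageProcessor.from_pretrained(ckpt)
semantic_branch_model = SegformerForSemanticSegmentation.from_pretrained(ckpt)
```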
Appendix
Command:
python scripts/main_ssa.py --ckpt_path ./ckp/sam_vit_h_4b8939.pth --save_img --world_size 1 --dataset VOC2012 --data_dir /media/guo/DATA/chen/lraspp/data/VOCdevkit/VOC2012/JPEGImages --gt_path /media/guo/DATA/chen/lraspp/data/VOCdevkit/VOC2012/Annotations --out_dir output_VOC2012
Complete error report:
Traceback (most recent call last):
  File "scripts/main_ssa_try.py", line 269, in <module>
    main(0, args)
  File "scripts/main_ssa_try.py", line 248, in main
    semantic_segment_anything_inference(file_name, args.out_dir, rank, img=img, save_img=args.save_img,
  File "/media/guo/DATA/chen/SSA/Semantic-Segment-Anything/scripts/pipeline.py", line 168, in semantic_segment_anything_inference
    class_ids = segformer_func(img, semantic_branch_processor, semantic_branch_model, rank)
  File "/media/guo/DATA/chen/SSA/Semantic-Segment-Anything/scripts/segformer.py", line 5, in segformer_segmentation
    inputs = processor(images=image, return_tensors="pt").to(rank)
  File "/home/guo/anaconda3/envs/ssa/lib/python3.8/site-packages/transformers/models/oneformer/processing_oneformer.py", line 112, in __call__
    raise ValueError("You have to specify the task_input. Found None.")
ValueError: You have to specify the task_input. Found None.
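Looking at the last frame of the traceback, the call lands in processing_oneformer.py, so the processor I passed in appears to actually be a OneFormerProcessor rather than a SegFormer one. If I understand the HuggingFace API correctly, OneFormer's processor is task-conditioned and must be given task_inputs, roughly like this (a sketch with an example checkpoint I chose, not necessarily the one SSA uses):

```python
import numpy as np
from PIL import Image
from transformers import OneFormerForUniversalSegmentation, OneFormerProcessor

ckpt = "shi-labs/oneformer_ade20k_swin_large"  # example checkpoint, my choice
processor = OneFormerProcessor.from_pretrained(ckpt)
model = OneFormerForUniversalSegmentation.from_pretrained(ckpt)

image = Image.fromarray(np.zeros((512, 512, 3), dtype=np.uint8))  # dummy RGB image

# OneFormer's processor requires task_inputs; calling it without one raises
# "You have to specify the task_input. Found None.", exactly as in my traceback.
inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
outputs = model(**inputs)
```

So my question is: should segformer.py be passing task_inputs when a OneFormer processor is plugged in, or am I supposed to use a SegFormer-style processor for a custom dataset?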