Thank you and inquiry about MR preprocessing #329

Open
kulsoom-abdullah opened this issue Jul 6, 2024 · 6 comments

Comments

@kulsoom-abdullah

First, I want to express my sincere gratitude for TotalSegmentator. Your open-source contribution is invaluable to the medical imaging community.

I'm currently working on an L3 vertebra detection/selection model using TotalSegmentator as part of my workflow. For this project, I'm particularly interested in understanding the MR-specific preprocessing steps that occur before segmentation.

Could you please point me to the part of the codebase where MR preprocessing is implemented? I've looked through the nnUNet_predict_image function but haven't been able to identify the MR-specific steps.

Any guidance would be greatly appreciated. Thank you for your time and for creating this fantastic tool!

@wasserth
Owner

The preprocessing is rather simple. TotalSegmentator resamples the images to 1.5 mm isotropic resolution, then the images go to nnU-Net. nnU-Net will normalise the images, if I remember correctly to a mean of 0 and a stddev of 1, but this might have changed a bit. This information can be found in the nnunetv2 documentation. I hope this helps.
The most relevant code starts at totalsegmentator/python_api.py.
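The two steps described above can be sketched in plain NumPy. This is not TotalSegmentator's actual code: the real pipeline uses proper interpolation (the nearest-neighbour index mapping below is a stand-in), and nnU-Net applies its own normalization internally, which per-image z-scoring only approximates.

```python
import numpy as np

def resample_isotropic(volume, spacing, target=1.5):
    """Resample a volume to isotropic `target` mm spacing using
    nearest-neighbour index mapping (stand-in for real interpolation)."""
    old_shape = np.array(volume.shape)
    new_shape = np.maximum(
        np.round(old_shape * np.array(spacing) / target), 1).astype(int)
    # For each new voxel index i, look up the nearest old index i*target/spacing.
    idx = [np.minimum((np.arange(n) * target / s).astype(int), o - 1)
           for n, s, o in zip(new_shape, spacing, old_shape)]
    return volume[np.ix_(*idx)]

def zscore_normalize(volume):
    """Per-image z-score normalization: mean 0, stddev 1."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)
```

For example, a 10x10x10 volume with 3.0 mm slice spacing along the first axis comes out as 20x10x10 at 1.5 mm isotropic.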

@meeselizabeth

Hi, similar question here! First of all, thank you for your public datasets! I am using your MR data to fine-tune my model, but the Dice score stays very low. I assume it is because of the way I preprocess/transform. Are the images in the dataset already in RAS orientation, and do they already have the right spacing? I see your scripts for alignment and resampling, but I can't tell whether they have already been applied or still need to be run. So, what preprocessing/transform steps still need to be done after loading the NIfTI files? Hope to hear from you soon!
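The orientation/spacing question can also be answered directly from the data by inspecting each NIfTI affine. A minimal sketch with plain NumPy; in practice the 4x4 affine would come from something like `nibabel`'s `img.affine` (an assumption here, not shown):

```python
import numpy as np

def voxel_spacing(affine):
    """Voxel size per axis = norms of the columns of the 3x3 linear part."""
    return np.linalg.norm(affine[:3, :3], axis=0)

def axis_codes(affine):
    """Nearest anatomical axis codes per voxel axis, e.g. ('R', 'A', 'S')."""
    codes = []
    for col in affine[:3, :3].T:        # one column per voxel axis
        world = np.argmax(np.abs(col))  # dominant world axis: 0=x, 1=y, 2=z
        codes.append("RAS"[world] if col[world] > 0 else "LPI"[world])
    return tuple(codes)
```

If `axis_codes` already returns `('R', 'A', 'S')` and `voxel_spacing` is already `(1.5, 1.5, 1.5)`, the `Orientationd` and `Spacingd` transforms are effectively no-ops on that file.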

@wasserth
Owner

If you can share one of your datasets with me I can try to reproduce the issue. Otherwise it is difficult to say what the problem might be.

@meeselizabeth

Hi, thanks for your response. I downloaded this dataset: https://zenodo.org/records/11367005. And then this is what I do:

import os

class TotalSegDataset3DMRI:
    def __init__(self, data_dir):
        self.data_dir = data_dir
        # One subdirectory per subject (s0001, s0002, ...)
        self.subjects = [subject for subject in os.listdir(self.data_dir)
                         if os.path.isdir(os.path.join(self.data_dir, subject))]

    def __len__(self):
        return len(self.subjects)

    def __getitem__(self, idx):
        subject_id = self.subjects[idx]
        subject_dir = os.path.join(self.data_dir, subject_id)
        mri_path = os.path.join(subject_dir, "mri.nii.gz")

        # Combined segmentation map (all organs merged into one label file)
        combined_label_path = os.path.join(subject_dir, "combined_mask.nii.gz")

        return {'image': mri_path, 'label': combined_label_path}

from monai.transforms import (Compose, LoadImaged, EnsureChannelFirstd,
                              Orientationd, Spacingd, ScaleIntensityd, Resized,
                              RandRotate90d, RandShiftIntensityd, ToTensord)

train_transform_mri = Compose([
    LoadImaged(keys=["image", "label"], reader='NibabelReader'),
    EnsureChannelFirstd(keys=["image", "label"]),
    Orientationd(keys=["image", "label"], axcodes="RAS"),
    Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 1.5), mode=("bilinear", "nearest")),
    ScaleIntensityd(keys=["image"], minv=0.0, maxv=1.0),
    Resized(keys=["image", "label"], spatial_size=(128, 128, 128)),
    # SpatialPadd(keys=["image", "label"], spatial_size=(128, 128, 128)),
    # RandCropByPosNegLabeld(keys=["image", "label"], spatial_size=(128, 128, 128), label_key='label', pos=0.5, neg=0.5),
    RandRotate90d(keys=["image", "label"], prob=0.10, max_k=3),
    RandShiftIntensityd(keys=["image"], offsets=0.10, prob=0.20),
    ToTensord(keys=["image", "label"]),
])

val_transform_mri = Compose([
    LoadImaged(keys=["image", "label"], reader='NibabelReader'),
    EnsureChannelFirstd(keys=["image", "label"]),
    Orientationd(keys=["image", "label"], axcodes="RAS"), 
    Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 1.5), mode=("bilinear", "nearest")),
    ScaleIntensityd(keys=["image"], minv=0.0, maxv=1.0),
    Resized(keys=["image", "label"], spatial_size=(128, 128, 128)),
    ToTensord(keys=["image", "label"]),
])

The segmentations I use are the organs from class_map_parts_mr.

@wasserth
Owner

For this dataset it should work well.
You do not need any of this code.
Just get one of the nifti files and then run
TotalSegmentator -i your_downloaded_file.nii.gz -o segmentations -ta total_mr

@meeselizabeth

meeselizabeth commented Jul 22, 2024

Thank you! I already have the data stored as subjects (s0001, etc.) with their corresponding mri.nii.gz and their segmentation folders. Instead of keeping every segmentation in a separate file (like aorta.nii.gz), I wrote a function that merges all segmentations into one label map, stored as combined_mask.nii.gz for each subject. So do I still need to run TotalSegmentator -i your_downloaded_file.nii.gz -o segmentations -ta total_mr? My use case is fine-tuning my pre-trained SegResNet on this data.
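The merging step described above could look roughly like this. The function name, the dict-of-masks input, and the label ordering are all assumptions for illustration; label 0 is reserved for background, and later organs overwrite earlier ones where binary masks overlap:

```python
import numpy as np

def combine_masks(masks):
    """Merge binary organ masks (dict: organ name -> 0/1 array) into a single
    multi-class label map. Labels are assigned in iteration order from 1."""
    combined = np.zeros_like(next(iter(masks.values())), dtype=np.uint8)
    for label, name in enumerate(masks, start=1):
        combined[masks[name] > 0] = label
    return combined
```

One caveat with this kind of merging: the label indices depend on the iteration order of the dict, so the same fixed organ ordering (e.g. the order in class_map_parts_mr) must be used for every subject, or the labels will not be comparable across the dataset.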
