A deep learning based framework for recognition and fusion of multimodal histopathological images
The source code for the labelling website is now released; please see the folder "LabelingWebsite" or visit the standalone code base: https://github.com/guoqingbao/Patholabelling
Important for using the labelling website: legacy Edge is required (other web browsers, including the new Edge, have problems handling large images).
To use legacy Edge: 1) rename the new Edge folder to any other name; 2) download "EdgeLaunch.exe" to launch legacy Edge (or press Win+R and run: shell:Appsfolder\Microsoft.MicrosoftEdge_8wekyb3d8bbwe!MicrosoftEdge)
Bao G, Wang X, Xu R, Loh C, Adeyinka OD, Pieris DA, Cherepanoff S, Gracie G, Lee M, McDonald KL, Nowak AK, Banati R, Buckland ME, Graeber MB. PathoFusion: An Open-Source AI Framework for Recognition of Pathomorphological Features and Mapping of Immunohistochemical Data. Cancers. 2021; 13(4):617. https://doi.org/10.3390/cancers13040617
The following Python libraries are required:
matplotlib, sqlite3, pandas, SciPy, scikit-learn, PyTorch, TensorFlow and Keras
Bao G, Graeber MB and Wang X. A Bifocal Classification and Fusion Network for Multimodal Image Analysis in Histopathology. 16th International Conference on Control, Automation, Robotics and Vision (ICARCV), 2020, pp. 466-471, doi: 10.1109/ICARCV50220.2020.9305360.
The datasets used in our study are provided under the folder "data". Raw data may be provided upon request.
Pretrained models are provided under results/bcnn (torch_model.h5 and torch_model_cd276.h5) for the recognition of neuropathological features from two imaging modalities in whole-slide images. Please refer to BrainPredction.py or BrainPredction-276.py. Colour normalization of your whole-slide images is required before using the pretrained models (a reference image is given under the folder "others"). You may also train your own model using the code provided; typically, 30+ training cases are recommended.
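The exact normalization routine used by the framework is defined in the repository code; as an illustration only, a minimal Reinhard-style normalization (matching per-channel statistics to the reference image, here in RGB rather than LAB space for brevity) might look like:

```python
import numpy as np

def normalize_to_reference(image, reference):
    """Match per-channel mean/std of `image` to those of `reference`.

    A simplified stand-in for colour normalization; PathoFusion's actual
    routine may differ (e.g. it may operate in LAB colour space).
    Both inputs are float arrays of shape (H, W, 3) with values in [0, 1].
    """
    img = image.astype(np.float64)
    ref = reference.astype(np.float64)
    out = np.empty_like(img)
    for c in range(3):
        mu_i, sd_i = img[..., c].mean(), img[..., c].std()
        mu_r, sd_r = ref[..., c].mean(), ref[..., c].std()
        sd_i = sd_i if sd_i > 1e-8 else 1e-8  # avoid division by zero
        out[..., c] = (img[..., c] - mu_i) / sd_i * sd_r + mu_r
    return np.clip(out, 0.0, 1.0)
```

After normalization, the per-channel means and standard deviations of your slide approximately match those of the reference image, which keeps stain appearance consistent with the data the pretrained models saw.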
We also provide the source code for the pathology image labelling website, which lets you perform offline marking of your own whole-slide images on an intranet. You can use the code provided (ExtractImagePatches.py) to establish your pathology database first, and then train your model using the code provided. For non-commercial usage, please contact the corresponding author to obtain the source code of the labelling website.
Please see the folder "LabelingWebsite"; the newest updates can be found in the standalone code base: https://github.com/guoqingbao/Patholabelling
https://cloudstor.aarnet.edu.au/plus/s/dVmEp2R87lFhc6v
In the video, the original H&E image is shown first; next, the predicted heatmap is overlaid; finally, the prediction is compared with expert markings.
https://cloudstor.aarnet.edu.au/plus/s/JSASsezqvrB9sgA
Make sure you have deployed the labelling website first. After you finish image marking, use this module to extract paired patches from the whole-slide images using the marking coordinates saved in the MySQL database (the website database). Alternatively, you can use our provided datasets (under the folder "data") for reproducibility measurements.
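The cropping step can be sketched as follows. This is illustrative only: the real ExtractImagePatches.py reads coordinates from the website's MySQL database, and its patch geometry may differ; here `coords` is simply a list of hypothetical (x, y) marker positions.

```python
import numpy as np

def extract_patches(slide, coords, patch_size=64):
    """Crop square patches centred on marked (x, y) coordinates.

    `slide` is an (H, W, C) array standing in for a whole-slide image
    region; `coords` is a list of (x, y) marker positions, as they might
    be read from the labelling database. Markers too close to the image
    border are skipped.
    """
    half = patch_size // 2
    h, w = slide.shape[:2]
    patches = []
    for x, y in coords:
        if half <= x <= w - half and half <= y <= h - half:
            patches.append(slide[y - half:y + half, x - half:x + half])
    return patches
```

Running the same coordinates against both modalities (H&E and the immunohistochemical slide) yields the paired patches used for training.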
Please refer to folder "models" and BrainModel.py.
You can use BrainModel.py and BrainModel-CD276.py to train models for feature recognition. Please make sure you have downloaded the datasets into the folder "data" before running the code.
The training and test performance metrics of the different models are recorded in the folder "results".
The model introduced in this study was compared with Xception (BrainXception.py), Xception with transfer learning (the transfer-learning part of BrainXception.py), and a subnet CNN (a single-input ordinary model, BrainModel-SubNet.py).
After the models are trained (or using the pretrained models provided), you can predict heatmaps for unseen histological slides using BrainPredction.py and BrainPredction-CD276.py.
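Conceptually, heatmap prediction slides a window over the whole-slide image and records the model's score at each position. The sketch below leaves the classifier abstract (any callable returning a probability); in PathoFusion this would be the trained bifocal CNN, and the actual window size and stride in BrainPredction.py may differ.

```python
import numpy as np

def predict_heatmap(slide, classify, patch=32, stride=32):
    """Build a 2-D heatmap by scoring sliding-window patches.

    `classify` maps a (patch, patch, C) array to a score in [0, 1];
    it stands in for the trained model. Window size and stride are
    illustrative defaults, not the repository's actual settings.
    """
    h, w = slide.shape[:2]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            heat[i, j] = classify(slide[y:y + patch, x:x + patch])
    return heat
```

The resulting grid of scores is what gets upsampled and overlaid on the original slide, as shown in the demonstration video.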
BrainPredction.py provides functions for fusion of multimodal whole-slide images.
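One simple way to fuse the two single-modality heatmaps is to map each modality to its own colour channel so that co-occurring features stand out. This is only one plausible scheme, not necessarily the fusion implemented in BrainPredction.py:

```python
import numpy as np

def fuse_heatmaps(he_heat, cd276_heat):
    """Fuse two single-modality heatmaps into one RGB map.

    H&E-derived scores drive the red channel and CD276 (marker-derived)
    scores the green channel, so regions positive in both modalities
    appear yellow. Illustrative only; the repository's fusion may differ.
    """
    he = np.clip(he_heat, 0.0, 1.0)
    cd = np.clip(cd276_heat, 0.0, 1.0)
    fused = np.zeros(he.shape + (3,))
    fused[..., 0] = he  # red: morphological (H&E) features
    fused[..., 1] = cd  # green: immunohistochemical (CD276) signal
    return fused
```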
Please refer to BrainPositivePercent.py.
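A positive-area statistic of this kind reduces to thresholding the heatmap and reporting the fraction of positions above the threshold. A minimal sketch (the exact definition in BrainPositivePercent.py may differ):

```python
import numpy as np

def positive_percent(heatmap, threshold=0.5):
    """Percentage of heatmap positions whose score exceeds `threshold`.

    A stand-in for the positive-percentage statistic; the threshold
    value here is an illustrative assumption.
    """
    mask = np.asarray(heatmap) > threshold
    return 100.0 * mask.mean()
```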