Dear authors,
Thank you for releasing your code!
Did you use stain normalization in your work "Benchmarking foundation models as feature extractors for weakly-supervised computational pathology"? GitHub issues are disabled for the KatherLab/STAMP-Benchmark repository, so I am asking here.
The codebase supports the Macenko normalization option, but the paper does not mention stain normalization when describing pre-processing (pages 16 and 17 of the arXiv version):

> WSIs were segmented into N tiles, with an edge length of 224 px corresponding to 256 μm, resulting in an effective resolution of ~1.14 μm per pixel. All foundation models included in our benchmark, except for Prov-GigaPath [22], tessellate the slide into tiles of 224×224 pixels. However, the Prov-GigaPath implementation transforms tiles using center cropping [17] from 256×256 into 224×224 before inputting them into the tile encoder. The slide encoder then processes the feature embeddings generated by the tile encoder, implicitly maintaining the 224×224 tile dimensionality throughout the pipeline. Therefore, our choice of tile dimensionality for slide tessellation is consistent with the foundation models selected for our analyses. Background tiles were excluded using Canny edge detection [41]. Feature extraction was performed on each tile individually using the different foundation models.
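For concreteness, my reading of that pre-processing description is roughly as sketched below; the Canny thresholds and the edge-density cutoff are my own illustrative guesses, not values taken from the STAMP code:

```python
import cv2
import numpy as np

TILE_PX = 224             # tile edge length in pixels
TILE_UM = 256             # tile edge length in micrometres
MPP = TILE_UM / TILE_PX   # effective resolution: 256 / 224 ≈ 1.14 μm per pixel


def is_background(tile_rgb: np.ndarray, min_edge_ratio: float = 0.02) -> bool:
    """Flag a tile as background if it contains too few Canny edges.
    The Canny thresholds and the edge-density cutoff are assumed values."""
    gray = cv2.cvtColor(tile_rgb, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 40, 100)
    return (edges > 0).mean() < min_edge_ratio


def center_crop(tile_rgb: np.ndarray, size: int = 224) -> np.ndarray:
    """Center-crop a tile, e.g. 256×256 -> 224×224 as described for Prov-GigaPath."""
    h, w = tile_rgb.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return tile_rgb[top:top + size, left:left + size]
```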
In the default config, the normalization is set to `false`, which, together with the paper, would suggest that no stain normalization was used: https://github.com/KatherLab/STAMP-Benchmark/blob/1f673a3e62a27f89e99dd4966bd56dbdb8b4fed1/stamp/config.yaml#L13
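Just so we mean the same thing: by Macenko normalization I mean the standard per-tile procedure, roughly as in the NumPy sketch below. The reference stain matrix, maximum concentrations, and percentiles are the usual defaults from the reference implementation, not values taken from the STAMP codebase:

```python
import numpy as np

# Reference stain matrix and maximum concentrations commonly used as Macenko
# defaults; these are assumptions here, not necessarily what STAMP uses.
HE_REF = np.array([[0.5626, 0.2159],
                   [0.7201, 0.8012],
                   [0.4062, 0.5581]])
MAX_C_REF = np.array([1.9705, 1.0308])


def macenko_normalize(tile: np.ndarray, io: int = 240,
                      alpha: float = 1, beta: float = 0.15) -> np.ndarray:
    """Map an RGB H&E tile (H, W, 3, uint8) onto the reference stain appearance."""
    h, w, _ = tile.shape
    img = tile.reshape(-1, 3).astype(float)

    # convert RGB intensities to optical density (OD)
    od = -np.log((img + 1) / io)

    # drop transparent / background pixels with low OD in any channel
    od_hat = od[~np.any(od < beta, axis=1)]

    # the two largest eigenvectors of the OD covariance span the stain plane
    _, eigvecs = np.linalg.eigh(np.cov(od_hat.T))
    plane = eigvecs[:, 1:3]

    # project onto the plane; robust extreme angles give the stain directions
    proj = od_hat @ plane
    phi = np.arctan2(proj[:, 1], proj[:, 0])
    v_min = plane @ np.array([np.cos(np.percentile(phi, alpha)),
                              np.sin(np.percentile(phi, alpha))])
    v_max = plane @ np.array([np.cos(np.percentile(phi, 100 - alpha)),
                              np.sin(np.percentile(phi, 100 - alpha))])
    he = (np.array([v_min, v_max]).T if v_min[0] > v_max[0]
          else np.array([v_max, v_min]).T)

    # per-pixel stain concentrations, rescaled to the reference maxima
    conc = np.linalg.lstsq(he, od.T, rcond=None)[0]
    max_c = np.percentile(conc, 99, axis=1)
    conc *= (MAX_C_REF / max_c)[:, None]

    # recompose the tile using the reference stain matrix
    norm = io * np.exp(-HE_REF @ conc)
    return np.clip(norm, 0, 255).T.reshape(h, w, 3).astype(np.uint8)
```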
Many thanks,
George