From 65f21fee7f9ac6e9b1c4b34b3e80f011d41ad22a Mon Sep 17 00:00:00 2001
From: Li Sun
Date: Mon, 1 Jul 2024 13:45:41 -0400
Subject: [PATCH] Update index.html

---
 index.html | 40 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/index.html b/index.html
index 5be4e0854..9599a4bdc 100644
--- a/index.html
+++ b/index.html
@@ -127,7 +127,7 @@

MedSyn: Text-guided Anatomy-aware Synthesis

Abstract

- This paper introduces an novel methodology for producing high-quality 3D lung CT images guided by textual information. While diffusion-based generative models are increasingly used in medical imaging, current state-of-the-art approaches are limited to low-resolution outputs and underutilize radiology reports' abundant information. Nevertheless, expanding text-guided generation to high-resolution 3D images poses significant memory and anatomical detail-preserving challenges. Addressing the memory issue, we introduce a hierarchical scheme that uses a modified UNet architecture. We start by synthesizing low-resolution images conditioned on the text, serving as a foundation for subsequent generators for complete volumetric data. To ensure the anatomical plausibility of the generated samples, we provide further guidance by generating vascular, airway, and lobular segmentation masks in conjunction with the CT images. The model demonstrates the capability to use textual input and segmentation tasks to generate synthesized images. Algorithmic comparative assessments and blind evaluations conducted by 10 board-certified radiologists indicate that our approach exhibits superior performance compared to baseline methods, especially in accurately retaining crucial anatomical features such as fissure lines and airways. This study focuses on two main objectives: (1) the development of a method for creating images based on textual prompts and anatomical components, and (2) the capability to generate new images conditioning on anatomical elements.
+ This paper introduces a novel methodology for producing high-quality 3D lung CT images guided by textual information. While diffusion-based generative models are increasingly used in medical imaging, current state-of-the-art approaches are limited to low-resolution outputs and underutilize the abundant information in radiology reports. However, extending text-guided generation to high-resolution 3D images poses significant challenges in memory usage and anatomical detail preservation. To address the memory issue, we introduce a hierarchical scheme that uses a modified UNet architecture: we first synthesize low-resolution images conditioned on the text, which serve as a foundation for subsequent generators that produce the complete volumetric data. To ensure the anatomical plausibility of the generated samples, we provide further guidance by generating vascular, airway, and lobular segmentation masks in conjunction with the CT images. The model demonstrates the capability to use textual input and segmentation masks to generate synthetic images. Algorithmic comparisons and blind evaluations by 10 board-certified radiologists indicate that our approach outperforms baseline methods, especially in accurately retaining crucial anatomical features such as fissure lines and airways. This study focuses on two main objectives: (1) the development of a method for creating images based on textual prompts and anatomical components, and (2) the capability to generate new images conditioned on anatomical elements.
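The hierarchical scheme described above — a low-resolution generator conditioned on the report text, followed by a generator that refines the coarse output into the full volume — can be sketched as follows. This is a minimal illustration of the data flow only; every function name is hypothetical, and the toy conditioning and nearest-neighbour upsampling stand in for the paper's text-conditioned 3D diffusion UNets.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_text(report, dim=64):
    # Hypothetical stand-in for a text encoder: hash tokens into a
    # fixed-size embedding and L2-normalize it.
    emb = np.zeros(dim)
    for tok in report.lower().split():
        emb[hash(tok) % dim] += 1.0
    return emb / max(1.0, np.linalg.norm(emb))

def low_res_generator(text_emb, shape=(32, 32, 32)):
    # Stage 1: synthesize a coarse volume conditioned on the report
    # embedding. A real model would run a text-conditioned 3D diffusion
    # UNet here; this toy version just shifts noise by the embedding mean.
    return rng.standard_normal(shape) + text_emb.mean()

def upsample_generator(low_res, factor=4):
    # Stage 2: refine the coarse volume to full resolution. The paper
    # uses a second generator conditioned on the stage-1 output;
    # nearest-neighbour repetition stands in for it here.
    return low_res.repeat(factor, 0).repeat(factor, 1).repeat(factor, 2)

emb = encode_text("Mild emphysema in both upper lobes.")
coarse = low_res_generator(emb)      # coarse volume, shape (32, 32, 32)
volume = upsample_generator(coarse)  # full volume, shape (128, 128, 128)
print(coarse.shape, volume.shape)
```

The two-stage split is what keeps memory bounded: only the small stage-1 volume is conditioned directly on text, and the expensive full-resolution stage only has to refine an already-structured input.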

@@ -141,7 +141,7 @@

Abstract

- Schematic
+ Model Schematic

@@ -151,6 +151,42 @@

Schematic

+ Comparison of Generated Samples
+ [figure images]
+
+ Generation Conditioned on Reports
+ [figure images]
+
+ Generation Conditioned on Segmentation Mask
+ [figure images]