Commit
deploy: b6d603d
paquiteau committed Jan 30, 2025
1 parent aec73d5 commit 1957374
Showing 440 changed files with 9,113 additions and 1,479 deletions.
Binary file modified _downloads/038126aa6dd78d627c7aa86b076e558f/example_readme.zip
Binary file modified _downloads/05ff0e4a4c7542701609a314823a9531/example_stacked.zip
Binary file modified _downloads/07fc94ab669542d5bb33853fd085cadb/example_density.zip
@@ -5,24 +5,24 @@
=========================================

A small PyTorch example to showcase learning k-space sampling patterns.

This example showcases the auto-differentiation capabilities of the NUFFT operator
with respect to the k-space trajectory in mri-nufft.

Briefly, in this example we try to learn the k-space samples :math:`\mathbf{K}` for the following cost function:

.. math::

    \mathbf{\hat{K}} = \mathrm{arg} \min_{\mathbf{K}} || \sum_{\ell=1}^L S_\ell^* \mathcal{F}_\mathbf{K}^* D_\mathbf{K} \mathcal{F}_\mathbf{K} x_\ell - \mathbf{x}_{sos} ||_2^2

where :math:`S_\ell` is the sensitivity map for the :math:`\ell`-th coil, :math:`\mathcal{F}_\mathbf{K}` is the forward NUFFT operator, :math:`D_\mathbf{K}` is the density compensator for trajectory :math:`\mathbf{K}`, :math:`x_\ell` is the image for the :math:`\ell`-th coil, and :math:`\mathbf{x}_{sos} = \sqrt{\sum_{\ell=1}^L x_\ell^2}` is the sum-of-squares image used as the target image to be reconstructed.

In this example, the forward NUFFT operator :math:`\mathcal{F}_\mathbf{K}` is implemented with ``model.operator``, while the SENSE operator ``model.sense_op`` models the term :math:`\mathbf{A} = \sum_{\ell=1}^L S_\ell^* \mathcal{F}_\mathbf{K}^* D_\mathbf{K}`.
For our data, we use a 2D slice of a 3D MRI image from the BrainWeb dataset, and the sensitivity maps are simulated using the ``birdcage_maps`` function from ``sigpy.mri``.

.. note::

    To showcase the features of ``mri-nufft``, we use the ``"cufinufft"`` backend for ``model.operator`` without density compensation and the ``"gpunufft"`` backend for ``model.sense_op`` with density compensation.

.. warning::

    This example only showcases the autodiff capabilities; the learned sampling pattern is not scanner compliant, as the scanner gradients required to implement it violate the hardware constraints. In practice, a projection :math:`\Pi_\mathcal{Q}(\mathbf{K})` onto the scanner constraints set :math:`\mathcal{Q}` is recommended (see [Proj]_). This is implemented in the proprietary SPARKLING package [Sparks]_. Users are encouraged to contact the authors if they want to use it.
"""
@@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Simple UNet model.\n\nThis model is a simplified version of the U-Net architecture, \nwhich is widely used for image segmentation tasks. \nThis is implemented in the proprietary FASTMRI package [fastmri]_. \n\nThe U-Net model consists of an encoder (downsampling path) and \na decoder (upsampling path) with skip connections between corresponding \nlayers in the encoder and decoder. \nThese skip connections help in retaining spatial information \nthat is lost during the downsampling process.\n\nThe primary purpose of this model is to perform image reconstruction tasks, \nspecifically for MRI images. \nIt takes an input MRI image and reconstructs it to improve the image quality \nor to recover missing parts of the image.\n\nThis implementation of the UNet model was pulled from the FastMRI Facebook \nrepository, which is a collaborative research project aimed at advancing \nthe field of medical imaging using machine learning techniques.\n\n\\begin{align}\\mathbf{\\hat{x}} = \\mathrm{arg} \\min_{\\mathbf{x}} || \\mathcal{U}_\\mathbf{\\theta}(\\mathbf{y}) - \\mathbf{x} ||_2^2\\end{align}\n\nwhere $\\mathbf{\\hat{x}}$ is the reconstructed MRI image, $\\mathbf{x}$ is the ground truth image, \n$\\mathbf{y}$ is the input MRI image (e.g., k-space data), and $\\mathcal{U}_\\mathbf{\\theta}$ is the U-Net model parameterized by $\\theta$.\n\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>We train on a single image here. In practice, this should be done on a database like fastMRI [fastmri]_.</p></div>\n"
"\n# Simple UNet model.\n\nThis model is a simplified version of the U-Net architecture,\nwhich is widely used for image segmentation tasks.\nThis is implemented in the proprietary FASTMRI package [fastmri]_.\n\nThe U-Net model consists of an encoder (downsampling path) and\na decoder (upsampling path) with skip connections between corresponding\nlayers in the encoder and decoder.\nThese skip connections help in retaining spatial information\nthat is lost during the downsampling process.\n\nThe primary purpose of this model is to perform image reconstruction tasks,\nspecifically for MRI images.\nIt takes an input MRI image and reconstructs it to improve the image quality\nor to recover missing parts of the image.\n\nThis implementation of the UNet model was pulled from the FastMRI Facebook\nrepository, which is a collaborative research project aimed at advancing\nthe field of medical imaging using machine learning techniques.\n\n\\begin{align}\\mathbf{\\hat{x}} = \\mathrm{arg} \\min_{\\mathbf{x}} || \\mathcal{U}_\\mathbf{\\theta}(\\mathbf{y}) - \\mathbf{x} ||_2^2\\end{align}\n\nwhere $\\mathbf{\\hat{x}}$ is the reconstructed MRI image, $\\mathbf{x}$ is the ground truth image,\n$\\mathbf{y}$ is the input MRI image (e.g., k-space data), and $\\mathcal{U}_\\mathbf{\\theta}$ is the U-Net model parameterized by $\\theta$.\n\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>We train on a single image here. In practice, this should be done on a database like fastMRI [fastmri]_.</p></div>\n"
]
},
{
Binary file modified _downloads/13bb15de5481cb9869dd72ac2f59a90c/example_cg.zip
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Learning sampling pattern with decimation\n\nAn example using PyTorch to showcase learning k-space sampling patterns with decimation.\n\nThis example showcases the auto-differentiation capabilities of the NUFFT operator\nwith respect to the k-space trajectory in MRI-nufft.\n\nHereafter we learn the k-space sample locations $\\mathbf{K}$ using the following cost function:\n\n\\begin{align}\\mathbf{\\hat{K}} = arg \\min_{\\mathbf{K}} || \\mathcal{F}_\\mathbf{K}^* D_\\mathbf{K} \\mathcal{F}_\\mathbf{K} \\mathbf{x} - \\mathbf{x} ||_2^2\\end{align}\nwhere $\\mathcal{F}_\\mathbf{K}$ is the forward NUFFT operator,\n$D_\\mathbf{K}$ is the density compensator for trajectory $\\mathbf{K}$,\nand $\\mathbf{x}$ is the MR image which is also the target image to be reconstructed.\n\nAdditionally, in order to converge faster, we also learn the trajectory in a multi-resolution fashion.\nThis is done by first optimizing x8 times decimated trajectory locations, called control points.\nAfter a fixed number of iterations (5 in this example), these control points are upscaled by a factor of 2.\nNote that the NUFFT operator always holds linearly interpolated version of the control points as k-space sampling trajectory.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>This example can run on a binder instance as it is purely CPU based backend (finufft), and is restricted to a 2D single coil toy case.</p></div>\n\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>This example only showcases the auto-differentiation capabilities, the learned sampling pattern\n is not scanner compliant as the gradients required to implement it violate the hardware constraints.\n In practice, a projection $\\Pi_\\mathcal{Q}(\\mathbf{K})$ onto the scanner constraints set $\\mathcal{Q}$ is recommended\n (see [Cha+16]_). This is implemented in the proprietary SPARKLING package [Cha+22]_.\n Users are encouraged to contact the authors if they want to use it.</p></div>\n"
"\n# Learning sampling pattern with decimation\n\nAn example using PyTorch to showcase learning k-space sampling patterns with decimation.\n\nThis example showcases the auto-differentiation capabilities of the NUFFT operator\nwith respect to the k-space trajectory in MRI-nufft.\n\nHereafter we learn the k-space sample locations $\\mathbf{K}$ using the following cost function:\n\n\\begin{align}\\mathbf{\\hat{K}} = arg \\min_{\\mathbf{K}} || \\mathcal{F}_\\mathbf{K}^* D_\\mathbf{K} \\mathcal{F}_\\mathbf{K} \\mathbf{x} - \\mathbf{x} ||_2^2\\end{align}\n\nwhere $\\mathcal{F}_\\mathbf{K}$ is the forward NUFFT operator,\n$D_\\mathbf{K}$ is the density compensator for trajectory $\\mathbf{K}$,\nand $\\mathbf{x}$ is the MR image which is also the target image to be reconstructed.\n\nAdditionally, in order to converge faster, we also learn the trajectory in a multi-resolution fashion.\nThis is done by first optimizing x8 times decimated trajectory locations, called control points.\nAfter a fixed number of iterations (5 in this example), these control points are upscaled by a factor of 2.\nNote that the NUFFT operator always holds linearly interpolated version of the control points as k-space sampling trajectory.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>This example can run on a binder instance as it is purely CPU based backend (finufft), and is restricted to a 2D single coil toy case.</p></div>\n\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>This example only showcases the auto-differentiation capabilities, the learned sampling pattern\n is not scanner compliant as the gradients required to implement it violate the hardware constraints.\n In practice, a projection $\\Pi_\\mathcal{Q}(\\mathbf{K})$ onto the scanner constraints set $\\mathcal{Q}$ is recommended\n (see [Cha+16]_). This is implemented in the proprietary SPARKLING package [Cha+22]_.\n Users are encouraged to contact the authors if they want to use it.</p></div>\n"
]
},
{
Binary file modified _downloads/21e1a44b51b2ed4565475ed05ab5e732/example_subspace.zip
28 changes: 14 additions & 14 deletions _downloads/2d0a0579b3c02394def1ca320735769a/example_fastMRI_UNet.py
@@ -4,30 +4,30 @@
Simple UNet model.
==================

This model is a simplified version of the U-Net architecture,
which is widely used for image segmentation tasks.
This is implemented in the proprietary FASTMRI package [fastmri]_.

The U-Net model consists of an encoder (downsampling path) and
a decoder (upsampling path) with skip connections between corresponding
layers in the encoder and decoder.
These skip connections help in retaining spatial information
that is lost during the downsampling process.

The primary purpose of this model is to perform image reconstruction tasks,
specifically for MRI images.
It takes an input MRI image and reconstructs it to improve the image quality
or to recover missing parts of the image.

This implementation of the UNet model was pulled from the FastMRI Facebook
repository, which is a collaborative research project aimed at advancing
the field of medical imaging using machine learning techniques.

.. math::

    \mathbf{\hat{x}} = \mathrm{arg} \min_{\mathbf{x}} || \mathcal{U}_\mathbf{\theta}(\mathbf{y}) - \mathbf{x} ||_2^2

where :math:`\mathbf{\hat{x}}` is the reconstructed MRI image, :math:`\mathbf{x}` is the ground truth image,
:math:`\mathbf{y}` is the input MRI image (e.g., k-space data), and :math:`\mathcal{U}_\mathbf{\theta}` is the U-Net model parameterized by :math:`\theta`.

.. warning::
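As a rough, hypothetical sketch of how such a model is trained (a toy stand-in, not the FastMRI implementation), the cost above can be minimized over the network parameters with a standard PyTorch loop; ``TinyUNet`` below is an illustrative miniature with a single skip connection:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy stand-in for the U-Net: one downsampling and one upsampling
    stage, with a single encoder-decoder skip connection."""
    def __init__(self, ch=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec = nn.Conv2d(2 * ch, 1, 3, padding=1)  # skip: concat encoder features

    def forward(self, y):
        e = self.enc(y)
        m = self.up(self.mid(self.down(e)))
        return self.dec(torch.cat([e, m], dim=1))

model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
y = torch.randn(1, 1, 32, 32)   # degraded input image (stand-in for zero-filled recon)
x = torch.randn(1, 1, 32, 32)   # ground-truth target image

for _ in range(5):              # a few steps of || U_theta(y) - x ||_2^2
    opt.zero_grad()
    loss = torch.mean((model(y) - x) ** 2)
    loss.backward()
    opt.step()
```

In practice (as the example's warning notes), this loop would iterate over a dataset such as fastMRI rather than a single image pair.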
@@ -13,7 +13,7 @@
.. math::

    \mathbf{\hat{K}} = \mathrm{arg} \min_{\mathbf{K}} || \mathcal{F}_\mathbf{K}^* D_\mathbf{K} \mathcal{F}_\mathbf{K} \mathbf{x} - \mathbf{x} ||_2^2
where :math:`\mathcal{F}_\mathbf{K}` is the forward NUFFT operator,
:math:`D_\mathbf{K}` is the density compensator for trajectory :math:`\mathbf{K}`,
and :math:`\mathbf{x}` is the MR image which is also the target image to be reconstructed.
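The multi-resolution decimation strategy described earlier (optimize 8x-decimated control points, periodically upscale them, and let the operator always see a linearly interpolated trajectory) can be sketched with ``torch.nn.functional.interpolate``; the sizes and tensor layout here are illustrative, not the example's actual code:

```python
import torch
import torch.nn.functional as F

# 8x-decimated control points for a single shot, shape (batch, dims, points)
control = torch.linspace(-0.5, 0.5, 8).reshape(1, 1, 8)

# after a fixed number of iterations, upscale the control points by 2 ...
upscaled = F.interpolate(control, scale_factor=2, mode="linear", align_corners=True)

# ... while the NUFFT operator always samples the fully interpolated trajectory
full = F.interpolate(control, size=64, mode="linear", align_corners=True)
```

With ``align_corners=True`` the endpoints of the shot are preserved at every resolution, so only the intermediate sample spacing changes as the control points are refined.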
@@ -191,6 +191,132 @@
"arguments = [True, False]\ntrajectory = np.copy(planar_trajectories[\"3D Cones\"])\ntrajectory[..., 2] *= 2\nfunction = lambda x: tools.stack(trajectory, nb_stacks=nb_repetitions, hard_bounded=x)\nshow_trajectories(\n function,\n arguments,\n one_shot=one_shot,\n subfig_size=subfigure_size,\n dim=\"2D\",\n axes=(0, 2),\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Stack Random\n\nA direct extension of the stacking expansion is to distribute the stacks\naccording to a random distribution over the $k_z$-axis.\n\nArguments:\n- ``trajectory (array)``: array of k-space coordinates of size\n$(N_c, N_s, N_d)$\n- ``dim_size (int)``: size of the kspace in voxel units\n- ``center_prop (int or float)`` : number of line\n- ``acceleration (int)``: Acceleration factor\n- ``pdf (str or array)``: Probability density function for the random distribution\n- ``rng (int or np.random.Generator)``: Random number generator\n- ``order (int)``: Order of the shots in the stack\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"trajectory = tools.stack_random(\n planar_trajectories[\"Spiral\"],\n dim_size=128,\n center_prop=0.1,\n accel=16,\n pdf=\"uniform\",\n order=\"top-down\",\n rng=42,\n)\n\nshow_trajectory(trajectory, figure_size=figure_size, one_shot=one_shot)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### ``trajectory (array)``\nThe main use case is to stack trajectories consisting of\nflat or thick planes that will match the image slices.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"arguments = [\"Radial\", \"Spiral\", \"2D Cones\", \"3D Cones\"]\nfunction = lambda x: tools.stack_random(\n planar_trajectories[x],\n dim_size=128,\n center_prop=0.1,\n accel=16,\n pdf=\"gaussian\",\n order=\"top-down\",\n rng=42,\n)\nshow_trajectories(function, arguments, one_shot=one_shot, subfig_size=subfigure_size)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### ``dim_size (int)``\nSize of the k-space in voxel units over the stacking direction. It\nis used to normalize the stack positions, and is used with the ``accel``\nfactor and ``center_prop`` to determine the number of stacks.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"arguments = [32, 64, 128]\nfunction = lambda x: tools.stack_random(\n planar_trajectories[\"Spiral\"],\n dim_size=x,\n center_prop=0.1,\n accel=8,\n pdf=\"gaussian\",\n order=\"top-down\",\n rng=42,\n)\nshow_trajectories(function, arguments, one_shot=one_shot, subfig_size=subfigure_size)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### ``center_prop (int or float)``\nNumber of lines to keep in the center of the k-space. It is used to determine\nthe number of stacks and the acceleration factor, and to keep the center of\nthe k-space with a higher density of shots. If a ``float`` this is a fraction\nof the total ``dim_size``. If ``int`` it is directly the number of lines.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"arguments = [1, 5, 0.1, 0.5]\nfunction = lambda x: tools.stack_random(\n planar_trajectories[\"Spiral\"],\n dim_size=128,\n center_prop=x,\n accel=16,\n pdf=\"uniform\",\n order=\"top-down\",\n rng=42,\n)\nshow_trajectories(function, arguments, one_shot=one_shot, subfig_size=subfigure_size)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### ``accel (int)``\nAcceleration factor to subsample the outer region of the k-space.\nNote that the acceleration factor does not take into account the center lines.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"arguments = [1, 4, 8, 16, 32]\nfunction = lambda x: tools.stack_random(\n planar_trajectories[\"Spiral\"],\n dim_size=128,\n center_prop=0.1,\n accel=x,\n pdf=\"uniform\",\n order=\"top-down\",\n rng=42,\n)\nshow_trajectories(function, arguments, one_shot=one_shot, subfig_size=subfigure_size)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### ``pdf (str or array)``\nProbability density function for the sampling of the outer region. It can\neither be a string to use a known probability law (\"gaussian\" or \"uniform\") or\n\"equispaced\" for a coherent undersampling (like the one used in GRAPPA). It\ncan also be a array, for using a customed density probability.\nIn this case, it will be normalized so that ``sum(pdf) =1``.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dim_size = 128\narguments = [\n \"gaussian\",\n \"uniform\",\n \"equispaced\",\n np.arange(dim_size),\n]\nfunction = lambda x: tools.stack_random(\n planar_trajectories[\"Spiral\"],\n dim_size=128,\n center_prop=0.1,\n accel=32,\n pdf=x,\n order=\"top-down\",\n rng=42,\n)\nshow_trajectories(function, arguments, one_shot=one_shot, subfig_size=subfigure_size)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### ``order (str)``\nDetermine the ordering of the shot in the trajectory.\nAccepeted values are \"center-out\", \"top-down\" or \"random\".\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dim_size = 128\narguments = [\n \"center-out\",\n \"random\",\n \"top-down\",\n]\nfunction = lambda x: tools.stack_random(\n planar_trajectories[\"Spiral\"],\n dim_size=128,\n center_prop=0.1,\n accel=32,\n pdf=\"uniform\",\n order=x,\n rng=42,\n)\nshow_trajectories(function, arguments, one_shot=one_shot, subfig_size=subfigure_size)"
]
},
{
"cell_type": "markdown",
"metadata": {},
Binary file modified _downloads/592845a87450091541674bff1b2a8d24/example_gif_3D.zip
