From 6e6cb2223afd9709d486f16e22ee32a6ca910a04 Mon Sep 17 00:00:00 2001 From: Jaewon Chung Date: Tue, 9 Apr 2024 23:57:04 -0400 Subject: [PATCH] Code Base Update (#421) * rename ndmg to m2g * strip down requirements * update to run on python3 not on specific version * add lecture * reorg tutorials * update readme * Update docs * add disc data * Figure 3 * Update * black * figure 4 * Figure 4 data * Header for figure 4 * runner * add description * Change depth * Change title * finish diffusion * update pipelines * Update * finish pipeline * remove files * add * Try new dockerfile * Remove unused imports * update * update * update * update * update * Add setup.cfg for update compatibility * Remove dockerfile --- README.md | 403 +- docs/Makefile | 19 - docs/README.md | 0 docs/_static/diff_mapped_atlas.png | Bin 0 -> 96652 bytes docs/_static/diff_skullstip.png | Bin 0 -> 218930 bytes docs/_static/func_motion_plot.png | Bin 0 -> 54171 bytes docs/_static/func_skullstrip.png | Bin 0 -> 224100 bytes docs/_static/m2g_pipeline.png | Bin 0 -> 1455097 bytes docs/_static/qa-d/connectome.png | Bin 0 -> 40841 bytes docs/_static/qa-d/skullstrip.png | Bin 0 -> 218930 bytes docs/_static/qa-d/tractography.png | Bin 0 -> 132278 bytes docs/_static/qa-f/func_motion_plot.png | Bin 0 -> 54171 bytes docs/_static/qa-f/func_skullstrip.png | Bin 0 -> 224100 bytes docs/conf.py | 3 + docs/diffusion.rst | 159 +- docs/docker.rst | 39 + docs/functional.rst | 227 +- docs/index.rst | 7 +- docs/install.rst | 232 +- docs/links.rst | 23 + docs/make.bat | 35 - docs/paper/data/diffusion_disc.csv | 37 + docs/paper/data/discrim_results.ods | Bin 0 -> 71255 bytes docs/paper/data/dwi_ipsi_aal.csv | 3979 ++++++ docs/paper/data/dwi_ipsi_dkt.csv | 3982 ++++++ docs/paper/data/dwi_ipsi_hammer.csv | 3979 ++++++ docs/paper/data/func_ipsi_aal.csv | 10519 ++++++++++++++++ docs/paper/data/func_ipsi_dkt.csv | 10519 ++++++++++++++++ docs/paper/data/func_ipsi_hammer.csv | 10519 ++++++++++++++++ docs/paper/data/functional_disc.csv | 37 + docs/paper/discrim.py | 967 ++ docs/paper/discrim_runner.py | 147 + docs/paper/figure2.ipynb | 27 + docs/paper/figure3.ipynb | 176 + docs/paper/figure4.ipynb | 239 + docs/paper/figures/figure3.pdf | Bin 0 -> 41655 bytes docs/paper/figures/figure3.png | Bin 0 -> 577410 bytes docs/paper/figures/figure4.pdf | Bin 0 -> 246818 bytes docs/paper/figures/figure4.png | Bin 0 -> 903485 bytes docs/paper/index.rst | 20 + docs/paper/ipsi-calc.py | 419 + .../tutorials}/AWS_batch_setup.md | 0 {tutorials => docs/tutorials}/Batch.ipynb | 0 {tutorials => docs/tutorials}/Overview.ipynb | 58 +- .../tutorials}/Presentation.ipynb | 97 +- .../tutorials}/QA_Tractography_Tutorial.ipynb | 155 +- .../tutorials}/Qa_skullstrip.ipynb | 44 +- ...graphy_Directional_Field_QA_Tutorial.ipynb | 106 +- .../Tutorial_For_QA_Registration.ipynb | 169 +- .../tutorials}/Tutorial_of_QA_for_FAST.ipynb | 77 +- docs/usage.rst | 202 + m2g/__init__.py | 4 +- m2g/functional/m2g_func.py | 3 +- m2g/preproc.py | 2 - m2g/register.py | 3 - m2g/utils/gen_utils.py | 1 - m2g/utils/reg_utils.py | 4 - requirements.txt | 53 +- setup.cfg | 61 + setup.py | 86 +- 60 files changed, 46796 insertions(+), 1042 deletions(-) delete mode 100644 docs/Makefile delete mode 100644 docs/README.md create mode 100644 docs/_static/diff_mapped_atlas.png create mode 100644 docs/_static/diff_skullstip.png create mode 100644 docs/_static/func_motion_plot.png create mode 100644 docs/_static/func_skullstrip.png create mode 100644 docs/_static/m2g_pipeline.png create mode 100644 
docs/_static/qa-d/connectome.png create mode 100644 docs/_static/qa-d/skullstrip.png create mode 100644 docs/_static/qa-d/tractography.png create mode 100644 docs/_static/qa-f/func_motion_plot.png create mode 100644 docs/_static/qa-f/func_skullstrip.png create mode 100644 docs/docker.rst create mode 100644 docs/links.rst delete mode 100644 docs/make.bat create mode 100644 docs/paper/data/diffusion_disc.csv create mode 100644 docs/paper/data/discrim_results.ods create mode 100644 docs/paper/data/dwi_ipsi_aal.csv create mode 100644 docs/paper/data/dwi_ipsi_dkt.csv create mode 100644 docs/paper/data/dwi_ipsi_hammer.csv create mode 100644 docs/paper/data/func_ipsi_aal.csv create mode 100644 docs/paper/data/func_ipsi_dkt.csv create mode 100644 docs/paper/data/func_ipsi_hammer.csv create mode 100644 docs/paper/data/functional_disc.csv create mode 100644 docs/paper/discrim.py create mode 100644 docs/paper/discrim_runner.py create mode 100644 docs/paper/figure2.ipynb create mode 100644 docs/paper/figure3.ipynb create mode 100644 docs/paper/figure4.ipynb create mode 100644 docs/paper/figures/figure3.pdf create mode 100644 docs/paper/figures/figure3.png create mode 100644 docs/paper/figures/figure4.pdf create mode 100644 docs/paper/figures/figure4.png create mode 100644 docs/paper/index.rst create mode 100644 docs/paper/ipsi-calc.py rename {tutorials => docs/tutorials}/AWS_batch_setup.md (100%) rename {tutorials => docs/tutorials}/Batch.ipynb (100%) rename {tutorials => docs/tutorials}/Overview.ipynb (92%) rename {tutorials => docs/tutorials}/Presentation.ipynb (97%) rename {tutorials => docs/tutorials}/QA_Tractography_Tutorial.ipynb (99%) rename {tutorials => docs/tutorials}/Qa_skullstrip.ipynb (99%) rename {tutorials => docs/tutorials}/Tractography_Directional_Field_QA_Tutorial.ipynb (99%) rename {tutorials => docs/tutorials}/Tutorial_For_QA_Registration.ipynb (99%) rename {tutorials => docs/tutorials}/Tutorial_of_QA_for_FAST.ipynb (98%) create mode 100644 docs/usage.rst create mode 100644 setup.cfg diff --git a/README.md b/README.md index 8cb547e1b..c61e22798 100644 --- a/README.md +++ b/README.md @@ -11,29 +11,34 @@ NeuroData's MR Graphs package, **m2g**, is a turn-key pipeline which uses struct # Contents - [Overview](#overview) +- [Documentation](#documentation) - [System Requirements](#system-requirements) - [Installation Guide](#installation-guide) -- [Docker](#docker) -- [Tutorial](#tutorial) -- [Outputs](#outputs) - [Usage](#usage) -- [Working with S3 Datasets](#Working-with-S3-Datasets) -- [Example Datasets](#example-datasets) -- [Documentation](#documentation) - [License](#license) -- [Manuscript Reproduction](#manuscript-reproduction) - [Issues](#issues) +- [Citing `m2g`](#citing-m2g) # Overview The **m2g** pipeline has been developed as a beginner-friendly solution for human connectome estimation by providing robust and reliable estimates of connectivity across a wide range of datasets. The pipelines are explained and derivatives analyzed in our pre-print, available on [bioRxiv](https://www.biorxiv.org/content/10.1101/2021.11.01.466686v1.full). +# Documentation + +Check out some [resources](http://m2g.io) on our website, or our [function reference](https://ndmg.neurodata.io/) for more information about **m2g**. + # System Requirements +## Hardware Requirements + +The **m2g** pipeline requires only a standard computer with enough RAM (< 16 GB).
+ +## Software Requirements + The **m2g** pipeline: - was developed and tested primarily on Mac OS (10, 11), Ubuntu (16, 18, 20), and CentOS (5, 6); -- made to work on Python 3.7; +- made to work on Python 3.8; - is wrapped in a [Docker container](https://hub.docker.com/r/neurodata/m2g); - has install instructions via a Dockerfile; - requires no non-standard hardware to run; @@ -42,386 +47,24 @@ The **m2g** pipeline: - For binaries required to install AFNI, FSL, INDI, ICA_AROMA, see the [Dockerfile](Dockerfile) - takes approximately 1 core, < 16 GB of RAM, and 1-2 hours to run for most datasets (varies based on data). -# Demo - -To install and run a tutorial of the latest Docker image of m2g, pull the Docker image from Docker Hub using the following command. Then enter it using `docker run`: - -``` -docker pull neurodata/m2g:latest -docker run -ti --entrypoint /bin/bash neurodata/m2g:latest -``` - -Once inside the Docker container, download a tutorial dataset of fMRI and diffusion MRI data from the `open-neurodata` AWS S3 bucket to the `/input` directory in your container (make sure you are connected to the internet): - -``` -aws s3 sync --no-sign-request s3://open-neurodata/m2g/TUTORIAL /input -``` - -Now you can run the `m2g` pipeline for both the functional and diffusion MRI data using the command below. The number of `seeds` is intentionally set lower than recommended, along with a larger than recommended `voxelsize`, for a faster run time (approximately 25 minutes). For more information on what these input arguments represent, see the Tutorial section below. - -``` -m2g --participant_label 0025864 --session_label 1 --parcellation AAL_ --pipeline both --seeds 1 --voxelsize 4mm /input /output ``` - -Once the pipeline is done running, the resulting outputs can be found in `/output/sub-0025864/ses-1/`; see the Outputs section below for a description of each file. - -# Installation Guide - -## Docker - -While you can install **m2g** from `pip` using the command `pip install m2g`, there are several dependencies needed for both **m2g** and **CPAC**, so it is highly recommended to use **m2g** through a Docker container: - -**m2g** is available through Docker Hub, and the most recent Docker image can be pulled using: - - docker pull neurodata/m2g:latest - -The image can then be used to create a container and run directly with the following command (and any additional options you may require for Docker, such as volume mounting): +# Installation Guide - docker run -ti --entrypoint /bin/bash neurodata/m2g:latest - -**m2g** Docker containers can also be built from m2g's Dockerfile. - - git clone https://github.com/neurodata/m2g.git - cd m2g - docker build -t m2g:uniquelabel . - -Where "uniquelabel" can be whatever you wish to call this Docker image (for example, m2g:latest). Additional information about building Docker images can be found [here](https://docs.docker.com/engine/reference/commandline/image_build/). -Creating the Docker image should take several minutes if this is the first time you have used this Dockerfile. -In order to create a Docker container from the Docker image and access it, use the following command to both create and enter the container: - - docker run -it --entrypoint /bin/bash m2g:uniquelabel - -## Local Installation [COMING SOON] - -We highly recommend the use of the Docker container provided above. - -Due to the versioning required for CPAC, along with `m2g-d`, we are currently working on streamlining the installation of `m2g`. Stay tuned for updates.
- -- Requires numerous system level dependencies with specified versions. As such, CPAC on its own runs on a docker container, and we recommend the usage +Instructions can be found within our documentation: https://docs.neurodata.io/m2g/tutorials/install.html # Usage -The **m2g** pipeline can be used to generate connectomes as a command-line utility on [BIDS datasets](http://bids.neuroimaging.io) with the following: - - m2g --pipeline /input/bids/dataset /output/directory - -Note that more options are available, which can be helpful if running on the Amazon cloud; these can be found by running `m2g -h`. - -## Docker Container Usage - -If running with the Docker container shown above, the `entrypoint` is already set to `m2g`, so the pipeline can be run directly from the host-system command line as follows: - - docker run -ti -v /path/to/local/data:/data neurodata/m2g /data/ /data/outputs - -This will run **m2g** on the local data and save the output files to the directory `/path/to/local/data/outputs`. Note that if you have created the Docker image from GitHub, replace `neurodata/m2g` with `imagename:uniquelabel`. - -Also note that currently, running `m2g` on a single BIDS-formatted dataset directory only runs a single scan. To run the entire dataset, we recommend parallelizing on a high-performance cluster or using `m2g`'s S3 integration. - -# Tutorial - -## Structural Connectome Pipeline (`m2g-d`) - -Once you have the pipeline up and running, you can run the structural connectome pipeline with: - - m2g --pipeline dwi - -We recommend specifying an atlas and lowering the default seed density on test runs (although, for real runs, we recommend using the default seeding -- lowering seeding simply decreases runtime): - - m2g --pipeline dwi --seeds 1 --parcellation Desikan +Instructions can be found within our documentation: https://docs.neurodata.io/m2g/tutorials/usage.html -You can set a particular scan and session as well (recommended for batch scripts): +# License - m2g --pipeline dwi --seeds 1 --parcellation Desikan --participant_label