
Merge pull request #8 from elodiegermani/main
Issues #1 and #2
elodiegermani authored Jun 17, 2022
2 parents 3e22425 + 8d67c8d commit 919d9ea
Showing 19 changed files with 3,004 additions and 177 deletions.
16 changes: 16 additions & 0 deletions .github/ISSUE_TEMPLATE/pipeline_reproduction.md
@@ -0,0 +1,16 @@
---
name: Pipeline reproduction
about: Reproduction of a pipeline that has not already been reproduced and is not currently being reproduced.
title: 'Pipeline reproduction ID:'
labels: "\U0001F6A6 status: awaiting triage"
assignees: ''

---

**Pipeline details**
- Original pipeline ID:
- Original pipeline software package:
- Type of input data: fMRIprep / raw data

**Additional context**
Add any other context about the problem here.
1 change: 0 additions & 1 deletion .gitignore
@@ -14,7 +14,6 @@ dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
2 changes: 2 additions & 0 deletions CONTRIBUTING.md
@@ -2,6 +2,8 @@

General guidelines can be found [here](https://docs.github.com/en/get-started/quickstart/contributing-to-projects).

## Reproduce a pipeline ⌨️

## Find or propose an issue 📋

Issues are very important for this project. If you want to contribute, you can either **comment on an existing issue** or **propose a new issue**.
71 changes: 65 additions & 6 deletions README.md
@@ -25,7 +25,7 @@ However, different analytical choices can cause variation in the results (Botvin

**The goal of the NARPS open pipeline project is to create a codebase reproducing the 70 pipelines of the NARPS project (Botvinik-Nezer et al., 2020) and share this as an open resource for the community**.

To perform the reproduction, we are lucky to be able to use the description provided by each team available here.
To perform the reproduction, we are lucky to be able to use the description provided by each team, available [here](https://github.com/poldrack/narps/blob/1.0.1/ImageAnalyses/metadata_files/analysis_pipelines_for_analysis.xlsx).

## To start

@@ -43,24 +43,83 @@ Instructions to download data are available [below](#download-data).

### Install Docker container

*Coming soon*
To use the notebooks and launch the pipelines, you need to install the [Nipype](https://nipype.readthedocs.io/en/latest/users/install.html) Python package, as well as the original software packages used in the pipelines (SPM, FSL, AFNI...).

To facilitate this step, we created a Docker container based on [Neurodocker](https://github.com/ReproNim/neurodocker) that contains the necessary Python packages and software packages. To install the Docker image, use the command below:
```
docker pull elodiegermani/open_pipeline:latest
```

The image will be downloaded automatically. Once the download is complete, you can check the images available on your system:
```
docker images
```

When the download is finished, start a container from the image using the command below:
```
docker run -ti -p 8888:8888 elodiegermani/open_pipeline
```

In this command, you need to mount volumes to make your local files (original dataset and Git repository) accessible inside the container. If you stored the original dataset in `data/original`, just mount the `narps_open_pipelines` directory:
```
docker run -ti -p 8888:8888 -v /users/egermani/Documents/narps_open_pipelines:/home/ elodiegermani/open_pipeline
```

If the dataset is stored in another directory, mount a second volume with the path to it:
```
docker run -ti -p 8888:8888 -v /Users/egermani/Documents/narps_open_pipelines:/home/ -v /Users/egermani/Documents/data/NARPS/:/data/ elodiegermani/open_pipeline
```

After that, your container will be launched!

#### Other commands that could be useful:
##### START THE CONTAINER
```docker start [name_of_the_container]```

##### VERIFY THE CONTAINER IS IN THE LIST
```docker ps```

##### EXECUTE BASH IN OR ATTACH TO YOUR CONTAINER
```docker exec -ti [name_of_the_container] bash```
OR
```docker attach [name_of_the_container]```

#### Useful commands inside the container:
##### ACTIVATE CONDA ENVIRONMENT
```source activate neuro```

##### LAUNCH JUPYTER NOTEBOOK
```jupyter notebook --port=8888 --no-browser --ip=0.0.0.0```

#### If you did not use your container for a while:
##### VERIFY IT IS STILL RUNNING:
```docker ps -l```
##### IF YOUR DOCKER CONTAINER IS IN THE LIST, RUN:
```docker start [name_of_the_container]```
##### ELSE, RERUN IT WITH:
```docker run -ti -p 8888:8888 -v /home/egermani:/home [name_of_the_image]```

#### To use SPM inside the container, add these lines at the beginning of your script:
```
from nipype.interfaces import spm
# Use the SPM12 standalone version through the MATLAB Compiler Runtime shipped in the container
matlab_cmd = '/opt/spm12-r7771/run_spm12.sh /opt/matlabmcr-2010a/v713/ script'
spm.SPMCommand.set_mlab_paths(matlab_cmd=matlab_cmd, use_mcr=True)
```
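
As an optional sanity check (not part of the original instructions), you can then verify that Nipype finds SPM and configure a simple interface; the input file path below is purely hypothetical:
```
from nipype.interfaces import spm

# Assumes set_mlab_paths() has been called as shown above
print(spm.SPMCommand().version)  # should print the SPM12 standalone version

# Hypothetical example: configure an 8 mm smoothing step
smooth = spm.Smooth()
smooth.inputs.in_files = '/data/sub-001_task-MGT_run-01_bold.nii'  # replace with a real image from the mounted /data volume
smooth.inputs.fwhm = [8, 8, 8]
# smooth.run()  # uncomment to actually run SPM inside the container
```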

### Download data

#### Original dataset

File containing pipeline description is available in `/data/original`.

The dataset used for the `/src/reproduction_*.ipynb` notebooks can be downloaded [**here**](https://openneuro.org/datasets/ds001734/versions/1.0.5).

The data must be stored in a directory inside the `data/original` directory.
I recommend storing the data in a directory inside the `data/original` directory.

I recommend downloading it with **Datalad**. If you want to use it, install [**Datalad**](http://handbook.datalad.org/en/latest/intro/installation.html#install), go to the `data/original` directory and run `datalad install ///openneuro/ds001734`.
Afterwards, you can download all the files by running `datalad get ./*`; if you only want part of the data, replace the `*` with the paths to the desired files.
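
Putting these steps together, here is a minimal sketch of the workflow (assuming the dataset installs into a `ds001734` subdirectory; the single-subject path at the end is only a hypothetical example):
```
# From the root of the repository
cd data/original
datalad install ///openneuro/ds001734
cd ds001734
# Download the whole dataset...
datalad get ./*
# ...or only the files you need (hypothetical example: one subject)
datalad get ./sub-001
```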

#### Derived data

Derived data such as original stat maps from teams and reproduced stat maps can be downloaded from [NeuroVault](www.neurovault.org) (Gorgolewski & al, 2015).
Derived data such as original stat maps from teams and reproduced stat maps can be downloaded from [NeuroVault](https://www.neurovault.org) (Gorgolewski & al, 2015).

*Coming soon*

Binary file removed data/original/analysis_pipelines_for_analysis.xlsx
Binary file not shown.
