diff --git a/.gitignore b/.gitignore index bcf9e07a..5b1b525d 100644 --- a/.gitignore +++ b/.gitignore @@ -139,3 +139,6 @@ dmypy.json # Pyre type checker .pyre/ *pyscript* + +# For mac users +*.DS_Store diff --git a/CONTENT.md b/CONTENT.md new file mode 100644 index 00000000..5fbeed5a --- /dev/null +++ b/CONTENT.md @@ -0,0 +1,7 @@ +### Contents overview + +- :snake: :package: `narps_open/` contains the Python package with all the pipelines logic. +- :brain: `data/` contains data that is used by the pipelines, as well as the (intermediate or final) results data. Instructions to download data are available in [INSTALL.md](/INSTALL.md#data-download-instructions). +- :blue_book: `docs/` contains the documentation for the project. Start browsing it with the entry point [docs/README.md](/docs/README.md) +- :orange_book: `examples/` contains notebooks examples to launch of the reproduced pipelines. +- :microscope: `tests/` contains the tests of the narps_open package. \ No newline at end of file diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 1ddbda84..7429acdd 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,15 +1,19 @@ # How to contribute to NARPS Open Pipelines ? -General guidelines can be found [here](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) in the GitHub documentation. +For the reproductions, we are especially looking for contributors with the following profiles: + - 👩‍🎤 SPM, FSL, AFNI or nistats has no secrets for you? You know this fMRI analysis software by heart 💓. Please help us by reproducing the corresponding NARPS pipelines. 👣 after step 1, follow the fMRI expert trail. + - 🧑‍🎤 You are a nipype guru? 👣 after step 1, follow the nipype expert trail. -## Reproduce a pipeline :keyboard: -:thinking: Not sure which one to start with ? You can have a look on [this table](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) giving the work progress status for each pipeline. 
This will help choosing the one that best suits you! +# Step 1: Choose a pipeline to reproduce :keyboard: +:thinking: Not sure which pipeline to start with? 🚦The [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) provides the progress status for each pipeline. You can pick any pipeline that is in red (not started). -Need more information ? You can have a look to the pipeline description [here](https://docs.google.com/spreadsheets/d/1FU_F6kdxOD4PRQDIHXGHS4zTi_jEVaUqY_Zwg0z6S64/edit?usp=sharing). Also feel free to use the `narps_open.utils.description` module of the project, as described [in the documentation](/docs/description.md). +Need more information to make a decision? The `narps_open.utils.description` module of the project, as described [in the documentation](/docs/description.md), provides easy access to all the information we have on each pipeline. When you are ready, [start an issue](https://github.com/Inria-Empenn/narps_open_pipelines/issues/new/choose) and choose **Pipeline reproduction**! -### If you have experience with NiPype +# Step 2: Reproduction + +## 🧑‍🎤 NiPype trail We created templates with modifications to make and holes to fill to create a pipeline. You can find them in [`narps_open/pipelines/templates`](/narps_open/pipelines/templates). @@ -21,9 +25,9 @@ Feel free to have a look to the following pipelines, these are examples : | 2T6S | SPM | Yes | [/narps_open/pipelines/team_2T6S.py](/narps_open/pipelines/team_2T6S.py) | | X19V | FSL | Yes | [/narps_open/pipelines/team_X19V.py](/narps_open/pipelines/team_X19V.py) | -### If you have experience with the original software package but not with NiPype +## 👩‍🎤 fMRI software trail -A fantastic tool named [Giraffe](https://giraffe.tools/porcupine/TimVanMourik/GiraffePlayground/master) is available. It allows you to create a graph of your pipeline using NiPype functions but without coding!
Just save your NiPype script in a .py file and send it as a new issue, we will convert this script to a script which works with our specific parameters. +... ## Find or propose an issue :clipboard: Issues are very important for this project. If you want to contribute, you can either **comment an existing issue** or **propose a new issue**. @@ -64,3 +68,7 @@ Once your PR is ready, you may add a reviewer to your PR, as described [here](ht Please turn your Draft Pull Request into a "regular" Pull Request, by clicking **Ready for review** in the Pull Request page. **:wave: Thank you in advance for contributing to the project!** + +## Additional resources + + - git and GitHub: general guidelines can be found [here](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) in the GitHub documentation. diff --git a/README.md b/README.md index 20125d83..c042855d 100644 --- a/README.md +++ b/README.md @@ -15,45 +15,26 @@

-## Table of contents - -- [Project presentation](#project-presentation) -- [Getting Started](#getting-started) - - [Contents overview](#contents-overview) - - [Installation](#installation) - - [Contributing](#contributing) -- [References](#references) -- [Funding](#funding) - ## Project presentation -Neuroimaging workflows are highly flexible, leaving researchers with multiple possible options to analyze a dataset [(Carp, 2012)](https://www.frontiersin.org/articles/10.3389/fnins.2012.00149/full). -However, different analytical choices can cause variation in the results [(Botvinik-Nezer et al., 2020)](https://www.nature.com/articles/s41586-020-2314-9), leading to what was called a "vibration of effects" [(Ioannidis, 2008)](https://pubmed.ncbi.nlm.nih.gov/18633328/) also known as analytical variability. +**The goal of the NARPS Open Pipelines project is to create a codebase reproducing the 70 pipelines of the NARPS study (Botvinik-Nezer et al., 2020) and share this as an open resource for the community**. -**The goal of the NARPS Open Pipelines project is to create a codebase reproducing the 70 pipelines of the NARPS project (Botvinik-Nezer et al., 2020) and share this as an open resource for the community**. +We base our reproductions on the [original descriptions provided by the teams](https://github.com/poldrack/narps/blob/1.0.1/ImageAnalyses/metadata_files/analysis_pipelines_for_analysis.xlsx) and test the quality of the reproductions by comparing our results with the original results published on NeuroVault. -To perform the reproduction, we are lucky to be able to use the [descriptions provided by the teams](https://github.com/poldrack/narps/blob/1.0.1/ImageAnalyses/metadata_files/analysis_pipelines_for_analysis.xlsx). 
-We also created a [shared spreadsheet](https://docs.google.com/spreadsheets/d/1FU_F6kdxOD4PRQDIHXGHS4zTi_jEVaUqY_Zwg0z6S64/edit?usp=sharing) that can be used to add comments on pipelines (e.g.: identify the ones that are not reproducible with NiPype). +:vertical_traffic_light: See [the pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) to view our current progress at a glance. -:vertical_traffic_light: Lastly, please find [here in the project's wiki](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) a dashboard to see pipelines work progresses at first glance. +## Contributing -## Getting Started +NARPS open pipelines uses [nipype](https://nipype.readthedocs.io/en/latest/index.html) as a workflow manager and provides a series of templates and examples to help reproduce the different teams’ analysis. -### Contents overview - -- :snake: :package: `narps_open/` contains the Python package with all the pipelines logic. -- :brain: `data/` contains data that is used by the pipelines, as well as the (intermediate or final) results data. Instructions to download data are available in [INSTALL.md](/INSTALL.md#data-download-instructions). -- :blue_book: `docs/` contains the documentation for the project. Start browsing it with the entry point [docs/README.md](/docs/README.md) -- :orange_book: `examples/` contains notebooks examples to launch of the reproduced pipelines. -- :microscope: `tests/` contains the tests of the narps_open package. +There are many ways you can contribute 🤗 :wave: Any help is welcome ! Follow the guidelines in [CONTRIBUTING.md](/CONTRIBUTING.md) if you wish to get involved ! ### Installation To get the pipelines running, please follow the installation steps in [INSTALL.md](/INSTALL.md) -### Contributing - -:wave: Any help is welcome ! Follow the guidelines in [CONTRIBUTING.md](/CONTRIBUTING.md) if you wish to get involved ! 
+## Getting started +If you are interested in using the codebase to run the pipelines, see the [user documentation (work-in-progress)]. ## References @@ -64,7 +45,7 @@ To get the pipelines running, please follow the installation steps in [INSTALL.m ## Funding -This project is supported by Région Bretagne (Boost MIND). +This project is supported by Région Bretagne (Boost MIND) and by Inria (Exploratory action GRASP). ## Credits diff --git a/docs/README.md b/docs/README.md index 8c4fd662..f9f6d193 100644 --- a/docs/README.md +++ b/docs/README.md @@ -11,4 +11,5 @@ Here are the available topics : * :microscope: [testing](/docs/testing.md) details the testing features of the project, i.e.: how is the code tested ? * :package: [ci-cd](/docs/ci-cd.md) contains the information on how continuous integration and delivery (known as CI/CD) is set up. * :writing_hand: [pipeline](/docs/pipelines.md) tells you all you need to know in order to write pipelines +* :compass: [core](/docs/core.md) gives a list of helpful functions to use when writing pipelines * :vertical_traffic_light: [status](/docs/status.md) contains the information on how to get the work progress status for a pipeline. diff --git a/docs/core.md b/docs/core.md new file mode 100644 index 00000000..2ea8e536 --- /dev/null +++ b/docs/core.md @@ -0,0 +1,117 @@ +# Core functions you can use to write pipelines + +Here are a few functions that could be useful for creating a pipeline with Nipype. These functions are meant to stay as unitary as possible. + +These are intended to be inserted in a nipype.Workflow inside a [nipype.Function](https://nipype.readthedocs.io/en/latest/api/generated/nipype.interfaces.utility.wrappers.html#function) interface, or for some of them (see associated docstring) as part of a [nipype.Workflow.connect](https://nipype.readthedocs.io/en/latest/api/generated/nipype.pipeline.engine.workflows.html#nipype.pipeline.engine.workflows.Workflow.connect) method.
+ +In the following example, we use the `list_intersection` function of `narps_open.core.common`, in both of the mentioned cases. + +```python +from nipype import Node, Function, Workflow +from narps_open.core.common import list_intersection + +# First case : a Function Node +intersection_node = Node(Function( + function = list_intersection, + input_names = ['list_1', 'list_2'], + output_names = ['output'] + ), name = 'intersection_node') +intersection_node.inputs.list_1 = ['001', '002', '003', '004'] +intersection_node.inputs.list_2 = ['002', '004', '005'] +print(intersection_node.run().outputs.output) # ['002', '004'] + +# Second case : inside a connect node +# We assume that there is a node_0 returning ['001', '002', '003', '004'] as `output` value +test_workflow = Workflow( + base_dir = '/path/to/base/dir', + name = 'test_workflow' + ) +test_workflow.connect([ + # node_1 will receive the evaluation of : + # list_intersection(['001', '002', '003', '004'], ['002', '004', '005']) + # as in_value + (node_0, node_1, [(('output', list_intersection, ['002', '004', '005']), 'in_value')]) + ]) +test_workflow.run() +``` + +> [!TIP] +> Use a [nipype.MapNode](https://nipype.readthedocs.io/en/latest/api/generated/nipype.pipeline.engine.nodes.html#nipype.pipeline.engine.nodes.MapNode) to run these functions on lists instead of unitary contents. E.g.: the `remove_file` function of `narps_open.core.common` only removes one file at a time, but feel free to pass a list of files using a `nipype.MapNode`. + +```python +from nipype import MapNode, Function +from narps_open.core.common import remove_file + +# Create the MapNode so that the `remove_file` function handles lists of files +remove_files_node = MapNode(Function( + function = remove_file, + input_names = ['_', 'file_name'], + output_names = [] + ), name = 'remove_files_node', iterfield = ['file_name']) + +# ... 
A couple of lines later, in the Workflow definition +test_workflow = Workflow(base_dir = '/path/to/base/dir', name = 'test_workflow') +test_workflow.connect([ + # ... + # Here we assume the select_node's output `out_files` is a list of files + (select_node, remove_files_node, [('out_files', 'file_name')]) + # ... + ]) +``` + +## narps_open.core.common + +This module contains a set of functions that nearly every pipeline could use. + +* `remove_file` : remove a file when it is not needed anymore (to save disk space) + +```python +from narps_open.core.common import remove_file + +# Remove the /path/to/the/image.nii.gz file +remove_file('/path/to/the/image.nii.gz') +``` + +* `elements_in_string` : return the first input parameter if it contains one element of the second parameter (None otherwise). + +```python +from narps_open.core.common import elements_in_string + +# Here we test if the file 'sub-001_file.nii.gz' belongs to a group of subjects. +elements_in_string('sub-001_file.nii.gz', ['005', '006', '007']) # Returns None +elements_in_string('sub-001_file.nii.gz', ['001', '002', '003']) # Returns 'sub-001_file.nii.gz' +``` + +> [!TIP] +> This can be generalised to a group of files, using a `nipype.MapNode`! + +* `clean_list` : remove from the first input parameter (a list) all elements equal to the second parameter. + +```python +from narps_open.core.common import clean_list + +# Here we remove subject 002 from a group of subjects. +clean_list(['002', '005', '006', '007'], '002') +``` + +* `list_intersection` : return the intersection of two lists. + +```python +from narps_open.core.common import list_intersection + +# Here we keep only subjects that are in the equalRange group and selected for the analysis.
+equal_range_group = ['002', '004', '006', '008'] +selected_for_analysis = ['002', '006', '010'] +list_intersection(equal_range_group, selected_for_analysis) # Returns ['002', '006'] +``` + +## narps_open.core.image + +This module contains a set of functions dedicated to computations on images. + + * `get_voxel_dimensions` : returns the voxel dimensions of an image + +```python +from narps_open.core.image import get_voxel_dimensions + +# Get dimensions of voxels along x, y, and z in mm (returns e.g.: [1.0, 1.0, 1.0]). +get_voxel_dimensions('/path/to/the/image.nii.gz') +``` diff --git a/docs/status.md b/docs/status.md index 7fde8239..28492390 100644 --- a/docs/status.md +++ b/docs/status.md @@ -1,10 +1,18 @@ # Access the work progress status of pipelines The class `PipelineStatusReport` of module `narps_open.utils.status` creates a report containing the following information for each pipeline: +* a work progress status : `idle`, `progress`, or `done`; * the software it uses (collected from the `categorized_for_analysis.analysis_SW` of the [team description](/docs/description.md)) ; * whether it uses data from fMRIprep or not ; * a list of issues related to it (the opened issues of the project that have the team ID inside their title or description) ; -* a work progress status : `idle`, `progress`, or `done`. +* a list of pull requests related to it (the opened pull requests of the project that have the team ID inside their title or description) ; +* whether it was excluded from the original NARPS analysis ; +* a reproducibility rating : + * default score is 4; + * -1 if the team did not use fmriprep data; + * -1 if the team used several pieces of software (e.g.: FSL and AFNI); + * -1 if the team used custom or marginal software (i.e.: something else than SPM, FSL, AFNI or nistats); + * -1 if the team did not provide its source code. This allows contributors to best select the pipeline they want/need to contribute to.
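For illustration, the rating rules above can be sketched as a small scoring function. This is only an editor's sketch: the function name, its parameters, and the set of mainstream packages are assumptions made for the example, not the project's actual implementation.

```python
def reproducibility_rating(used_fmriprep: bool, software: list, shared_code: bool) -> int:
    """ Illustrative sketch of the reproducibility rating rules described above. """
    mainstream = {'SPM', 'FSL', 'AFNI', 'nistats'}
    score = 4  # default score
    if not used_fmriprep:
        score -= 1  # the team did not use fmriprep data
    if len(software) > 1:
        score -= 1  # several pieces of software (e.g.: FSL and AFNI)
    if any(package not in mainstream for package in software):
        score -= 1  # custom or marginal software
    if not shared_code:
        score -= 1  # the team did not provide its source code
    return score

# E.g.: fmriprep data was used, but two pieces of software and no shared code
print(reproducibility_rating(True, ['FSL', 'AFNI'], False))  # 2
```

Note that the report generation in `narps_open/utils/status.py` (further down in this diff) does not compute the rating this way: it reads the pre-computed value from the `reproducibility` column of the comments TSV file.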
For this purpose, the GitHub Actions workflow [`.github/workflows/pipeline_status.yml`](/.github/workflows/pipeline_status.yml) dynamically generates the report and stores it in the [project's wiki](https://github.com/Inria-Empenn/narps_open_pipelines/wiki). @@ -55,22 +63,31 @@ python narps_open/utils/status --json # "softwares": "FSL", # "fmriprep": "No", # "issues": {}, -# "status": "idle" +# "excluded": "No", +# "reproducibility": 3, +# "reproducibility_comment": "", +# "pulls": {}, +# "status": "2-idle" # }, # "0C7Q": { # "softwares": "FSL, AFNI", # "fmriprep": "Yes", # "issues": {}, +# "excluded": "No", +# "reproducibility": 3, +# "reproducibility_comment": "", +# "pulls": {}, -# "status": "idle" +# "status": "2-idle" # }, # ... python narps_open/utils/status --md -# | team_id | status | softwares used | fmriprep used ? | related issues | -# | --- |:---:| --- | --- | --- | -# | 08MQ | :red_circle: | FSL | No | | -# | 0C7Q | :red_circle: | FSL, AFNI | Yes | | -# | 0ED6 | :red_circle: | SPM | No | | -# | 0H5E | :red_circle: | SPM | No | | +# ... +# | team_id | status | main software | fmriprep used ? | related issues | related pull requests | excluded from NARPS analysis | reproducibility | +# | --- |:---:| --- | --- | --- | --- | --- | --- | +# | Q6O0 | :green_circle: | SPM | Yes | | | No | :star::star::star::black_small_square:
<br /> | +# | UK24 | :orange_circle: | SPM | No | [2](url_issue_2), | | No | :star::star::black_small_square::black_small_square:<br />
| # ... ``` diff --git a/narps_open/core/__init__.py b/narps_open/core/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/narps_open/core/common.py b/narps_open/core/common.py new file mode 100644 index 00000000..e40d4e9a --- /dev/null +++ b/narps_open/core/common.py @@ -0,0 +1,65 @@ +#!/usr/bin/python +# coding: utf-8 + +""" Common functions to write pipelines """ + +def remove_file(_, file_name: str) -> None: + """ + Fully remove files generated by a Node, once they aren't needed anymore. + This function is meant to be used in a Nipype Function Node. + + Parameters: + - _: input only used for triggering the Node + - file_name: str, a single absolute filename of the file to remove + """ + # This import must stay inside the function, as required by Nipype + from os import remove + + try: + remove(file_name) + except OSError as error: + print(error) + +def elements_in_string(input_str: str, elements: list) -> str: #| None: + """ + Return input_str if it contains one element of the elements list. + Return None otherwise. + This function is meant to be used in a Nipype Function Node. + + Parameters: + - input_str: str + - elements: list of str, elements to be searched in input_str + """ + if any(e in input_str for e in elements): + return input_str + return None + +def clean_list(input_list: list, element = None) -> list: + """ + Remove elements of input_list that are equal to element and return the resultant list. + This function is meant to be used in a Nipype Function Node. It can be used inside a + nipype.Workflow.connect call as well. + + Parameters: + - input_list: list + - element: any + + Returns: + - input_list with elements equal to element removed + """ + return [f for f in input_list if f != element] + +def list_intersection(list_1: list, list_2: list) -> list: + """ + Returns the intersection of two lists. + This function is meant to be used in a Nipype Function Node. It can be used inside a + nipype.Workflow.connect call as well. 
+ + Parameters: + - list_1: list + - list_2: list + + Returns: + - list, the intersection of list_1 and list_2 + """ + return [e for e in list_1 if e in list_2] diff --git a/narps_open/core/image.py b/narps_open/core/image.py new file mode 100644 index 00000000..35323e45 --- /dev/null +++ b/narps_open/core/image.py @@ -0,0 +1,25 @@ +#!/usr/bin/python +# coding: utf-8 + +""" Image functions to write pipelines """ + +def get_voxel_dimensions(image: str) -> list: + """ + Return the voxel dimensions of an image in millimeters. + + Arguments: + image: str, string that represents an absolute path to a Nifti image. + + Returns: + list, size of the voxels in the image in millimeters. + """ + # This import must stay inside the function, as required by Nipype + from nibabel import load + + voxel_dimensions = load(image).header.get_zooms() + + return [ + float(voxel_dimensions[0]), + float(voxel_dimensions[1]), + float(voxel_dimensions[2]) + ] diff --git a/narps_open/data/description/analysis_pipelines_comments.tsv b/narps_open/data/description/analysis_pipelines_comments.tsv index 93cd4f24..5ebf2405 100644 --- a/narps_open/data/description/analysis_pipelines_comments.tsv +++ b/narps_open/data/description/analysis_pipelines_comments.tsv @@ -1,71 +1,71 @@ teamID excluded_from_narps_analysis exclusion_comment reproducibility reproducibility_comment -50GV no N/A ? Uses custom software (Denoiser) -9Q6R no N/A -O21U no N/A -U26C no N/A -43FJ no N/A -C88N no N/A -4TQ6 yes Resampled image offset and too large compared to template. -T54A no N/A -2T6S no N/A -L7J7 no N/A -0JO0 no N/A -X1Y5 no N/A -51PW no N/A -6VV2 no N/A -O6R6 no N/A -C22U no N/A ? Custom Matlab script for white matter PCA confounds -3PQ2 no N/A -UK24 no N/A -4SZ2 yes Resampled image offset from template brain. -9T8E no N/A -94GU no N/A ? Multiple software dependencies : SPM + ART + TAPAS + Matlab. -I52Y no N/A -5G9K no N/A ? ? -2T7P yes Missing thresholded images. ? ?
-UI76 no N/A -B5I6 no N/A -V55J yes Bad histogram : very small values. -X19V no N/A -0C7Q yes Appears to be a p-value distribution, with slight excursions below and above zero. -R5K7 no N/A -0I4U no N/A -3C6G no N/A -R9K3 no N/A -O03M no N/A -08MQ no N/A -80GC no N/A -J7F9 no N/A -R7D1 no N/A -Q58J yes Bad histogram : bimodal, zero-inflated with a second distribution centered around 5. -L3V8 yes Rejected due to large amount of missing brain in center. -SM54 no N/A -1KB2 no N/A -0H5E yes Rejected due to large amount of missing brain in center. -P5F3 yes Rejected due to large amounts of missing data across brain. -Q6O0 no N/A -R42Q no N/A ? Uses fMRIflows, a custom software based on NiPype. -L9G5 no N/A -DC61 no N/A -E3B6 yes Bad histogram : very long tail, with substantial inflation at a value just below zero. -16IN no N/A ? Multiple software dependencies : matlab + SPM + FSL + R + TExPosition + neuroim -46CD no N/A -6FH5 yes Missing much of the central brain. -K9P0 no N/A -9U7M no N/A -VG39 no N/A -1K0E yes Used surface-based analysis, only provided data for cortical ribbon. ? ? -X1Z4 yes Used surface-based analysis, only provided data for cortical ribbon. ? Multiple software dependencies : FSL + fmriprep + ciftify + HCP workbench + Freesurfer + ANTs -I9D6 no N/A -E6R3 no N/A -27SS no N/A -B23O no N/A -AO86 no N/A -L1A8 yes Resampled image much smaller than template brain. ? ? -IZ20 no N/A -3TR7 no N/A -98BT yes Rejected due to very bad normalization. -XU70 no N/A ? Uses custom software : FSL + 4drealign -0ED6 no N/A ? ? -I07H yes Bad histogram : bimodal, with second distribution centered around 2.5. -1P0Y no N/A +50GV No N/A 3 Uses custom software (Denoiser) +9Q6R No N/A 2 +O21U No N/A 3 +U26C No N/A 4 Link to shared analysis code : https://github.com/gladomat/narps +43FJ No N/A 2 +C88N No N/A 3 +4TQ6 Yes Resampled image offset and too large compared to template. 
3 +T54A No N/A 3 +2T6S No N/A 3 +L7J7 No N/A 3 +0JO0 No N/A 3 +X1Y5 No N/A 2 +51PW No N/A 3 +6VV2 No N/A 2 +O6R6 No N/A 3 +C22U No N/A 1 Custom Matlab script for white matter PCA confounds +3PQ2 No N/A 2 +UK24 No N/A 2 +4SZ2 Yes Resampled image offset from template brain. 3 +9T8E No N/A 3 +94GU No N/A 1 Multiple software dependencies : SPM + ART + TAPAS + Matlab. +I52Y No N/A 2 +5G9K Yes Values in the unthresholded images are not z / t stats 3 +2T7P Yes Missing thresholded images. 2 Link to shared analysis code : https://osf.io/3b57r +UI76 No N/A 3 +B5I6 No N/A 3 +V55J Yes Bad histogram : very small values. 2 +X19V No N/A 3 +0C7Q Yes Appears to be a p-value distribution, with slight excursions below and above zero. 2 +R5K7 No N/A 2 +0I4U No N/A 2 +3C6G No N/A 2 +R9K3 No N/A 3 +O03M No N/A 3 +08MQ No N/A 2 +80GC No N/A 3 +J7F9 No N/A 3 +R7D1 No N/A 3 Link to shared analysis code : https://github.com/IMTAltiStudiLucca/NARPS_R7D1 +Q58J Yes Bad histogram : bimodal, zero-inflated with a second distribution centered around 5. 3 Link to shared analysis code : https://github.com/amrka/NARPS_Q58J +L3V8 Yes Rejected due to large amount of missing brain in center. 2 +SM54 No N/A 3 +1KB2 No N/A 2 +0H5E Yes Rejected due to large amount of missing brain in center. 2 +P5F3 Yes Rejected due to large amounts of missing data across brain. 2 +Q6O0 No N/A 3 +R42Q No N/A 2 Uses fMRIflows, a custom software based on NiPype. Code available here : https://github.com/ilkayisik/narps_R42Q +L9G5 No N/A 2 +DC61 No N/A 3 +E3B6 Yes Bad histogram : very long tail, with substantial inflation at a value just below zero. 4 Link to shared analysis code : doi.org/10.5281/zenodo.3518407 +16IN Yes Values in the unthresholded images are not z / t stats 2 Multiple software dependencies : matlab + SPM + FSL + R + TExPosition + neuroim. Link to shared analysis code : https://github.com/jennyrieck/NARPS +46CD No N/A 1 +6FH5 Yes Missing much of the central brain. 
2 +K9P0 No N/A 3 +9U7M No N/A 2 +VG39 Yes Performed small volume correction instead of whole-brain analysis 3 +1K0E Yes Used surface-based analysis, only provided data for cortical ribbon. 1 +X1Z4 Yes Used surface-based analysis, only provided data for cortical ribbon. 1 Multiple software dependencies : FSL + fmriprep + ciftify + HCP workbench + Freesurfer + ANTs +I9D6 No N/A 2 +E6R3 No N/A 2 +27SS No N/A 2 +B23O No N/A 3 +AO86 No N/A 2 +L1A8 Yes Not in MNI standard space. 2 +IZ20 No N/A 1 +3TR7 No N/A 3 +98BT Yes Rejected due to very bad normalization. 2 +XU70 No N/A 1 Uses custom software : FSL + 4drealign +0ED6 No N/A 2 +I07H Yes Bad histogram : bimodal, with second distribution centered around 2.5. 2 +1P0Y No N/A 2 diff --git a/narps_open/data/participants.py b/narps_open/data/participants.py index a9cc65a5..835e834f 100644 --- a/narps_open/data/participants.py +++ b/narps_open/data/participants.py @@ -49,3 +49,11 @@ def get_participants(team_id: str) -> list: def get_participants_subset(nb_participants: int = 108) -> list: """ Return a list of participants of length nb_participants """ return get_all_participants()[0:nb_participants] + +def get_group(group_name: str) -> list: + """ Return a list containing all the participants inside the group_name group + + Warning : the subject ids are returned as written in the participants file (i.e.: 'sub-*') + """ + participants = get_participants_information() + return participants.loc[participants['group'] == group_name]['participant_id'].values.tolist() diff --git a/narps_open/utils/status.py b/narps_open/utils/status.py index 6dced5bc..0058b40b 100644 --- a/narps_open/utils/status.py +++ b/narps_open/utils/status.py @@ -76,10 +76,18 @@ def generate(self): # Get software used in the pipeline, from the team description description = TeamDescription(team_id) - self.contents[team_id]['softwares'] = \ + self.contents[team_id]['software'] = \ description.categorized_for_analysis['analysis_SW']
self.contents[team_id]['fmriprep'] = description.preprocessing['used_fmriprep_data'] + # Get comments about the pipeline + self.contents[team_id]['excluded'] = \ + description.comments['excluded_from_narps_analysis'] + self.contents[team_id]['reproducibility'] = \ + int(description.comments['reproducibility']) + self.contents[team_id]['reproducibility_comment'] = \ + description.comments['reproducibility_comment'] + # Get issues and pull requests related to the team issues = {} pulls = {} @@ -109,10 +117,11 @@ def generate(self): else: self.contents[team_id]['status'] = '1-progress' - # Sort contents with the following priorities : 1-"status", 2-"softwares" and 3-"fmriprep" + # Sort contents with the following priorities : + # 1-"status", 2-"softwares", 3-"fmriprep" self.contents = OrderedDict(sorted( self.contents.items(), - key=lambda k: (k[1]['status'], k[1]['softwares'], k[1]['fmriprep']) + key=lambda k: (k[1]['status'], k[1]['software'], k[1]['fmriprep']) )) def markdown(self): @@ -124,14 +133,23 @@ def markdown(self): output_markdown += '
<br>:red_circle: not started yet\n' output_markdown += '<br>:orange_circle: in progress\n' output_markdown += '<br>:green_circle: completed\n' - output_markdown += '<br><br>The *softwares used* column gives a simplified version of ' output_markdown += 'what can be found in the team descriptions under the ' output_markdown += '`general.software` column.\n' + output_markdown += '<br><br>The *main software* column gives a simplified version of ' output_markdown += 'what can be found in the team descriptions under the ' output_markdown += '`general.software` column.\n' + output_markdown += '<br><br>
The *reproducibility* column rates the pipeline as follows:\n' + output_markdown += ' * default score is :star::star::star::star:;\n' + output_markdown += ' * -1 if the team did not use fmriprep data;\n' + output_markdown += ' * -1 if the team used several pieces of software ' + output_markdown += '(e.g.: FSL and AFNI);\n' + output_markdown += ' * -1 if the team used custom or marginal software ' + output_markdown += '(i.e.: something else than SPM, FSL, AFNI or nistats);\n' + output_markdown += ' * -1 if the team did not provided his source code.\n' # Start table - output_markdown += '| team_id | status | softwares used | fmriprep used ? |' - output_markdown += ' related issues | related pull requests |\n' - output_markdown += '| --- |:---:| --- | --- | --- | --- |\n' + output_markdown += '\n| team_id | status | main software | fmriprep used ? |' + output_markdown += ' related issues | related pull requests |' + output_markdown += ' excluded from NARPS analysis | reproducibility |\n' + output_markdown += '| --- |:---:| --- | --- | --- | --- | --- | --- |\n' # Add table contents for team_key, team_values in self.contents.items(): @@ -146,7 +164,7 @@ def markdown(self): status = ':red_circle:' output_markdown += f'| {status} ' - output_markdown += f'| {team_values["softwares"]} ' + output_markdown += f'| {team_values["software"]} ' output_markdown += f'| {team_values["fmriprep"]} ' issues = '' @@ -159,8 +177,15 @@ def markdown(self): for issue_number, issue_url in team_values['pulls'].items(): pulls += f'[{issue_number}]({issue_url}), ' - output_markdown += f'| {pulls} |\n' + output_markdown += f'| {pulls} ' + output_markdown += f'| {team_values["excluded"]} ' + reproducibility_ranking = '' + for _ in range(team_values['reproducibility']): + reproducibility_ranking += ':star:' + for _ in range(4-team_values['reproducibility']): + reproducibility_ranking += ':black_small_square:' + output_markdown += f'| {reproducibility_ranking}
{team_values["reproducibility_comment"]} |\n' return output_markdown diff --git a/tests/core/__init__.py b/tests/core/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/tests/core/test_common.py b/tests/core/test_common.py new file mode 100644 index 00000000..3e00fd1b --- /dev/null +++ b/tests/core/test_common.py @@ -0,0 +1,319 @@ +#!/usr/bin/python +# coding: utf-8 + +""" Tests of the 'narps_open.core.common' module. + +Launch this test with PyTest + +Usage: +====== + pytest -q test_common.py + pytest -q test_common.py -k +""" +from os import mkdir +from os.path import join, exists, abspath +from shutil import rmtree +from pathlib import Path + +from pytest import mark, fixture +from nipype import Node, Function, Workflow + +from narps_open.utils.configuration import Configuration +import narps_open.core.common as co + +TEMPORARY_DIR = join(Configuration()['directories']['test_runs'], 'test_common') + +@fixture +def remove_test_dir(): + """ A fixture to remove temporary directory created by tests """ + + rmtree(TEMPORARY_DIR, ignore_errors = True) + mkdir(TEMPORARY_DIR) + yield # test runs here + rmtree(TEMPORARY_DIR, ignore_errors = True) + +class TestCoreCommon: + """ A class that contains all the unit tests for the common module.""" + + @staticmethod + @mark.unit_test + def test_remove_file(remove_test_dir): + """ Test the remove_file function """ + + # Create a single file + test_file_path = abspath(join(TEMPORARY_DIR, 'file1.txt')) + Path(test_file_path).touch() + + # Check file exist + assert exists(test_file_path) + + # Create a Nipype Node using remove_files + test_remove_file_node = Node(Function( + function = co.remove_file, + input_names = ['_', 'file_name'], + output_names = [] + ), name = 'test_remove_file_node') + test_remove_file_node.inputs._ = '' + test_remove_file_node.inputs.file_name = test_file_path + test_remove_file_node.run() + + # Check file is removed + assert not exists(test_file_path) + + @staticmethod + 
@mark.unit_test + def test_node_elements_in_string(): + """ Test the elements_in_string function as a nipype.Node """ + + # Inputs + string = 'test_string' + elements_false = ['z', 'u', 'warning'] + elements_true = ['z', 'u', 'warning', '_'] + + # Create a Nipype Node using elements_in_string + test_node = Node(Function( + function = co.elements_in_string, + input_names = ['input_str', 'elements'], + output_names = ['output'] + ), name = 'test_node') + test_node.inputs.input_str = string + test_node.inputs.elements = elements_true + out = test_node.run().outputs.output + + # Check return value + assert out == string + + # Change input and check return value + test_node = Node(Function( + function = co.elements_in_string, + input_names = ['input_str', 'elements'], + output_names = ['output'] + ), name = 'test_node') + test_node.inputs.input_str = string + test_node.inputs.elements = elements_false + out = test_node.run().outputs.output + assert out is None + + @staticmethod + @mark.unit_test + def test_connect_elements_in_string(remove_test_dir): + """ Test the elements_in_string function as evaluated in a connect """ + + # Inputs + string = 'test_string' + elements_false = ['z', 'u', 'warning'] + elements_true = ['z', 'u', 'warning', '_'] + function = lambda in_value: in_value + + # Create Nodes + node_1 = Node(Function( + function = function, + input_names = ['in_value'], + output_names = ['out_value'] + ), name = 'node_1') + node_1.inputs.in_value = string + node_true = Node(Function( + function = function, + input_names = ['in_value'], + output_names = ['out_value'] + ), name = 'node_true') + node_false = Node(Function( + function = function, + input_names = ['in_value'], + output_names = ['out_value'] + ), name = 'node_false') + + # Create Workflow + test_workflow = Workflow( + base_dir = TEMPORARY_DIR, + name = 'test_workflow' + ) + test_workflow.connect([ + # elements_in_string is evaluated as part of the connection + (node_1, node_true, [( + ('out_value', 
co.elements_in_string, elements_true), 'in_value')]), + (node_1, node_false, [( + ('out_value', co.elements_in_string, elements_false), 'in_value')]) + ]) + + test_workflow.run() + + test_file_t = join(TEMPORARY_DIR, 'test_workflow', 'node_true', '_report', 'report.rst') + with open(test_file_t, 'r', encoding = 'utf-8') as file: + assert '* out_value : test_string' in file.read() + + test_file_f = join(TEMPORARY_DIR, 'test_workflow', 'node_false', '_report', 'report.rst') + with open(test_file_f, 'r', encoding = 'utf-8') as file: + assert '* out_value : None' in file.read() + + @staticmethod + @mark.unit_test + def test_node_clean_list(): + """ Test the clean_list function as a nipype.Node """ + + # Inputs + input_list = ['z', '_', 'u', 'warning', '_', None] + element_to_remove_1 = '_' + output_list_1 = ['z', 'u', 'warning', None] + element_to_remove_2 = None + output_list_2 = ['z', '_', 'u', 'warning', '_'] + + # Create a Nipype Node using clean_list + test_node = Node(Function( + function = co.clean_list, + input_names = ['input_list', 'element'], + output_names = ['output'] + ), name = 'test_node') + test_node.inputs.input_list = input_list + test_node.inputs.element = element_to_remove_1 + + # Check return value + assert test_node.run().outputs.output == output_list_1 + + # Change input and check return value + test_node = Node(Function( + function = co.clean_list, + input_names = ['input_list', 'element'], + output_names = ['output'] + ), name = 'test_node') + test_node.inputs.input_list = input_list + test_node.inputs.element = element_to_remove_2 + + assert test_node.run().outputs.output == output_list_2 + + @staticmethod + @mark.unit_test + def test_connect_clean_list(remove_test_dir): + """ Test the clean_list function as evaluated in a connect """ + + # Inputs + input_list = ['z', '_', 'u', 'warning', '_', None] + element_to_remove_1 = '_' + output_list_1 = ['z', 'u', 'warning', None] + element_to_remove_2 = None + output_list_2 = ['z', '_', 'u', 
'warning', '_'] + function = lambda in_value: in_value + + # Create Nodes + node_0 = Node(Function( + function = function, + input_names = ['in_value'], + output_names = ['out_value'] + ), name = 'node_0') + node_0.inputs.in_value = input_list + node_1 = Node(Function( + function = function, + input_names = ['in_value'], + output_names = ['out_value'] + ), name = 'node_1') + node_2 = Node(Function( + function = function, + input_names = ['in_value'], + output_names = ['out_value'] + ), name = 'node_2') + + # Create Workflow + test_workflow = Workflow( + base_dir = TEMPORARY_DIR, + name = 'test_workflow' + ) + test_workflow.connect([ + # clean_list is evaluated as part of the connection + (node_0, node_1, [(('out_value', co.clean_list, element_to_remove_1), 'in_value')]), + (node_0, node_2, [(('out_value', co.clean_list, element_to_remove_2), 'in_value')]) + ]) + test_workflow.run() + + test_file_1 = join(TEMPORARY_DIR, 'test_workflow', 'node_1', '_report', 'report.rst') + with open(test_file_1, 'r', encoding = 'utf-8') as file: + assert f'* out_value : {output_list_1}' in file.read() + + test_file_2 = join(TEMPORARY_DIR, 'test_workflow', 'node_2', '_report', 'report.rst') + with open(test_file_2, 'r', encoding = 'utf-8') as file: + assert f'* out_value : {output_list_2}' in file.read() + + @staticmethod + @mark.unit_test + def test_node_list_intersection(): + """ Test the list_intersection function as a nipype.Node """ + + # Inputs / outputs + input_list_1 = ['001', '002', '003', '004'] + input_list_2 = ['002', '004'] + input_list_3 = ['001', '003', '005'] + output_list_1 = ['002', '004'] + output_list_2 = ['001', '003'] + + # Create a Nipype Node using list_intersection + test_node = Node(Function( + function = co.list_intersection, + input_names = ['list_1', 'list_2'], + output_names = ['output'] + ), name = 'test_node') + test_node.inputs.list_1 = input_list_1 + test_node.inputs.list_2 = input_list_2 + + # Check return value + assert 
test_node.run().outputs.output == output_list_1 + + # Change input and check return value + test_node = Node(Function( + function = co.list_intersection, + input_names = ['list_1', 'list_2'], + output_names = ['output'] + ), name = 'test_node') + test_node.inputs.list_1 = input_list_1 + test_node.inputs.list_2 = input_list_3 + + assert test_node.run().outputs.output == output_list_2 + + @staticmethod + @mark.unit_test + def test_connect_list_intersection(remove_test_dir): + """ Test the list_intersection function as evaluated in a connect """ + + # Inputs / outputs + input_list_1 = ['001', '002', '003', '004'] + input_list_2 = ['002', '004'] + input_list_3 = ['001', '003', '005'] + output_list_1 = ['002', '004'] + output_list_2 = ['001', '003'] + function = lambda in_value: in_value + + # Create Nodes + node_0 = Node(Function( + function = function, + input_names = ['in_value'], + output_names = ['out_value'] + ), name = 'node_0') + node_0.inputs.in_value = input_list_1 + node_1 = Node(Function( + function = function, + input_names = ['in_value'], + output_names = ['out_value'] + ), name = 'node_1') + node_2 = Node(Function( + function = function, + input_names = ['in_value'], + output_names = ['out_value'] + ), name = 'node_2') + + # Create Workflow + test_workflow = Workflow( + base_dir = TEMPORARY_DIR, + name = 'test_workflow' + ) + test_workflow.connect([ + # list_intersection is evaluated as part of the connection + (node_0, node_1, [(('out_value', co.list_intersection, input_list_2), 'in_value')]), + (node_0, node_2, [(('out_value', co.list_intersection, input_list_3), 'in_value')]) + ]) + test_workflow.run() + + test_file_1 = join(TEMPORARY_DIR, 'test_workflow', 'node_1', '_report', 'report.rst') + with open(test_file_1, 'r', encoding = 'utf-8') as file: + assert f'* out_value : {output_list_1}' in file.read() + + test_file_2 = join(TEMPORARY_DIR, 'test_workflow', 'node_2', '_report', 'report.rst') + with open(test_file_2, 'r', encoding = 'utf-8') as file: 
+ assert f'* out_value : {output_list_2}' in file.read() diff --git a/tests/core/test_image.py b/tests/core/test_image.py new file mode 100644 index 00000000..d3b83ac5 --- /dev/null +++ b/tests/core/test_image.py @@ -0,0 +1,48 @@ +#!/usr/bin/python +# coding: utf-8 + +""" Tests of the 'narps_open.core.image' module. + +Launch this test with PyTest + +Usage: +====== + pytest -q test_image.py + pytest -q test_image.py -k +""" + +from os.path import abspath, join +from numpy import isclose + +from pytest import mark +from nipype import Node, Function + +from narps_open.utils.configuration import Configuration +import narps_open.core.image as im + +class TestCoreImage: + """ A class that contains all the unit tests for the image module.""" + + @staticmethod + @mark.unit_test + def test_get_voxel_dimensions(): + """ Test the get_voxel_dimensions function """ + + # Path to the test image + test_file_path = abspath(join( + Configuration()['directories']['test_data'], + 'core', + 'image', + 'test_image.nii.gz')) + + # Create a Nipype Node using get_voxel_dimensions + test_get_voxel_dimensions_node = Node(Function( + function = im.get_voxel_dimensions, + input_names = ['image'], + output_names = ['voxel_dimensions'] + ), name = 'test_get_voxel_dimensions_node') + test_get_voxel_dimensions_node.inputs.image = test_file_path + outputs = test_get_voxel_dimensions_node.run().outputs + + # Check voxel sizes + assert isclose(outputs.voxel_dimensions, [8.0, 8.0, 9.6]).all() diff --git a/tests/data/test_description.py b/tests/data/test_description.py index c66e23b3..9c8d633c 100644 --- a/tests/data/test_description.py +++ b/tests/data/test_description.py @@ -55,7 +55,7 @@ def test_arguments_properties(): assert description['analysis.RT_modeling'] == 'duration' assert description['categorized_for_analysis.analysis_SW_with_version'] == 'SPM12' assert description['derived.func_fwhm'] == '8' - assert description['comments.excluded_from_narps_analysis'] == 'no' + assert 
description['comments.excluded_from_narps_analysis'] == 'No' # 4 - Check properties assert isinstance(description.general, dict) @@ -84,7 +84,7 @@ def test_arguments_properties(): assert description.analysis['RT_modeling'] == 'duration' assert description.categorized_for_analysis['analysis_SW_with_version'] == 'SPM12' assert description.derived['func_fwhm'] == '8' - assert description.comments['excluded_from_narps_analysis'] == 'no' + assert description.comments['excluded_from_narps_analysis'] == 'No' # 6 - Test another team description = TeamDescription('9Q6R') diff --git a/tests/data/test_participants.py b/tests/data/test_participants.py index d30cd23e..f36f0a05 100644 --- a/tests/data/test_participants.py +++ b/tests/data/test_participants.py @@ -10,28 +10,46 @@ pytest -q test_participants.py pytest -q test_participants.py -k """ +from os.path import join -from pytest import mark +from pytest import mark, fixture import narps_open.data.participants as part +from narps_open.utils.configuration import Configuration + +@fixture +def mock_participants_data(mocker): + """ A fixture to provide mocked data from the test_data directory """ + + mocker.patch( + 'narps_open.data.participants.Configuration', + return_value = { + 'directories': { + 'dataset': join( + Configuration()['directories']['test_data'], + 'data', 'participants') + } + } + ) class TestParticipants: """ A class that contains all the unit tests for the participants module.""" @staticmethod @mark.unit_test - def test_get_participants_information(): + def test_get_participants_information(mock_participants_data): """ Test the get_participants_information function """ - """p_info = part.get_participants_information() - assert len(p_info) == 108 - assert p_info.at[5, 'participant_id'] == 'sub-006' - assert p_info.at[5, 'group'] == 'equalRange' - assert p_info.at[5, 'gender'] == 'M' - assert p_info.at[5, 'age'] == 30 - assert p_info.at[12, 'participant_id'] == 'sub-015' - assert p_info.at[12, 'group'] == 
'equalIndifference' - assert p_info.at[12, 'gender'] == 'F' - assert p_info.at[12, 'age'] == 26""" + + p_info = part.get_participants_information() + assert len(p_info) == 4 + assert p_info.at[1, 'participant_id'] == 'sub-002' + assert p_info.at[1, 'group'] == 'equalRange' + assert p_info.at[1, 'gender'] == 'M' + assert p_info.at[1, 'age'] == 25 + assert p_info.at[2, 'participant_id'] == 'sub-003' + assert p_info.at[2, 'group'] == 'equalIndifference' + assert p_info.at[2, 'gender'] == 'F' + assert p_info.at[2, 'age'] == 27 @staticmethod @mark.unit_test @@ -87,3 +105,12 @@ def test_get_participants_subset(): assert len(participants_list) == 80 assert participants_list[0] == '020' assert participants_list[-1] == '003' + + @staticmethod + @mark.unit_test + def test_get_group(mock_participants_data): + """ Test the get_group function """ + + assert part.get_group('') == [] + assert part.get_group('equalRange') == ['sub-002', 'sub-004'] + assert part.get_group('equalIndifference') == ['sub-001', 'sub-003'] diff --git a/tests/test_data/core/image/test_image.nii.gz b/tests/test_data/core/image/test_image.nii.gz new file mode 100644 index 00000000..06fb9aa3 Binary files /dev/null and b/tests/test_data/core/image/test_image.nii.gz differ diff --git a/tests/test_data/data/description/test_markdown.md b/tests/test_data/data/description/test_markdown.md index 1749e7c1..3763eee3 100644 --- a/tests/test_data/data/description/test_markdown.md +++ b/tests/test_data/data/description/test_markdown.md @@ -97,7 +97,7 @@ Model EVs (2): eq_indiff, eq_range * `func_fwhm` : 5 * `con_fwhm` : ## Comments -* `excluded_from_narps_analysis` : no +* `excluded_from_narps_analysis` : No * `exclusion_comment` : N/A -* `reproducibility` : +* `reproducibility` : 2 * `reproducibility_comment` : diff --git a/tests/test_data/data/description/test_str.json b/tests/test_data/data/description/test_str.json index c2550fcd..a5919f4b 100644 --- a/tests/test_data/data/description/test_str.json +++ 
b/tests/test_data/data/description/test_str.json @@ -54,8 +54,8 @@ "derived.excluded_participants": "018, 030, 088, 100", "derived.func_fwhm": "5", "derived.con_fwhm": "", - "comments.excluded_from_narps_analysis": "no", + "comments.excluded_from_narps_analysis": "No", "comments.exclusion_comment": "N/A", - "comments.reproducibility": "", + "comments.reproducibility": "2", "comments.reproducibility_comment": "" } \ No newline at end of file diff --git a/tests/test_data/data/participants/participants.tsv b/tests/test_data/data/participants/participants.tsv new file mode 100644 index 00000000..312dbcde --- /dev/null +++ b/tests/test_data/data/participants/participants.tsv @@ -0,0 +1,5 @@ +participant_id group gender age +sub-001 equalIndifference M 24 +sub-002 equalRange M 25 +sub-003 equalIndifference F 27 +sub-004 equalRange M 25 \ No newline at end of file diff --git a/tests/test_data/utils/status/test_markdown.md b/tests/test_data/utils/status/test_markdown.md index 7e0ff2d2..500bbc52 100644 --- a/tests/test_data/utils/status/test_markdown.md +++ b/tests/test_data/utils/status/test_markdown.md @@ -3,11 +3,18 @@ The *status* column tells whether the work on the pipeline is :
:red_circle: not started yet
:orange_circle: in progress
:green_circle: completed -

The *softwares used* column gives a simplified version of what can be found in the team descriptions under the `general.software` column. -| team_id | status | softwares used | fmriprep used ? | related issues | related pull requests | -| --- |:---:| --- | --- | --- | --- | -| Q6O0 | :green_circle: | SPM | Yes | | | -| UK24 | :orange_circle: | SPM | No | [2](url_issue_2), | | -| 2T6S | :orange_circle: | SPM | Yes | [5](url_issue_5), | [3](url_pull_3), | -| 1KB2 | :red_circle: | FSL | No | | | -| C88N | :red_circle: | SPM | Yes | | | +

The *main software* column gives a simplified version of what can be found in the team descriptions under the `general.software` column. +

The *reproducibility* column rates the pipeline as follows: + * default score is :star::star::star::star:; + * -1 if the team did not use fmriprep data; + * -1 if the team used several pieces of software (e.g.: FSL and AFNI); + * -1 if the team used custom or marginal software (i.e.: something other than SPM, FSL, AFNI or nistats); + * -1 if the team did not provide its source code. + +| team_id | status | main software | fmriprep used ? | related issues | related pull requests | excluded from NARPS analysis | reproducibility | +| --- |:---:| --- | --- | --- | --- | --- | --- | +| Q6O0 | :green_circle: | SPM | Yes | | | No | :star::star::star::black_small_square:
| +| UK24 | :orange_circle: | SPM | No | [2](url_issue_2), | | No | :star::star::black_small_square::black_small_square:
| +| 2T6S | :orange_circle: | SPM | Yes | [5](url_issue_5), | [3](url_pull_3), | No | :star::star::star::black_small_square:
| +| 1KB2 | :red_circle: | FSL | No | | | No | :star::star::black_small_square::black_small_square:
| +| C88N | :red_circle: | SPM | Yes | | | No | :star::star::star::black_small_square:
| diff --git a/tests/test_data/utils/status/test_str.json b/tests/test_data/utils/status/test_str.json index 638a06f7..3c590881 100644 --- a/tests/test_data/utils/status/test_str.json +++ b/tests/test_data/utils/status/test_str.json @@ -1,14 +1,20 @@ { "Q6O0": { - "softwares": "SPM", + "software": "SPM", "fmriprep": "Yes", + "excluded": "No", + "reproducibility": 3, + "reproducibility_comment": "", "issues": {}, "pulls": {}, "status": "0-done" }, "UK24": { - "softwares": "SPM", + "software": "SPM", "fmriprep": "No", + "excluded": "No", + "reproducibility": 2, + "reproducibility_comment": "", "issues": { "2": "url_issue_2" }, @@ -16,8 +22,11 @@ "status": "1-progress" }, "2T6S": { - "softwares": "SPM", + "software": "SPM", "fmriprep": "Yes", + "excluded": "No", + "reproducibility": 3, + "reproducibility_comment": "", "issues": { "5": "url_issue_5" }, @@ -27,15 +36,21 @@ "status": "1-progress" }, "1KB2": { - "softwares": "FSL", + "software": "FSL", "fmriprep": "No", + "excluded": "No", + "reproducibility": 2, + "reproducibility_comment": "", "issues": {}, "pulls": {}, "status": "2-idle" }, "C88N": { - "softwares": "SPM", + "software": "SPM", "fmriprep": "Yes", + "excluded": "No", + "reproducibility": 3, + "reproducibility_comment": "", "issues": {}, "pulls": {}, "status": "2-idle" diff --git a/tests/utils/test_status.py b/tests/utils/test_status.py index 17170b71..2e0df22a 100644 --- a/tests/utils/test_status.py +++ b/tests/utils/test_status.py @@ -232,32 +232,42 @@ def test_generate(mock_api_issue, mocker): report.generate() test_pipeline = report.contents['2T6S'] - assert test_pipeline['softwares'] == 'SPM' + assert test_pipeline['software'] == 'SPM' assert test_pipeline['fmriprep'] == 'Yes' + assert test_pipeline['excluded'] == 'No' + assert test_pipeline['reproducibility'] == 3 assert test_pipeline['issues'] == {5: 'url_issue_5'} assert test_pipeline['pulls'] == {3: 'url_pull_3'} assert test_pipeline['status'] == '1-progress' test_pipeline = 
report.contents['UK24'] - assert test_pipeline['softwares'] == 'SPM' + assert test_pipeline['software'] == 'SPM' assert test_pipeline['fmriprep'] == 'No' + assert test_pipeline['excluded'] == 'No' + assert test_pipeline['reproducibility'] == 2 assert test_pipeline['issues'] == {2: 'url_issue_2'} assert test_pipeline['pulls'] == {} assert test_pipeline['status'] == '1-progress' test_pipeline = report.contents['Q6O0'] - assert test_pipeline['softwares'] == 'SPM' + assert test_pipeline['software'] == 'SPM' assert test_pipeline['fmriprep'] == 'Yes' + assert test_pipeline['excluded'] == 'No' + assert test_pipeline['reproducibility'] == 3 assert test_pipeline['issues'] == {} assert test_pipeline['pulls'] == {} assert test_pipeline['status'] == '0-done' test_pipeline = report.contents['1KB2'] - assert test_pipeline['softwares'] == 'FSL' + assert test_pipeline['software'] == 'FSL' assert test_pipeline['fmriprep'] == 'No' + assert test_pipeline['excluded'] == 'No' + assert test_pipeline['reproducibility'] == 2 assert test_pipeline['issues'] == {} assert test_pipeline['pulls'] == {} assert test_pipeline['status'] == '2-idle' test_pipeline = report.contents['C88N'] - assert test_pipeline['softwares'] == 'SPM' + assert test_pipeline['software'] == 'SPM' assert test_pipeline['fmriprep'] == 'Yes' + assert test_pipeline['excluded'] == 'No' + assert test_pipeline['reproducibility'] == 3 assert test_pipeline['issues'] == {} assert test_pipeline['pulls'] == {} assert test_pipeline['status'] == '2-idle'
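The star-rendering loop added to the status report (and asserted by the `test_markdown.md` fixture) can be sketched as a small standalone helper. The function name `reproducibility_stars` is hypothetical; the actual `markdown()` method builds the string inline with two `for` loops:

```python
def reproducibility_stars(score, max_score=4):
    """Render an integer rating as the dashboard's emoji markup:
    one :star: per point, padded with :black_small_square: up to max_score."""
    return ':star:' * score + ':black_small_square:' * (max_score - score)
```

With `score=3` this yields the `:star::star::star::black_small_square:` cell that appears in the fixture rows for teams Q6O0, 2T6S, and C88N.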
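The connect-tuple pattern exercised throughout `tests/core/test_common.py` can be sketched outside Nipype. Both helpers below are stand-ins, not the actual project code: `elements_in_string` mirrors the behavior the tests assert for `narps_open.core.common.elements_in_string`, and `apply_connect` imitates how Nipype evaluates a `('output_name', function, extra_args)` source tuple inside `Workflow.connect`:

```python
# Sketch of the connect-tuple pattern tested above (not the narps_open
# implementation). In Nipype, a connection source may be a tuple
# ('output_name', function, *extra_args): the function is applied to the
# source node's output before the value reaches the destination input.

def elements_in_string(input_str, elements):
    # Assumed behavior, inferred from the test assertions: return the input
    # string if it contains any of the given elements, otherwise None.
    return input_str if any(element in input_str for element in elements) else None

def apply_connect(source_value, connect_spec):
    # Imitate Nipype's evaluation of a ('output_name', function, *args) source.
    _output_name, function, *extra_args = connect_spec
    return function(source_value, *extra_args)
```

Fed `'test_string'`, a spec with elements `['z', 'u', 'warning', '_']` passes the string through (because `'_'` occurs in it), while `['z', 'u', 'warning']` filters it to `None` — the same pair of outcomes `test_connect_elements_in_string` checks in the node reports.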