From f2d197d8c2705a64c43453df936083edabf4e792 Mon Sep 17 00:00:00 2001 From: cmaumet Date: Tue, 7 Nov 2023 15:02:38 +0100 Subject: [PATCH 1/5] Add DS_Store to git ignore --- .gitignore | 3 +++ 1 file changed, 3 insertions(+) diff --git a/.gitignore b/.gitignore index bcf9e07a..5b1b525d 100644 --- a/.gitignore +++ b/.gitignore @@ -139,3 +139,6 @@ dmypy.json # Pyre type checker .pyre/ *pyscript* + +# For mac users +*.DS_Store From 93bb7188646636d0161cc26608cac3be168e029d Mon Sep 17 00:00:00 2001 From: Camille Maumet Date: Tue, 14 Nov 2023 11:38:35 +0100 Subject: [PATCH 2/5] Updating the documentation (#125) * Shorten intro * Add inria support * Add intro contrib * move content to new file (developer doc) * Contrib as main title * Add two profiles * Separate in two issues * fix typo * Remove toc * Remove duplicate ref * Add link to nipype * Remove direct links to step 2 * Add wip to user doc --- CONTENT.md | 7 +++++++ CONTRIBUTING.md | 22 +++++++++++++++------- README.md | 35 ++++++++--------------------------- 3 files changed, 30 insertions(+), 34 deletions(-) create mode 100644 CONTENT.md diff --git a/CONTENT.md b/CONTENT.md new file mode 100644 index 00000000..5fbeed5a --- /dev/null +++ b/CONTENT.md @@ -0,0 +1,7 @@ +### Contents overview + +- :snake: :package: `narps_open/` contains the Python package with all the pipelines' logic. +- :brain: `data/` contains data that is used by the pipelines, as well as the (intermediate or final) results data. Instructions to download data are available in [INSTALL.md](/INSTALL.md#data-download-instructions). +- :blue_book: `docs/` contains the documentation for the project. Start browsing it with the entry point [docs/README.md](/docs/README.md). +- :orange_book: `examples/` contains notebook examples showing how to launch the reproduced pipelines. +- :microscope: `tests/` contains the tests of the narps_open package. 
\ No newline at end of file diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 1ddbda84..7429acdd 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,15 +1,19 @@ # How to contribute to NARPS Open Pipelines ? -General guidelines can be found [here](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) in the GitHub documentation. +For the reproductions, we are especially looking for contributors with the following profiles: + - 👩‍🎤 SPM, FSL, AFNI or nistats has no secrets for you? You know this fMRI analysis software by heart 💓. Please help us by reproducing the corresponding NARPS pipelines. 👣 after step 1, follow the fMRI expert trail. + - 🧑‍🎤 You are a nipype guru? 👣 after step 1, follow the nipype expert trail. -## Reproduce a pipeline :keyboard: -:thinking: Not sure which one to start with ? You can have a look on [this table](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) giving the work progress status for each pipeline. This will help choosing the one that best suits you! +# Step 1: Choose a pipeline to reproduce :keyboard: +:thinking: Not sure which pipeline to start with ? 🚦The [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) provides the progress status for each pipeline. You can pick any pipeline that is in red (not started). -Need more information ? You can have a look to the pipeline description [here](https://docs.google.com/spreadsheets/d/1FU_F6kdxOD4PRQDIHXGHS4zTi_jEVaUqY_Zwg0z6S64/edit?usp=sharing). Also feel free to use the `narps_open.utils.description` module of the project, as described [in the documentation](/docs/description.md). +Need more information to make a decision? The `narps_open.utils.description` module of the project, as described [in the documentation](/docs/description.md) provides easy access to all the info we have on each pipeline. 
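To give a flavour of the kind of query this makes possible, here is a minimal, self-contained sketch using plain dictionaries. The team IDs and values below are borrowed from the test fixtures later in this patch series; the real `narps_open.utils.description` API is richer than this, so treat it purely as an illustration:

```python
# Hypothetical, simplified view of a few team descriptions; the real module
# exposes the full metadata for all 70 teams (see docs/description.md).
descriptions = {
    '2T6S': {'software': 'SPM', 'fmriprep': 'Yes'},
    '1KB2': {'software': 'FSL', 'fmriprep': 'No'},
    'C88N': {'software': 'SPM', 'fmriprep': 'Yes'},
}

def teams_using(software, descriptions):
    """Return the sorted IDs of teams whose analysis used the given software."""
    return sorted(
        team_id for team_id, description in descriptions.items()
        if description['software'] == software
    )

print(teams_using('SPM', descriptions))  # ['2T6S', 'C88N']
```

With the real module, this kind of filtering can help you shortlist, say, all SPM-based pipelines before checking their progress on the dashboard.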
When you are ready, [start an issue](https://github.com/Inria-Empenn/narps_open_pipelines/issues/new/choose) and choose **Pipeline reproduction**! -### If you have experience with NiPype +# Step 2: Reproduction + +## 🧑‍🎤 NiPype trail We created templates with modifications to make and holes to fill to create a pipeline. You can find them in [`narps_open/pipelines/templates`](/narps_open/pipelines/templates). @@ -21,9 +25,9 @@ Feel free to have a look to the following pipelines, these are examples : | 2T6S | SPM | Yes | [/narps_open/pipelines/team_2T6S.py](/narps_open/pipelines/team_2T6S.py) | | X19V | FSL | Yes | [/narps_open/pipelines/team_X19V.py](/narps_open/pipelines/team_2T6S.py) | -### If you have experience with the original software package but not with NiPype +## 👩‍🎤 fMRI software trail -A fantastic tool named [Giraffe](https://giraffe.tools/porcupine/TimVanMourik/GiraffePlayground/master) is available. It allows you to create a graph of your pipeline using NiPype functions but without coding! Just save your NiPype script in a .py file and send it as a new issue, we will convert this script to a script which works with our specific parameters. +... ## Find or propose an issue :clipboard: Issues are very important for this project. If you want to contribute, you can either **comment an existing issue** or **proposing a new issue**. @@ -64,3 +68,7 @@ Once your PR is ready, you may add a reviewer to your PR, as described [here](ht Please turn your Draft Pull Request into a "regular" Pull Request, by clicking **Ready for review** in the Pull Request page. **:wave: Thank you in advance for contributing to the project!** + +## Additional resources + + - git and Gitub: general guidelines can be found [here](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) in the GitHub documentation. diff --git a/README.md b/README.md index 20125d83..ab80db18 100644 --- a/README.md +++ b/README.md @@ -15,45 +15,26 @@

-## Table of contents - -- [Project presentation](#project-presentation) -- [Getting Started](#getting-started) - - [Contents overview](#contents-overview) - - [Installation](#installation) - - [Contributing](#contributing) -- [References](#references) -- [Funding](#funding) - ## Project presentation -Neuroimaging workflows are highly flexible, leaving researchers with multiple possible options to analyze a dataset [(Carp, 2012)](https://www.frontiersin.org/articles/10.3389/fnins.2012.00149/full). -However, different analytical choices can cause variation in the results [(Botvinik-Nezer et al., 2020)](https://www.nature.com/articles/s41586-020-2314-9), leading to what was called a "vibration of effects" [(Ioannidis, 2008)](https://pubmed.ncbi.nlm.nih.gov/18633328/) also known as analytical variability. - **The goal of the NARPS Open Pipelines project is to create a codebase reproducing the 70 pipelines of the NARPS project (Botvinik-Nezer et al., 2020) and share this as an open resource for the community**. -To perform the reproduction, we are lucky to be able to use the [descriptions provided by the teams](https://github.com/poldrack/narps/blob/1.0.1/ImageAnalyses/metadata_files/analysis_pipelines_for_analysis.xlsx). -We also created a [shared spreadsheet](https://docs.google.com/spreadsheets/d/1FU_F6kdxOD4PRQDIHXGHS4zTi_jEVaUqY_Zwg0z6S64/edit?usp=sharing) that can be used to add comments on pipelines (e.g.: identify the ones that are not reproducible with NiPype). +We base our reproductions on the [original descriptions provided by the teams](https://github.com/poldrack/narps/blob/1.0.1/ImageAnalyses/metadata_files/analysis_pipelines_for_analysis.xlsx) and test the quality of the reproductions by comparing our results with the original results published on NeuroVault. 
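To make the comparison step concrete: a simple way to quantify agreement between a reproduced statistic map and the original is to correlate their voxel values. Below is a toy sketch with made-up numbers — the actual project works on full NIfTI images, and the exact validation metric it applies may differ:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length sequences of voxel values."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

# Toy 'maps': an original result and its reproduction, as flat voxel lists
original = [0.0, 1.2, 2.5, 3.1]
reproduction = [0.1, 1.0, 2.7, 3.0]
print(round(pearson(original, reproduction), 3))  # 0.991
```

A correlation close to 1 indicates that the reproduction recovers the spatial pattern of the original unthresholded map.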
-:vertical_traffic_light: Lastly, please find [here in the project's wiki](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) a dashboard to see pipelines work progresses at first glance. +:vertical_traffic_light: See [the pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) to view our current progress at a glance. -## Getting Started +## Contributing -### Contents overview +NARPS open pipelines uses [nipype](https://nipype.readthedocs.io/en/latest/index.html) as a workflow manager and provides a series of templates and examples to help reproduce the different teams’ analysis. -- :snake: :package: `narps_open/` contains the Python package with all the pipelines logic. -- :brain: `data/` contains data that is used by the pipelines, as well as the (intermediate or final) results data. Instructions to download data are available in [INSTALL.md](/INSTALL.md#data-download-instructions). -- :blue_book: `docs/` contains the documentation for the project. Start browsing it with the entry point [docs/README.md](/docs/README.md) -- :orange_book: `examples/` contains notebooks examples to launch of the reproduced pipelines. -- :microscope: `tests/` contains the tests of the narps_open package. +There are many ways you can contribute 🤗 :wave: Any help is welcome ! Follow the guidelines in [CONTRIBUTING.md](/CONTRIBUTING.md) if you wish to get involved ! ### Installation To get the pipelines running, please follow the installation steps in [INSTALL.md](/INSTALL.md) -### Contributing - -:wave: Any help is welcome ! Follow the guidelines in [CONTRIBUTING.md](/CONTRIBUTING.md) if you wish to get involved ! +## Getting started +If you are interested in using the codebase to run the pipelines, see the [user documentation (work-in-progress)]. 
## References @@ -64,7 +45,7 @@ To get the pipelines running, please follow the installation steps in [INSTALL.m ## Funding -This project is supported by Région Bretagne (Boost MIND). +This project is supported by Région Bretagne (Boost MIND) and by Inria (Exploratory action GRASP). ## Credits From 7b2e33a55d41b556ed396b5769e0cd7a13e95401 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= <117362283+bclenet@users.noreply.github.com> Date: Tue, 14 Nov 2023 15:25:25 +0100 Subject: [PATCH 3/5] Add reproducibility info to pipeline status page (#123) * [BUG] inside unit_tests workflow * Browsing all issues pages from Github API * Get all pages of GitHub issues * [TEST] Updating test for status module * [TEST] fetch several issues * Dealing with single page of issues * Removine per_page query parameter * [TEST] adjusting tests * Add reproducibility columns to status report * [TEST] Add reproducibility columns to status report * [TEST] updating narps_open.data.description tests * Add links to share code + exclusion reasons * : reproducibility score + added source code links * [TEST] : reproducibility score * [DOC] : reproducibility score * [BUG] : reproducibility score * [BUG] : reproducibility score --- docs/status.md | 33 ++++- .../analysis_pipelines_comments.tsv | 140 +++++++++--------- narps_open/utils/status.py | 43 ++++-- tests/data/test_description.py | 4 +- .../data/description/test_markdown.md | 4 +- .../test_data/data/description/test_str.json | 4 +- tests/test_data/utils/status/test_markdown.md | 23 ++- tests/test_data/utils/status/test_str.json | 25 +++- tests/utils/test_status.py | 20 ++- 9 files changed, 185 insertions(+), 111 deletions(-) diff --git a/docs/status.md b/docs/status.md index 7fde8239..28492390 100644 --- a/docs/status.md +++ b/docs/status.md @@ -1,10 +1,18 @@ # Access the work progress status pipelines The class `PipelineStatusReport` of module `narps_open.utils.status` allows to create a report containing the following information 
for each pipeline: +* a work progress status : `idle`, `progress`, or `done`; * the software it uses (collected from the `categorized_for_analysis.analysis_SW` of the [team description](/docs/description.md)) ; * whether it uses data from fMRIprep or not ; * a list of issues related to it (the opened issues of the project that have the team ID inside their title or description) ; -* a work progress status : `idle`, `progress`, or `done`. +* a list of pull requests related to it (the opened pull requests of the project that have the team ID inside their title or description) ; +* whether it was excluded from the original NARPS analysis ; +* a reproducibility rating : + * default score is 4; + * -1 if the team did not use fmriprep data; + * -1 if the team used several pieces of software (e.g.: FSL and AFNI); + * -1 if the team used custom or marginal software (i.e.: something else than SPM, FSL, AFNI or nistats); + * -1 if the team did not provide its source code. This allows contributors to best select the pipeline they want/need to contribute to. For this purpose, the GitHub Actions workflow [`.github/workflows/pipeline_status.yml`](/.github/workflows/pipeline_status.yml) allows to dynamically generate the report and to store it in the [project's wiki](https://github.com/Inria-Empenn/narps_open_pipelines/wiki). @@ -55,22 +63,29 @@ python narps_open/utils/status --json # "softwares": "FSL", # "fmriprep": "No", # "issues": {}, -# "status": "idle" +# "excluded": "No", +# "reproducibility": 3, +# "reproducibility_comment": "", +# "pulls": {}, +# "status": "2-idle" # }, # "0C7Q": { # "softwares": "FSL, AFNI", # "fmriprep": "Yes", # "issues": {}, +# "excluded": "No", +# "reproducibility": 3, +# "reproducibility_comment": "", +# "pulls": {}, # "status": "idle" # }, # ... python narps_open/utils/status --md -# | team_id | status | softwares used | fmriprep used ? 
| related issues | -# | --- |:---:| --- | --- | --- | -# | 08MQ | :red_circle: | FSL | No | | -# | 0C7Q | :red_circle: | FSL, AFNI | Yes | | -# | 0ED6 | :red_circle: | SPM | No | | -# | 0H5E | :red_circle: | SPM | No | | +# ... +# | team_id | status | main software | fmriprep used ? | related issues | related pull requests | excluded from NARPS analysis | reproducibility | +# | --- |:---:| --- | --- | --- | --- | --- | --- | +# | Q6O0 | :green_circle: | SPM | Yes | | | No | :star::star::star::black_small_square:<br /> | +# | UK24 | :orange_circle: | SPM | No | [2](url_issue_2), | | No | :star::star::black_small_square::black_small_square:<br />
| # ... ``` diff --git a/narps_open/data/description/analysis_pipelines_comments.tsv b/narps_open/data/description/analysis_pipelines_comments.tsv index 93cd4f24..5ebf2405 100644 --- a/narps_open/data/description/analysis_pipelines_comments.tsv +++ b/narps_open/data/description/analysis_pipelines_comments.tsv @@ -1,71 +1,71 @@ teamID excluded_from_narps_analysis exclusion_comment reproducibility reproducibility_comment -50GV no N/A ? Uses custom software (Denoiser) -9Q6R no N/A -O21U no N/A -U26C no N/A -43FJ no N/A -C88N no N/A -4TQ6 yes Resampled image offset and too large compared to template. -T54A no N/A -2T6S no N/A -L7J7 no N/A -0JO0 no N/A -X1Y5 no N/A -51PW no N/A -6VV2 no N/A -O6R6 no N/A -C22U no N/A ? Custom Matlab script for white matter PCA confounds -3PQ2 no N/A -UK24 no N/A -4SZ2 yes Resampled image offset from template brain. -9T8E no N/A -94GU no N/A ? Multiple software dependencies : SPM + ART + TAPAS + Matlab. -I52Y no N/A -5G9K no N/A ? ? -2T7P yes Missing thresholded images. ? ? -UI76 no N/A -B5I6 no N/A -V55J yes Bad histogram : very small values. -X19V no N/A -0C7Q yes Appears to be a p-value distribution, with slight excursions below and above zero. -R5K7 no N/A -0I4U no N/A -3C6G no N/A -R9K3 no N/A -O03M no N/A -08MQ no N/A -80GC no N/A -J7F9 no N/A -R7D1 no N/A -Q58J yes Bad histogram : bimodal, zero-inflated with a second distribution centered around 5. -L3V8 yes Rejected due to large amount of missing brain in center. -SM54 no N/A -1KB2 no N/A -0H5E yes Rejected due to large amount of missing brain in center. -P5F3 yes Rejected due to large amounts of missing data across brain. -Q6O0 no N/A -R42Q no N/A ? Uses fMRIflows, a custom software based on NiPype. -L9G5 no N/A -DC61 no N/A -E3B6 yes Bad histogram : very long tail, with substantial inflation at a value just below zero. -16IN no N/A ? Multiple software dependencies : matlab + SPM + FSL + R + TExPosition + neuroim -46CD no N/A -6FH5 yes Missing much of the central brain. 
-K9P0 no N/A -9U7M no N/A -VG39 no N/A -1K0E yes Used surface-based analysis, only provided data for cortical ribbon. ? ? -X1Z4 yes Used surface-based analysis, only provided data for cortical ribbon. ? Multiple software dependencies : FSL + fmriprep + ciftify + HCP workbench + Freesurfer + ANTs -I9D6 no N/A -E6R3 no N/A -27SS no N/A -B23O no N/A -AO86 no N/A -L1A8 yes Resampled image much smaller than template brain. ? ? -IZ20 no N/A -3TR7 no N/A -98BT yes Rejected due to very bad normalization. -XU70 no N/A ? Uses custom software : FSL + 4drealign -0ED6 no N/A ? ? -I07H yes Bad histogram : bimodal, with second distribution centered around 2.5. -1P0Y no N/A +50GV No N/A 3 Uses custom software (Denoiser) +9Q6R No N/A 2 +O21U No N/A 3 +U26C No N/A 4 Link to shared analysis code : https://github.com/gladomat/narps +43FJ No N/A 2 +C88N No N/A 3 +4TQ6 Yes Resampled image offset and too large compared to template. 3 +T54A No N/A 3 +2T6S No N/A 3 +L7J7 No N/A 3 +0JO0 No N/A 3 +X1Y5 No N/A 2 +51PW No N/A 3 +6VV2 No N/A 2 +O6R6 No N/A 3 +C22U No N/A 1 Custom Matlab script for white matter PCA confounds +3PQ2 No N/A 2 +UK24 No N/A 2 +4SZ2 Yes Resampled image offset from template brain. 3 +9T8E No N/A 3 +94GU No N/A 1 Multiple software dependencies : SPM + ART + TAPAS + Matlab. +I52Y No N/A 2 +5G9K Yes Values in the unthresholded images are not z / t stats 3 +2T7P Yes Missing thresholded images. 2 Link to shared analysis code : https://osf.io/3b57r +UI76 No N/A 3 +B5I6 No N/A 3 +V55J Yes Bad histogram : very small values. 2 +X19V No N/A 3 +0C7Q Yes Appears to be a p-value distribution, with slight excursions below and above zero. 2 +R5K7 No N/A 2 +0I4U No N/A 2 +3C6G No N/A 2 +R9K3 No N/A 3 +O03M No N/A 3 +08MQ No N/A 2 +80GC No N/A 3 +J7F9 No N/A 3 +R7D1 No N/A 3 Link to shared analysis code : https://github.com/IMTAltiStudiLucca/NARPS_R7D1 +Q58J Yes Bad histogram : bimodal, zero-inflated with a second distribution centered around 5. 
3 Link to shared analysis code : https://github.com/amrka/NARPS_Q58J +L3V8 Yes Rejected due to large amount of missing brain in center. 2 +SM54 No N/A 3 +1KB2 No N/A 2 +0H5E Yes Rejected due to large amount of missing brain in center. 2 +P5F3 Yes Rejected due to large amounts of missing data across brain. 2 +Q6O0 No N/A 3 +R42Q No N/A 2 Uses fMRIflows, a custom software based on NiPype. Code available here : https://github.com/ilkayisik/narps_R42Q +L9G5 No N/A 2 +DC61 No N/A 3 +E3B6 Yes Bad histogram : very long tail, with substantial inflation at a value just below zero. 4 Link to shared analysis code : doi.org/10.5281/zenodo.3518407 +16IN Yes Values in the unthresholded images are not z / t stats 2 Multiple software dependencies : matlab + SPM + FSL + R + TExPosition + neuroim. Link to shared analysis code : https://github.com/jennyrieck/NARPS +46CD No N/A 1 +6FH5 Yes Missing much of the central brain. 2 +K9P0 No N/A 3 +9U7M No N/A 2 +VG39 Yes Performed small volume corrected instead of whole-brain analysis 3 +1K0E Yes Used surface-based analysis, only provided data for cortical ribbon. 1 +X1Z4 Yes Used surface-based analysis, only provided data for cortical ribbon. 1 Multiple software dependencies : FSL + fmriprep + ciftify + HCP workbench + Freesurfer + ANTs +I9D6 No N/A 2 +E6R3 No N/A 2 +27SS No N/A 2 +B23O No N/A 3 +AO86 No N/A 2 +L1A8 Yes Not in MNI standard space. 2 +IZ20 No N/A 1 +3TR7 No N/A 3 +98BT Yes Rejected due to very bad normalization. 2 +XU70 No N/A 1 Uses custom software : FSL + 4drealign +0ED6 No N/A 2 +I07H Yes Bad histogram : bimodal, with second distribution centered around 2.5. 
2 +1P0Y No N/A 2 diff --git a/narps_open/utils/status.py b/narps_open/utils/status.py index 6dced5bc..0058b40b 100644 --- a/narps_open/utils/status.py +++ b/narps_open/utils/status.py @@ -76,10 +76,18 @@ def generate(self): # Get software used in the pipeline, from the team description description = TeamDescription(team_id) - self.contents[team_id]['softwares'] = \ + self.contents[team_id]['software'] = \ description.categorized_for_analysis['analysis_SW'] self.contents[team_id]['fmriprep'] = description.preprocessing['used_fmriprep_data'] + # Get comments about the pipeline + self.contents[team_id]['excluded'] = \ + description.comments['excluded_from_narps_analysis'] + self.contents[team_id]['reproducibility'] = \ + int(description.comments['reproducibility']) + self.contents[team_id]['reproducibility_comment'] = \ + description.comments['reproducibility_comment'] + # Get issues and pull requests related to the team issues = {} pulls = {} @@ -109,10 +117,11 @@ def generate(self): else: self.contents[team_id]['status'] = '1-progress' - # Sort contents with the following priorities : 1-"status", 2-"softwares" and 3-"fmriprep" + # Sort contents with the following priorities : + # 1-"status", 2-"softwares", 3-"fmriprep" self.contents = OrderedDict(sorted( self.contents.items(), - key=lambda k: (k[1]['status'], k[1]['softwares'], k[1]['fmriprep']) + key=lambda k: (k[1]['status'], k[1]['software'], k[1]['fmriprep']) )) def markdown(self): @@ -124,14 +133,23 @@ def markdown(self): output_markdown += '
<br>:red_circle: not started yet\n' output_markdown += '<br>:orange_circle: in progress\n' output_markdown += '<br>:green_circle: completed\n' - output_markdown += '<br><br>The *softwares used* column gives a simplified version of ' output_markdown += 'what can be found in the team descriptions under the ' output_markdown += '`general.software` column.\n' + output_markdown += '<br><br>The *main software* column gives a simplified version of ' output_markdown += 'what can be found in the team descriptions under the ' output_markdown += '`general.software` column.\n' + output_markdown += '<br><br>The *reproducibility* column rates the pipeline as follows:\n' + output_markdown += ' * default score is :star::star::star::star:;\n' + output_markdown += ' * -1 if the team did not use fmriprep data;\n' + output_markdown += ' * -1 if the team used several pieces of software ' + output_markdown += '(e.g.: FSL and AFNI);\n' + output_markdown += ' * -1 if the team used custom or marginal software ' + output_markdown += '(i.e.: something else than SPM, FSL, AFNI or nistats);\n' + output_markdown += ' * -1 if the team did not provide its source code.\n' # Start table - output_markdown += '| team_id | status | softwares used | fmriprep used ? |' - output_markdown += ' related issues | related pull requests |\n' - output_markdown += '| --- |:---:| --- | --- | --- | --- |\n' + output_markdown += '\n| team_id | status | main software | fmriprep used ? |' + output_markdown += ' related issues | related pull requests |' + output_markdown += ' excluded from NARPS analysis | reproducibility |\n' + output_markdown += '| --- |:---:| --- | --- | --- | --- | --- | --- |\n' # Add table contents for team_key, team_values in self.contents.items(): @@ -146,7 +164,7 @@ status = ':red_circle:' output_markdown += f'| {status} ' - output_markdown += f'| {team_values["softwares"]} ' + output_markdown += f'| {team_values["software"]} ' output_markdown += f'| {team_values["fmriprep"]} ' issues = '' @@ -159,8 +177,15 @@ for issue_number, issue_url in team_values['pulls'].items(): pulls += f'[{issue_number}]({issue_url}), ' - output_markdown += f'| {pulls} |\n' + output_markdown += f'| {pulls} ' + output_markdown += f'| {team_values["excluded"]} ' + reproducibility_ranking = '' + for _ in range(team_values['reproducibility']): + reproducibility_ranking += ':star:' + for _ in range(4-team_values['reproducibility']): + reproducibility_ranking += ':black_small_square:' + output_markdown += f'| {reproducibility_ranking}<br />
{team_values["reproducibility_comment"]} |\n' return output_markdown diff --git a/tests/data/test_description.py b/tests/data/test_description.py index c66e23b3..9c8d633c 100644 --- a/tests/data/test_description.py +++ b/tests/data/test_description.py @@ -55,7 +55,7 @@ def test_arguments_properties(): assert description['analysis.RT_modeling'] == 'duration' assert description['categorized_for_analysis.analysis_SW_with_version'] == 'SPM12' assert description['derived.func_fwhm'] == '8' - assert description['comments.excluded_from_narps_analysis'] == 'no' + assert description['comments.excluded_from_narps_analysis'] == 'No' # 4 - Check properties assert isinstance(description.general, dict) @@ -84,7 +84,7 @@ def test_arguments_properties(): assert description.analysis['RT_modeling'] == 'duration' assert description.categorized_for_analysis['analysis_SW_with_version'] == 'SPM12' assert description.derived['func_fwhm'] == '8' - assert description.comments['excluded_from_narps_analysis'] == 'no' + assert description.comments['excluded_from_narps_analysis'] == 'No' # 6 - Test another team description = TeamDescription('9Q6R') diff --git a/tests/test_data/data/description/test_markdown.md b/tests/test_data/data/description/test_markdown.md index 1749e7c1..3763eee3 100644 --- a/tests/test_data/data/description/test_markdown.md +++ b/tests/test_data/data/description/test_markdown.md @@ -97,7 +97,7 @@ Model EVs (2): eq_indiff, eq_range * `func_fwhm` : 5 * `con_fwhm` : ## Comments -* `excluded_from_narps_analysis` : no +* `excluded_from_narps_analysis` : No * `exclusion_comment` : N/A -* `reproducibility` : +* `reproducibility` : 2 * `reproducibility_comment` : diff --git a/tests/test_data/data/description/test_str.json b/tests/test_data/data/description/test_str.json index c2550fcd..a5919f4b 100644 --- a/tests/test_data/data/description/test_str.json +++ b/tests/test_data/data/description/test_str.json @@ -54,8 +54,8 @@ "derived.excluded_participants": "018, 030, 088, 100", 
"derived.func_fwhm": "5", "derived.con_fwhm": "", - "comments.excluded_from_narps_analysis": "no", + "comments.excluded_from_narps_analysis": "No", "comments.exclusion_comment": "N/A", - "comments.reproducibility": "", + "comments.reproducibility": "2", "comments.reproducibility_comment": "" } \ No newline at end of file diff --git a/tests/test_data/utils/status/test_markdown.md b/tests/test_data/utils/status/test_markdown.md index 7e0ff2d2..500bbc52 100644 --- a/tests/test_data/utils/status/test_markdown.md +++ b/tests/test_data/utils/status/test_markdown.md @@ -3,11 +3,18 @@ The *status* column tells whether the work on the pipeline is :
<br>:red_circle: not started yet
<br>:orange_circle: in progress
<br>:green_circle: completed -<br><br>The *softwares used* column gives a simplified version of what can be found in the team descriptions under the `general.software` column. -| team_id | status | softwares used | fmriprep used ? | related issues | related pull requests | -| --- |:---:| --- | --- | --- | --- | -| Q6O0 | :green_circle: | SPM | Yes | | | -| UK24 | :orange_circle: | SPM | No | [2](url_issue_2), | | -| 2T6S | :orange_circle: | SPM | Yes | [5](url_issue_5), | [3](url_pull_3), | -| 1KB2 | :red_circle: | FSL | No | | | -| C88N | :red_circle: | SPM | Yes | | | +<br><br>The *main software* column gives a simplified version of what can be found in the team descriptions under the `general.software` column. +<br><br>The *reproducibility* column rates the pipeline as follows: + * default score is :star::star::star::star:; + * -1 if the team did not use fmriprep data; + * -1 if the team used several pieces of software (e.g.: FSL and AFNI); + * -1 if the team used custom or marginal software (i.e.: something else than SPM, FSL, AFNI or nistats); + * -1 if the team did not provide its source code. + +| team_id | status | main software | fmriprep used ? | related issues | related pull requests | excluded from NARPS analysis | reproducibility | +| --- |:---:| --- | --- | --- | --- | --- | --- | +| Q6O0 | :green_circle: | SPM | Yes | | | No | :star::star::star::black_small_square:<br /> | +| UK24 | :orange_circle: | SPM | No | [2](url_issue_2), | | No | :star::star::black_small_square::black_small_square:<br /> | +| 2T6S | :orange_circle: | SPM | Yes | [5](url_issue_5), | [3](url_pull_3), | No | :star::star::star::black_small_square:<br /> | +| 1KB2 | :red_circle: | FSL | No | | | No | :star::star::black_small_square::black_small_square:<br /> | +| C88N | :red_circle: | SPM | Yes | | | No | :star::star::star::black_small_square:<br />
| diff --git a/tests/test_data/utils/status/test_str.json b/tests/test_data/utils/status/test_str.json index 638a06f7..3c590881 100644 --- a/tests/test_data/utils/status/test_str.json +++ b/tests/test_data/utils/status/test_str.json @@ -1,14 +1,20 @@ { "Q6O0": { - "softwares": "SPM", + "software": "SPM", "fmriprep": "Yes", + "excluded": "No", + "reproducibility": 3, + "reproducibility_comment": "", "issues": {}, "pulls": {}, "status": "0-done" }, "UK24": { - "softwares": "SPM", + "software": "SPM", "fmriprep": "No", + "excluded": "No", + "reproducibility": 2, + "reproducibility_comment": "", "issues": { "2": "url_issue_2" }, @@ -16,8 +22,11 @@ "status": "1-progress" }, "2T6S": { - "softwares": "SPM", + "software": "SPM", "fmriprep": "Yes", + "excluded": "No", + "reproducibility": 3, + "reproducibility_comment": "", "issues": { "5": "url_issue_5" }, @@ -27,15 +36,21 @@ "status": "1-progress" }, "1KB2": { - "softwares": "FSL", + "software": "FSL", "fmriprep": "No", + "excluded": "No", + "reproducibility": 2, + "reproducibility_comment": "", "issues": {}, "pulls": {}, "status": "2-idle" }, "C88N": { - "softwares": "SPM", + "software": "SPM", "fmriprep": "Yes", + "excluded": "No", + "reproducibility": 3, + "reproducibility_comment": "", "issues": {}, "pulls": {}, "status": "2-idle" diff --git a/tests/utils/test_status.py b/tests/utils/test_status.py index 17170b71..2e0df22a 100644 --- a/tests/utils/test_status.py +++ b/tests/utils/test_status.py @@ -232,32 +232,42 @@ def test_generate(mock_api_issue, mocker): report.generate() test_pipeline = report.contents['2T6S'] - assert test_pipeline['softwares'] == 'SPM' + assert test_pipeline['software'] == 'SPM' assert test_pipeline['fmriprep'] == 'Yes' + assert test_pipeline['excluded'] == 'No' + assert test_pipeline['reproducibility'] == 3 assert test_pipeline['issues'] == {5: 'url_issue_5'} assert test_pipeline['pulls'] == {3: 'url_pull_3'} assert test_pipeline['status'] == '1-progress' test_pipeline = 
report.contents['UK24'] - assert test_pipeline['softwares'] == 'SPM' + assert test_pipeline['software'] == 'SPM' assert test_pipeline['fmriprep'] == 'No' + assert test_pipeline['excluded'] == 'No' + assert test_pipeline['reproducibility'] == 2 assert test_pipeline['issues'] == {2: 'url_issue_2'} assert test_pipeline['pulls'] == {} assert test_pipeline['status'] == '1-progress' test_pipeline = report.contents['Q6O0'] - assert test_pipeline['softwares'] == 'SPM' + assert test_pipeline['software'] == 'SPM' assert test_pipeline['fmriprep'] == 'Yes' + assert test_pipeline['excluded'] == 'No' + assert test_pipeline['reproducibility'] == 3 assert test_pipeline['issues'] == {} assert test_pipeline['pulls'] == {} assert test_pipeline['status'] == '0-done' test_pipeline = report.contents['1KB2'] - assert test_pipeline['softwares'] == 'FSL' + assert test_pipeline['software'] == 'FSL' assert test_pipeline['fmriprep'] == 'No' + assert test_pipeline['excluded'] == 'No' + assert test_pipeline['reproducibility'] == 2 assert test_pipeline['issues'] == {} assert test_pipeline['pulls'] == {} assert test_pipeline['status'] == '2-idle' test_pipeline = report.contents['C88N'] - assert test_pipeline['softwares'] == 'SPM' + assert test_pipeline['software'] == 'SPM' assert test_pipeline['fmriprep'] == 'Yes' + assert test_pipeline['excluded'] == 'No' + assert test_pipeline['reproducibility'] == 3 assert test_pipeline['issues'] == {} assert test_pipeline['pulls'] == {} assert test_pipeline['status'] == '2-idle' From 69c729613a6f9a3a9b2db0acae6d5dc4a1c173ee Mon Sep 17 00:00:00 2001 From: Camille Maumet Date: Tue, 14 Nov 2023 16:21:37 +0100 Subject: [PATCH 4/5] narps study --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index ab80db18..c042855d 100644 --- a/README.md +++ b/README.md @@ -17,7 +17,7 @@ ## Project presentation -**The goal of the NARPS Open Pipelines project is to create a codebase reproducing the 70 pipelines of the NARPS 
project (Botvinik-Nezer et al., 2020) and share this as an open resource for the community**. +**The goal of the NARPS Open Pipelines project is to create a codebase reproducing the 70 pipelines of the NARPS study (Botvinik-Nezer et al., 2020) and share this as an open resource for the community**. We base our reproductions on the [original descriptions provided by the teams](https://github.com/poldrack/narps/blob/1.0.1/ImageAnalyses/metadata_files/analysis_pipelines_for_analysis.xlsx) and test the quality of the reproductions by comparing our results with the original results published on NeuroVault. From 83a58ed62cefdea3f5af6af2739263eccd82d7e5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= <117362283+bclenet@users.noreply.github.com> Date: Mon, 20 Nov 2023 11:40:12 +0100 Subject: [PATCH 5/5] Common pipeline functions in `narps_open.core` (#128) * [BUG] inside unit_tests workflow * [ENH] add a remove_files method the new module * [TEST] add test for remove_files * [ENH] add voxel_dimensions [TEST] add test for voxel_dimensions * Adding sorting utils to narps_open.core.common[skip ci] * [TEST] for the core functions + adding get_group to narps_open.data.participants * [DOC] for the core module * [DOC] codespell --- docs/README.md | 1 + docs/core.md | 117 +++++++ narps_open/core/__init__.py | 0 narps_open/core/common.py | 65 ++++ narps_open/core/image.py | 25 ++ narps_open/data/participants.py | 8 + tests/core/__init__.py | 0 tests/core/test_common.py | 319 ++++++++++++++++++ tests/core/test_image.py | 48 +++ tests/data/test_participants.py | 51 ++- tests/test_data/core/image/test_image.nii.gz | Bin 0 -> 13326 bytes .../data/participants/participants.tsv | 5 + 12 files changed, 627 insertions(+), 12 deletions(-) create mode 100644 docs/core.md create mode 100644 narps_open/core/__init__.py create mode 100644 narps_open/core/common.py create mode 100644 narps_open/core/image.py create mode 100644 tests/core/__init__.py create mode 100644 
tests/core/test_common.py
 create mode 100644 tests/core/test_image.py
 create mode 100644 tests/test_data/core/image/test_image.nii.gz
 create mode 100644 tests/test_data/data/participants/participants.tsv

diff --git a/docs/README.md b/docs/README.md
index 8c4fd662..f9f6d193 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -11,4 +11,5 @@ Here are the available topics :
 * :microscope: [testing](/docs/testing.md) details the testing features of the project, i.e.: how is the code tested?
 * :package: [ci-cd](/docs/ci-cd.md) contains the information on how continuous integration and delivery (known as CI/CD) is set up.
 * :writing_hand: [pipeline](/docs/pipelines.md) tells you all you need to know in order to write pipelines
+* :compass: [core](/docs/core.md) provides a list of helpful functions to use when writing pipelines
 * :vertical_traffic_light: [status](/docs/status.md) contains the information on how to get the work progress status for a pipeline.
diff --git a/docs/core.md b/docs/core.md
new file mode 100644
index 00000000..2ea8e536
--- /dev/null
+++ b/docs/core.md
@@ -0,0 +1,117 @@
+# Core functions you can use to write pipelines
+
+Here are a few functions that can be useful when creating a pipeline with Nipype. These functions are deliberately kept as small and single-purpose as possible.
+
+They are intended to be used inside a nipype.Workflow, either wrapped in a [nipype.Function](https://nipype.readthedocs.io/en/latest/api/generated/nipype.interfaces.utility.wrappers.html#function) interface or, for some of them (see the associated docstrings), evaluated as part of a [nipype.Workflow.connect](https://nipype.readthedocs.io/en/latest/api/generated/nipype.pipeline.engine.workflows.html#nipype.pipeline.engine.workflows.Workflow.connect) call.
+
+The following example uses the `list_intersection` function of `narps_open.core.common` in both of these cases.
+
+```python
+from nipype import Node, Function, Workflow
+from narps_open.core.common import list_intersection
+
+# First case : a Function Node
+intersection_node = Node(Function(
+    function = list_intersection,
+    input_names = ['list_1', 'list_2'],
+    output_names = ['output']
+    ), name = 'intersection_node')
+intersection_node.inputs.list_1 = ['001', '002', '003', '004']
+intersection_node.inputs.list_2 = ['002', '004', '005']
+print(intersection_node.run().outputs.output) # ['002', '004']
+
+# Second case : as part of a connect
+# We assume that there is a node_0 returning ['001', '002', '003', '004'] as `output` value
+test_workflow = Workflow(
+    base_dir = '/path/to/base/dir',
+    name = 'test_workflow'
+    )
+test_workflow.connect([
+    # node_1 will receive the evaluation of :
+    #   list_intersection(['001', '002', '003', '004'], ['002', '004', '005'])
+    # as in_value
+    (node_0, node_1, [(('output', list_intersection, ['002', '004', '005']), 'in_value')])
+    ])
+test_workflow.run()
+```
+
+> [!TIP]
+> Use a [nipype.MapNode](https://nipype.readthedocs.io/en/latest/api/generated/nipype.pipeline.engine.nodes.html#nipype.pipeline.engine.nodes.MapNode) to run these functions on lists instead of unitary contents. E.g.: the `remove_file` function of `narps_open.core.common` only removes one file at a time, but feel free to pass a list of files using a `nipype.MapNode`.
+
+```python
+from nipype import MapNode, Function
+from narps_open.core.common import remove_file
+
+# Create the MapNode so that the `remove_file` function handles lists of files
+remove_files_node = MapNode(Function(
+    function = remove_file,
+    input_names = ['_', 'file_name'],
+    output_names = []
+    ), name = 'remove_files_node', iterfield = ['file_name'])
+
+# ... A couple of lines later, in the Workflow definition
+test_workflow = Workflow(base_dir = '/path/to/base/dir', name = 'test_workflow')
+test_workflow.connect([
+    # ...
+    # Here we assume the select_node's output `out_files` is a list of files
+    (select_node, remove_files_node, [('out_files', 'file_name')])
+    # ...
+    ])
+```
+
+## narps_open.core.common
+
+This module contains a set of functions that nearly every pipeline could use.
+
+* `remove_file` : remove a file when it is not needed anymore (to save disk space)
+
+```python
+from narps_open.core.common import remove_file
+
+# Remove the /path/to/the/image.nii.gz file.
+# The first parameter is only used to trigger the function when it runs as a Node.
+remove_file(None, '/path/to/the/image.nii.gz')
+```
+
+* `elements_in_string` : return the first input parameter if it contains at least one element of the second parameter (None otherwise).
+
+```python
+from narps_open.core.common import elements_in_string
+
+# Here we test if the file 'sub-001_file.nii.gz' belongs to a group of subjects.
+elements_in_string('sub-001_file.nii.gz', ['005', '006', '007']) # Returns None
+elements_in_string('sub-001_file.nii.gz', ['001', '002', '003']) # Returns 'sub-001_file.nii.gz'
+```
+
+> [!TIP]
+> This can be generalised to a group of files, using a `nipype.MapNode`!
+
+* `clean_list` : remove all elements of the first input parameter (a list) that are equal to the second parameter.
+
+```python
+from narps_open.core.common import clean_list
+
+# Here we remove subject 002 from a group of subjects.
+clean_list(['002', '005', '006', '007'], '002') # Returns ['005', '006', '007']
+```
+
+* `list_intersection` : return the intersection of two lists.
+
+```python
+from narps_open.core.common import list_intersection
+
+# Here we keep only subjects that are in the equalRange group and selected for the analysis.
+equal_range_group = ['002', '004', '006', '008']
+selected_for_analysis = ['002', '006', '010']
+list_intersection(equal_range_group, selected_for_analysis) # Returns ['002', '006']
+```
+
+## narps_open.core.image
+
+This module contains a set of functions dedicated to computations on images.
+
+* `get_voxel_dimensions` : returns the voxel dimensions of an image
+
+```python
+from narps_open.core.image import get_voxel_dimensions
+
+# Get dimensions of voxels along x, y, and z in mm (returns e.g.: [1.0, 1.0, 1.0]).
+get_voxel_dimensions('/path/to/the/image.nii.gz')
+```
diff --git a/narps_open/core/__init__.py b/narps_open/core/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/narps_open/core/common.py b/narps_open/core/common.py
new file mode 100644
index 00000000..e40d4e9a
--- /dev/null
+++ b/narps_open/core/common.py
@@ -0,0 +1,65 @@
+#!/usr/bin/python
+# coding: utf-8
+
+""" Common functions to write pipelines """
+
+def remove_file(_, file_name: str) -> None:
+    """
+    Fully remove files generated by a Node, once they aren't needed anymore.
+    This function is meant to be used in a Nipype Function Node.
+
+    Parameters:
+    - _: input only used for triggering the Node
+    - file_name: str, a single absolute filename of the file to remove
+    """
+    # This import must stay inside the function, as required by Nipype
+    from os import remove
+
+    try:
+        remove(file_name)
+    except OSError as error:
+        print(error)
+
+def elements_in_string(input_str: str, elements: list) -> str: #| None:
+    """
+    Return input_str if it contains one element of the elements list.
+    Return None otherwise.
+    This function is meant to be used in a Nipype Function Node.
+
+    Parameters:
+    - input_str: str
+    - elements: list of str, elements to be searched in input_str
+    """
+    if any(e in input_str for e in elements):
+        return input_str
+    return None
+
+def clean_list(input_list: list, element = None) -> list:
+    """
+    Remove elements of input_list that are equal to element and return the resultant list.
+    This function is meant to be used in a Nipype Function Node. It can be used inside a
+    nipype.Workflow.connect call as well.
+
+    Parameters:
+    - input_list: list
+    - element: any
+
+    Returns:
+    - input_list with elements equal to element removed
+    """
+    return [f for f in input_list if f != element]
+
+def list_intersection(list_1: list, list_2: list) -> list:
+    """
+    Returns the intersection of two lists.
+    This function is meant to be used in a Nipype Function Node. It can be used inside a
+    nipype.Workflow.connect call as well.
+
+    Parameters:
+    - list_1: list
+    - list_2: list
+
+    Returns:
+    - list, the intersection of list_1 and list_2
+    """
+    return [e for e in list_1 if e in list_2]
diff --git a/narps_open/core/image.py b/narps_open/core/image.py
new file mode 100644
index 00000000..35323e45
--- /dev/null
+++ b/narps_open/core/image.py
@@ -0,0 +1,25 @@
+#!/usr/bin/python
+# coding: utf-8
+
+""" Image functions to write pipelines """
+
+def get_voxel_dimensions(image: str) -> list:
+    """
+    Return the voxel dimensions of an image in millimeters.
+
+    Arguments:
+    image: str, string that represents an absolute path to a Nifti image.
+
+    Returns:
+    list, size of the voxels in the image in millimeters.
+    """
+    # This import must stay inside the function, as required by Nipype
+    from nibabel import load
+
+    voxel_dimensions = load(image).header.get_zooms()
+
+    return [
+        float(voxel_dimensions[0]),
+        float(voxel_dimensions[1]),
+        float(voxel_dimensions[2])
+        ]
diff --git a/narps_open/data/participants.py b/narps_open/data/participants.py
index a9cc65a5..835e834f 100644
--- a/narps_open/data/participants.py
+++ b/narps_open/data/participants.py
@@ -49,3 +49,11 @@ def get_participants(team_id: str) -> list:
 def get_participants_subset(nb_participants: int = 108) -> list:
     """ Return a list of participants of length nb_participants """
     return get_all_participants()[0:nb_participants]
+
+def get_group(group_name: str) -> list:
+    """ Return a list containing all the participants inside the group_name group
+
+    Warning : the subject ids are returned as written in the participants file (i.e.: 'sub-*')
+    """
+    participants = get_participants_information()
+    return participants.loc[participants['group'] == group_name]['participant_id'].values.tolist()
diff --git a/tests/core/__init__.py b/tests/core/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/tests/core/test_common.py b/tests/core/test_common.py
new file mode 100644
index 00000000..3e00fd1b
--- /dev/null
+++ b/tests/core/test_common.py
@@ -0,0 +1,319 @@
+#!/usr/bin/python
+# coding: utf-8
+
+""" Tests of the 'narps_open.core.common' module.
+
+Launch this test with PyTest
+
+Usage:
+======
+    pytest -q test_common.py
+    pytest -q test_common.py -k
+"""
+from os import mkdir
+from os.path import join, exists, abspath
+from shutil import rmtree
+from pathlib import Path
+
+from pytest import mark, fixture
+from nipype import Node, Function, Workflow
+
+from narps_open.utils.configuration import Configuration
+import narps_open.core.common as co
+
+TEMPORARY_DIR = join(Configuration()['directories']['test_runs'], 'test_common')
+
+@fixture
+def remove_test_dir():
+    """ A fixture to remove temporary directory created by tests """
+
+    rmtree(TEMPORARY_DIR, ignore_errors = True)
+    mkdir(TEMPORARY_DIR)
+    yield # test runs here
+    rmtree(TEMPORARY_DIR, ignore_errors = True)
+
+class TestCoreCommon:
+    """ A class that contains all the unit tests for the common module."""
+
+    @staticmethod
+    @mark.unit_test
+    def test_remove_file(remove_test_dir):
+        """ Test the remove_file function """
+
+        # Create a single file
+        test_file_path = abspath(join(TEMPORARY_DIR, 'file1.txt'))
+        Path(test_file_path).touch()
+
+        # Check the file exists
+        assert exists(test_file_path)
+
+        # Create a Nipype Node using remove_file
+        test_remove_file_node = Node(Function(
+            function = co.remove_file,
+            input_names = ['_', 'file_name'],
+            output_names = []
+            ), name = 'test_remove_file_node')
+        test_remove_file_node.inputs._ = ''
+        test_remove_file_node.inputs.file_name = test_file_path
+        test_remove_file_node.run()
+
+        # Check the file is removed
+        assert not exists(test_file_path)
+
+    @staticmethod
+    @mark.unit_test
+    def test_node_elements_in_string():
+        """ Test the elements_in_string function as a nipype.Node """
+
+        # Inputs
+        string = 'test_string'
+        elements_false = ['z', 'u', 'warning']
+        elements_true = ['z', 'u', 'warning', '_']
+
+        # Create a Nipype Node using elements_in_string
+        test_node = Node(Function(
+            function = co.elements_in_string,
+            input_names = ['input_str', 'elements'],
+            output_names = ['output']
+            ), name = 'test_node')
+        test_node.inputs.input_str = string
+        test_node.inputs.elements = elements_true
+        out = test_node.run().outputs.output
+
+        # Check return value
+        assert out == string
+
+        # Change input and check return value
+        test_node = Node(Function(
+            function = co.elements_in_string,
+            input_names = ['input_str', 'elements'],
+            output_names = ['output']
+            ), name = 'test_node')
+        test_node.inputs.input_str = string
+        test_node.inputs.elements = elements_false
+        out = test_node.run().outputs.output
+        assert out is None
+
+    @staticmethod
+    @mark.unit_test
+    def test_connect_elements_in_string(remove_test_dir):
+        """ Test the elements_in_string function as evaluated in a connect """
+
+        # Inputs
+        string = 'test_string'
+        elements_false = ['z', 'u', 'warning']
+        elements_true = ['z', 'u', 'warning', '_']
+        function = lambda in_value: in_value
+
+        # Create Nodes
+        node_1 = Node(Function(
+            function = function,
+            input_names = ['in_value'],
+            output_names = ['out_value']
+            ), name = 'node_1')
+        node_1.inputs.in_value = string
+        node_true = Node(Function(
+            function = function,
+            input_names = ['in_value'],
+            output_names = ['out_value']
+            ), name = 'node_true')
+        node_false = Node(Function(
+            function = function,
+            input_names = ['in_value'],
+            output_names = ['out_value']
+            ), name = 'node_false')
+
+        # Create Workflow
+        test_workflow = Workflow(
+            base_dir = TEMPORARY_DIR,
+            name = 'test_workflow'
+            )
+        test_workflow.connect([
+            # elements_in_string is evaluated as part of the connection
+            (node_1, node_true, [(
+                ('out_value', co.elements_in_string, elements_true), 'in_value')]),
+            (node_1, node_false, [(
+                ('out_value', co.elements_in_string, elements_false), 'in_value')])
+            ])
+
+        test_workflow.run()
+
+        test_file_t = join(TEMPORARY_DIR, 'test_workflow', 'node_true', '_report', 'report.rst')
+        with open(test_file_t, 'r', encoding = 'utf-8') as file:
+            assert '* out_value : test_string' in file.read()
+
+        test_file_f = join(TEMPORARY_DIR, 'test_workflow', 'node_false',
+            '_report', 'report.rst')
+        with open(test_file_f, 'r', encoding = 'utf-8') as file:
+            assert '* out_value : None' in file.read()
+
+    @staticmethod
+    @mark.unit_test
+    def test_node_clean_list():
+        """ Test the clean_list function as a nipype.Node """
+
+        # Inputs
+        input_list = ['z', '_', 'u', 'warning', '_', None]
+        element_to_remove_1 = '_'
+        output_list_1 = ['z', 'u', 'warning', None]
+        element_to_remove_2 = None
+        output_list_2 = ['z', '_', 'u', 'warning', '_']
+
+        # Create a Nipype Node using clean_list
+        test_node = Node(Function(
+            function = co.clean_list,
+            input_names = ['input_list', 'element'],
+            output_names = ['output']
+            ), name = 'test_node')
+        test_node.inputs.input_list = input_list
+        test_node.inputs.element = element_to_remove_1
+
+        # Check return value
+        assert test_node.run().outputs.output == output_list_1
+
+        # Change input and check return value
+        test_node = Node(Function(
+            function = co.clean_list,
+            input_names = ['input_list', 'element'],
+            output_names = ['output']
+            ), name = 'test_node')
+        test_node.inputs.input_list = input_list
+        test_node.inputs.element = element_to_remove_2
+
+        assert test_node.run().outputs.output == output_list_2
+
+    @staticmethod
+    @mark.unit_test
+    def test_connect_clean_list(remove_test_dir):
+        """ Test the clean_list function as evaluated in a connect """
+
+        # Inputs
+        input_list = ['z', '_', 'u', 'warning', '_', None]
+        element_to_remove_1 = '_'
+        output_list_1 = ['z', 'u', 'warning', None]
+        element_to_remove_2 = None
+        output_list_2 = ['z', '_', 'u', 'warning', '_']
+        function = lambda in_value: in_value
+
+        # Create Nodes
+        node_0 = Node(Function(
+            function = function,
+            input_names = ['in_value'],
+            output_names = ['out_value']
+            ), name = 'node_0')
+        node_0.inputs.in_value = input_list
+        node_1 = Node(Function(
+            function = function,
+            input_names = ['in_value'],
+            output_names = ['out_value']
+            ), name = 'node_1')
+        node_2 = Node(Function(
+            function = function,
+            input_names = ['in_value'],
+            output_names = ['out_value']
+            ), name = 'node_2')
+
+        # Create Workflow
+        test_workflow = Workflow(
+            base_dir = TEMPORARY_DIR,
+            name = 'test_workflow'
+            )
+        test_workflow.connect([
+            # clean_list is evaluated as part of the connection
+            (node_0, node_1, [(('out_value', co.clean_list, element_to_remove_1), 'in_value')]),
+            (node_0, node_2, [(('out_value', co.clean_list, element_to_remove_2), 'in_value')])
+            ])
+        test_workflow.run()
+
+        test_file_1 = join(TEMPORARY_DIR, 'test_workflow', 'node_1', '_report', 'report.rst')
+        with open(test_file_1, 'r', encoding = 'utf-8') as file:
+            assert f'* out_value : {output_list_1}' in file.read()
+
+        test_file_2 = join(TEMPORARY_DIR, 'test_workflow', 'node_2', '_report', 'report.rst')
+        with open(test_file_2, 'r', encoding = 'utf-8') as file:
+            assert f'* out_value : {output_list_2}' in file.read()
+
+    @staticmethod
+    @mark.unit_test
+    def test_node_list_intersection():
+        """ Test the list_intersection function as a nipype.Node """
+
+        # Inputs / outputs
+        input_list_1 = ['001', '002', '003', '004']
+        input_list_2 = ['002', '004']
+        input_list_3 = ['001', '003', '005']
+        output_list_1 = ['002', '004']
+        output_list_2 = ['001', '003']
+
+        # Create a Nipype Node using list_intersection
+        test_node = Node(Function(
+            function = co.list_intersection,
+            input_names = ['list_1', 'list_2'],
+            output_names = ['output']
+            ), name = 'test_node')
+        test_node.inputs.list_1 = input_list_1
+        test_node.inputs.list_2 = input_list_2
+
+        # Check return value
+        assert test_node.run().outputs.output == output_list_1
+
+        # Change input and check return value
+        test_node = Node(Function(
+            function = co.list_intersection,
+            input_names = ['list_1', 'list_2'],
+            output_names = ['output']
+            ), name = 'test_node')
+        test_node.inputs.list_1 = input_list_1
+        test_node.inputs.list_2 = input_list_3
+
+        assert test_node.run().outputs.output == output_list_2
+
+    @staticmethod
+    @mark.unit_test
+    def test_connect_list_intersection(remove_test_dir):
+        """ Test the list_intersection function as evaluated in a connect """
+
+        # Inputs / outputs
+        input_list_1 = ['001', '002', '003', '004']
+        input_list_2 = ['002', '004']
+        input_list_3 = ['001', '003', '005']
+        output_list_1 = ['002', '004']
+        output_list_2 = ['001', '003']
+        function = lambda in_value: in_value
+
+        # Create Nodes
+        node_0 = Node(Function(
+            function = function,
+            input_names = ['in_value'],
+            output_names = ['out_value']
+            ), name = 'node_0')
+        node_0.inputs.in_value = input_list_1
+        node_1 = Node(Function(
+            function = function,
+            input_names = ['in_value'],
+            output_names = ['out_value']
+            ), name = 'node_1')
+        node_2 = Node(Function(
+            function = function,
+            input_names = ['in_value'],
+            output_names = ['out_value']
+            ), name = 'node_2')
+
+        # Create Workflow
+        test_workflow = Workflow(
+            base_dir = TEMPORARY_DIR,
+            name = 'test_workflow'
+            )
+        test_workflow.connect([
+            # list_intersection is evaluated as part of the connection
+            (node_0, node_1, [(('out_value', co.list_intersection, input_list_2), 'in_value')]),
+            (node_0, node_2, [(('out_value', co.list_intersection, input_list_3), 'in_value')])
+            ])
+        test_workflow.run()
+
+        test_file_1 = join(TEMPORARY_DIR, 'test_workflow', 'node_1', '_report', 'report.rst')
+        with open(test_file_1, 'r', encoding = 'utf-8') as file:
+            assert f'* out_value : {output_list_1}' in file.read()
+
+        test_file_2 = join(TEMPORARY_DIR, 'test_workflow', 'node_2', '_report', 'report.rst')
+        with open(test_file_2, 'r', encoding = 'utf-8') as file:
+            assert f'* out_value : {output_list_2}' in file.read()
diff --git a/tests/core/test_image.py b/tests/core/test_image.py
new file mode 100644
index 00000000..d3b83ac5
--- /dev/null
+++ b/tests/core/test_image.py
@@ -0,0 +1,48 @@
+#!/usr/bin/python
+# coding: utf-8
+
+""" Tests of the 'narps_open.core.image' module.
+
+Launch this test with PyTest
+
+Usage:
+======
+    pytest -q test_image.py
+    pytest -q test_image.py -k
+"""
+
+from os.path import abspath, join
+from numpy import isclose
+
+from pytest import mark
+from nipype import Node, Function
+
+from narps_open.utils.configuration import Configuration
+import narps_open.core.image as im
+
+class TestCoreImage:
+    """ A class that contains all the unit tests for the image module."""
+
+    @staticmethod
+    @mark.unit_test
+    def test_get_voxel_dimensions():
+        """ Test the get_voxel_dimensions function """
+
+        # Path to the test image
+        test_file_path = abspath(join(
+            Configuration()['directories']['test_data'],
+            'core',
+            'image',
+            'test_image.nii.gz'))
+
+        # Create a Nipype Node using get_voxel_dimensions
+        test_get_voxel_dimensions_node = Node(Function(
+            function = im.get_voxel_dimensions,
+            input_names = ['image'],
+            output_names = ['voxel_dimensions']
+            ), name = 'test_get_voxel_dimensions_node')
+        test_get_voxel_dimensions_node.inputs.image = test_file_path
+        outputs = test_get_voxel_dimensions_node.run().outputs
+
+        # Check voxel sizes
+        assert isclose(outputs.voxel_dimensions, [8.0, 8.0, 9.6]).all()
diff --git a/tests/data/test_participants.py b/tests/data/test_participants.py
index d30cd23e..f36f0a05 100644
--- a/tests/data/test_participants.py
+++ b/tests/data/test_participants.py
@@ -10,28 +10,46 @@
     pytest -q test_participants.py
     pytest -q test_participants.py -k
 """
+from os.path import join
 
-from pytest import mark
+from pytest import mark, fixture
 
 import narps_open.data.participants as part
+from narps_open.utils.configuration import Configuration
+
+@fixture
+def mock_participants_data(mocker):
+    """ A fixture to provide mocked data from the test_data directory """
+
+    mocker.patch(
+        'narps_open.data.participants.Configuration',
+        return_value = {
+            'directories': {
+                'dataset': join(
+                    Configuration()['directories']['test_data'],
+                    'data', 'participants')
+            }
+        }
+    )
 
 class TestParticipants:
     """ A class that contains all the unit tests for the participants module."""
 
     @staticmethod
     @mark.unit_test
-    def test_get_participants_information():
+    def test_get_participants_information(mock_participants_data):
         """ Test the get_participants_information function """
-        """p_info = part.get_participants_information()
-        assert len(p_info) == 108
-        assert p_info.at[5, 'participant_id'] == 'sub-006'
-        assert p_info.at[5, 'group'] == 'equalRange'
-        assert p_info.at[5, 'gender'] == 'M'
-        assert p_info.at[5, 'age'] == 30
-        assert p_info.at[12, 'participant_id'] == 'sub-015'
-        assert p_info.at[12, 'group'] == 'equalIndifference'
-        assert p_info.at[12, 'gender'] == 'F'
-        assert p_info.at[12, 'age'] == 26"""
+
+        p_info = part.get_participants_information()
+        assert len(p_info) == 4
+        assert p_info.at[1, 'participant_id'] == 'sub-002'
+        assert p_info.at[1, 'group'] == 'equalRange'
+        assert p_info.at[1, 'gender'] == 'M'
+        assert p_info.at[1, 'age'] == 25
+        assert p_info.at[2, 'participant_id'] == 'sub-003'
+        assert p_info.at[2, 'group'] == 'equalIndifference'
+        assert p_info.at[2, 'gender'] == 'F'
+        assert p_info.at[2, 'age'] == 27
 
     @staticmethod
     @mark.unit_test
@@ -87,3 +105,12 @@ def test_get_participants_subset():
         assert len(participants_list) == 80
         assert participants_list[0] == '020'
         assert participants_list[-1] == '003'
+
+    @staticmethod
+    @mark.unit_test
+    def test_get_group(mock_participants_data):
+        """ Test the get_group function """
+
+        assert part.get_group('') == []
+        assert part.get_group('equalRange') == ['sub-002', 'sub-004']
+        assert part.get_group('equalIndifference') == ['sub-001', 'sub-003']
diff --git a/tests/test_data/core/image/test_image.nii.gz b/tests/test_data/core/image/test_image.nii.gz
new file mode 100644
index 0000000000000000000000000000000000000000..06fb9aa32a92ef81f03316e4327804b20738c685
GIT binary patch
literal 13326
zcmbVzRZtvI*CbBx1a}Ay!QC}LfFQx$9R>y&oFPDPhv4oyxVr>*cXuD$VZVQCtG2#}