From 6986e90351bca4071ea61387bee1f52f1125aceb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Thu, 31 Aug 2023 14:35:10 +0200 Subject: [PATCH 01/15] [BUG] inside unit_tests workflow --- .github/workflows/unit_tests.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/unit_tests.yml b/.github/workflows/unit_tests.yml index 20f20ea3..d0097882 100644 --- a/.github/workflows/unit_tests.yml +++ b/.github/workflows/unit_tests.yml @@ -34,7 +34,7 @@ jobs: - name: Checkout repository uses: actions/checkout@v3 - - name: Load configuration for self-hosted runner + - name: Load configuration for self-hosted runner run: cp /home/neuro/local_testing_config.toml narps_open/utils/configuration/testing_config.toml - name: Install dependencies From 671595011cc613e419a5f5b51559fc39cc489b80 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Fri, 22 Sep 2023 16:05:41 +0200 Subject: [PATCH 02/15] [DOC] fix some broken links --- docs/ci-cd.md | 2 +- docs/data.md | 2 +- docs/pipelines.md | 4 ++-- docs/running.md | 2 +- docs/testing.md | 2 +- 5 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/ci-cd.md b/docs/ci-cd.md index c292eed1..ad3f33bc 100644 --- a/docs/ci-cd.md +++ b/docs/ci-cd.md @@ -35,7 +35,7 @@ For now, the following workflows are set up: | Name / File | What does it do ? | When is it launched ? | Where does it run ? | How can I see the results ? | | ----------- | ----------- | ----------- | ----------- | ----------- | | [code_quality](/.github/workflows/code_quality.yml) | A static analysis of the python code (see the [testing](/docs/testing.md) topic of the documentation for more information). | For every push or pull_request if there are changes on `.py` files. | On GitHub servers. | Outputs (logs of pylint) are stored as [downloadable artifacts](https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts) during 15 days after the push. | -| [codespell](/.github/workflows/codespell.yml) | A static analysis of the text files for commonly made typos using [codespell](codespell-project/codespell: check code for common misspellings). | For every push or pull_request to the `maint` branch. | On GitHub servers. | Outputs (logs of codespell) are stored as [downloadable artifacts](https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts) during 15 days after the push. | +| [codespell](/.github/workflows/codespell.yml) | A static analysis of the text files for commonly made typos using [codespell](https://github.com/codespell-project/codespell). | For every push or pull_request to the `maint` branch. | On GitHub servers. | Outputs (logs of codespell) are stored as [downloadable artifacts](https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts) during 15 days after the push. | | [pipeline_tests](/.github/workflows/pipelines.yml) | Runs all the tests for changed pipelines. | For every push or pull_request, if a pipeline file changed. | On Empenn runners. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. | | [test_changes](/.github/workflows/test_changes.yml) | It runs all the changed tests for the project. | For every push or pull_request, if a test file changed. | On Empenn runners. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. 
| | [unit_testing](/.github/workflows/unit_testing.yml) | It runs all the unit tests for the project (see the [testing](/docs/testing.md) topic of the documentation for more information). | For every push or pull_request, if a file changed inside `narps_open/`, or a file related to test execution. | On GitHub servers. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. | diff --git a/docs/data.md b/docs/data.md index 1e6b4fc3..edf5a757 100644 --- a/docs/data.md +++ b/docs/data.md @@ -2,7 +2,7 @@ The datasets used for the project can be downloaded using one of the two options below. -The path to these datasets must conform with the information located in the configuration file you plan to use (cf. [documentation about configuration](docs/configuration.md)). By default, these paths are in the repository: +The path to these datasets must conform with the information located in the configuration file you plan to use (cf. [documentation about configuration](/docs/configuration.md)). By default, these paths are in the repository: * `data/original/`: original data from NARPS * `data/results/`: results from NARPS teams diff --git a/docs/pipelines.md b/docs/pipelines.md index fb7d2afc..db60c831 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -126,6 +126,6 @@ As explained before, all pipeline inherit from the `narps_open.pipelines.Pipelin ## Test your pipeline -First have a look at the [testing topic of the documentation](./docs/testing.md). It explains how testing works for inside project and how you should write the tests related to your pipeline. +First have a look at the [testing topic of the documentation](/docs/testing.md). It explains how testing works for inside project and how you should write the tests related to your pipeline. -Feel free to have a look at [tests/pipelines/test_team_2T6S.py](./tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives a good example. +Feel free to have a look at [tests/pipelines/test_team_2T6S.py](/tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives a good example. diff --git a/docs/running.md b/docs/running.md index 6344c042..6bc15647 100644 --- a/docs/running.md +++ b/docs/running.md @@ -61,4 +61,4 @@ python narps_open/runner.py -t 2T6S -r 4 -f python narps_open/runner.py -t 2T6S -r 4 -f -c # Check the output files without launching the runner ``` -In this usecase, the paths where to store the outputs and to the dataset are picked by the runner from the [configuration](docs/configuration.md). +In this usecase, the paths where to store the outputs and to the dataset are picked by the runner from the [configuration](/docs/configuration.md). 
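For contributors who would rather stay in Python than use the command line, the same run can presumably be launched programmatically. The snippet below is only a sketch: it assumes a `PipelineRunner` class in `narps_open.runner` exposing `subjects` and `start()`, which is not shown in this excerpt and may differ from the actual interface in `narps_open/runner.py`.

```python
# Hypothetical sketch of launching a pipeline from Python instead of the CLI.
# The PipelineRunner interface (class name, constructor argument, attributes)
# is an assumption here; check narps_open/runner.py for the real API.
from narps_open.runner import PipelineRunner

runner = PipelineRunner(team_id='2T6S')          # team whose pipeline to execute
runner.subjects = ['020', '021', '022', '023']   # subset of subjects to analyse
runner.start()  # dataset and output paths are read from the configuration file
```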
diff --git a/docs/testing.md b/docs/testing.md index 2bd96584..5294ea9b 100644 --- a/docs/testing.md +++ b/docs/testing.md @@ -55,7 +55,7 @@ Use pytest [markers](https://docs.pytest.org/en/7.1.x/example/markers.html) to i | Type of test | marker | Description | | ----------- | ----------- | ----------- | | unit tests | `unit_test` | Unitary test a method/function | -| pipeline tests | `pieline_test` | Compute a whole pipeline and check its outputs are close enough with the team's results | +| pipeline tests | `pipeline_test` | Compute a whole pipeline and check its outputs are close enough with the team's results | ## Save time by downsampling data From 57b8c86d289f79c0a29c96b0c5ec1b1d9b0f3a5f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Fri, 22 Sep 2023 16:52:22 +0200 Subject: [PATCH 03/15] [DOC] adding template for pipeline testing --- docs/ci-cd.md | 4 +- docs/pipelines.md | 7 +- tests/pipelines/templates/test_team_XXXX.py | 101 ++++++++++++++++++++ 3 files changed, 108 insertions(+), 4 deletions(-) create mode 100644 tests/pipelines/templates/test_team_XXXX.py diff --git a/docs/ci-cd.md b/docs/ci-cd.md index ad3f33bc..92175866 100644 --- a/docs/ci-cd.md +++ b/docs/ci-cd.md @@ -35,10 +35,10 @@ For now, the following workflows are set up: | Name / File | What does it do ? | When is it launched ? | Where does it run ? | How can I see the results ? | | ----------- | ----------- | ----------- | ----------- | ----------- | | [code_quality](/.github/workflows/code_quality.yml) | A static analysis of the python code (see the [testing](/docs/testing.md) topic of the documentation for more information). | For every push or pull_request if there are changes on `.py` files. | On GitHub servers. | Outputs (logs of pylint) are stored as [downloadable artifacts](https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts) during 15 days after the push. | -| [codespell](/.github/workflows/codespell.yml) | A static analysis of the text files for commonly made typos using [codespell](https://github.com/codespell-project/codespell). | For every push or pull_request to the `maint` branch. | On GitHub servers. | Outputs (logs of codespell) are stored as [downloadable artifacts](https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts) during 15 days after the push. | +| [codespell](/.github/workflows/codespell.yml) | A static analysis of the text files for commonly made typos using [codespell](https://github.com/codespell-project/codespell). | For every push or pull_request to the `main` branch. | On GitHub servers. | Typos are displayed in the workflow summary. | | [pipeline_tests](/.github/workflows/pipelines.yml) | Runs all the tests for changed pipelines. | For every push or pull_request, if a pipeline file changed. | On Empenn runners. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. | | [test_changes](/.github/workflows/test_changes.yml) | It runs all the changed tests for the project. | For every push or pull_request, if a test file changed. | On Empenn runners. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. | -| [unit_testing](/.github/workflows/unit_testing.yml) | It runs all the unit tests for the project (see the [testing](/docs/testing.md) topic of the documentation for more information). | For every push or pull_request, if a file changed inside `narps_open/`, or a file related to test execution. | On GitHub servers. 
| Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. | +| [unit_testing](/.github/workflows/unit_testing.yml) | It runs all the unit tests for the project (see the [testing](/docs/testing.md) topic of the documentation for more information). | For every push or pull_request, if a file changed inside `narps_open/`, or a file related to test execution. | On Empenn runners. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. | ### Cache diff --git a/docs/pipelines.md b/docs/pipelines.md index db60c831..414533d4 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -126,6 +126,9 @@ As explained before, all pipeline inherit from the `narps_open.pipelines.Pipelin ## Test your pipeline -First have a look at the [testing topic of the documentation](/docs/testing.md). It explains how testing works for inside project and how you should write the tests related to your pipeline. +First have a look at the [testing page of the documentation](/docs/testing.md). It explains how testing works for the project and how you should write the tests related to your pipeline. -Feel free to have a look at [tests/pipelines/test_team_2T6S.py](/tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives a good example. +All tests must be contained in a single file named `tests/pipelines/test_team_.py`. You can start by copy-pasting the template file : [tests/pipelines/templates/test_team_XXXX.py](/tests/pipelines/templates/test_team_XXXX.py) inside the `tests/pipelines/` directory, replacing `XXXX` with the team id. Then, follow the tips inside the template and don't forget to replace `XXXX` with the actual team id, inside the document as well. + +> [!NOTE] +> Feel free to have a look at [tests/pipelines/test_team_2T6S.py](/tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives an example. diff --git a/tests/pipelines/templates/test_team_XXXX.py b/tests/pipelines/templates/test_team_XXXX.py new file mode 100644 index 00000000..9946a070 --- /dev/null +++ b/tests/pipelines/templates/test_team_XXXX.py @@ -0,0 +1,101 @@ +#!/usr/bin/python +# coding: utf-8 + +""" This template can be use to test a pipeline. + + - Replace all occurrences of XXXX by the actual id of the team. + - All lines starting with [INFO], are meant to help you during the reproduction, these can be removed + eventually. + - Also remove lines starting with [TODO], once you did what they suggested. + - Remove this docstring once you are done with coding the tests. +""" + +""" Tests of the 'narps_open.pipelines.team_XXXX' module. + +Launch this test with PyTest + +Usage: +====== + pytest -q test_team_XXXX.py + pytest -q test_team_XXXX.py -k +""" + +# [INFO] About these imports : +# [INFO] - pytest.helpers allows to use the helpers registered in tests/conftest.py +# [INFO] - pytest.mark allows to categorize tests as unitary or pipeline tests +from pytest import helpers, mark + +from nipype import Workflow + +# [INFO] Of course, import the class you want to test, here the Pipeline class for the team XXXX +from narps_open.pipelines.team_XXXX import PipelineTeamXXXX + +# [INFO] All tests should be contained in the following class, in order to sort them. 
+class TestPipelinesTeamXXXX: + """ A class that contains all the unit tests for the PipelineTeamXXXX class.""" + + # [TODO] Write one or several unit_test (and mark them as such) + # [TODO] ideally for each method of the class you test. + + # [INFO] Here is one example for the __init__() method + @staticmethod + @mark.unit_test + def test_create(): + """ Test the creation of a PipelineTeamXXXX object """ + + pipeline = PipelineTeamXXXX() + assert pipeline.fwhm == 8.0 + assert pipeline.team_id == 'XXXX' + + # [INFO] Here is one example for the methods returning workflows + @staticmethod + @mark.unit_test + def test_workflows(): + """ Test the workflows of a PipelineTeamXXXX object """ + + pipeline = PipelineTeamXXXX() + assert pipeline.get_preprocessing() is None + assert pipeline.get_run_level_analysis() is None + assert isinstance(pipeline.get_subject_level_analysis(), Workflow) + group_level = pipeline.get_group_level_analysis() + + assert len(group_level) == 3 + for sub_workflow in group_level: + assert isinstance(sub_workflow, Workflow) + + # [INFO] Here is one example for the methods returning outputs + @staticmethod + @mark.unit_test + def test_outputs(): + """ Test the expected outputs of a PipelineTeamXXXX object """ + pipeline = PipelineTeamXXXX() + + # 1 - 1 subject outputs + pipeline.subject_list = ['001'] + assert len(pipeline.get_preprocessing_outputs()) == 0 + assert len(pipeline.get_run_level_outputs()) == 0 + assert len(pipeline.get_subject_level_outputs()) == 7 + assert len(pipeline.get_group_level_outputs()) == 63 + assert len(pipeline.get_hypotheses_outputs()) == 18 + + # 2 - 4 subjects outputs + pipeline.subject_list = ['001', '002', '003', '004'] + assert len(pipeline.get_preprocessing_outputs()) == 0 + assert len(pipeline.get_run_level_outputs()) == 0 + assert len(pipeline.get_subject_level_outputs()) == 28 + assert len(pipeline.get_group_level_outputs()) == 63 + assert len(pipeline.get_hypotheses_outputs()) == 18 + + # [TODO] Feel free to add other methods, e.g. to test the custom node functions of the pipeline + + # [TODO] Write one pipeline_test (and mark it as such) + + # [INFO] The pipeline_test will most likely be exactly written this way : + @staticmethod + @mark.pipeline_test + def test_execution(): + """ Test the execution of a PipelineTeamXXXX and compare results """ + + # [INFO] We use the `test_pipeline_evaluation` helper which is responsible for running the + # [INFO] pipeline, iterating over subjects and comparing output with expected results. + helpers.test_pipeline_evaluation('XXXX') From 2c891c27ec628634fe575950a20d16cf23a8c3d6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Fri, 22 Sep 2023 17:10:20 +0200 Subject: [PATCH 04/15] [DOC] adding template for pipeline testing --- docs/pipelines.md | 6 ++++-- tests/conftest.py | 5 +++++ tests/pipelines/templates/test_team_XXXX.py | 7 ++++--- 3 files changed, 13 insertions(+), 5 deletions(-) diff --git a/docs/pipelines.md b/docs/pipelines.md index 414533d4..226204c5 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -5,6 +5,7 @@ Here are a few principles you should know before creating a pipeline. Further in Please apply these principles in the following order. ## Create a file containing the pipeline + The pipeline must be contained in a single file named `narps_open/pipelines/team_.py`. ## Inherit from `Pipeline` @@ -89,7 +90,8 @@ def get_group_level_outputs(self): """ Return the names of the files the group level analysis is supposed to generate. 
""" ``` -:warning: Do not declare the method if no files are generated by the corresponding step. For example, if no preprocessing was done by the team, the `get_preprocessing_outputs` method must not be implemented. +> [!WARNING] +> Do not declare the method if no files are generated by the corresponding step. For example, if no preprocessing was done by the team, the `get_preprocessing_outputs` method must not be implemented. You should use other pipeline attributes to generate the lists of outputs dynamically. E.g.: @@ -128,7 +130,7 @@ As explained before, all pipeline inherit from the `narps_open.pipelines.Pipelin First have a look at the [testing page of the documentation](/docs/testing.md). It explains how testing works for the project and how you should write the tests related to your pipeline. -All tests must be contained in a single file named `tests/pipelines/test_team_.py`. You can start by copy-pasting the template file : [tests/pipelines/templates/test_team_XXXX.py](/tests/pipelines/templates/test_team_XXXX.py) inside the `tests/pipelines/` directory, replacing `XXXX` with the team id. Then, follow the tips inside the template and don't forget to replace `XXXX` with the actual team id, inside the document as well. +All tests must be contained in a single file named `tests/pipelines/test_team_.py`. You can start by copy-pasting the template file : [tests/pipelines/templates/test_team_XXXX.py](/tests/pipelines/templates/test_team_XXXX.py) inside the `tests/pipelines/` directory, replacing `XXXX` with the team id. Then, follow the instructions and tips inside the template and don't forget to replace `XXXX` with the actual team id, inside the document as well. > [!NOTE] > Feel free to have a look at [tests/pipelines/test_team_2T6S.py](/tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives an example. diff --git a/tests/conftest.py b/tests/conftest.py index e1530e48..a30315af 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -18,6 +18,11 @@ from narps_open.utils.configuration import Configuration from narps_open.data.results import ResultsCollection +# A list of test files to be ignored +collect_ignore = [ + 'tests/pipelines/templates/test_team_XXXX.py' # test template + ] + # Init configuration, to ensure it is in testing mode Configuration(config_type='testing') diff --git a/tests/pipelines/templates/test_team_XXXX.py b/tests/pipelines/templates/test_team_XXXX.py index 9946a070..0c3683fd 100644 --- a/tests/pipelines/templates/test_team_XXXX.py +++ b/tests/pipelines/templates/test_team_XXXX.py @@ -4,8 +4,8 @@ """ This template can be use to test a pipeline. - Replace all occurrences of XXXX by the actual id of the team. - - All lines starting with [INFO], are meant to help you during the reproduction, these can be removed - eventually. + - All lines starting with [INFO], are meant to help you during the reproduction, + these can be removed eventually. - Also remove lines starting with [TODO], once you did what they suggested. - Remove this docstring once you are done with coding the tests. 
""" @@ -25,6 +25,7 @@ # [INFO] - pytest.mark allows to categorize tests as unitary or pipeline tests from pytest import helpers, mark +# [INFO] Only for type testing from nipype import Workflow # [INFO] Of course, import the class you want to test, here the Pipeline class for the team XXXX @@ -36,7 +37,7 @@ class TestPipelinesTeamXXXX: # [TODO] Write one or several unit_test (and mark them as such) # [TODO] ideally for each method of the class you test. - + # [INFO] Here is one example for the __init__() method @staticmethod @mark.unit_test From 552e18cb6c3fd203dddbb4ba4020ac245238ceca Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Mon, 25 Sep 2023 11:36:32 +0200 Subject: [PATCH 05/15] About implemented_pipelines --- docs/pipelines.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/docs/pipelines.md b/docs/pipelines.md index 226204c5..ed233579 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -126,6 +126,22 @@ As explained before, all pipeline inherit from the `narps_open.pipelines.Pipelin * `fwhm` : full width at half maximum for the smoothing kernel (in mm) : * `tr` : repetition time of the fMRI acquisition (equals 1.0s) +## Set your pipeline as implemented + +Inside `narps_open/pipelines/__init__.py`, set the pipeline as implemented. I.e.: if the pipeline you reproduce is 2T6S, update the line : + +```python + '2T6S': None, +``` + +with : + +```python + '2T6S': 'PipelineTeam2T6S', +``` + +inside the `implemented_pipelines` dictionary. + ## Test your pipeline First have a look at the [testing page of the documentation](/docs/testing.md). It explains how testing works for the project and how you should write the tests related to your pipeline. From b6f21f490158f9eb2793c73882ceff2ba5e415eb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Mon, 25 Sep 2023 13:57:58 +0200 Subject: [PATCH 06/15] Deal with test template --- docs/pipelines.md | 2 +- .../pipelines/templates/{test_team_XXXX.py => template_test.py} | 0 2 files changed, 1 insertion(+), 1 deletion(-) rename tests/pipelines/templates/{test_team_XXXX.py => template_test.py} (100%) diff --git a/docs/pipelines.md b/docs/pipelines.md index ed233579..d59f9a9e 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -146,7 +146,7 @@ inside the `implemented_pipelines` dictionary. First have a look at the [testing page of the documentation](/docs/testing.md). It explains how testing works for the project and how you should write the tests related to your pipeline. -All tests must be contained in a single file named `tests/pipelines/test_team_.py`. You can start by copy-pasting the template file : [tests/pipelines/templates/test_team_XXXX.py](/tests/pipelines/templates/test_team_XXXX.py) inside the `tests/pipelines/` directory, replacing `XXXX` with the team id. Then, follow the instructions and tips inside the template and don't forget to replace `XXXX` with the actual team id, inside the document as well. +All tests must be contained in a single file named `tests/pipelines/test_team_.py`. You can start by copy-pasting the template file : [tests/pipelines/templates/template_test.py](/tests/pipelines/templates/template_test.py) inside the `tests/pipelines/` directory, renaming it accordingly. Then, follow the instructions and tips inside the template and don't forget to replace `XXXX` with the actual team id, inside the document. 
> [!NOTE] > Feel free to have a look at [tests/pipelines/test_team_2T6S.py](/tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives an example. diff --git a/tests/pipelines/templates/test_team_XXXX.py b/tests/pipelines/templates/template_test.py similarity index 100% rename from tests/pipelines/templates/test_team_XXXX.py rename to tests/pipelines/templates/template_test.py From 0436fe40900c89c6e655d813a5338ac3928914a2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Fri, 13 Oct 2023 12:01:58 +0200 Subject: [PATCH 07/15] [DOC] new readme for the doc --- docs/README.md | 23 +++++++++++++---------- 1 file changed, 13 insertions(+), 10 deletions(-) diff --git a/docs/README.md b/docs/README.md index 8c4fd662..f2c9a77e 100644 --- a/docs/README.md +++ b/docs/README.md @@ -2,13 +2,16 @@ :mega: This is the starting point for the documentation of the NARPS open pipelines project. -Here are the available topics : - -* :runner: [running](/docs/running.md) tells you how to run pipelines in NARPS open pipelines -* :brain: [data](/docs/data.md) contains instructions to handle the data needed by the project -* :hammer_and_wrench: [environment](/docs/environment.md) contains instructions to handle the software environment needed by the project -* :goggles: [description](/docs/description.md) tells you how to get convenient descriptions of the pipelines, as written by the teams involved in NARPS. -* :microscope: [testing](/docs/testing.md) details the testing features of the project, i.e.: how is the code tested ? -* :package: [ci-cd](/docs/ci-cd.md) contains the information on how continuous integration and delivery (knowned as CI/CD) is set up. -* :writing_hand: [pipeline](/docs/pipelines.md) tells you all you need to know in order to write pipelines -* :vertical_traffic_light: [status](/docs/status.md) contains the information on how to get the work progress status for a pipeline. +## Use the project +* :brain: [data](/docs/data.md) - handle the data needed by the project +* :hammer_and_wrench: [environment](/docs/environment.md) - handle the software environment needed by the project +* :rocket: [running](/docs/running.md) - launch pipelines in NARPS open pipelines + +## Contribute to the code +* :goggles: [description](/docs/description.md) - conveniently access descriptions of the pipelines, as written by the teams involved in NARPS. +* :writing_hand: [pipeline](/docs/pipelines.md) - how to write pipelines. + +## Main +* :vertical_traffic_light: [status](/docs/status.md) - work progress status for a pipeline. +* :microscope: [testing](/docs/testing.md) - testing features of the project, i.e.: how is the code tested ? +* :package: [ci-cd](/docs/ci-cd.md) - how continuous integration and delivery (knowned as CI/CD) is set up. From bfcf3dda05a8546b10f19f92badcc8d879fe52b9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Fri, 13 Oct 2023 13:21:07 +0200 Subject: [PATCH 08/15] Changes in README.md --- README.md | 2 +- docs/README.md | 10 +++++----- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index 20125d83..5a7505a0 100644 --- a/README.md +++ b/README.md @@ -43,7 +43,7 @@ We also created a [shared spreadsheet](https://docs.google.com/spreadsheets/d/1F - :snake: :package: `narps_open/` contains the Python package with all the pipelines logic. - :brain: `data/` contains data that is used by the pipelines, as well as the (intermediate or final) results data. 
Instructions to download data are available in [INSTALL.md](/INSTALL.md#data-download-instructions). -- :blue_book: `docs/` contains the documentation for the project. Start browsing it with the entry point [docs/README.md](/docs/README.md) +- :blue_book: `docs/` contains the documentation for the project. Start browsing it [here](/docs/README.md) ! - :orange_book: `examples/` contains notebooks examples to launch of the reproduced pipelines. - :microscope: `tests/` contains the tests of the narps_open package. diff --git a/docs/README.md b/docs/README.md index f2c9a77e..55112428 100644 --- a/docs/README.md +++ b/docs/README.md @@ -3,15 +3,15 @@ :mega: This is the starting point for the documentation of the NARPS open pipelines project. ## Use the project -* :brain: [data](/docs/data.md) - handle the data needed by the project -* :hammer_and_wrench: [environment](/docs/environment.md) - handle the software environment needed by the project -* :rocket: [running](/docs/running.md) - launch pipelines in NARPS open pipelines +* :brain: [data](/docs/data.md) - handle the needed data +* :hammer_and_wrench: [environment](/docs/environment.md) - handle the software environment +* :rocket: [running](/docs/running.md) - launch pipelines ## Contribute to the code * :goggles: [description](/docs/description.md) - conveniently access descriptions of the pipelines, as written by the teams involved in NARPS. * :writing_hand: [pipeline](/docs/pipelines.md) - how to write pipelines. +* :microscope: [testing](/docs/testing.md) - testing features of the project, i.e.: how is the code tested ? -## Main +## For maintainers * :vertical_traffic_light: [status](/docs/status.md) - work progress status for a pipeline. -* :microscope: [testing](/docs/testing.md) - testing features of the project, i.e.: how is the code tested ? * :package: [ci-cd](/docs/ci-cd.md) - how continuous integration and delivery (knowned as CI/CD) is set up. From d212e1d8aa521c5103ddeb7de5bd5426b3e25092 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Fri, 13 Oct 2023 14:06:14 +0200 Subject: [PATCH 09/15] [DOC] slight changes to docs/README.md --- docs/README.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/README.md b/docs/README.md index 55112428..32d33986 100644 --- a/docs/README.md +++ b/docs/README.md @@ -3,15 +3,15 @@ :mega: This is the starting point for the documentation of the NARPS open pipelines project. ## Use the project -* :brain: [data](/docs/data.md) - handle the needed data -* :hammer_and_wrench: [environment](/docs/environment.md) - handle the software environment -* :rocket: [running](/docs/running.md) - launch pipelines +* :brain: [data](/docs/data.md) - Handle the needed data. +* :hammer_and_wrench: [environment](/docs/environment.md) - Handle the software environment. +* :rocket: [running](/docs/running.md) - Launch pipelines ! ## Contribute to the code -* :goggles: [description](/docs/description.md) - conveniently access descriptions of the pipelines, as written by the teams involved in NARPS. -* :writing_hand: [pipeline](/docs/pipelines.md) - how to write pipelines. -* :microscope: [testing](/docs/testing.md) - testing features of the project, i.e.: how is the code tested ? +* :goggles: [description](/docs/description.md) - Conveniently access descriptions of the pipelines, as written by the teams involved in NARPS. +* :writing_hand: [pipelines](/docs/pipelines.md) - How to write pipelines. 
+* :microscope: [testing](/docs/testing.md) - How to test the code. ## For maintainers -* :vertical_traffic_light: [status](/docs/status.md) - work progress status for a pipeline. -* :package: [ci-cd](/docs/ci-cd.md) - how continuous integration and delivery (knowned as CI/CD) is set up. +* :vertical_traffic_light: [status](/docs/status.md) - Work progress status for pipelines. +* :package: [ci-cd](/docs/ci-cd.md) - Continuous Integration and Delivery (a.k.a. CI/CD). From 29870d5243ddfc31b9c88283912536ef3e47c02d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Thu, 30 Nov 2023 15:30:40 +0100 Subject: [PATCH 10/15] Add links to past events --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index e008bf53..ab933bc5 100644 --- a/README.md +++ b/README.md @@ -50,8 +50,8 @@ This project is supported by Région Bretagne (Boost MIND) and by Inria (Explora This project is developed in the Empenn team by Boris Clenet, Elodie Germani, Jeremy Lefort-Besnard and Camille Maumet with contributions by Rémi Gau. In addition, this project was presented and received contributions during the following events: - - OHBM Brainhack 2022 (June 2022): Elodie Germani, Arshitha Basavaraj, Trang Cao, Rémi Gau, Anna Menacher, Camille Maumet. - - e-ReproNim FENS NENS Cluster Brainhack (June 2023) : Liz Bushby, Boris Clénet, Michael Dayan, Aimee Westbrook. + - [OHBM Brainhack 2022](https://ohbm.github.io/hackathon2022/) (June 2022): Elodie Germani, Arshitha Basavaraj, Trang Cao, Rémi Gau, Anna Menacher, Camille Maumet. + - [e-ReproNim FENS NENS Cluster Brainhack](https://repro.school/2023-e-repronim-brainhack/) (June 2023) : Liz Bushby, Boris Clénet, Michael Dayan, Aimee Westbrook. - [OHBM Brainhack 2023](https://ohbm.github.io/hackathon2023/) (July 2023): Arshitha Basavaraj, Boris Clénet, Rémi Gau, Élodie Germani, Yaroslav Halchenko, Camille Maumet, Paul Taylor. - [ORIGAMI lab](https://neurodatascience.github.io/) hackathon (September 2023): - [Brainhack Marseille 2023](https://brainhack-marseille.github.io/) (December 2023): From e4f369dc6963a98696bbdad3331bc1b0e04d0179 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Thu, 30 Nov 2023 15:51:49 +0100 Subject: [PATCH 11/15] Changes in readme.md --- README.md | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/README.md b/README.md index ab933bc5..7172e25b 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# The NARPS Open Pipelines project +# NARPS Open Pipelines

@@ -23,16 +23,17 @@ We base our reproductions on the [original descriptions provided by the teams](h ## Contributing -NARPS open pipelines uses [nipype](https://nipype.readthedocs.io/en/latest/index.html) as a workflow manager and provides a series of templates and examples to help reproduce the different teams’ analyses. +There are many ways you can contribute 🤗 :wave: Any help is welcome ! -There are many ways you can contribute 🤗 :wave: Any help is welcome ! Follow the guidelines in [CONTRIBUTING.md](/CONTRIBUTING.md) if you wish to get involved ! +NARPS Open Pipelines uses [nipype](https://nipype.readthedocs.io/en/latest/index.html) as a workflow manager and provides a series of templates and examples to help reproducing the different teams’ analyses. Nevertheless knowing Python or Nipype is not required to take part in the project. -### Installation +Follow the guidelines in [CONTRIBUTING.md](/CONTRIBUTING.md) if you wish to get involved ! -To get the pipelines running, please follow the installation steps in [INSTALL.md](/INSTALL.md) +## Using the codebase -## Getting started -If you are interested in using the codebase to run the pipelines, see the [user documentation (work-in-progress)]. +To get the pipelines running, please follow the installation steps in [INSTALL.md](/INSTALL.md). + +If you are interested in using the codebase, see the user documentation in [docs](/docs/) (work-in-progress). ## References From 142f89ca6a761f2ee2c472a600d60d139b08bdac Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Thu, 30 Nov 2023 17:19:44 +0100 Subject: [PATCH 12/15] fMRI trail --- CONTRIBUTING.md | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 7429acdd..30253929 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -23,11 +23,19 @@ Feel free to have a look to the following pipelines, these are examples : | team_id | softwares | fmriprep used ? | pipeline file | | --- | --- | --- | --- | | 2T6S | SPM | Yes | [/narps_open/pipelines/team_2T6S.py](/narps_open/pipelines/team_2T6S.py) | -| X19V | FSL | Yes | [/narps_open/pipelines/team_X19V.py](/narps_open/pipelines/team_2T6S.py) | +| X19V | FSL | Yes | [/narps_open/pipelines/team_X19V.py](/narps_open/pipelines/team_X19V.py) | ## 👩‍🎤 fMRI software trail -... +From the description provided by the team you chose, perform the analysis on the associated software to get as many log / configuration files / as possible from the execution. + +Complementary hints on the process would definitely be , to description + +Especially these files contain valuable information about model desing: +* for FSL pipelines, `design.fsf` setup files coming from FEAT. +* for SPM pipelines, + +spm matlabbatch ## Find or propose an issue :clipboard: Issues are very important for this project. If you want to contribute, you can either **comment an existing issue** or **proposing a new issue**. 
From 7b7fb8922facf50fea6780c3c753f09bee23ef32 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 6 Dec 2023 11:05:48 +0100 Subject: [PATCH 13/15] Adding trail description in contribution guide --- CONTRIBUTING.md | 12 ++++-------- 1 file changed, 4 insertions(+), 8 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 30253929..1cffca90 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -27,15 +27,11 @@ Feel free to have a look to the following pipelines, these are examples : ## 👩‍🎤 fMRI software trail -From the description provided by the team you chose, perform the analysis on the associated software to get as many log / configuration files / as possible from the execution. +From the description provided by the team you chose, perform the analysis on the associated software to get as many metadata (log, configuration files, and other relevant files for reproducibility) as possible from the execution. Complementary hints on the process would definitely be welcome, to enrich the description. -Complementary hints on the process would definitely be , to description - -Especially these files contain valuable information about model desing: -* for FSL pipelines, `design.fsf` setup files coming from FEAT. -* for SPM pipelines, - -spm matlabbatch +Especially these files contain valuable information about model design: +* for FSL pipelines, `design.fsf` setup files coming from FEAT ; +* for SPM pipelines, matlabbatch files. ## Find or propose an issue :clipboard: Issues are very important for this project. If you want to contribute, you can either **comment an existing issue** or **proposing a new issue**. From 23b93f602037231138e22d9a070ec97579e05ae6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 6 Dec 2023 13:16:36 +0100 Subject: [PATCH 14/15] Separate trails in contribution guide --- CONTRIBUTING.md | 97 +++++++++++++++++++++++-------------------------- 1 file changed, 45 insertions(+), 52 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 1cffca90..5ebe8af3 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,78 +1,71 @@ # How to contribute to NARPS Open Pipelines ? For the reproductions, we are especially looking for contributors with the following profiles: - - 👩‍🎤 SPM, FSL, AFNI or nistats has no secrets for you? You know this fMRI analysis software by heart 💓. Please help us by reproducing the corresponding NARPS pipelines. 👣 after step 1, follow the fMRI expert trail. - - 🧑‍🎤 You are a nipype guru? 👣 after step 1, follow the nipype expert trail. + - `🧠 fMRI soft` SPM, FSL, AFNI or nistats has no secrets for you ; you know one of these fMRI analysis tools by :heart:. + - `🐍 Python` You are a Python guru, willing to use [Nipype](https://nipype.readthedocs.io/en/latest/). -# Step 1: Choose a pipeline to reproduce :keyboard: -:thinking: Not sure which pipeline to start with ? 🚦The [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) provides the progress status for each pipeline. You can pick any pipeline that is in red (not started). +In the following, read the instruction sections where the badge corresponding to your profile appears. -Need more information to make a decision? The `narps_open.utils.description` module of the project, as described [in the documentation](/docs/description.md) provides easy access to all the info we have on each pipeline. 
+## 1 - Choose a pipeline +`🧠 fMRI soft` `🐍 Python` -When you are ready, [start an issue](https://github.com/Inria-Empenn/narps_open_pipelines/issues/new/choose) and choose **Pipeline reproduction**! +Not sure which pipeline to start with :thinking:? The [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) provides the progress status for each pipeline. You can pick any pipeline that is not fully reproduced, i.e.: not started :red_circle: or in progress :orange_circle: . -# Step 2: Reproduction +> [!NOTE] +> Need more information to make a decision? The `narps_open.utils.description` module of the project, as described [in the documentation](/docs/description.md) provides easy access to all the info we have on each pipeline. -## 🧑‍🎤 NiPype trail +## 2 - Interact using issues +`🧠 fMRI soft` `🐍 Python` -We created templates with modifications to make and holes to fill to create a pipeline. You can find them in [`narps_open/pipelines/templates`](/narps_open/pipelines/templates). +Browse [issues](https://github.com/Inria-Empenn/narps_open_pipelines/issues/) before starting a new one. If the pipeline is :orange_circle: the associated issues are listed on the [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status). -If you feel it could be better explained, do not hesitate to suggest modifications for the templates. +You can either: +* comment on an existing issue with details or your findings about the pipeline; +* [start an issue](https://github.com/Inria-Empenn/narps_open_pipelines/issues/new/choose) and choose **Pipeline reproduction**. -Feel free to have a look to the following pipelines, these are examples : -| team_id | softwares | fmriprep used ? | pipeline file | -| --- | --- | --- | --- | -| 2T6S | SPM | Yes | [/narps_open/pipelines/team_2T6S.py](/narps_open/pipelines/team_2T6S.py) | -| X19V | FSL | Yes | [/narps_open/pipelines/team_X19V.py](/narps_open/pipelines/team_X19V.py) | +> [!WARNING] +> As soon as the issue is marked as `🏁 status: ready for dev` you can proceed to the next step. -## 👩‍🎤 fMRI software trail +## 3 - Use pull requests +`🧠 fMRI soft` `🐍 Python` -From the description provided by the team you chose, perform the analysis on the associated software to get as many metadata (log, configuration files, and other relevant files for reproducibility) as possible from the execution. Complementary hints on the process would definitely be welcome, to enrich the description. +1. If needed, [fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) the repository; +2. create a separate branch for the issue you're working on (do not make changes to the default branch of your fork). +3. push your work to the branch as soon as possible; +4. visit [this page](https://github.com/Inria-Empenn/narps_open_pipelines/pulls) to start a draft pull request. -Especially these files contain valuable information about model design: -* for FSL pipelines, `design.fsf` setup files coming from FEAT ; -* for SPM pipelines, matlabbatch files. - -## Find or propose an issue :clipboard: -Issues are very important for this project. If you want to contribute, you can either **comment an existing issue** or **proposing a new issue**. 
- -### Answering an existing issue :label: -To answer an existing issue, make a new comment with the following information: - - Your name and/or github username - - The step you want to contribute to - - The approximate time needed +> [!WARNING] +> Make sure you create a **Draft Pull Request** as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork), and please stick to the description of the pull request template as much as possible. -### Proposing a new issue :bulb: -In order to start a new issue, click [here](https://github.com/Inria-Empenn/narps_open_pipelines/issues/new/choose) and choose the type of issue you want: - - **Feature request** if you aim at improving the project with your ideas ; - - **Bug report** if you encounter a problem or identified a bug ; - - **Classic issue** to ask question, give feedbacks... +## 4 - Reproduction -Some issues are (probably) already open, please browse them before starting a new one. If your issue was already reported, you may want complete it with details or other circumstances in which a problem appear. +Continue writing your work and push it to the branch. Make sure you perform all the items of the pull request checklist. -## Pull Requests :inbox_tray: -Pull requests are the best way to get your ideas into this repository and to solve the problems as fast as possible. +### Translate the pipeline description into code +`🐍 Python` -### Make A Branch :deciduous_tree: -Create a separate branch for each issue you're working on. Do not make changes to the default branch (e.g. master, develop) of your fork. +From the description provided by the team you chose, write Nipype workflows that match the steps performed by the teams (preprocessing, run level analysis, subject level analysis, group level analysis). -### Push Your Code :outbox_tray: -Push your code as soon as possible. +We created templates with modifications to make and holes to fill to help you with that. Find them in [`narps_open/pipelines/templates`](/narps_open/pipelines/templates). -### Create the Pull Request (PR) :inbox_tray: -Once you pushed your first lines of code to the branch in your fork, visit [this page](https://github.com/Inria-Empenn/narps_open_pipelines/pulls) to start creating a PR for the NARPS Open Pipelines project. +> [!TIP] +> Have a look to the already reproduced pipelines, as examples : +> | team_id | softwares | fmriprep used ? | pipeline file | +> | --- | --- | --- | --- | +> | Q6O0 | SPM | Yes | [/narps_open/pipelines/team_Q6O0.py](/narps_open/pipelines/team_Q6O0.py) | -:warning: Please create a **Draft Pull Request** as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork), and please stick to the PR description template as much as possible. +### Run the pipeline and produce evidences +`🧠 fMRI soft` -Continue writing your code and push to the same branch. Make sure you perform all the items of the PR checklist. +From the description provided by the team you chose, perform the analysis on the associated software to get as many metadata (log, configuration files, and other relevant files for reproducibility) as possible from the execution. Complementary hints and commend on the process would definitely be welcome, to enrich the description (e.g.: relevant parameters not written in the description, etc.). 
-### Request Review :disguised_face: -Once your PR is ready, you may add a reviewer to your PR, as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/requesting-a-pull-request-review) in the GitHub documentation. - -Please turn your Draft Pull Request into a "regular" Pull Request, by clicking **Ready for review** in the Pull Request page. +Especially these files contain valuable information about model design: +* for FSL pipelines, `design.fsf` setup files coming from FEAT ; +* for SPM pipelines, `matlabbatch` files. -**:wave: Thank you in advance for contributing to the project!** +### Request Review +`🧠 fMRI soft` `🐍 Python` -## Additional resources +Once your work is ready, you may add a reviewer to your PR, as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/requesting-a-pull-request-review). Please turn your draft pull request into a *regular* pull request, by clicking **Ready for review** in the pull request page. - - git and Gitub: general guidelines can be found [here](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) in the GitHub documentation. +**:wave: Thank you for contributing to the project!** From 6f3dd73f227c8f72db54e31873b3336d6ebe69ef Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 6 Dec 2023 14:53:14 +0100 Subject: [PATCH 15/15] [TEST] Solving pytest issues with template test --- CONTRIBUTING.md | 21 ++++++++++----------- pytest.ini | 2 +- tests/conftest.py | 5 ----- 3 files changed, 11 insertions(+), 17 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 5ebe8af3..98b31c06 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -17,7 +17,7 @@ Not sure which pipeline to start with :thinking:? The [pipeline dashboard](https ## 2 - Interact using issues `🧠 fMRI soft` `🐍 Python` -Browse [issues](https://github.com/Inria-Empenn/narps_open_pipelines/issues/) before starting a new one. If the pipeline is :orange_circle: the associated issues are listed on the [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status). +Browse [issues](https://github.com/Inria-Empenn/narps_open_pipelines/issues/) before starting a new one. If the pipeline is :orange_circle:, the associated issues are listed on the [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status). You can either: * comment on an existing issue with details or your findings about the pipeline; @@ -27,9 +27,9 @@ You can either: > As soon as the issue is marked as `🏁 status: ready for dev` you can proceed to the next step. ## 3 - Use pull requests -`🧠 fMRI soft` `🐍 Python` +`🐍 Python` -1. If needed, [fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) the repository; +1. [Fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) the repository; 2. create a separate branch for the issue you're working on (do not make changes to the default branch of your fork). 3. push your work to the branch as soon as possible; 4. visit [this page](https://github.com/Inria-Empenn/narps_open_pipelines/pulls) to start a draft pull request. 
@@ -37,13 +37,13 @@ You can either: > [!WARNING] > Make sure you create a **Draft Pull Request** as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork), and please stick to the description of the pull request template as much as possible. -## 4 - Reproduction - -Continue writing your work and push it to the branch. Make sure you perform all the items of the pull request checklist. +## 4 - Reproduce pipeline ### Translate the pipeline description into code `🐍 Python` +Write your code and push it to the branch. Make sure you perform all the items of the pull request checklist. + From the description provided by the team you chose, write Nipype workflows that match the steps performed by the teams (preprocessing, run level analysis, subject level analysis, group level analysis). We created templates with modifications to make and holes to fill to help you with that. Find them in [`narps_open/pipelines/templates`](/narps_open/pipelines/templates). @@ -54,18 +54,17 @@ We created templates with modifications to make and holes to fill to help you wi > | --- | --- | --- | --- | > | Q6O0 | SPM | Yes | [/narps_open/pipelines/team_Q6O0.py](/narps_open/pipelines/team_Q6O0.py) | +Once your work is ready, you may ask a reviewer to your pull request, as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/requesting-a-pull-request-review). Please turn your draft pull request into a *regular* pull request, by clicking **Ready for review** in the pull request page. + ### Run the pipeline and produce evidences `🧠 fMRI soft` -From the description provided by the team you chose, perform the analysis on the associated software to get as many metadata (log, configuration files, and other relevant files for reproducibility) as possible from the execution. Complementary hints and commend on the process would definitely be welcome, to enrich the description (e.g.: relevant parameters not written in the description, etc.). +From the description provided by the team you chose, perform the analysis on the associated software to get as many metadata (log, configuration files, and other relevant files for reproducibility) as possible from the execution. Complementary hints and comments on the process would definitely be welcome, to enrich the description (e.g.: relevant parameters not written in the description, etc.). Especially these files contain valuable information about model design: * for FSL pipelines, `design.fsf` setup files coming from FEAT ; * for SPM pipelines, `matlabbatch` files. -### Request Review -`🧠 fMRI soft` `🐍 Python` - -Once your work is ready, you may add a reviewer to your PR, as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/requesting-a-pull-request-review). Please turn your draft pull request into a *regular* pull request, by clicking **Ready for review** in the pull request page. +You can attach these files as comments on the pipeline reproduction issue. 
**:wave: Thank you for contributing to the project!**

diff --git a/pytest.ini b/pytest.ini
index 14522dc7..f949712a 100644
--- a/pytest.ini
+++ b/pytest.ini
@@ -1,5 +1,5 @@
 [pytest]
-addopts = --strict-markers
+addopts = --strict-markers --ignore=tests/pipelines/templates/
 testpaths =
     tests
 markers =
diff --git a/tests/conftest.py b/tests/conftest.py
index d6488236..7c57c1f9 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -18,11 +18,6 @@
 from narps_open.utils.configuration import Configuration
 from narps_open.data.results import ResultsCollection
 
-# A list of test files to be ignored
-collect_ignore = [
-    'tests/pipelines/templates/test_team_XXXX.py' # test template
-    ]
-
 # Init configuration, to ensure it is in testing mode
 Configuration(config_type='testing')
 
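A quick way to check that the template file is indeed excluded from collection after this change is a collection-only run of pytest. The snippet below does it from Python and is simply the equivalent of running `pytest --collect-only -q` from the shell.

```python
# Collect the test suite without executing it, to verify that
# tests/pipelines/templates/ is skipped by the new --ignore option.
import pytest

exit_code = pytest.main(['--collect-only', '-q'])
print(f'pytest collection finished with exit code {exit_code}')
```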