[0H5E] Pipeline reproduction (SPM - raw) #223

Open
9 tasks
bclenet opened this issue Dec 11, 2024 · 4 comments
Labels
✨ goal: improvement (Improvement to an existing feature), raw, SPM

Comments


bclenet commented Dec 11, 2024

Software

  • SPM: We are Luddites and used SPM8 because most of our scripts were written for that and we didn't feel like changing them, and most of the pre-processing steps we use haven't really changed much since SPM5. Don't judge us.
  • NeuroSynth (online) was used to generate masks of anatomical regions for purposes of verifying our anatomical judgments.
  • Custom Matlab scripts for everything not handled entirely within SPM.

Input data

raw data

Additional context

see description below

List of tasks

Please tick the boxes below once the corresponding task is finished. 👍

  • 👌 A maintainer of the project approved the issue, by assigning a 🏁status: ready for dev label to it.
  • 🌳 Create a branch on your fork to start the reproduction.
  • 🌅 Create a file team_{team_id}.py inside the narps_open/pipelines/ directory. You can use a file inside narps_open/pipelines/templates as a template if needed.
  • 📥 Create a pull request as soon as you completed the previous task.
  • 🧠 Write the code for the pipeline, using Nipype and the file architecture described in docs/pipelines.md.
  • 📘 Make sure your code is documented enough.
  • 🐍 Make sure your code is explicit and conforms with PEP8.
  • 🔬 Create tests for your pipeline. You can use files in tests/pipelines/test_team_* as examples.
  • 🔬 Make sure your code passes all the tests you created (see docs/testing.md).

NARPS team description : 0H5E

General

Our responses are complicated somewhat by the fact that SPM masks out voxels with low signal, and in group analyses, if any one subject is missing a voxel, that voxel is masked out of the entire analysis. In this dataset, the amygdala ended up masked out of the whole-brain group analysis for both groups, and the ventral striatum ended up masked out of the equal indifference group only. So, by the rules that we are supposed to report whole-brain analysis results, we have to say "No" for any regions that did not appear in the whole-brain analysis, but really a "Not applicable" response would be more appropriate. To help supplement the whole-brain corrected-threshold results, we also looked at whole-brain results at an uncorrected threshold of p<.05 (cluster size = 5 voxels) and ran ROI analyses based on manually selected coordinates for ventral striatum and amygdala within a small sphere. Our confidence values are thus partially based on the outcomes of these supplementary analyses. We also recognize that not every analysis package masks out low-signal voxels as aggressively as SPM, which has partially informed our judgments about similarity to other teams. Further notes on each hypothesis are below.

Hypothesis 1 -- Did not show up at corrected threshold but did show up at an uncorrected threshold (and was marginally significant in an ROI analysis), so we think a Type II error is reasonably likely and the hypothesis might have been confirmed in a larger sample.

Hypothesis 2 -- Did not show up at any threshold (corrected or uncorrected up to p<.05) and also did not show up in ROI analysis, so we are reasonably confident in our rejection of this one.

Hypothesis 3 -- Did not show up at any threshold, but we confirmed that ventral striatum was not in the mask for the analysis, so it would not be expected to. However, even with manually selected ROIs, the hypothesis was not confirmed anyway (p=.5).

Hypothesis 4 -- Did show up at the corrected threshold (and thus enough of ventral striatum was in-mask for the analysis to work), and also came out significantly in our manually selected ROI analysis (p=.03).

Hypothesis 5 -- There was a very large and obvious cluster even at the corrected threshold, and it was also highly significant in our ROI analysis (p=1.6e-5).

Hypothesis 6 -- Did not show up at corrected threshold but at an uncorrected threshold there was a fairly clear cluster, although the ROI analysis was not significant (p=.14). So we think there is some chance of a Type II error and that the hypothesis might be confirmed in a larger sample, but with a bit less confidence than we had for Hypothesis 1. (Less confident of a possible Type II error, that is -- i.e., a bit more confident in our main result that Hypothesis 6 was not confirmed.)

Hypothesis 7 -- Did not show up at any threshold but we confirmed that amygdala was not in the mask for the analysis, so it would not be expected to. The effect WAS significant with manually selected ROIs (p=2.3e-5) -- but in the opposite direction of what was hypothesized (i.e., we found a negative effect of loss). Thus, by our interpretation of the rules, we are reporting this as "hypothesis not confirmed" with fairly high confidence, as the whole-brain analysis did not have any information about the amygdala and the ROI analysis actually contradicted the hypothesis.

Hypothesis 8 -- Did not show up at any threshold but again was not in the analysis mask. However, nothing was significant in the ROI analysis either (p=.41).

Hypothesis 9 -- This one is tricky to report, as we knew the amygdala did not appear in the whole-brain analysis mask for either group, and thus it was formally impossible to test with a whole-brain group analysis as instructed. However, in our ROI analysis, the hypothesis appeared to be confirmed (p=.019), although both means were negative, so really the "greater response to losses" for the equal range group would be more accurately described as a "less negative response to losses." By our interpretation of the rules, we are reporting this as "hypothesis not confirmed" in the whole-brain analyses because it can't really be tested with our standard pipeline, although we know the ROI analysis does seem to confirm the hypothesis, hence the low confidence we reported for the whole-brain "result." Note that we did run the stats model we would have run for the whole-brain analyses merely for purposes of uploading to NeuroVault, but we know there are no amygdala voxels in it.

  • preregistered : No
  • link_preregistration_form : NA
  • regions_definition :

For the whole-brain analyses, we mostly used our own expertise in neuroanatomy combined with searches of the literature, especially for the subcortical structures. For vmPFC, whole-brain results were fortunately all unambiguous (either large portions of what was unmistakably vmPFC were activated, or nothing in that general region was), but in all cases, we did also compare our judgments to NeuroSynth masks that matched those anatomical search terms. Typically we would not frame our hypotheses exactly the way they were in this study -- e.g., testing the hypothesis "is there an effect in vmPFC?" with a whole-brain analysis, as instructed. Either we would have an anatomical hypothesis that we would test with an ROI analysis (based either on coordinates from an atlas/meta-analysis, a functional localizer, or manual definition of the anatomical boundaries), OR we would run a whole-brain analysis and simply report loci of activity descriptively (i.e., listing the significant clusters and assigning anatomical labels as accurately as possible to the voxels within those clusters, but with no specific a priori hypotheses about the regions activated). So the way the project was framed did not fit exactly into our standard way of making inferences, and we simply did our best to accommodate the requirements without changing too much of our standard process.

  • softwares : SPM: We are Luddites and used SPM8 because most of our scripts were written for that and we didn't feel like changing them, and most of the pre-processing steps we use haven't really changed much since SPM5. Don't judge us. NeuroSynth (online) was used to generate masks of anatomical regions for purposes of verifying our anatomical judgments. Custom Matlab scripts for everything not handled entirely within SPM.
  • general_comments :

Exclusions

  • n_participants : 100
  • exclusions_details :
    • sub-100: taken out because inter-subject registration was extremely poor even after manually re-orienting the original images
    • sub-030, sub-088, sub-116: taken out for extreme head motion; this was subjectively based on visual inspection of all subjects' motion parameters for all runs and our own judgment (treating the motion holistically, including considerations for factors like overall movement vs. the number/magnitude of sudden motion spikes, but without explicit enumeration of the weighting of those factors), but was only performed once (i.e., we did not go back on our decisions after running statistical analyses)
    • sub-025, sub-043, sub-094, sub-113: taken out for their post-registration voxel masks being a poor fit to the overall sample mask (>2 standard deviations worse than average), as this meant it would further reduce the set of voxels SPM would include in the group analysis. (Note that this would not normally be a part of our workflow – we deemed it necessary after seeing how many voxels SPM had cut out of the analysis mask, but that is not an issue for us in our typical studies.)

Preprocessing

  • used_fmriprep_data : No
  • preprocessing_order :
    • Removal of "dummy" scans (deleting the first four volumes from each run -- timing files were adjusted accordingly)
    • motion correction
    • rigid coregistration ("Coregister" SPM command) of subject anatomical to MNI template
    • rigid coregistration of each subject's functional data to their coregistered anatomical image
    • resample images with all of those linear operations applied
    • non-rigid registration/warping ("Normalise" SPM command) of anatomical to MNI template
    • application of warping ("Normalise" again -- but only writing the transformation, not estimating it) to functional images
    • spatial smoothing.
  • brain_extraction : Not performed.
  • segmentation : Not performed (we use the older form of spatial normalization in SPM that does not require segmentation).
  • slice_time_correction : Not performed. (Typically we follow the recommendation not to use slice-timing correction for TRs under 2000ms or so.)
  • motion_correction :

SPM8 "Realign" command -- Estimation parameters as follows:

Quality=1 (highest)
separation=4mm (default)
smoothing=5mm (default)
number of passes="register to mean" (does a first pass registering everything to the first image in the run, creates a temporary mean image, and then does a second pass registering everything to that mean)
interpolation="7th Degree B-spline" (although we do not permanently write out the interpolated images at this step; we save only a mean image and the affine transformations in the NIfTI headers [see below])
wrapping="No wrap"
weighting=none.

Reslicing parameters:

Resliced images="Mean image only",
interpolation="7th Degree B-spline",
wrapping="No wrap",
masking="Mask images".

Other notes: No non-rigid registration at this stage; no unwarping; similarity metric is least squares (no option to change in SPM8), no slice timing correction as noted above.
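
For reference, a minimal sketch of how these Realign settings might be expressed as a Nipype node in the style used by pipelines in this repository. The node name is illustrative, `in_files` would be connected in the workflow, and the mapping of SPM GUI options onto Nipype traits is an assumption, not the team's code.

```python
# Hedged sketch: SPM8 "Realign" settings above, expressed as a Nipype node.
from nipype import Node
from nipype.interfaces.spm import Realign

motion_correction = Node(Realign(), name='motion_correction')  # illustrative name
motion_correction.inputs.quality = 1.0            # Quality = 1 (highest)
motion_correction.inputs.separation = 4           # 4 mm (default)
motion_correction.inputs.fwhm = 5                 # 5 mm smoothing for estimation (default)
motion_correction.inputs.register_to_mean = True  # two-pass "register to mean"
motion_correction.inputs.interp = 7               # 7th-degree B-spline interpolation
motion_correction.inputs.wrap = [0, 0, 0]         # "No wrap"
motion_correction.inputs.jobtype = 'estwrite'
motion_correction.inputs.write_which = [0, 1]     # reslice the mean image only
motion_correction.inputs.write_interp = 7
motion_correction.inputs.write_wrap = [0, 0, 0]
motion_correction.inputs.write_mask = True        # "Mask images"
# in_files (one run's volumes, after dummy-scan removal) is connected upstream.
```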

  • motion :
  • gradient_distortion_correction : Not performed.
  • intra_subject_coreg :

SPM8 "Coregister" command, "Estimate" procedure only (we do not write out interpolated images at this stage; we only save the affine transformations in the NIfTI headers, as with motion correction).

Reference image=subject's anatomical (already coregistered to MNI template in an earlier step),
source image=mean image from each run (generated during motion correction),
other images=all other images from that run (post motion correction),
objective function="Normalized Mututal Information",
separation=[4 2] (default),
tolerances=default values (a 12-item matrix),
histogram smoothing=[7 7] (default).

Other notes: All rigid at this stage.
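
A minimal sketch of this estimate-only coregistration as a Nipype node, under the same caveats as above (node name illustrative; target, source and apply_to_files connected from upstream nodes rather than the team's scripts):

```python
# Hedged sketch: functional-to-anatomical coregistration, estimate only.
from nipype import Node
from nipype.interfaces.spm import Coregister

coregister_func = Node(Coregister(), name='coregister_func_to_anat')
coregister_func.inputs.jobtype = 'estimate'      # header update only, no reslicing
coregister_func.inputs.cost_function = 'nmi'     # Normalized Mutual Information
coregister_func.inputs.separation = [4, 2]       # default
coregister_func.inputs.fwhm = [7, 7]             # histogram smoothing (default)
# target = subject anatomical (already rigidly aligned to MNI),
# source = the run's mean image from Realign,
# apply_to_files = that run's motion-corrected volumes.
```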

  • distortion_correction : Not performed.
  • inter_subject_reg : Two main steps -- rigid and non-rigid (see overall order of pre-processing above).

Rigid intersubject registration: SPM8 "Coregister" command, "Estimate" and "Reslice" procedures.

"Estimate" options:

Reference image=MNI T1.nii template provided with SPM8 (2mm isotropic voxels),
source image=subject's raw T1 anatomical,
other images=none,
objective function="Normalized Mututal Information",
separation=[4 2] (default),
tolerances=default values (a 12-item matrix),
histogram smoothing=[7 7] (default).

"Reslice" options:

Interpolation="7th degree b-spline",
wrapping="No wrap",
masking="Don't mask images".

Non-rigid registration: SPM8 "Normalise" command, "Estimate" and "Write" procedures.
"Estimate" options:

Source image=subject's T1 anatomical (post rigid coregistration),
source weighting image=none,
images to write=normalized version of same T1 anatomical (no other images written out at this stage),
template image=MNI T1.nii template (same as for coregistration),
template weighting image=none,
source image smoothing=8mm [default],
template image smoothing=none,
affine regularisation="ICBM space template",
nonlinear frequency cutoff=25 (default),
nonlinear iterations=16 (default),
nonlinear regularisation=1 (default).

"Write" options:

Preserve="preserve concentrations" (default),
bounding box=default coordinates,
voxel sizes=2mm x 2mm x 2mm,
interpolation="7th degree b-spline",
wrapping=none.

Non-rigid registration parameters from the anatomical non-rigid registration were then applied to all functional images from that subject (after rigid coregistration of the functional images to the subject's anatomical image), with all of the same "Write" options under the SPM8 "Normalise" command except for voxel size, which was set to 2.5mm isotropic.

Other notes: No surface-based registration; all volume-based. All based on T1 anatomical; no bias field correction, no segmentation, no Talairach transformations.
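
A hedged sketch of the two inter-subject registration steps as Nipype nodes. Nipype's `Normalize` interface wraps the old-style SPM normalisation described here (as opposed to `Normalize12`); node names, the exact trait mapping and the second "write-only" pass are assumptions, not the team's code.

```python
# Hedged sketch: rigid coregistration of the T1 to the MNI template, followed by
# old-style non-rigid "Normalise" estimation on the coregistered T1.
from nipype import Node
from nipype.interfaces.spm import Coregister, Normalize

coregister_anat = Node(Coregister(), name='coregister_anat_to_mni')
coregister_anat.inputs.jobtype = 'estwrite'
coregister_anat.inputs.cost_function = 'nmi'       # Normalized Mutual Information
coregister_anat.inputs.separation = [4, 2]
coregister_anat.inputs.fwhm = [7, 7]
coregister_anat.inputs.write_interp = 7            # 7th-degree B-spline
coregister_anat.inputs.write_wrap = [0, 0, 0]
coregister_anat.inputs.write_mask = False          # "Don't mask images"

normalise_anat = Node(Normalize(), name='normalise_anat')
normalise_anat.inputs.jobtype = 'estwrite'
normalise_anat.inputs.source_image_smoothing = 8.0
normalise_anat.inputs.affine_regularization_type = 'mni'  # "ICBM space template"
normalise_anat.inputs.DCT_period_cutoff = 25
normalise_anat.inputs.nonlinear_iterations = 16
normalise_anat.inputs.nonlinear_regularization = 1.0
normalise_anat.inputs.write_interp = 7
normalise_anat.inputs.write_wrap = [0, 0, 0]
normalise_anat.inputs.write_voxel_sizes = [2.0, 2.0, 2.0]
# The estimated parameter file would then be applied to the functional images with
# a second Normalize node (jobtype='write', write_voxel_sizes=[2.5, 2.5, 2.5]).
```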

  • intensity_correction : Not performed.
  • intensity_normalization : Not performed (we used SPM but selected "None" rather than "Scaling" for the "Global normalisation" parameter during individual-subject stats).
  • noise_removal : Not performed. (In particular, we did not include motion parameters in our statistical estimation.)
  • volume_censoring : Not performed. (Any subject with extreme motion spikes enough to warrant de-spiking was removed from analysis entirely.)
  • spatial_smoothing :

SPM8 "Smooth" command. Options:

Images to smooth=all the images from each run (after all other pre-processing steps),
FWHM=9mm x 9mm x 9mm,
data type="same",
implicit masking=none.

Other notes: Basic fixed (non-iterative) 9mm FWHM isotropic Gaussian smoothing kernel. All in volume space after all registration steps (i.e., after all subjects are in MNI space). True confession time: We originally used a 6mm kernel but after running some preliminary sanity checks on the group analyses, we decided too many voxels were getting masked out of the group analysis due to anatomical inconsistencies between subjects, so we re-ran the smoothing at 9mm and used that for the final analyses. We would normally use a smaller kernel but also our typical sample sizes are smaller than in this dataset (closer to N=20 and a single subject group), and because of the way SPM masks out voxels from group analyses, the higher the N, the more voxels are going to get masked out -- hence our decision to smooth more in order not to over-mask the group analyses. (This still did not work perfectly in the case of amygdala and ventral striatum, but we did our best.)
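
A minimal sketch of the final 9 mm smoothing step as a Nipype node (node name illustrative; `in_files` connected from the normalisation step):

```python
# Hedged sketch: 9 mm isotropic Gaussian smoothing with SPM's "Smooth".
from nipype import Node
from nipype.interfaces.spm import Smooth

smoothing = Node(Smooth(), name='smoothing')
smoothing.inputs.fwhm = [9.0, 9.0, 9.0]   # final kernel, after the team's 6 mm trial run
smoothing.inputs.implicit_masking = False
```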

  • preprocessing_comments : The one pre-processing step we did that wasn't discussed here (but which was mentioned in our description of the overall pre-processing sequence) was a resampling step we did between all the linear transformations on the functional images and the nonlinear (warping) transformation. We only do this as a separate step because we try to avoid resampling the functional images a bunch of times without need if all the transformations are doing is updating the NIfTI header's affine transformation matrix, which is true for our initial motion correction and linear coregistration steps, but we do like to save a resampled copy of the images with all of those affine transformations applied before any subsequent steps. (This is normally so we can do certain statistical analyses in individual-subject space that has been loosely affine-registered with standard space but not warped... we did not have to do any such analyses in this particular case but it is part of our standard processing stream, so we left it in.) Unfortunately SPM8's coregistration routine has a weird feature where it will only resample output images at the same voxel size as the reference image (even though the normalization routine will let you freely select a bounding box and voxel size). We did not want to resample all functional images at the resolution of our anatomical template, and unfortunately SPM8 does not provide a pure image-resampling function, so our hacked-together solution is to generate a spatial normalization matrix defining a null transformation and apply that to the functional images through the SPM8 normalization routine, which effectively resamples the images to the desired resolution without transforming them (but with any affine transformations from the NIfTI header applied prior to resampling and then zeroed out in the resampled images). We chose to resample to 2.5mm isotropic voxels as that is the usual size at which we currently acquire our own fMRI data (and a convenient size to work with in general), although arguments could be made that another size could be more optimal for the NARPS dataset that was originally acquired at a different voxel size.
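
The null-transform trick above is SPM8-specific. As a rough stand-in for a reproduction (explicitly not the team's method), the same "apply the header affines and write a 2.5 mm isotropic copy" step could be done with nilearn; the function name and interpolation choice below are assumptions for illustration only.

```python
# Hedged stand-in for the null-transform resampling: resample to 2.5 mm isotropic.
import numpy as np
import nibabel as nib
from nilearn.image import resample_img

def resample_to_25mm(in_file, out_file):
    """Resample a functional image to 2.5 mm isotropic voxels in its current space."""
    img = nib.load(in_file)
    resampled = resample_img(img, target_affine=np.diag([2.5, 2.5, 2.5]),
                             interpolation='continuous')
    resampled.to_filename(out_file)
```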

Analysis

  • data_submitted_to_model : All timepoints except for the removal of the first four volumes in each run (as discussed above), and all subjects except those specifically removed (as described above). We used all runs from every subject left in the analysis.

  • spatial_region_modeled : Full brain -- although as we noted at the top, this didn't really work as intended because SPM8 masked out essentially all amygdala voxels from the group analysis, due to one or more subject(s) having low signal, as well as most/all of the ventral striatum voxels for one of the participant groups. As such, inferences for those regions were not really possible following our standard pre-processing and stats procedures, even after expanding our initially chosen smoothing kernel and excluding subjects whose brains did not fit the groupwise in-brain voxel mask very well. This is not normally an issue for us because 1) we typically focus on areas that do not suffer from low signal, e.g. visual cortex and frontoparietal regions; 2) we typically have smaller samples (~20), and the more participants in a sample, the more voxels SPM8 is going to mask out of the group analysis; and 3) we don't do too many groupwise whole-brain analyses these days anyway (most of our analyses are things like MVPA which are all run at the single-subject level and do not require second-level fMRI group analyses), and if we were focusing on tricky areas like the amygdala or ventral striatum, we would typically use an ROI approach. As such, we did do the supplementary ROI analyses described earlier, where we essentially chose a single voxel from each region of interest for each subject (left amygdala, right amygdala, left ventral striatum, right ventral striatum) based on examining the functional and anatomical images, extracted a small (6mm radius) sphere around each one, averaged the relevant values from the first-level analysis in each sphere (excluding any NaN values using Matlab's 'nanmean' function), averaged left and right (excluding either one if all of its voxels were NaN values), and used that resultant value as the subject's value for that ROI in the subsequent group analysis. ROI analyses for VMPFC were similar, except we used the same seed voxel for everyone ([0 48 -8], the maximum voxel in the Neurosynth "ventromedial" meta-analysis) and a slightly larger sphere (10mm radius), and did not run separate analyses for left and right since the ROI was medial.
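
A hedged sketch of the supplementary ROI extraction described above: average the non-NaN first-level values inside a small sphere around a seed coordinate. The function name, seed handling and file names are illustrative, not the team's Matlab code, and a 3-D contrast/beta image is assumed.

```python
# Hedged sketch: mean of non-NaN voxels within a sphere around a seed (MNI mm).
import numpy as np
import nibabel as nib

def sphere_mean(stat_file, seed_mm, radius_mm=6.0):
    """Mean of non-NaN voxels within radius_mm of seed_mm in a 3-D stat image."""
    img = nib.load(stat_file)
    data = img.get_fdata()
    # world (mm) coordinates of every voxel centre
    ijk = np.indices(data.shape[:3]).reshape(3, -1).T
    xyz = nib.affines.apply_affine(img.affine, ijk)
    dist = np.linalg.norm(xyz - np.asarray(seed_mm), axis=1)
    in_sphere = data.reshape(-1)[dist <= radius_mm]
    return np.nan if np.all(np.isnan(in_sphere)) else np.nanmean(in_sphere)

# e.g. the vmPFC seed used for every subject, with the larger 10 mm sphere:
# sphere_mean('con_0001.nii', seed_mm=[0, 48, -8], radius_mm=10.0)
```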

  • independent_vars_first_level :

    • We used SPM8's "parametric modulations" feature, using the values specified in the NARPS data for each gain/loss parameter (1st order/linear effects only).
    • The durations were modeled as 4sec because that was the value in the timing files we were given.
    • Not sure what you mean exactly by "block design" predictors -- if you mean how runs were modeled, we did what most people do in SPM analyses -- modeled all of the runs in a single first-level model for each subject, with SPM automatically adding a regressor of all 1's for each run to model out the mean/baseline for that run.
    • We used the canonical HRF included in SPM8 with no derivatives.
    • We did not include regressors for motion or drift, although we did use SPM's default high-pass filter (128sec period).
    • No other nuisance regressors.
    • Other first-level parameters included:
microtime resolution=16 (default),
microtime onset=1 (default),
model Volterra interactions=no,
Global normalisation=none,
no explicit mask,
serial correlations=AR(1).

Orthogonalization of regressors -- this got a little weird. It is our understanding that SPM orthogonalizes parametric regressors in the order they are entered (i.e. the second parametric modulator essentially gets whatever variance is left over from the first; see e.g. http://andysbrainblog.blogspot.com/2014/08/parametric-modulation-with-spm-why.html). In theory this should not be a big deal if the regressors were properly orthogonalized in the design, but just in case, our solution was actually to run the first-level stats for each subject twice -- always putting both regressors in the model, but once entering the effect of gains first (followed by losses), and once entering losses first (followed by gains). The reasoning was that we wanted to emulate the logic of doing Type III sum of squares e.g. in an ANOVA model -- considering only the unique contributions of the regressor in question after the other effects had been accounted for. Thus, to get the effect of losses, we used the model in which gains had been entered first, and to get the effect of gains, we used the model in which losses had been entered first.

At the first level, we then entered a contrast to get the positive effect of gains for the losses-first model or the positive effect of losses for the gains-first model, e.g. [0 0 .25 0 0 .25 0 0 .25 0 0 .25 0 0 0 0] to get a contrast image that effectively averaged the beta values across runs for each subject.
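
A minimal sketch of how the first-level settings listed above might be expressed with Nipype's SPM interfaces, in the style of this repository's pipelines. Onsets, modulator values, the 1 s TR, node names and the pmod regressor-naming convention in the contrast are illustrative assumptions, not the team's scripts; the team's gains-first/losses-first duplication would amount to running this model twice with the pmod order swapped.

```python
# Hedged sketch: first-level model with two parametric modulators and a
# run-averaging contrast on the second-entered modulator.
from nipype import Node
from nipype.algorithms.modelgen import SpecifySPMModel
from nipype.interfaces.base import Bunch
from nipype.interfaces.spm import Level1Design, EstimateModel, EstimateContrast

# One Bunch per run; 'gain' entered first here, so the contrast below targets the
# (orthogonalised-last) 'loss' modulator. All values are placeholders.
run_info = Bunch(
    conditions=['trial'],
    onsets=[[4.0, 8.0, 12.0]],
    durations=[[4.0]],                         # 4 s events, as in the timing files
    pmod=[Bunch(name=['gain', 'loss'],
                param=[[10, 20, 30], [5, 10, 15]],
                poly=[1, 1])])

specify_model = Node(SpecifySPMModel(), name='specify_model')
specify_model.inputs.input_units = 'secs'
specify_model.inputs.output_units = 'secs'
specify_model.inputs.time_repetition = 1.0      # NARPS MGT TR (assumed here)
specify_model.inputs.high_pass_filter_cutoff = 128.0
specify_model.inputs.concatenate_runs = False
specify_model.inputs.subject_info = [run_info] * 4   # 4 runs

first_level_design = Node(Level1Design(), name='first_level_design')
first_level_design.inputs.timing_units = 'secs'
first_level_design.inputs.interscan_interval = 1.0
first_level_design.inputs.microtime_resolution = 16
first_level_design.inputs.microtime_onset = 1
first_level_design.inputs.bases = {'hrf': {'derivs': [0, 0]}}  # canonical HRF only
first_level_design.inputs.volterra_expansion_order = 1         # no Volterra interactions
first_level_design.inputs.global_intensity_normalization = 'none'
first_level_design.inputs.model_serial_correlations = 'AR(1)'

model_estimate = Node(EstimateModel(), name='model_estimate')
model_estimate.inputs.estimation_method = {'Classical': 1}

# Intended to reproduce the 0.25-per-run averaging contrast on the loss modulator;
# the 'trialxloss^1' label follows Nipype/SPM pmod naming and is an assumption.
contrast_estimate = Node(EstimateContrast(), name='contrast_estimate')
contrast_estimate.inputs.contrasts = [
    ('loss_effect', 'T', ['trialxloss^1'], [0.25])]
```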

  • RT_modeling : none
  • movement_modeling : 0
  • independent_vars_higher_level :

Group stats were very simple in most cases.

We just used the "one-sample t-test" 2nd-level model type in SPM8 and used the contrast images from the 1st-level analysis, with no covariates, implicit masking only, no "global calculation" or "overall grand mean scaling" or "normalisation" (these latter three being PET-only settings).

Separate analyses were run for the effect of gains in the equal range group, the effect of gains in the equal indifference group, the effect of losses in the equal range group, and the effect of losses in the equal indifference group.

For the effect of losses, in which we were asked to provide results in both positive and negative directions, we just specified two contrasts in the group model -- one of [+1] and one of [-1], with the latter simply reversing the direction of the test.

For hypothesis 9, the question of how between-groups effects were modeled was moot, because we knew the amygdala values would be masked out. However, our supplementary ROI analysis for that hypothesis was done using a simple two-sample t-test (equal variances assumed since the assumption was not violated, although it did not affect the outcome either way to assume equal variances or not).

Purely for purposes of uploading to NeuroVault, we did run the whole-brain between-groups model we would have run for hypothesis 9, which was a "two-sample t-test" SPM8 group model with independence=yes, variance=unequal, grand mean scaling=no, ANCOVA=no, no covariates, implicit masking only, no "global calculation" or "overall grand mean scaling" or "normalisation" (these latter three being PET-only settings).
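
A hedged sketch of these group-level models as Nipype/SPM nodes; input file lists, node names and the 'mean' regressor label follow common Nipype usage and are assumptions, not this pipeline's actual code.

```python
# Hedged sketch: one-sample t-test group model (per group, per effect) and the
# two-sample t-test used only for the hypothesis 9 NeuroVault upload.
from nipype import Node
from nipype.interfaces.spm import (OneSampleTTestDesign, TwoSampleTTestDesign,
                                   EstimateModel, EstimateContrast)

group_design = Node(OneSampleTTestDesign(), name='group_design')
# group_design.inputs.in_files = [...]   # first-level con images, one per subject

group_estimate = Node(EstimateModel(), name='group_estimate')
group_estimate.inputs.estimation_method = {'Classical': 1}

group_contrast = Node(EstimateContrast(), name='group_contrast')
group_contrast.inputs.group_contrast = True
group_contrast.inputs.contrasts = [
    ('positive', 'T', ['mean'], [1]),
    ('negative', 'T', ['mean'], [-1])]   # [-1] simply reverses the test direction

two_sample_design = Node(TwoSampleTTestDesign(), name='two_sample_design')
two_sample_design.inputs.unequal_variance = True
# two_sample_design.inputs.group1_files / group2_files = [...]  # per-group con images
```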

  • model_type : Mass univariate.
  • model_settings : Included above but repeated here for 1st-level: We used SPM's default high-pass filter (128sec period). No other nuisance regressors. Other first-level parameters included: microtime resolution=16 (default), microtime onset=1 (default), model Volterra interactions=no, Global normalisation=none, no explicit mask, serial correlations=AR(1).

Included above but repeated here for 2nd-level (everything but hypothesis 9): No covariates, implicit masking only, no "global calculation" or "overall grand mean scaling" or "normalisation" (these latter three being PET-only settings).

Included above but repeated here for 2nd-level (hypothesis 9): Independence=yes, variance=unequal, grand mean scaling=no, ANCOVA=no, no covariates, implicit masking only, no "global calculation" or "overall grand mean scaling" or "normalisation" (these latter three being PET-only settings).

Further details: 2nd level was SPM default OLS approach. Variance for hypothesis 9 was SPM8 default re: pooling. Everything else in this section is N/A.

  • inference_contrast_effect : Noted above, but simple contrasts of [0 0 .25 0 0 .25 0 0 .25 0 0 .25 0 0 0 0] used at first level essentially to average across runs; at the second level, all models were simple t-tests so the only contrasts entered were [+1] and [-1]. We did not double p-values since the hypotheses in each region were directional (although in normal life we don't typically use or approve of one-tailed tests, for some reason they are still the norm in mass univariate fMRI analyses). Supplementary ROI analyses all used two-tailed tests.

  • search_region : Whole brain; no small volume correction. (Except for supplementary ROI analyses, but those were condensed to single values before statistical inference so no corrections were needed.)

  • statistic_type : Voxel-wise. No cluster significance corrections, but we did set a minimum cluster size of 5 voxels for ease of viewing results. (In theory, this would make our stats a bit over-conservative as we did not account for this minimum cluster size in any way, but it should be negligible for all practical purposes.)

  • pval_computation : Standard parametric inference.

  • multiple_testing_correction : Typical Benjamini-Hochberg (B&H) voxel-wise FDR, as implemented in SPM8 (although in SPM8 it has to be enabled by a special preference setting).

  • comments_analysis : N/A on our end, but please let us know if anything is unclear / missing / appears to be wrong.

Categorized for analysis

  • region_definition_vmpfc : neurosynth, visually
  • region_definition_striatum : visually
  • region_definition_amygdala : visually
  • analysis_SW : SPM
  • analysis_SW_with_version : SPM8
  • smoothing_coef : 9
  • testing : parametric
  • testing_thresh : adaptive
  • correction_method : FDR voxelwise
  • correction_thresh_ : k>5

Derived

  • n_participants : 100
  • excluded_participants : 100, 030, 088, 116, 025, 043, 094, 113
  • func_fwhm : 9
  • con_fwhm :

Comments

  • excluded_from_narps_analysis : No
  • exclusion_comment : Rejected due to a large amount of missing brain in the center.
  • reproducibility : 2
  • reproducibility_comment :
@bclenet converted this from a draft issue Dec 11, 2024
@bclenet added the ❓ question (Further information is requested), SPM and raw labels Dec 11, 2024
@bclenet moved this from Not started to Backlog in NARPS Open Pipelines | Reproductions Dec 11, 2024

bclenet commented Dec 11, 2024

Warning: the pipeline might not be reproducible, as it seems to use custom code that is not shared.

@bclenet added the 🧠 hackathon (To assess during the hackathon) label Dec 12, 2024
@cmaumet self-assigned this Dec 12, 2024

cmaumet commented Dec 12, 2024

@bclenet - I've reviewed the description of this pipeline and did not identify steps that would make it impossible for us to reproduce. I agree with your reading that the "custom" code here is probably related to building the flow of the pipeline rather than adding custom steps. So I would keep this pipeline in the "to be reproduced" group.

@cmaumet removed their assignment Dec 12, 2024
@bclenet added the 🏁 status: ready for dev (Ready for work) label and removed the ❓ question (Further information is requested) label Dec 12, 2024
This was referenced Dec 17, 2024
@bclenet self-assigned this Dec 19, 2024
@bclenet moved this from Backlog to In progress in NARPS Open Pipelines | Reproductions Dec 19, 2024
@bclenet added the 🚀 status: ready for test (Ready for running and testing) label and removed the 🏁 status: ready for dev (Ready for work) label Dec 20, 2024

bclenet commented Jan 10, 2025

The pipeline works on 4 subjects after #231 was merged. It still needs to be tested on 108 subjects.


bclenet commented Jan 14, 2025

Correlation results with 99 subjects (one value per hypothesis). Along with the 8 subjects (100, 030, 088, 116, 025, 043, 094, 113) originally excluded by team 0H5E, subject 119 was excluded due to a registration issue during preprocessing (file smwrrsub-119_task-MGT_run-03_bold_roi.nii, cf. attached picture).

[0.81, 0.88, 0.81, 0.88, 0.94, 0.71, 0.94, 0.71, -0.88]
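
For context, a per-hypothesis value like those above could be obtained by correlating the reproduced and original unthresholded maps voxel-wise. The project has its own comparison tooling; the function below is only a hedged illustration (names are placeholders) and assumes both maps are already resampled to the same grid.

```python
# Hedged sketch: Pearson correlation between two statistic maps on the same grid.
import numpy as np
import nibabel as nib

def map_correlation(reproduced_file, original_file):
    """Correlation over voxels that are finite and non-zero in both maps."""
    a = nib.load(reproduced_file).get_fdata().ravel()
    b = nib.load(original_file).get_fdata().ravel()
    valid = np.isfinite(a) & np.isfinite(b) & (a != 0) & (b != 0)
    return np.corrcoef(a[valid], b[valid])[0, 1]
```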

[Attached image: registration issue for sub-119 (smwrrsub-119_task-MGT_run-03_bold_roi.nii)]

@bclenet moved this from In progress to Done in NARPS Open Pipelines | Reproductions Jan 14, 2025
@bclenet added the ✨ goal: improvement (Improvement to an existing feature) label and removed the 🧠 hackathon (To assess during the hackathon) and 🚀 status: ready for test (Ready for running and testing) labels Jan 14, 2025
@bclenet removed their assignment Jan 14, 2025