[0H5E] Pipeline reproduction (SPM - raw) #223
Comments
Warning: the pipeline might not be reproducible, as it seems to use custom code that is not shared.
@bclenet - I've reviewed the description of this pipeline and did not identify steps that would make it impossible for us to reproduce. I agree with your reading that the "custom" code here is more likely related to building the flow of the pipeline than to adding custom steps. So I would keep this pipeline in the "to be reproduced" group.
Pipeline works on 4 subjects after #231 was merged. Needs to be tested on 108 subjects.
Correlation results with 99 subjects: [0.81, 0.88, 0.81, 0.88, 0.94, 0.71, 0.94, 0.71, -0.88]. Along with the 8 subjects (100, 030, 088, 116, 025, 043, 094, 113) originally excluded by team 0H5E, subject 119 was excluded due to a registration issue during preprocessing.
Software: SPM
Input data: raw data
Additional context: see description below
List of tasks
Please tick the boxes below once the corresponding task is finished. 👍
- [ ] Add the status: ready for dev label to this issue.
- [ ] Create a file team_{team_id}.py inside the narps_open/pipelines/ directory. You can use a file inside narps_open/pipelines/templates as a template if needed.
- [ ] Write tests for the pipeline. You can use the tests/pipelines/test_team_* files as examples.

NARPS team description: 0H5E
General
teamID: 0H5E
NV_collection_link: https://neurovault.org/collections/4936/
results_comments: Our responses are complicated somewhat by the fact that SPM masks out voxels with low signal, and in group analyses, if any one subject is missing a voxel, that voxel is masked out of the entire analysis. In this dataset, the amygdala ended up masked out of the whole-brain group analysis for both groups, and the ventral striatum ended up masked out of the equal indifference group only. So, under the rule that we are supposed to report whole-brain analysis results, we have to say "No" for any regions that did not appear in the whole-brain analysis, but really a "Not applicable" response would be more appropriate. To help supplement the whole-brain corrected-threshold results, we also looked at whole-brain results at an uncorrected threshold of p<.05 (cluster size = 5 voxels) and ran ROI analyses based on manually selected coordinates for ventral striatum and amygdala within a small sphere. Our confidence values are thus partially based on the outcomes of these supplementary analyses. We also recognize that not every analysis package masks out low-signal voxels as aggressively as SPM, which has partially informed our judgments about similarity to other teams. Further notes on each hypothesis are below.
Hypothesis 1 -- Did not show up at corrected threshold but did show up at an uncorrected threshold (and was marginally significant in an ROI analysis), so we think a Type II error is reasonably likely and the hypothesis might have been confirmed in a larger sample.
Hypothesis 2 -- Did not show up at any threshold (corrected or uncorrected up to p<.05) and also did not show up in ROI analysis, so we are reasonably confident in our rejection of this one.
Hypothesis 3 -- Did not show up at any threshold, but we confirmed that ventral striatum was not in the mask for the analysis, so it would not be expected to. However, even with manually selected ROIs, the hypothesis was not confirmed anyway (p=.5).
Hypothesis 4 -- Did show up at the corrected threshold (and thus enough of ventral striatum was in-mask for the analysis to work), and also came out significantly in our manually selected ROI analysis (p=.03).
Hypothesis 5 -- There was a very large and obvious cluster even at the corrected threshold, and it was also highly significant in our ROI analysis (p=1.6e-5).
Hypothesis 6 -- Did not show up at corrected threshold but at an uncorrected threshold there was a fairly clear cluster, although the ROI analysis was not significant (p=.14). So we think there is some chance of a Type II error and that the hypothesis might be confirmed in a larger sample, but with a bit less confidence than we had for Hypothesis 1. (Less confident of a possible Type II error, that is -- i.e., a bit more confident in our main result that Hypothesis 6 was not confirmed.)
Hypothesis 7 -- Did not show up at any threshold but we confirmed that amygdala was not in the mask for the analysis, so it would not be expected to. The effect WAS significant with manually selected ROIs (p=2.3e-5) -- but in the opposite direction of what was hypothesized (i.e., we found a negative effect of loss). Thus, by our interpretation of the rules, we are reporting this as "hypothesis not confirmed" with fairly high confidence, as the whole-brain analysis did not have any information about the amygdala and the ROI analysis actually contradicted the hypothesis.
Hypothesis 8 -- Did not show up at any threshold but again was not in the analysis mask. However, nothing was significant in the ROI analysis either (p=.41).
Hypothesis 9 -- This one is tricky to report, as we knew the amygdala did not appear in the whole-brain analysis mask for either group, and thus it was formally impossible to test with a whole-brain group analysis as instructed. However, in our ROI analysis, the hypothesis appeared to be confirmed (p=.019), although both means were negative, so really the "greater response to losses" for the equal range group would be more accurately described as a "less negative response to losses." By our interpretation of the rules, we are reporting this as "hypothesis not confirmed" in the whole-brain analyses because it can't really be tested with our standard pipeline, although we know the ROI analysis does seem to confirm the hypothesis, hence the low confidence we reported for the whole-brain "result." Note that we did run the stats model we would have run for the whole-brain analyses merely for purposes of uploading to NeuroVault, but we know there are no amygdala voxels in it.
preregistered: No
link_preregistration_form: NA
regions_definition: For the whole-brain analyses, we mostly used our own expertise in neuroanatomy combined with searches of the literature, especially for the subcortical structures. For vmPFC, whole-brain results were fortunately all unambiguous (either large portions of what was unmistakably vmPFC were activated, or nothing in that general region was), but in all cases we did also compare our judgments to NeuroSynth masks that matched those anatomical search terms. Typically we would not frame our hypotheses exactly the way they were in this study -- e.g., testing the hypothesis "is there an effect in vmPFC?" with a whole-brain analysis, as instructed. Either we would have an anatomical hypothesis that we would test with an ROI analysis (based either on coordinates from an atlas/meta-analysis, a functional localizer, or manual definition of the anatomical boundaries), OR we would run a whole-brain analysis and simply report loci of activity descriptively (i.e., listing the significant clusters and assigning anatomical labels as accurately as possible to the voxels within those clusters, but with no specific a priori hypotheses about the regions activated). So the way the project was framed did not fit exactly into our standard way of making inferences, and we simply did our best to accommodate the requirements without changing too much of our standard process.
softwares: SPM: We are Luddites and used SPM8 because most of our scripts were written for that and we didn't feel like changing them, and most of the pre-processing steps we use haven't really changed much since SPM5. Don't judge us. NeuroSynth (online) was used to generate masks of anatomical regions for purposes of verifying our anatomical judgments. Custom Matlab scripts for everything not handled entirely within SPM.
general_comments:
Exclusions
n_participants: 100
exclusions_details:
Preprocessing
used_fmriprep_data: No
preprocessing_order:
brain_extraction: Not performed.
segmentation: Not performed (we use the older form of spatial normalization in SPM that does not require segmentation).
slice_time_correction: Not performed. (Typically we follow the recommendation not to use slice-timing correction for TRs under 2000ms or so.)
motion_correction: SPM8 "Realign" command -- Estimation parameters as follows:
Reslicing parameters:
Other notes: No non-rigid registration at this stage; no unwarping; similarity metric is least squares (no option to change in SPM8), no slice timing correction as noted above.
motion:
gradient_distortion_correction: Not performed.
intra_subject_coreg: SPM8 "Coregister" command, "Estimate" procedure only (we do not write out interpolated images at this stage; we only save the affine transformations in the NIfTI headers, as with motion correction).
Other notes: All rigid at this stage.
distortion_correction: Not performed.
inter_subject_reg: Two main steps -- rigid and non-rigid (see overall order of pre-processing above). Rigid intersubject registration: SPM8 "Coregister" command, "Estimate" and "Reslice" procedures.
"Estimate" options:
"Reslice" options:
Non-rigid registration: SPM8 "Normalise" command, "Estimate" and "Write" procedures.
"Estimate" options:
"Write" options:
Non-rigid registration parameters from the anatomical non-rigid registration were then applied to all functional images from that subject (after rigid coregistration of the functional images to the subject's anatomical image), with all of the same "Write" options under the SPM8 "Normalise" command except for voxel size, which was set to 2.5mm isotropic.
Other notes: No surface-based registration; all volume-based. All based on T1 anatomical; no bias field correction, no segmentation, no Talairach transformations.
intensity_correction: Not performed.
intensity_normalization: Not performed (we used SPM but selected "None" rather than "Scaling" for the "Global normalisation" parameter during individual-subject stats).
noise_removal: Not performed. (In particular, we did not include motion parameters in our statistical estimation.)
volume_censoring: Not performed. (Any subject with motion spikes extreme enough to warrant de-spiking was removed from analysis entirely.)
spatial_smoothing: SPM8 "Smooth" command. Options:
Other notes: Basic fixed (non-iterative) 9mm FWHM isotropic Gaussian smoothing kernel. All in volume space after all registration steps (i.e., after all subjects are in MNI space). True confession time: We originally used a 6mm kernel but after running some preliminary sanity checks on the group analyses, we decided too many voxels were getting masked out of the group analysis due to anatomical inconsistencies between subjects, so we re-ran the smoothing at 9mm and used that for the final analyses. We would normally use a smaller kernel but also our typical sample sizes are smaller than in this dataset (closer to N=20 and a single subject group), and because of the way SPM masks out voxels from group analyses, the higher the N, the more voxels are going to get masked out -- hence our decision to smooth more in order not to over-mask the group analyses. (This still did not work perfectly in the case of amygdala and ventral striatum, but we did our best.)
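For reference, the relationship between the FWHM quoted above and the Gaussian sigma actually applied per voxel can be sketched in Python (a minimal illustration using scipy; `fwhm_to_sigma` and `smooth_volume` are hypothetical helper names, and SPM's own Smooth step handles headers and edge behaviour differently):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fwhm_to_sigma(fwhm_mm: float, voxel_mm: float) -> float:
    # FWHM = 2*sqrt(2*ln 2) * sigma, so sigma in voxel units is:
    return fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm

def smooth_volume(vol: np.ndarray, fwhm_mm: float = 9.0, voxel_mm: float = 2.5) -> np.ndarray:
    # Isotropic Gaussian smoothing comparable in width to SPM's Smooth step
    return gaussian_filter(vol, sigma=fwhm_to_sigma(fwhm_mm, voxel_mm))
```

With the team's final choices (9 mm FWHM on 2.5 mm isotropic voxels) the kernel works out to a sigma of roughly 1.53 voxels.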
preprocessing_comments: The one pre-processing step we did that wasn't discussed here (but which was mentioned in our description of the overall pre-processing sequence) was a resampling step between all the linear transformations on the functional images and the nonlinear (warping) transformation. We only do this as a separate step because we try to avoid resampling the functional images multiple times without need when all the transformations are doing is updating the NIfTI header's affine transformation matrix, which is true for our initial motion correction and linear coregistration steps; but we do like to save a resampled copy of the images, with all of those affine transformations applied, before any subsequent steps. (This is normally so we can do certain statistical analyses in individual-subject space that has been loosely affine-registered with standard space but not warped... we did not have to do any such analyses in this particular case, but it is part of our standard processing stream, so we left it in.)
Unfortunately, SPM8's coregistration routine has a weird feature where it will only resample output images at the same voxel size as the reference image (even though the normalization routine will let you freely select a bounding box and voxel size). We did not want to resample all functional images at the resolution of our anatomical template, and unfortunately SPM8 does not provide a pure image-resampling function, so our hacked-together solution is to generate a spatial normalization matrix defining a null transformation and apply that to the functional images through the SPM8 normalization routine, which effectively resamples the images to the desired resolution without transforming them (but with any affine transformations from the NIfTI header applied prior to resampling and then zeroed out in the resampled images).
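The effect of that null-transformation trick -- changing the sampling grid without moving the brain -- can be approximated in a few lines of Python (a simplified sketch assuming isotropic voxels and ignoring the NIfTI-header affine bookkeeping; `resample_iso` is a hypothetical helper, and SPM8's normalization routine uses its own bounding-box and interpolation settings):

```python
import numpy as np
from scipy.ndimage import zoom

def resample_iso(vol: np.ndarray, voxel_in: float, voxel_out: float = 2.5) -> np.ndarray:
    # Change grid resolution only -- no rotation, translation, or warp,
    # mimicking a "null" spatial transformation applied purely to resample.
    factor = voxel_in / voxel_out
    return zoom(vol, factor, order=1)  # trilinear interpolation
```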
We chose to resample to 2.5mm isotropic voxels as that is the usual size at which we currently acquire our own fMRI data (and a convenient size to work with in general), although arguments could be made that another size would be more optimal for the NARPS dataset, which was originally acquired at a different voxel size.
Analysis
data_submitted_to_model: All timepoints, except for the removal of the first four volumes in each run (as discussed above), and all subjects except those specifically removed (as described above). We used all runs from every subject left in the analysis.
spatial_region_modeled: Full brain -- although, as we noted at the top, this didn't really work as intended because SPM8 masked out essentially all amygdala voxels from the group analysis, due to one or more subject(s) having low signal, as well as most/all of the ventral striatum voxels for one of the participant groups. As such, inferences for those regions were not really possible following our standard pre-processing and stats procedures, even after expanding our initially chosen smoothing kernel and excluding subjects whose brains did not fit the groupwise in-brain voxel mask very well. This is not normally an issue for us because 1) we typically focus on areas that do not suffer from low signal, e.g. visual cortex and frontoparietal regions; 2) we typically have smaller samples (~20), and the more participants in a sample, the more voxels SPM8 is going to mask out of the group analysis; and 3) we don't do too many groupwise whole-brain analyses these days anyway (most of our analyses are things like MVPA, which are all run at the single-subject level and do not require second-level fMRI group analyses), and if we were focusing on tricky areas like the amygdala or ventral striatum, we would typically use an ROI approach. As such, we did do the supplementary ROI analyses described earlier, where we essentially chose a single voxel from each region of interest for each subject (left amygdala, right amygdala, left ventral striatum, right ventral striatum) based on examining the functional and anatomical images, extracted a small (6mm radius) sphere around each one, averaged the relevant values from the first-level analysis in each sphere (excluding any NaN values using Matlab's 'nanmean' function), averaged left and right (excluding either one if all of its voxels were NaN values), and used that resultant value as the subject's value for that ROI in the subsequent group analysis.
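The sphere-extraction-and-nanmean procedure just described can be sketched as follows (a minimal numpy illustration; `sphere_mean` and `roi_value` are hypothetical names, seed coordinates are voxel indices, and the team's actual Matlab code may differ in details such as sphere rasterization):

```python
import numpy as np

def sphere_mean(con_img, seed_vox, radius_mm=6.0, voxel_mm=2.5):
    # Mean of in-sphere contrast values, ignoring NaNs (cf. Matlab's nanmean)
    idx = np.indices(con_img.shape)
    dist_mm = np.sqrt(sum((idx[d] - seed_vox[d]) ** 2 for d in range(3))) * voxel_mm
    values = con_img[dist_mm <= radius_mm]
    return np.nan if np.all(np.isnan(values)) else float(np.nanmean(values))

def roi_value(con_img, left_seed, right_seed, **kw):
    # Average left and right spheres, dropping a side whose voxels are all NaN
    sides = [sphere_mean(con_img, s, **kw) for s in (left_seed, right_seed)]
    sides = [v for v in sides if not np.isnan(v)]
    return float(np.mean(sides)) if sides else np.nan
```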
ROI analyses for VMPFC were similar, except we used the same seed voxel for everyone ([0 48 -8], the maximum voxel in the NeuroSynth "ventromedial" meta-analysis) and a slightly larger sphere (10mm radius), and did not run separate analyses for left and right since the ROI was medial.
independent_vars_first_level: Orthogonalization of regressors -- this got a little weird. It is our understanding that SPM orthogonalizes parametric regressors in the order they are entered (i.e. the second parametric modulator essentially gets whatever variance is left over from the first; see e.g. http://andysbrainblog.blogspot.com/2014/08/parametric-modulation-with-spm-why.html). In theory this should not be a big deal if the regressors were properly orthogonalized in the design, but just in case, our solution was to run the first-level stats for each subject twice -- always putting both regressors in the model, but once entering the effect of gains first (followed by losses), and once entering losses first (followed by gains). The reasoning was that we wanted to emulate the logic of Type III sums of squares, e.g. in an ANOVA model -- considering only the unique contribution of the regressor in question after the other effects had been accounted for. Thus, to get the effect of losses, we used the model in which gains had been entered first, and to get the effect of gains, we used the model in which losses had been entered first.
At the first level, we then entered a contrast to get the positive effect of gains for the losses-first model or the positive effect of losses for the gains-first model, e.g. [0 0 .25 0 0 .25 0 0 .25 0 0 .25 0 0 0 0] to get a contrast image that effectively averaged the beta values across runs for each subject.
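The order-dependent orthogonalization that motivates the two-model trick can be illustrated with a small numpy sketch (serial residualization in the spirit of SPM's spm_orth, up to details such as mean-centering; `serial_orth` and the toy gain/loss regressors are illustrative, not the team's code):

```python
import numpy as np

def serial_orth(X: np.ndarray) -> np.ndarray:
    # Residualize each column against all preceding columns, so later
    # modulators keep only variance not explained by earlier ones.
    X = X.astype(float).copy()
    for j in range(1, X.shape[1]):
        prev = X[:, :j]
        beta, *_ = np.linalg.lstsq(prev, X[:, j], rcond=None)
        X[:, j] = X[:, j] - prev @ beta
    return X

# Toy correlated gain/loss parametric modulators
rng = np.random.default_rng(0)
gain = rng.normal(size=100)
loss = 0.5 * gain + rng.normal(size=100)

# Gains entered first: column 1 now holds only the loss-unique variance, so
# this is the model used to test the effect of losses (and vice versa).
gains_first = serial_orth(np.column_stack([gain, loss]))
# A first-level contrast like [0 0 .25 0 0 .25 0 0 .25 0 0 .25 0 0 0 0] then
# averages the chosen modulator's beta across the four runs (weight .25 each).
```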
RT_modeling: none
movement_modeling: 0
independent_vars_higher_level: Group stats were very simple in most cases.
We just used the "one-sample t-test" 2nd-level model type in SPM8 and used the contrast images from the 1st-level analysis, with no covariates, implicit masking only, no "global calculation" or "overall grand mean scaling" or "normalisation" (these latter three being PET-only settings).
Separate analyses were used for each of: the effect of gains in the equal range group, gains in the equal indifference group, losses in the equal range group, and losses in the equal indifference group.
For the effect of losses, in which we were asked to provide results in both positive and negative directions, we just specified two contrasts in the group model -- one of [+1] and one of [-1], with the latter simply reversing the direction of the test.
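How the [+1] and [-1] second-level contrasts relate for a one-sample t-test can be shown with a toy sketch (using scipy; `one_sample_group_test` is a hypothetical helper, and SPM's SPM{T} machinery differs in implementation):

```python
import numpy as np
from scipy import stats

def one_sample_group_test(con_values, direction=1):
    # direction=+1 corresponds to the [+1] contrast, -1 to the [-1] contrast,
    # which simply reverses the sign (and hence the direction) of the test.
    t, p_two = stats.ttest_1samp(direction * np.asarray(con_values, float), 0.0)
    p_one = p_two / 2.0 if t > 0 else 1.0 - p_two / 2.0  # one-tailed p
    return float(t), float(p_one)
```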
For hypothesis 9, the question of how between-groups effects were modeled was moot, because we knew the amygdala values would be masked out. However, our supplementary ROI analysis for that hypothesis was done using a simple two-sample t-test (equal variances assumed since the assumption was not violated, although it did not affect the outcome either way to assume equal variances or not).
Purely for purposes of uploading to NeuroVault, we did run the whole-brain between-groups model we would have run for hypothesis 9, which was a "two-sample t-test" SPM8 group model with independence=yes, variance=unequal, grand mean scaling=no, ANCOVA=no, no covariates, implicit masking only, no "global calculation" or "overall grand mean scaling" or "normalisation" (these latter three being PET-only settings).
model_type: Mass univariate.
model_settings: Included above but repeated here for the 1st level: We used SPM's default high-pass filter (128-sec period). No other nuisance regressors. Other first-level parameters included: microtime resolution=16 (default), microtime onset=1 (default), model Volterra interactions=no, global normalisation=none, no explicit mask, serial correlations=AR(1).
Included above but repeated here for the 2nd level (everything but hypothesis 9): No covariates, implicit masking only, no "global calculation" or "overall grand mean scaling" or "normalisation" (these latter three being PET-only settings).
Included above but repeated here for 2nd-level (hypothesis 9): Independence=yes, variance=unequal, grand mean scaling=no, ANCOVA=no, no covariates, implicit masking only, no "global calculation" or "overall grand mean scaling" or "normalisation" (these latter three being PET-only settings).
Further details: 2nd level was SPM default OLS approach. Variance for hypothesis 9 was SPM8 default re: pooling. Everything else in this section is N/A.
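The 128-sec high-pass filter mentioned in model_settings can be sketched as a discrete-cosine drift basis regressed out of the data (a simplified Python analogue of SPM's spm_filter; the exact basis count and filtering mechanics in SPM differ slightly, and `dct_highpass_basis`/`highpass` are illustrative names):

```python
import numpy as np

def dct_highpass_basis(n_scans: int, tr: float, cutoff: float = 128.0) -> np.ndarray:
    # Low-frequency discrete-cosine regressors with periods above the cutoff
    n_basis = int(np.floor(2.0 * n_scans * tr / cutoff))
    t = np.arange(n_scans)
    cols = [np.cos(np.pi * k * (2 * t + 1) / (2 * n_scans)) for k in range(1, n_basis + 1)]
    return np.column_stack(cols) if cols else np.empty((n_scans, 0))

def highpass(y: np.ndarray, tr: float, cutoff: float = 128.0) -> np.ndarray:
    # Remove slow drifts by regressing out the constant + DCT drift basis
    X = np.column_stack([np.ones(len(y)), dct_highpass_basis(len(y), tr, cutoff)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta
```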
inference_contrast_effect: Noted above, but simple contrasts of [0 0 .25 0 0 .25 0 0 .25 0 0 .25 0 0 0 0] were used at the first level, essentially to average across runs; at the second level, all models were simple t-tests, so the only contrasts entered were [+1] and [-1]. We did not double p-values, since the hypotheses in each region were directional (although in normal life we don't typically use or approve of one-tailed tests, for some reason they are still the norm in mass univariate fMRI analyses). Supplementary ROI analyses all used two-tailed tests.
search_region: Whole brain; no small-volume correction. (Except for supplementary ROI analyses, but those were condensed to single values before statistical inference, so no corrections were needed.)
statistic_type: Voxel-wise. No cluster significance corrections, but we did set a minimum cluster size of 5 voxels for ease of viewing results. (In theory, this would make our stats a bit over-conservative as we did not account for this minimum cluster size in any way, but it should be negligible for all practical purposes.)
pval_computation: Standard parametric inference.
multiple_testing_correction: Typical B&H voxel-wise FDR, as implemented in SPM8 (although in SPM8 it has to be enabled by a special preference setting).
comments_analysis: N/A on our end, but please let us know if anything is unclear / missing / appears to be wrong.
Categorized for analysis
region_definition_vmpfc: neurosynth, visually
region_definition_striatum: visually
region_definition_amygdala: visually
analysis_SW: SPM
analysis_SW_with_version: SPM8
smoothing_coef: 9
testing: parametric
testing_thresh: adaptive
correction_method: FDR voxelwise
correction_thresh_: k>5
Derived
n_participants: 100
excluded_participants: 100, 030, 088, 116, 025, 043, 094, 113
func_fwhm: 9
con_fwhm:
Comments
excluded_from_narps_analysis: No
exclusion_comment: Rejected due to large amount of missing brain in center.
reproducibility: 2
reproducibility_comment: