Friday, January 4, 2019

comparing fMRIPrep and HCP Pipelines: introduction

This is the first in a series of posts describing a set of comparisons we've done between fMRIPrep and the HCP Pipelines: if the same task fMRI dataset is run through both pipelines, extracting surfaces and volumes from each, how do the eventual GLM results compare?

These sorts of comparisons are conceptually easy but tricky to design and time-consuming to complete. I submitted this as a poster for OHBM 2019; we will likely write it up as a paper, and plan to release the code and images. But these blog posts are a start, and feedback is most welcome. Let me know if you want additional details or files and I can try to get them to you sooner rather than later.

roadmap
The final post also has some summary thoughts. An UPDATE (24 April 2019) describes the same analyses run with fMRIPrep version 1.3.2.

dataset

I am particularly interested in how the preprocessing would affect the statistics for the Dual Mechanisms of Cognitive Control (DMCC) project, since that's the project I'm most involved with. I've previously shown bits from this dataset and described the acquisition parameters; briefly, each DMCC scanning session includes two runs each of four cognitive control tasks (AX-CPT, Cued task-switching, Sternberg, and Stroop). Task runs were each approximately 12 minutes long, with alternating AP/PA phase encoding. All scanning was with a 3T Siemens Prisma scanner with a 32-channel head coil, 2.4 mm isotropic voxels, TR = 1.2 s, and an MB4 CMRR sequence.

The DMCC is a big dataset (currently around 80 people with at least 3 scanning sessions each, and we're not done). To keep the preprocessing comparisons tractable I picked just 13 of the participants to include, requiring that they have complete Baseline session data (two runs of each of the four tasks), be unrelated, have acceptable movement (a qualitative judgement), and have visible motor activity in their single-subject Buttons GLMs (with HCP Pipeline preprocessing). This last requirement is an attempt at an unbiased QC check: in three of our tasks the participants push a response button, so GLMs using these button-pushes as the only events of interest (ignoring all of the cognitive conditions) should show motor activation if the signal quality is acceptable. I used the volumetric HCP Pipeline preprocessing for this screening because that's what we've been using (and so have results for all people); that may introduce a bit of bias (favoring people who did well with the volumetric HCP Pipelines).

pipelines, GLMs, and parcels

These analyses use version 3.17.0 of the HCP Pipelines (adapted several years ago for the DMCC acquisitions) and fMRIPrep version 1.1.7; each pipeline began with the same DICOM images.

The same GLMs (event onsets, model types, etc.) were run on the images from each version (surface, volume) of each pipeline, all with AFNI. The "standard" DMCC GLMs are not particularly simple because the task runs are mixed (block and event), so we fit GLMs that model both "sustained" (block) and event-related activity. The GLMs use 3dDeconvolve and 3dREMLfit, with TENTzero (FIR-type) basis functions for the events (one knot per two TRs). Most of the GLM model parameters were dictated by the experimental design, but tweaking (e.g., how long to make the TENT knots, whether to include regressors for the block onsets and offsets) was done with the HCP Pipeline preprocessed images.
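To make that concrete, here is a minimal sketch of the kind of 3dDeconvolve-plus-3dREMLfit call described above. The file names, event-timing files, and TENTzero window (0 to 16.8 s with 8 knots, i.e., one knot per two 1.2 s TRs) are hypothetical placeholders, not the actual DMCC model, which also includes sustained (block) regressors, block onset/offset regressors, and nuisance terms.

# build the design matrix with 3dDeconvolve (hypothetical file names and timing)
3dDeconvolve \
  -input run1_preproc.nii.gz run2_preproc.nii.gz \
  -polort A \
  -num_stimts 2 \
  -stim_times 1 onsets_high.txt 'TENTzero(0,16.8,8)' -stim_label 1 HighControl \
  -stim_times 2 onsets_low.txt  'TENTzero(0,16.8,8)' -stim_label 2 LowControl \
  -x1D design.xmat.1D \
  -x1D_stop

# fit the same design with 3dREMLfit (autocorrelated noise model), writing t-stats
3dREMLfit \
  -matrix design.xmat.1D \
  -input "run1_preproc.nii.gz run2_preproc.nii.gz" \
  -Rbuck stats_REML.nii.gz \
  -tout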

To keep these preprocessing comparisons tractable at the GLM level, we only included Baseline session images, and focused on contrasting high and low cognitive control. Which trials correspond to high and low cognitive control varies across the four tasks, and we don't expect identical levels of difficulty or brain activity in the tasks. Regardless, contrasting high vs. low cognitive control conditions gives a fairly "apples to apples" basis for comparison, and is a reasonably strong effect.

Another complication for preprocessing comparisons is working out the anatomical correspondence: there is no one-to-one mapping between voxels in the volume and vertices in the surface representation. While we had both fMRIPrep and the HCP Pipelines output MNI-normalized volumes (so there is voxel-level volume-to-volume correspondence), the two pipelines use different surface meshes (fsaverage5 vs. HCP), so there isn't an exact match between the vertices (see this thread, for example).

For these comparisons I deal with correspondence across the output formats partly by making qualitative assessments (e.g., if this GLM output statistic is thresholded the same way, do the same areas tend to show up in the volumes and surfaces?), but more practically by summarizing the statistics within anatomical parcels. The parcellation from Schaefer et al. (2017) is ideal for this: they released it in volume (MNI) and surface (both fsaverage5 and HCP) formats. We used the 400-area version here, allowing direct comparisons: a mean of parcel #6 can be computed in all four output images (HCP volume, HCP surface, fMRIPrep volume, fMRIPrep surface). The number of voxels or vertices making up each parcel varies between the volume and surfaces, of course.
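As an illustration of these parcel-level summaries, here is a sketch of one way the volumetric version could be computed with AFNI's 3dROIstats; the file names are placeholders, and the surface summaries would use the corresponding fsaverage5 or HCP-mesh releases of the parcellation instead.

# mean of each GLM statistic within each of the 400 Schaefer parcels
# (volume case; parcel and statistic file names are hypothetical)
3dROIstats -mask Schaefer400_MNI.nii.gz -nzmean \
  stats_REML.nii.gz > parcel_means.txt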


UPDATE 10 January 2019: Here's the command actually used to run fMRIPrep; thanks Mitch Jeffers!
singularity run \
--cleanenv \
-B ${MOUNT}:/tmp \
/home/ccp_hcp/fmriprep/SingularityImages/fmriprep-1.1.7.simg \
--fs-license-file /tmp/.license/freesurfer/license.txt \
-w /tmp \
/tmp/${INPUT} /tmp/${OUTPUT} \
participant --participant_label ${SUBJECT} \
--output-space fsaverage5 \
-n-cpus 16 --omp-nthreads 4 \
--mem-mb 64000 -vvvv \
--use-plugin /tmp/plugin.yml
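
For context, the command above assumes a few shell variables are set beforehand, along these lines (the values here are purely illustrative placeholders, not the actual paths used):

MOUNT=/data/dmcc        # host directory bind-mounted to /tmp in the container
INPUT=bids              # BIDS input directory, under ${MOUNT}
OUTPUT=derivatives      # output directory, under ${MOUNT}
SUBJECT=01              # participant label, without the "sub-" prefix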



UPDATE 6 September 2019: The raw and preprocessed images of the dataset we used for these comparisons are now on OpenNeuro, as DMCC13benchmark.

UPDATE 4 January 2021: Corrected DMCC13benchmark openneuro links.
