Friday, March 31, 2023

bugfix for niftiPlottingFunctions.R, plot.volume()

If you use or have adapted my plot.volume() function from niftiPlottingFunctions.R, please update your code with the version released 30 March 2023. This post illustrates the bug (reversed color scaling for negative values) below the jump; please read it if you've used the function, and contact me with any questions. 

Huge thanks to Tan Nguyen for spotting and reporting the color discrepancy! 
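
To give a sense of the issue, here is a minimal sketch of this type of lookup error (not the actual plot.volume() code; the values and palette are made up): if the negative-value palette runs from pale (near zero) to dark (most negative), indexing it from the wrong end assigns the darkest colors to the smallest-magnitude values.

vals <- c(-0.2, -1.5, -3.0);   # example negative voxel values
cool <- colorRampPalette(c("lightblue", "darkblue"))(100);    # color 1 = near zero ... color 100 = most negative
idx <- round(100 * abs(vals) / max(abs(vals)));               # larger magnitudes should get darker colors
rbind(correct=cool[idx], reversed=cool[101 - idx]);           # reversed indexing flips the color scaling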

Thursday, March 2, 2023

reasonable motion censoring thresholds?

Recent participants have gotten me thinking (yet again) about the different types of motion during fMRI, their causes and consequences, and, more immediately practical, which motion censoring thresholds might be reasonable for particular tasks and analyses. 

I've long used (and recommended) FD > 0.9 as a censoring threshold for our event-related task fMRI studies (not functional correlation or resting state-type analyses). 0.9 is more lenient than many use for task fMRI; e.g., Siegel et al. (2014) advise FD 0.5 for adults (which we have) and FD 0.9 for children or clinical populations. (I need to update a previous post; I'd misread Siegel's recommendations.)
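
For concreteness, here is a quick sketch of the FD calculation and threshold counting, following Power et al. (2012): FD is the sum of the absolute frame-to-frame differences of the six realignment parameters, with the rotations converted from radians to millimeters on a 50 mm sphere. The file and column names below assume fmriprep-style confounds files (which also include a precomputed framewise_displacement column); adjust for your pipeline, and note that the file name here is just a placeholder.

mot <- read.delim("sub-0000_task-xxxx_desc-confounds_timeseries.tsv", na.strings="n/a");   # hypothetical file name
trans <- as.matrix(mot[, c("trans_x", "trans_y", "trans_z")]);      # translations (mm)
rot <- as.matrix(mot[, c("rot_x", "rot_y", "rot_z")]) * 50;         # rotations (radians), scaled to mm on a 50 mm sphere
fd <- c(0, rowSums(abs(diff(trans))) + rowSums(abs(diff(rot))));    # FD per frame; first frame set to 0
sum(fd > 0.9);    # frames that would be censored at the lenient threshold
sum(fd > 0.5);    # frames that would be censored at the stricter threshold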

Consider the following motion plot of a run from a recent participant, using my usual conventions (grey lines at one-minute intervals, frames along the x-axis); see this QC demonstration paper, the DMCC55B dataset descriptor, etc. for more explanation (and code). The lower panel shows FD, with the red line at the FD 0.9 censoring threshold and red x's marking censored frames; the grey horizontal line is at FD 0.5. 

I interpret this as a run with minimal overt head motion, but pronounced apparent motion (from breathing, strongest in trans_y, consistent with the AP encoding direction). Zero frames are censored at an FD > 0.9 threshold (red line); 130 are censored at FD > 0.5 (grey line). There are 562 frames in the run, so 130/562 = 23% of the frames would be censored at FD 0.5, and we would drop the run under our usual criterion of keeping runs with fewer than 20% of frames censored.
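
Spelled out, the run-level screening arithmetic for this run:

n.frames <- 562;      # frames in the run
n.censored <- 130;    # frames above FD 0.5
n.censored / n.frames;             # 0.23 of the frames censored
(n.censored / n.frames) >= 0.2;    # TRUE: the run is dropped under the < 20% criterion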

Contrast the above plot with the following (marked with a 2 in the upper left); the same task, scanner, etc., but a different participant:

I'd characterize plot 2 as having more pronounced overt than apparent motion; there are oscillations in trans_y, but these are dwarfed by the head motions, e.g., in the middle of the first minute. Looking at the censoring, 13 frames (corresponding to the largest overt head motions) are marked for censoring at FD 0.9; 40 are marked at FD 0.5, catching more of the overt head motions. 40/562 = 7% of the frames, well under the 20% run-dropping threshold.

musings

To my eye, the 0.5 FD threshold is pretty reasonable in the second case, since it censors more of the overt head motion irregularities, and only those spikes. But for the first plot the 0.5 FD threshold seems far too aggressive: censoring part of every few breaths, 23% of the total frames. What do you think?

I hope to do some proper analyses of the impact of different amounts of apparent vs. overt motion on statistical results, but it is not a trivial problem, particularly given task entrainment (breathing synchronizing to task timing).

As a final bit of food for thought, here are the tSNR and standard deviation (sd) images for each of the two runs, without censoring (all 562 frames), after preprocessing, and with the same color scaling. The first strikes me as higher quality, despite having more frames above the FD 0.5 threshold. I believe apparent motion may have less of an impact on image quality than overt motion because the head is not actually moving, and so is not creating the attendant magnetic field disruptions. The differences are clear to the eye when viewing these types of runs as movies, but it's not clear how they translate to statistical analyses.
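
If it's useful, here is a minimal sketch of computing these voxelwise sd and tSNR images in R with the RNifti package (the input file name is hypothetical; the input is a preprocessed 4D run):

library(RNifti);
img <- readNifti("run1_preprocessed.nii.gz");   # hypothetical file: 4D array, i x j x k x frame
vox.mean <- apply(img, 1:3, mean);              # temporal mean at each voxel
vox.sd <- apply(img, 1:3, sd);                  # temporal standard deviation (the sd image)
tsnr <- vox.mean / vox.sd;                      # temporal SNR: mean divided by sd
tsnr[!is.finite(tsnr)] <- 0;                    # zero out constant voxels (e.g., outside the brain)
summary(tsnr[tsnr > 0]);                        # quick look at the nonzero (roughly in-brain) values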