This post introduces an expanded and updated demo of how detrending and normalizing ("scaling") individual voxel (or vertex) timecourses works with the afni 3dDetrend and 3dDeconvolve commands.
In my original post I showed how to duplicate what 3dDetrend -normalize -polort 2 does with R code, not to replace the afni function, but to understand what it is doing. This expanded version simplifies the old code a bit, but more importantly, adds sections explaining another common method of preparing timecourses for analysis: using 3dDeconvolve with -num_stimts 0 -polort A -errts (plus censoring, motion regressors, etc.; see below) to create the residual error timeseries.
The compiled demo, afni3dDetrend3dDeconvolve_R.pdf, is made with knitr; its source code (with many comments) and the files required for compilation are in a section of my osf site, DOI https://doi.org/10.17605/OSF.IO/NU324 (please include that DOI in any citation).
starting point: a voxel's timecourse
The examples use a left motor grey matter voxel from a preprocessed (with fmriprep 1.3.2) fMRI task run from the DMCC55B dataset; I chose it arbitrarily. The plot below is directly from the preprocessed nifti; this voxel has values around 6900, and the timecourse vector is length 540. DMCC55B's TR was 1.2 s, so this is a 10.8 minute-long run. The grey vertical lines are at one-minute intervals, with TR (frame) number along the x-axis and BOLD amplitude along the y-axis. (See the knitr .rnw for details, plotting code, etc.; this blog post just has a few highlights.)
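A plot like this one can be made in base R along these lines. This is a minimal sketch with simulated values standing in for the real voxel (tc.fake and TR are my own names, not objects from the demo):

```r
# minimal sketch of the timecourse plot; tc.fake stands in for the real voxel's values
set.seed(1);
TR <- 1.2;                                    # seconds, as in DMCC55B
tc.fake <- 6900 + cumsum(rnorm(540, sd=3));   # simulated values near 6900, length 540
plot(tc.fake, type='l', xlab="TR (frame)", ylab="BOLD");
abline(v=seq(0, 540, by=60/TR), col='grey');  # vertical lines at one-minute intervals
```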
normalizing and detrending
Scaling alone isn't usually sufficient to prepare fMRI timeseries for analysis; we also need at least a bit of detrending. There's no universally correct degree or type of detrending to use. I generally recommend a modest amount of detrending before parcel-averaging types of analyses, specifically 3dDetrend -normalize -polort 2 .
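For reference, that recommended afni call can be run from R in the same system2() style as the 3dDeconvolve call later in this post. A sketch only: afni.path, bold.fname, and np2.fname are placeholders for your own paths, and the system2() line is commented out since it needs an afni installation:

```r
# hypothetical wrapper for the recommended 3dDetrend call; paths are placeholders
afni.path <- "/usr/local/afni/"; bold.fname <- "bold.nii.gz"; np2.fname <- "bold_np2.nii.gz";
detrend.args <- paste0("-normalize -polort 2 -prefix '", np2.fname, "' '", bold.fname, "'");
# system2(paste0(afni.path, "3dDetrend"), args=detrend.args, stdout=TRUE);
```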
In the plot below, the same voxel timecourse as above is plotted after normalizing only (tc.np0, black, from 3dDetrend -normalize -polort 0), or with detrending at polort 2 (blue, tc.np2), and the more aggressive polort 5 (green, tc.np5).
Notice that the spiky parts of the timecourses are pretty much the same in all three versions, but the slower changes vary more; e.g., in the first minute the no-detrending line (tc.np0) is furthest from zero, the tc.np2 line is closer, and the tc.np5 line closest. It's sensible that larger polort numbers have more of an effect on the timecourse's shape, since, as explained in the afni help for 3dDetrend, -polort ppp gives "the Legendre polynomials of order up to and including 'ppp' in the list of vectors to remove", so larger -polort numbers mean removing more complex trends. The R code below shows how to do this type of normalizing and detrending; Legendre() is from Gregor Gorjanc, and requires the orthopolynom and polynom R packages.
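As an aside, lm() residuals depend only on the space the regressors span, so any basis covering quadratic trends (the Legendre polynomials, base R's poly(), even raw powers) gives the same detrended timecourse. A self-contained check with simulated data (tc.fake is a stand-in, not the demo voxel):

```r
# residuals are identical for any basis spanning the same polynomial trends
set.seed(1);
x <- 1:540;
tc.fake <- 6900 + 0.5*x - 0.001*x^2 + rnorm(540, sd=10);  # simulated drifting timecourse
r.orth <- residuals(lm(tc.fake ~ poly(x, 2)));   # orthogonal polynomial basis
r.raw  <- residuals(lm(tc.fake ~ x + I(x^2)));   # raw powers, same span
all.equal(r.orth, r.raw);   # TRUE, up to numerical precision
```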
# R commands for 3dDetrend -normalize -polort 2
lm.out <- lm(tc.raw ~ Legendre(x=seq(tc.raw), n=2)); # lm with two Legendre polynomials (polort 2)
tmp <- residuals(lm.out); # extract the residuals
tc.Rnp2 <- (tmp-mean(tmp))/sqrt(sum((tmp-mean(tmp))^2)); # normalize the residuals, afni-style

residual error via 3dDeconvolve
It's common to include the realignment parameters as nuisance regressors and to censor high-motion frames before fMRI timecourse analyses; both can be done with 3dDeconvolve. Skipping rather a lot of explanations from the full demo, the afni command is:
# errts.fname is the file made by 3dDeconvolve, from which the single voxel timecourse was extracted:
system2(paste0(afni.path, "3dDeconvolve"),
        args=paste0("-input '", scale.fname, "' -polort A -float ",
                    "-censor '", c.fname, "' -num_stimts 0 ",
                    "-ortvec '", mot.fname, "' moveregs ",
                    "-nobucket -errts ", errts.fname), stdout=TRUE);
where -num_stimts 0 means not to include any events in the model, -errts that we want afni to write the residual error time series from the "full model fit to the input data" into file errts.fname (in this case, a .nii.gz), and -polort A that afni should set the polort level according to the run length (here, that gives 5).
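As I understand the 3dDeconvolve help, the -polort A rule is 1 + floor(run duration / 150 seconds), which indeed gives 5 for this run:

```r
# afni's -polort A rule: 1 + floor(run duration / 150 s)
n.TRs <- 540; TR <- 1.2;        # this run: 540 frames at 1.2 s
1 + floor((n.TRs * TR)/150);    # 5, as stated above
```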
Below is the 3dDetrend -normalize -polort 5 (tc.np5) timecourse again in green, with the new (-errts) version from the 3dDeconvolve command in pink:
The errts timecourse is highly correlated with the np5 version, which makes sense, since both included polort 5 detrending. They're not perfectly correlated, though: the 3dDeconvolve command also did censoring and included the motion regressors. I don't have a simple way to describe the differences in the lines; they're clearly very similar, but not identical; sometimes one is more extreme or spiky, sometimes the other.
The errts image has 0 in the censored frames. This is obvious in a 4d nifti (entire frame filled with 0s), but ambiguous in a single voxel (or vertex) timecourse like this (in the plot the censored frames are circled on the tc.errts timecourse, squared on the np5 version). For some analyses in R (e.g., averaging frames after an event for temporal compression) it'd be sensible to use NA for the censored frames.
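Converting those 0s to NA takes one line. A sketch with stand-in values (in the demo, tc.errts is the extracted voxel and censor.TRs lists the censored frames):

```r
# stand-ins: a simulated errts voxel and the demo's five censored frames
set.seed(1);
tc.errts <- rnorm(540); censor.TRs <- c(300, 316, 431, 448, 497);
tc.errts[censor.TRs] <- 0;                         # as 3dDeconvolve writes them
tc.errts.NA <- replace(tc.errts, censor.TRs, NA);  # NA version for R analyses
mean(tc.errts.NA, na.rm=TRUE);                     # na.rm=TRUE needed once NAs are present
```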
This R code matches the 3dDeconvolve calculations:
# made in startup code chunk; download at https://osf.io/nu324/files/c6nax
mot.fname <- paste0(demo.path, "sub-f1027ao_ses-wave1bas_task-Stroop_6regressors_demean.txt");
mot.tbl <- read.delim(mot.fname, sep=" ", header=FALSE); # 540 x 6
# made in startup code chunk; download at https://osf.io/nu324/files/bjrk8
c.fname <- paste0(demo.path, "sub-f1027ao_ses-wave1bas_task-Stroop_FD_mask0.5.txt"); # 0 1 censor file
censor.vec <- read.table(c.fname, header=FALSE)[,1];
censor.TRs <- which(censor.vec == 0) # [1] 300 316 431 448 497
# first, remove censored frames from the motion regressors and input timecourse
# The input timecourse tc.scale is the voxel from file scale.fname: the input bold.fname after
# scaling with 3dcalc -expr 'min(200, a/b*100)*step(a)*step(b)'
scale.vec <- tc.scale[-censor.TRs];
col1.vec <- mot.tbl$V1[-censor.TRs]; # demeaned trans_x
col2.vec <- mot.tbl$V2[-censor.TRs];
col3.vec <- mot.tbl$V3[-censor.TRs];
col4.vec <- mot.tbl$V4[-censor.TRs];
col5.vec <- mot.tbl$V5[-censor.TRs];
col6.vec <- mot.tbl$V6[-censor.TRs];
# fit the lm, polort 5, on the censored scale.vec and including 6 censored motion regressors
lm.out <- lm(scale.vec ~ Legendre(x=seq(scale.vec), n=5) + col1.vec + col2.vec + col3.vec + col4.vec + col5.vec + col6.vec);
tc.Rerrts <- residuals(lm.out); # extract the residuals
# put 0s back in where the censored frames were taken out.
for (i in 1:length(censor.TRs)) { tc.Rerrts <- append(tc.Rerrts, 0, (censor.TRs[i]-1)); }
# the R version matches the 3dDeconvolve errts version
cor(tc.errts, tc.Rerrts); # almost perfect
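The append() loop works because censor.TRs is sorted ascending, but a loop-free version may be clearer. A sketch with simulated residuals standing in for residuals(lm.out):

```r
# loop-free equivalent: residuals at the kept frames, 0 at the censored ones
set.seed(1);
censor.vec <- rep(1, 540); censor.vec[c(300, 316, 431, 448, 497)] <- 0;  # demo's censor file
resid.fake <- rnorm(sum(censor.vec == 1));   # stand-in for residuals(lm.out), length 535
tc.full <- rep(0, length(censor.vec));
tc.full[censor.vec == 1] <- resid.fake;      # censored frames stay 0, as afni writes them
```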
musings
Working out the R code that matches the 3dDeconvolve -errts output demystified it for me; the 3dDetrend -normalize -polort 2 detrending and normalizing ("np2") that I've treated as a default (for timecourse-averaging type analyses) is closer to these 3dDeconvolve residuals ("errts") than I'd thought.
Am I going to change my default detrending method? Is the 3dDeconvolve errts "better" than the 3dDetrend np2 for an analysis like this? I think incorporating the censoring (to NA, not 0) and polort-picking calculation (which gave 5 in this demo) from 3dDeconvolve (instead of using 2 regardless of run length) would be sensible, modest improvements.
I'm less confident about whether to add in the motion regressors. My sense was that including these would somehow "account for" or "clean up" any motion effects not "corrected" by preprocessing. And including the six realignment parameter columns does change the timecourse produced by the model a bit ... but the result is much the same whether the actual realignment parameters or random ones are included, making me suspect the change is due more to the extra model degrees of freedom than to actually "fixing" the head motion.
Here's code for the random-motion-regressor models:
# permute numbers in each motion regressor separately
lm.out <- lm(scale.vec ~ Legendre(x=seq(scale.vec), n=5) + sample(col1.vec) + sample(col2.vec) + sample(col3.vec) + sample(col4.vec) + sample(col5.vec) + sample(col6.vec));
tc.test1 <- residuals(lm.out); # extract the residuals
for (i in 1:length(censor.TRs)) { tc.test1 <- append(tc.test1, 0, (censor.TRs[i]-1)); } # put 0s back in
# random numbers for the motion regressors
ct <- length(scale.vec); # how long to make each fake motion regressor column
lm.out <- lm(scale.vec ~ Legendre(x=seq(scale.vec), n=5) + rnorm(ct) + rnorm(ct) + rnorm(ct) + rnorm(ct) + rnorm(ct) + rnorm(ct));
tc.test2 <- residuals(lm.out); # extract the residuals
for (i in 1:length(censor.TRs)) { tc.test2 <- append(tc.test2, 0, (censor.TRs[i]-1)); } # put 0s back in
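The degrees-of-freedom point can also be seen with fully simulated data: any six regressors soak up a similar (small) amount of variance from pure noise, so the residuals change about the same no matter which six columns are used. A self-contained sketch, not the demo data:

```r
# with noise-only data, two unrelated regressor sets leave very similar residuals
set.seed(42);
y <- rnorm(535);                       # stand-in for the censored timecourse
X1 <- matrix(rnorm(535*6), ncol=6);    # one set of fake "motion" regressors
X2 <- matrix(rnorm(535*6), ncol=6);    # a different set
r1 <- residuals(lm(y ~ X1));
r2 <- residuals(lm(y ~ X2));
c(df.residual(lm(y ~ X1)), df.residual(lm(y ~ X2)));  # same df spent: 528 528
cor(r1, r2);   # very high: both residual vectors stay close to y
```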