First, we need some data. I picked a run from a handy preprocessed dataset (which one doesn't really matter for the demo, but it's from multibandACQtests), then ran the image through afni's 3dDetrend command twice: once with the options -normalize -polort 0, then with -normalize -polort 2. This gives three images (the original plus the two 3dDetrend outputs); I picked a voxel at random and extracted its timecourse from each image. Code for this step starts at line 22 of the knitr file (below the jump), and text files with the values are included in the archive for this post.
Here is the voxel's "raw" timecourse: after preprocessing (the HCP pipelines, in this case), but before running 3dDetrend:
And here is the same voxel's timecourse from the image after 3dDetrend -normalize -polort 2: it is centered on zero, and the baseline shift around TR 50 is somewhat reduced.
From the afni documentation, 3dDetrend -normalize "Normalize[s] each output voxel time series; that is, make the sum-of-squares equal to 1." Note that this is not what R scale() does by default (scale() makes the mean 0 and the standard deviation 1; see the full demo for a comparison). Line 113 of the demo code does the afni-style normalization: (tc.raw - mean(tc.raw)) / sqrt(sum((tc.raw - mean(tc.raw))^2)), where tc.raw is a vector with the voxel's timecourse.
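A minimal sketch of the difference (assuming tc.raw holds a voxel's timecourse, as in the demo code):

tc.afni <- (tc.raw - mean(tc.raw)) / sqrt(sum((tc.raw - mean(tc.raw))^2)); # afni-style
sum(tc.afni^2); # sum-of-squares is 1
sd(scale(tc.raw)[,1]); # R scale(): standard deviation is 1

Both center the timecourse on zero; they differ only by a constant scaling factor, so the plotted shapes are identical.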
For the -polort 2 part, the afni documentation says: "Add Legendre polynomials of order up to and including 'ppp' in the list of vectors to remove." I used the functions from Gregor Gorjanc's blog post (thanks!) and the R orthopolynom package for this; the key part is line 140: residuals(lm(tc.raw ~ seq(tc.raw) + Legendre(x=seq(tc.raw), n=1) + Legendre(x=seq(tc.raw), n=2))). The two Legendre terms (n=1 and n=2) match -polort 2; more (or fewer) terms can be included as desired.
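(A side note, not part of the demo: since lm() residuals depend only on the space spanned by the regressors, base R's orthogonal polynomials give the same detrended values without the orthopolynom package: residuals(lm(tc.raw ~ poly(seq_along(tc.raw), 2))). The linear and quadratic terms span the same space either way.)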
knitr code for the demo follows; this file, along with the compiled pdf and the three .txt input files needed for compiling, can also be downloaded here.
\documentclass{article}
\addtolength{\oddsidemargin}{-1.25in}
\addtolength{\evensidemargin}{-1.25in}
\addtolength{\textwidth}{2.5in}
\addtolength{\topmargin}{-.875in}
\addtolength{\textheight}{1.75in}
\begin{document}
<<startup, echo=FALSE, message=FALSE, warning=FALSE>>=
# code written by Joset A. Etzel (jetzel@wustl.edu) and may be adapted, provided this source is cited. 26 June 2018.
# Cognitive Control & Psychopathology Lab, Psychological & Brain Sciences, Washington University in St. Louis (USA)
library(orthopolynom); # for legendre.polynomials()
rm(list=ls());
# get some timecourses; one voxel from an image with 368 TRs.
tc.raw <- read.table("/scratch2/JoEtzel/R3dDetrend/testTC_raw.txt")[,1]; # a voxel from the preprocessed (but otherwise unaltered) image
tc.p0 <- read.table("/scratch2/JoEtzel/R3dDetrend/testTC_polort0norm.txt")[,1]; # same voxel, after 3dDetrend -normalize -polort 0
tc.p2 <- read.table("/scratch2/JoEtzel/R3dDetrend/testTC_polort2norm.txt")[,1]; # same voxel, after 3dDetrend -normalize -polort 2
# how the .txt files were made:
# 3dDetrend -prefix out.nii.gz -normalize -polort 0 in.nii.gz # afni command
# in.img <- readNIfTI("out.nii.gz", reorient=FALSE); # read the 4d volume into R (needs oro.nifti package)
# tc.in <- in.img[20,20,20,]; # extract the timecourse for the voxel at coordinates 20,20,20
# write.table(tc.in, "/scratch2/JoEtzel/R3dDetrend/testTC_polort0norm.txt"); # write voxel's timecourse as a text file
# ------------- functions for generating the Legendre polynomials, taken from Gregor Gorjanc's blog,
# https://ggorjan.blogspot.com/2009/02/fitting-legendre-orthogonal-polynomials.html:
#
# Now we can use this in a model, e.g., lm(y ~ leg4).
# I made this whole process easier - with the functions below, we can simply use lm(y ~ Legendre(x=scaleX(x, u=-1, v=1), n=4)).
# I contacted Fred and I hope he will add some version of these functions to his package.
#
# Update 2009-03-16: If we want that the intercept has a value of 1, we need to rescale the polynomial coefficients by multiplying with sqrt(2).
Legendre <- function(x, n, normalized=TRUE, intercept=FALSE, rescale=TRUE)
{
  ## Create a design matrix for Legendre polynomials
  ## x - numeric
  ## n - see orthopolynom
  ## normalized - logical, see orthopolynom
  ## intercept - logical, add intercept
  ## rescale - logical; retained from the original blog code but not used in this version
  tmp <- legendre.polynomials(n=n, normalized=normalized)
  if(!intercept) tmp <- tmp[2:length(tmp)]   # drop the constant (degree-0) polynomial
  polynomial.values(polynomials=tmp, x=x, matrix=TRUE)
}
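# example usage (an illustration added here, not from the original blog): a 10-row design
# matrix with the degree-1 and degree-2 Legendre polynomials, x first rescaled to [-1,1]:
# Legendre(x=scaleX(1:10, u=-1, v=1), n=2); # scaleX() is from orthopolynom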
polynomial.values <- function(polynomials, x, matrix=FALSE)
{
  ## Changed copy of polynomial.values from orthopolynom in order
  ## to add the matrix argument
  require(polynom)
  n <- length(polynomials)
  if(!matrix) {
    values <- vector(mode="list", length=n)
  } else {
    values <- matrix(ncol=n, nrow=length(x))
  }
  j <- 1
  while(j <= n) {
    if(!matrix) {
      values[[j]] <- predict(polynomials[[j]], x)
    } else {
      values[, j] <- predict(polynomials[[j]], x)
    }
    j <- j + 1
  }
  values
}
# ------------- end code from https://ggorjan.blogspot.com/2009/02/fitting-legendre-orthogonal-polynomials.html
@
\noindent \texttt{afni3dDetrend\textunderscore R.rnw} compiled \today\ \par
\noindent code written by Joset A. Etzel (\texttt{jetzel@wustl.edu}) on 26 June 2018 and released on \texttt{mvpa.blogspot.com}. \par
\vspace{0.1 cm}
\noindent This file shows how to perform the normalization and detrending done by the afni command \texttt{3dDetrend} in R. \par
\vspace{0.4 cm}
\noindent \textbf{timecourses created by afni 3dDetrend} \par
\noindent These are the timecourses for the same voxel before any detrending or normalization (``raw'', top), with \texttt{3dDetrend -normalize -polort 0} (normalization only) and with \texttt{3dDetrend -normalize -polort 2} (bottom, normalization and detrending). Note that the shape of the timecourses is the same in the first two plots but the y-axis range varies, while in the third plot the range is the same as in the second (and centered on zero) but the slower trends have changed. \par
\vspace{0.1 cm}
<<code1, cache=TRUE, dev='pdf', echo=FALSE, fig.height=2.5, fig.width=7.5, fig.align="center">>=
par(mar=c(1.1, 1.1, 1.25, 0.3), mgp=c(1.1, 0.2, 0), tcl=-0.3); # margin plotting parameters
plot(tc.raw, type='l', main="raw timecourse", xlab="", ylab="", cex.main=0.7, cex.axis=0.7, xaxs='i');
plot(tc.p0, type='l', main="timecourse with -normalize -polort 0", xlab="", ylab="", cex.main=0.7, cex.axis=0.7, xaxs='i');
lines(x=c(-1,400), y=rep(0,2), col='grey', lty='dashed');
plot(tc.p2, type='l', main="timecourse with -normalize -polort 2", xlab="", ylab="", cex.main=0.7, cex.axis=0.7, xaxs='i');
lines(x=c(-1,400), y=rep(0,2), col='grey', lty='dashed');
@
\newpage
\noindent \textbf{normalizing (only, no detrending)} \par
\noindent The first plot shows the \texttt{3dDetrend -normalize -polort 0} (normalization only) timecourse again (in black), with the timecourse produced by R overlaid in blue (the differences are very small). afni 3dDetrend normalizes by making ``the sum-of-squares equal to 1''; the R scale command normalizes to mean 0 and standard deviation 1. So, the timecourses produced by the two normalizations have the same shape, but different y-axis ranges. \par
\vspace{0.1 cm}
<<code2, cache=TRUE, dev='pdf', echo=TRUE, fig.height=2.5, fig.width=7.5, fig.align="center">>=
par(mar=c(1.1, 1.1, 1.25, 0.3), mgp=c(1.1, 0.2, 0), tcl=-0.3); # set plotting parameters
ttl <- "-normalize -polort 0. afni in black, R in blue (nearly identical lines).";
plot(x=-5, y=0, main=ttl, xlab="", ylab="", cex.main=0.7, cex.axis=0.7, xaxs='i',
xlim=c(1,length(tc.p0)), ylim=c(min(tc.p0),max(tc.p0)));
lines(x=c(-1,400), y=rep(0,2), col='grey', lty='dashed'); # zero line
lines(tc.p0, col='black', lwd=1.5);
tc.norm <- (tc.raw-mean(tc.raw))/sqrt(sum((tc.raw-mean(tc.raw))^2)); # R command for -normalize -polort 0
lines(tc.norm, col='blue', lty='dashed');
# R scaling: same shape, different y-axis range.
tc.scale <- scale(tc.raw);
plot(x=-5, y=0, main="R scale function", xlab="", ylab="", cex.main=0.7, cex.axis=0.7, xaxs='i', xlim=c(1,length(tc.raw)), ylim=c(min(tc.scale),max(tc.scale)));
lines(x=c(-1,400), y=rep(0,2), col='grey', lty='dashed'); # zero line
lines(tc.scale, col='black');
# the differences are very small:
max(abs(tc.norm - tc.p0)); # largest absolute difference between the afni and R versions
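# the two normalizations differ only by a constant factor: scale() divides by the standard
# deviation sqrt(sum-of-squares/(n-1)), afni by sqrt(sum-of-squares), so tc.scale should
# equal tc.norm * sqrt(n-1):
max(abs(tc.scale[,1] - tc.norm*sqrt(length(tc.raw)-1)));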
@
\newpage
\noindent \textbf{detrending and normalizing} \par
\noindent The first plot shows the \texttt{3dDetrend -normalize -polort 2} (normalization and detrending) timecourse again (in black), with the timecourse produced by the R code overlaid in blue (the differences are very small). Note that the first part of the timecourse is now a bit closer to 0, rather than clearly below as it was in the raw version. The shape of the timecourse after TR 50 is a bit less flat, however (a side effect of fitting the polynomials). \par
\vspace{0.1 cm}
<<code3a, cache=TRUE, dev='pdf', echo=TRUE, fig.height=2.5, fig.width=7.5, fig.align="center">>=
par(mar=c(1.1, 1.1, 1.25, 0.3), mgp=c(1.1, 0.2, 0), tcl=-0.3); # set plotting parameters
ttl <- "-normalize -polort 2. afni in black, R in blue (nearly identical lines).";
plot(x=-5, y=0, main=ttl, xlab="", ylab="", cex.main=0.7, cex.axis=0.7, xaxs='i',
xlim=c(1,length(tc.p2)), ylim=c(min(tc.p2),max(tc.p2)));
lines(x=c(-1,400), y=rep(0,2), col='grey', lty='dashed'); # zero line
lines(tc.p2, col='black', lwd=1.5);
# R commands for -normalize -polort 2
lm.out <- lm(tc.raw ~ seq(tc.raw) + Legendre(x=seq(tc.raw), n=1) + Legendre(x=seq(tc.raw), n=2)); # lm with two Legendre polynomials (polort 2)
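# note: seq(tc.raw) and the Legendre n=1 term both span the linear trend (and the n=2 term
# includes degree 1 again), so lm() drops the redundant collinear columns; the residuals
# are unaffected.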
tmp <- residuals(lm.out);
tc.norm <- (tmp-mean(tmp))/sqrt(sum((tmp-mean(tmp))^2)); # afni-style normalizing
lines(tc.norm, col='blue', lty='dashed');
@
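\vspace{0.1 cm}
\noindent A quick cross-check (a sketch using base R, not afni): because \texttt{lm} residuals depend only on the space spanned by the design matrix, \texttt{poly} should give the same detrended timecourse as the Legendre terms. \par
<<code3check, cache=TRUE, echo=TRUE>>=
tc.legendre <- residuals(lm(tc.raw ~ seq(tc.raw) + Legendre(x=seq(tc.raw), n=1) + Legendre(x=seq(tc.raw), n=2)));
tc.poly <- residuals(lm(tc.raw ~ poly(seq_along(tc.raw), 2))); # base R orthogonal polynomials, degree 2
max(abs(tc.legendre - tc.poly)); # should be effectively zero
@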
\newpage
\noindent For comparison, here is the effect of removing linear trends only (in black). Compared to the original (R-style) scaled timecourse (in grey), the linear detrending brought the first part of the timecourse up closer to zero, and dropped the last part a bit below zero. \par
\vspace{0.1 cm}
<<code3b, cache=TRUE, dev='pdf', echo=TRUE, fig.height=2.5, fig.width=7.5, fig.align="center">>=
par(mar=c(1.1, 1.1, 1.25, 0.3), mgp=c(1.1, 0.2, 0), tcl=-0.3); # set plotting parameters
# R lm and scaling (linear detrending only)
ttl <- "R lm (black, no Legendre polynomials) overlaid on the timecourse with scaling only";
plot(x=-5, y=0, main=ttl, xlab="", ylab="", cex.main=0.7, cex.axis=0.7, xaxs='i',
xlim=c(1,length(tc.raw)), ylim=c(-4,2));
lines(x=c(-1,400), y=rep(0,2), col='grey', lty='dashed'); # zero line
tc.norm <- scale(tc.raw); # scaling only (no lm) for comparison
lines(tc.norm, col='darkgrey');
lm.out <- lm(tc.raw ~ seq(tc.raw)); # linear detrending (no Legendre polynomials)
tc.norm <- scale(residuals(lm.out)); # then scaling
lines(tc.norm, col='black');
@
\end{document}