Friday, February 27, 2026

detrending and normalizing timecourses: afni 3dDetrend and 3dDeconvolve in R

This post introduces an expanded and updated demo of how detrending and normalizing ("scaling") of individual voxel (or vertex) timecourses works with afni 3dDeconvolve and 3dDetrend commands. 

In my original post I showed how to duplicate what 3dDetrend -normalize -polort 2 does with R code, not to replace the afni function, but to understand what it is doing. This expanded version simplifies the old code a bit, but more importantly, adds sections explaining another common method of preparing timecourses for analysis: using 3dDeconvolve with -num_stimts 0 -polort A -errts (plus censoring, motion regressors, etc.; see below) to create the residual error timeseries. 

The compiled demo is afni3dDetrend3dDeconvolve_R.pdf, generated from a knitr file; its source code (with many comments) and the files required for compilation are in a section of my osf site, DOI https://doi.org/10.17605/OSF.IO/NU324 (please include that DOI in any citation). 

starting point: a voxel's timecourse

The examples use a left motor grey matter voxel from a preprocessed (with fmriprep 1.3.2) fMRI task run from the DMCC55B dataset; I chose it arbitrarily. The plot below is directly from the preprocessed nifti; this voxel has values around 6900, and the timecourse vector is length 540. DMCC55B's TR was 1.2 s, so this is a 10.8 minute-long run. The grey vertical lines are at one-minute intervals, with TR (frame) number along the x-axis and BOLD amplitude along the y-axis. (See the knitr .rnw for details, plotting code, etc.; this blog post just has a few highlights.)

raw (preprocessed) voxel timecourse

normalizing and detrending

Scaling alone isn't usually sufficient to prepare fMRI timeseries for analysis; we also need at least a bit of detrending. There's no universally correct degree or type of detrending to use. I generally recommend a modest amount of detrending before parcel-averaging types of analyses, specifically 3dDetrend -normalize -polort 2 . 

In the plot below, the same voxel timecourse as above is plotted after normalizing only (tc.np0, black, from 3dDetrend -normalize -polort 0), or with detrending at polort 2 (blue, tc.np2), and the more aggressive polort 5 (green, tc.np5). 

same timecourse, after 3dDetrend -normalize -polort 0, 2, or 5
Notice that the spiky parts of the timecourses are pretty much the same in all three versions, but the slower changes vary more; e.g., in the first minute the no-detrending line (tc.np0) is furthest from zero, the tc.np2 line is closer, and the tc.np5 line closest. 

It's sensible that larger polort numbers have more of an effect on the timecourse's shape, since, as explained in the afni help for 3dDetrend, -polort ppp gives "the Legendre polynomials of order up to and including 'ppp' in the list of vectors to remove", so larger -polort numbers mean removing more complex trends. The R code below shows how to do this type of normalizing and detrending; Legendre() is from Gregor Gorjanc, and requires the orthopolynom and polynom R packages.
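If you'd rather not install those packages, here is a hypothetical base-R stand-in for the Legendre() helper, built from the standard Legendre recurrence. An assumption: like Gorjanc's version, it rescales x to [-1, 1] and returns the polynomials of orders 1 through n as columns, leaving the order-0 constant to lm()'s intercept; I haven't compared it against the original function.

```r
# hypothetical base-R stand-in for Gorjanc's Legendre(), using the recurrence
#   k * P_k(x) = (2k-1) * x * P_{k-1}(x) - (k-1) * P_{k-2}(x)
# assumes n >= 1 and x has at least two distinct values
Legendre <- function(x, n) {
  x.s <- -1 + 2 * (x - min(x)) / (max(x) - min(x));  # rescale x to [-1, 1]
  P <- matrix(0, nrow=length(x), ncol=n+1);
  P[,1] <- 1;     # order 0 (constant)
  P[,2] <- x.s;   # order 1 (linear trend)
  if (n > 1) {
    for (k in 2:n) { P[,k+1] <- ((2*k-1)*x.s*P[,k] - (k-1)*P[,k-1]) / k; }
  }
  P[, -1, drop=FALSE];  # drop the constant column; lm() supplies the intercept
}
```

Passed to lm() as in the code below, each column becomes one detrending regressor.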

 # R commands for 3dDetrend -normalize -polort 2  
 lm.out <- lm(tc.raw ~ Legendre(x=seq(tc.raw), n=2)); # lm with two Legendre polynomials (polort 2)  
 tmp <- residuals(lm.out);  # extract the residuals  
 tc.Rnp2 <- (tmp-mean(tmp))/sqrt(sum((tmp-mean(tmp))^2));  # normalize the residuals, afni-style   
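The last line mean-centers the residuals and divides by their root sum of squares, so the normalized timecourse has mean zero and unit sum of squares (afni's -normalize convention). A quick check with a stand-in vector:

```r
# verify the afni-style normalization: mean 0 and unit sum of squares
set.seed(41);
tmp <- rnorm(540);  # stand-in for the residuals
tc.norm <- (tmp - mean(tmp)) / sqrt(sum((tmp - mean(tmp))^2));
round(mean(tc.norm), 12);   # 0
round(sum(tc.norm^2), 12);  # 1
```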

residual error via 3dDeconvolve

It's common to include the realignment parameters as nuisance regressors and to censor high-motion frames prior to fMRI timecourse analyses; both can be done with 3dDeconvolve. Skipping rather a lot of explanation from the full demo, the afni command is:

 # errts.fname is the file made by 3dDeconvolve, from which the single voxel timecourse was extracted:  
 system2(paste0(afni.path, "3dDeconvolve"),   
     args=paste0("-input '", scale.fname, "' -polort A -float ",  
           "-censor '", c.fname, "' -num_stimts 0 ",  
           "-ortvec '", mot.fname, "' moveregs ",  
           "-nobucket -errts ", errts.fname), stdout=TRUE);  

where -num_stimts 0 means not to include any events in the model,  -errts that we want afni to write the residual error time series from the "full model fit to the input data" into file errts.fname (in this case, a .nii.gz), and -polort A that afni should set the polort level according to the run length (here, that gives 5).
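If I'm reading the 3dDeconvolve help correctly, -polort A picks the degree from the run duration as 1 + floor(duration in seconds / 150), which reproduces the 5 for this run:

```r
# -polort A rule (as I understand the 3dDeconvolve help):
#   1 + floor(run duration in seconds / 150)
n.TRs <- 540; TR <- 1.2;        # DMCC55B run length and TR
1 + floor((n.TRs * TR) / 150);  # 648 s -> polort 5
```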

Below is the 3dDetrend -normalize -polort 5 (tc.np5) timecourse again in green, with the new (-errts) version from the 3dDeconvolve command in pink: 

same voxel timecourse, 3dDeconvolve errts over scale(tc.np5), censored frames marked

The errts timecourse is highly correlated with the np5 version, which makes sense, since both included polort 5 detrending. They're not perfectly correlated, though: the 3dDeconvolve command also did censoring and included the motion regressors. I don't have a simple way to describe the differences in the lines; they're clearly very similar, but not identical; sometimes one is more extreme or spiky, sometimes the other.

The errts image has 0 in the censored frames. This is obvious in a 4d nifti (entire frame filled with 0s), but ambiguous in a single voxel (or vertex) timecourse like this (in the plot the censored frames are circled on the tc.errts timecourse, squared on the np5 version). For some analyses in R (e.g., averaging frames after an event for temporal compression) it'd be sensible to use NA for the censored frames.
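Given the censor vector, swapping the 0s for NA is a one-liner; toy values here for illustration:

```r
# toy example: replace censored-frame 0s with NA so R functions can skip them
tc.errts <- c(0.2, 0, -0.1, 0, 0.3);  # stand-in errts timecourse; 0 at censored frames
censor.vec <- c(1, 0, 1, 0, 1);       # afni-style censor file: 0 marks a censored frame
tc.errts[censor.vec == 0] <- NA;
mean(tc.errts, na.rm=TRUE);  # averages only the retained frames
```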

This R code matches the 3dDeconvolve calculations:

 # made in startup code chunk; download at https://osf.io/nu324/files/c6nax  
 mot.fname <- paste0(demo.path, "sub-f1027ao_ses-wave1bas_task-Stroop_6regressors_demean.txt");   
 mot.tbl <- read.delim(mot.fname, sep=" ", header=FALSE); # 540 x 6  
   
 # made in startup code chunk; download at https://osf.io/nu324/files/bjrk8  
 c.fname <- paste0(demo.path, "sub-f1027ao_ses-wave1bas_task-Stroop_FD_mask0.5.txt");  # 0 1 censor file  
 censor.vec <- read.table(c.fname, header=FALSE)[,1];  
 censor.TRs <- which(censor.vec == 0); # [1] 300 316 431 448 497  
   
 # first, remove censored frames from the motion regressors and input timecourse  
 # The input timecourse tc.scale is the voxel from file scale.fname: the input bold.fname after  
 # scaling with 3dcalc -expr 'min(200, a/b*100)*step(a)*step(b)'   
 scale.vec <- tc.scale[-censor.TRs];  
 col1.vec <- mot.tbl$V1[-censor.TRs]; # demeaned trans_x  
 col2.vec <- mot.tbl$V2[-censor.TRs];   
 col3.vec <- mot.tbl$V3[-censor.TRs];   
 col4.vec <- mot.tbl$V4[-censor.TRs];   
 col5.vec <- mot.tbl$V5[-censor.TRs];   
 col6.vec <- mot.tbl$V6[-censor.TRs];   
   
 # fit the lm, polort 5, on the censored scale.vec and including 6 censored motion regressors  
 lm.out <- lm(scale.vec ~ Legendre(x=seq(scale.vec), n=5) + col1.vec + col2.vec + col3.vec + col4.vec + col5.vec + col6.vec);   
 tc.Rerrts <- residuals(lm.out);  # extract the residuals  
   
 # put 0s back in where the censored frames were taken out.  
 for (i in 1:length(censor.TRs)) { tc.Rerrts <- append(tc.Rerrts, 0, (censor.TRs[i]-1)); }  
   
 # the R version matches the 3dDeconvolve errts version   
 cor(tc.errts, tc.Rerrts); # almost perfect  
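An aside: the append() loop works because censor.TRs is in increasing order; an equivalent, loop-free way to restore the censored frames is to assign into a preallocated vector. A toy check that the two agree:

```r
# toy check: append() loop vs. vectorized assignment for restoring censored frames
resids <- c(1.1, 2.2, 3.3);   # residuals after the censored frames were dropped
censor.TRs <- c(2, 4);        # positions of the censored frames (increasing order)
v1 <- resids;
for (i in 1:length(censor.TRs)) { v1 <- append(v1, 0, (censor.TRs[i]-1)); }
v2 <- rep(0, length(resids) + length(censor.TRs));
v2[-censor.TRs] <- resids;    # vectorized alternative
all.equal(v1, v2);  # TRUE: both give 1.1 0 2.2 0 3.3
```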

musings

Working out the R code that matches the 3dDeconvolve -errts output demystified it for me; the 3dDetrend -normalize -polort 2 detrending and normalizing ("np2") I've treated as a default (for timecourse-averaging type analyses) is closer to these 3dDeconvolve residuals ("errts") than I'd thought. 

Am I going to change my default detrending method? Is the 3dDeconvolve errts "better" than the 3dDetrend np2 for an analysis like this? I think incorporating the censoring (to NA, not 0) and polort-picking calculation (which gave 5 in this demo) from 3dDeconvolve (instead of using 2 regardless of run length) would be sensible, modest improvements.

I'm less confident about whether to add in the motion regressors. My sense was that including these would somehow "account for" or "clean up" any motion effects not "corrected" by preprocessing. And including the six realignment parameter columns does change the timecourse produced by the model a bit ... but the result is not much different whether the actual realignment parameters or random ones are included, making me suspect the change is due more to the change in model degrees of freedom than to actually "fixing" the head motion.

Here's code for the random-motion-regressor models:

 # permute numbers in each motion regressor separately  
 lm.out <- lm(scale.vec ~ Legendre(x=seq(scale.vec), n=5) + sample(col1.vec) + sample(col2.vec) + sample(col3.vec) + sample(col4.vec) + sample(col5.vec) + sample(col6.vec));   
 tc.test1 <- residuals(lm.out);  # extract the residuals  
 for (i in 1:length(censor.TRs)) { tc.test1 <- append(tc.test1, 0, (censor.TRs[i]-1)); } # put 0s back in  
   
 # random numbers for the motion regressors  
 ct <- length(scale.vec);  # how long to make each fake motion regressor column  
 lm.out <- lm(scale.vec ~ Legendre(x=seq(scale.vec), n=5) + rnorm(ct) + rnorm(ct) + rnorm(ct) + rnorm(ct) + rnorm(ct) + rnorm(ct));   
 tc.test2 <- residuals(lm.out);  # extract the residuals  
 for (i in 1:length(censor.TRs)) { tc.test2 <- append(tc.test2, 0, (censor.TRs[i]-1)); } # put 0s back in  

3dDeconvolve errts timecourse plus permuted motion regressor columns (tc.test1)

3dDeconvolve errts timecourse plus random-number motion regressor columns (tc.test2)


Thursday, January 15, 2026

so long, windows

I've always had windows on my work computers, and have built up quite a bit of "muscle memory" over the (ahem) decades for how to use it; the hard part is deciding what analysis to do or which image to look at, not how to open the image. I didn't want to change my computer setup, but am unwilling to use windows 11, given all its privacy & AI intrusions and limitations. Over the last few months I've switched to linux, and it's ... pretty much fine; everything is working more-or-less like it was before.

My hope is that this post may smooth the path for other neuroscience folks considering a linux switch but hesitant or unsure how to go about it. For framing, I am most assuredly not a linux guru; I started with some experience using linux servers managed by others, but zero desire to swap my trusty mousepad for a command prompt or to spend days/weeks/months relearning how to do routine work tasks. But now you'd have to look fairly closely to notice that the software changed on my work computer in the last six months, and I consider that a good thing.

os: zorin linux

There are a lot of "flavors" of linux, and choosing one is daunting. Since my goal was to keep my computers as windows-ish as possible (and not fiddle with settings) I chose Zorin 18 pro, and highly recommend it. I wondered whether hardware would be a hassle (my desktop computer has two monitors, two hard drives, a USB webcam, keyboard, and my beloved vintage Fingerworks iGesture mousepad), but everything just worked, no problems whatsoever. (I've since also installed Zorin linux to dual-boot with windows on my laptop, also without hardware difficulty.)

My desktop computer has two drives: a smaller one for the operating system, and a larger for file storage. I had the zorin installation program reformat the smaller drive, but left the larger unchanged; it's still formatted NTFS as it was for windows. I was worried the mixed drive formats would cause trouble but haven't had any, nor any speed issues. 

Zorin comes with an array of menu/desktop appearance layouts: some mimic windows, others mac os or other linux versions. I picked a windows style, and then tweaked the start menu, colors, etc. to my preference. The oddest thing I changed from defaults was the "desktop environment", from wayland to x11, mostly because I wanted a picture gallery screensaver (screensavers are apparently a contentious topic in linux circles). Changing the start menu was non-intuitive: via the (installed) System Tools -> Main Menu program, not via settings or right-clicking on the start menu itself. 

software

Many programs have linux versions (R, zoom, etc.) and so are no problem (install from the Zorin Software collection directly or the programs' website; clicking the start menu and typing a program's name brings up an installer in many cases), but others present more of a challenge. 

The biggest hurdle for me was OneNote: LibreOffice (comes with Zorin) is fine, but doesn't have a OneNote equivalent. I first tried Logseq, but ended up going with Joplin. Which to use is definitely a matter of personal style and preference (e.g., my notes are organized hierarchically and I don't like tags); in my case some of the individual page layouts and formatting got scrambled in the conversion, but the critical text, images, and page organization all came through fine, and I can make new pages without difficulty:


I have a lot of Powerpoint and Word documents; LibreOffice has been managing them fine so far, but wanting Microsoft Office to be available "just in case" is the primary reason I installed Zorin linux on my laptop as dual-boot instead of removing windows entirely. 

In no particular order, here are some of the programs I used on windows and what I'm using instead on linux (I didn't list programs which are available for both, such as RStudio). Many came with Zorin, others I installed via its Software "store".  

  • Notepad++ -> NotepadNext
  • TigerVNC -> Remmina
  • Snipping Tool -> Gradia
  • Foxit & Sumatra (pdfs) -> qpdfview (has tabs; for knitr compiling), Document Viewer (for highlighting & notes), LibreOffice Draw (for complex editing)
  • WinMerge -> Meld
  • WinSCP -> FileZilla 
  • File Manager -> Dolphin (for navigation); Files (GNOME Nautilus; for mounting smb and turtle git)
  • tortoise git -> turtle git
  • MS Office -> LibreOffice 26.2*
  • MS OneNote -> Joplin
  • 7-zip GUI -> File Roller; sudo apt install ark 7zip to get an Extract option when right-clicking on .zip files.

remaining headaches

I'm still puzzled why some file operations (smb mounting, turtle git) work differently in Files and Dolphin. I prefer Dolphin's navigation-tree layout and right-click options, but can only access smb-mounted files from Rstudio if I connect first via Files.


MRIcron and mango work fine for nifti and dicom image viewing, but I start the programs by double-clicking on their executables; getting pretty desktop icons or start menu items for them is tricky (you can't just right-click on the executable and make a working shortcut). Programs installed from the Software store don't have this configuration problem, so I assume it's something related to how I did the installation. Those viewers are less essential now, though, since I have afni installed locally (I never got it working on windows except through vnc).
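For what it's worth, the standard way to get a start-menu entry for a hand-installed program is a freedesktop .desktop file in ~/.local/share/applications/; a sketch is below, with placeholder paths (wherever the executable and an icon image actually live), though I haven't confirmed this fixes the MRIcron/mango case.

```ini
# ~/.local/share/applications/mricron.desktop  (paths are placeholders)
[Desktop Entry]
Type=Application
Name=MRIcron
Comment=nifti and dicom image viewer
Exec=/home/me/apps/mricron/MRIcron
Icon=/home/me/apps/mricron/mricron.png
Terminal=false
Categories=Science;Graphics;
```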

FileZilla often seems slower than winscp, despite connecting to the same servers. I think FileZilla disconnects more completely, requiring it to pause and reconnect when I, e.g., double-click a text file for viewing. There may be a setting for this? I had to change several FileZilla defaults, including increasing the Timeout time and changing the Double-click action on files to View/Edit. Setting file associations is still tricky; it doesn't always "see" installed programs, despite messing with the Flatseal permissions. 


* Zorin 18 came with LibreOffice 25 preinstalled. I find version 26 matches my (older windows-style) instincts more closely, especially after setting the "UI Mode" to "Standard Toolbar" and Icons to "Colibre" (Options -> LibreOffice -> Appearance). I did the upgrade in the Terminal, following the uninstall and install commands from this post. The terminal commands don't appear to change the GUI, but after the flatpak install command finishes, the start menu commands open the new version of LibreOffice, with all the file associations, etc. updated properly without rebooting (!).  [added 5 February 2026]