Tuesday, June 25, 2024

tutorial: making new versions of NIfTI parcellation files

This post is a tutorial on making new versions of NIfTI files, particularly parcellation images. For example, suppose you ran an analysis using a set of parcels from the Schaefer 400x7 volumetric parcellation, and you want to make a figure showing just those parcels. One way to make such a figure is to write a new NIfTI image the same size and shape as the full parcellation, but with only the parcels you want to show, and then display it as an overlay in a visualization program or via code.

Or, suppose you want to make a parcel-average timecourse using afni's 3dROIstats function, but for a group of parcels together, not each parcel individually. If you make a new mask image that assigns the same number to all the voxels in all the parcels you want treated as a group, 3dROIstats will average them together into a single timecourse (a sketch of such a call follows the multi-parcel example below).

The final example assigns a new value to each parcel, for example to show the t-values resulting from a statistical test run on each parcel individually.

The code below uses the same files as my NIfTI knitr tutorial: Schaefer2018_400x7_81x96x81.nii.gz and S1200_AverageT1w_81x96x81.nii.gz. To run the code, save the two files somewhere locally, and then change path to their location.

new NIfTI with one parcel only

This first block of code sets the variables for all examples, and then makes a new NIfTI file, p24.nii.gz, which has all the voxels in parcel 24 set to 1, and all the other voxels set to 0. 
 library(RNifti); # package for NIfTI image reading https://github.com/jonclayden/RNifti  
   
 rm(list=ls()); # clear memory  
 path <- "//storage1.ris.wustl.edu/tbraver/Active/MURI/BraverLab/JoWorking/tutorial/"; # path to input images  
 p.img <- readNifti(paste0(path, "Schaefer2018_400x7_81x96x81.nii.gz"));  # read the parcellation image  
   
 # make a new nifti with only parcel 24   
 new.img <- array(0, dim(p.img));  # blank "brain" of zeros, same size as p.img  
 new.img[which(p.img == 24)] <- 1;  # voxels with value 24 in p.img set to 1 
 writeNifti(new.img, paste0(path, "p24.nii.gz"), template=paste0(path, "Schaefer2018_400x7_81x96x81.nii.gz"));  
 # note the template= argument in the writeNifti call: it gives the new file the same
 # header settings (orientation, voxel size, etc.) as the original parcellation image,
 # which is important for the two images to align properly.
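As a quick optional sanity check (just a sketch, not required for the workflow), the new file can be read back in to confirm that it matches the parcellation's dimensions and contains only the values 0 and 1:

 chk.img <- readNifti(paste0(path, "p24.nii.gz"));  # read the new file back in
 all(dim(chk.img) == dim(p.img));  # should be TRUE
 table(chk.img);  # should show only the values 0 and 1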
Below are two MRIcron windows, both with S1200_AverageT1w_81x96x81.nii.gz as the anatomical underlay. At left the overlay is p24.nii.gz; the title bar text gives the value of the voxel under the crosshairs as 1. At right the overlay is Schaefer2018_400x7_81x96x81.nii.gz, and the voxel under the crosshairs has the value 24, as expected.


new NIfTI with several parcels given the same value

This code is very similar to the first, but instead of only setting the voxels in parcel 24 to 1, it changes the values of the voxels in all the parcels listed in p.ids to 1 (effectively making a single new parcel out of the four):
 # make a new nifti with parcels 24, 26, 231, and 360 all given the value 1.
 p.ids <- c(24, 26, 231, 360);    # parcel ids we want to combine
 
 new.img <- array(0, dim(p.img));  # blank "brain" of zeros, same size as p.img  
 for (i in 1:length(p.ids)) { new.img[which(p.img == p.ids[i])] <- 1; }  # set voxels in p.ids to 1   
 writeNifti(new.img, paste0(path, "four.nii.gz"), template=paste0(path, "Schaefer2018_400x7_81x96x81.nii.gz"));  
   

new NIfTI with unique numbers for each parcel

To give different numbers to each parcel, loop over all the parcels and assign the corresponding value:
 # assign a value to each parcel, such as from the results of a statistical test performed on each parcel individually.  
 # note: 400 is the total number of parcels in the example parcellation. It is usually best not to hard-code  
 # the number of parcels, but it is here to keep the code as short as possible.  
 stat.vals <- rnorm(400); # random numbers to plot, one for each parcel
   
 new.img <- array(0, dim(p.img));  # blank "brain" of zeros, same size as p.img  
 for (i in 1:400) { new.img[which(p.img == i)] <- stat.vals[i]; }  # set voxels in parcel i to stat.vals[i]
 writeNifti(new.img, paste0(path, "stat.nii.gz"), template=paste0(path, "Schaefer2018_400x7_81x96x81.nii.gz"));  
Below are views of four.nii.gz (left) and stat.nii.gz (right). Since the vector of "statistics" for each parcel in the example code is random, it will be different each time, but the extreme positive and negative values are plotted below in hot and cool colors, as is usual for fMRI statistical images.



Thursday, April 4, 2024

Corresponding Schaefer2018 400x7 and 400x17 atlas parcels by number

My "default" cortical parcellation is the 400 parcels by 7 networks version of Schaefer2018. I like these parcellations because they're available in all the spaces we use (volume, fsaverage, fsLR (HCP)) and are independent from our analyses; "all parcellations are wrong, but some are useful".

The Schaefer parcellations come in several versions for each resolution, however, and a parcel described by its 7Networks number likely has a different 17Networks number. Note that the parcel boundaries are the same for all same-resolution versions: there is only one set of 400 parcels, but which of those 400 parcels is #77 varies between the 7Networks and 17Networks versions. This post describes how to translate parcel numbers between network versions, using a bit of (base) R code.

Logic: since there is only one set of parcels at each resolution, there is only one set of centroid coordinates at each resolution. Thus, we can match parcels across network orderings by their centroids.

First, set sch.path to the location of your copy of the Schaefer2018_Parcellations directory and load the 7 and 17 network centroid files:

sch.path <- "/data/nil-bluearc/ccp-hcp/DMCC_ALL_BACKUPS/ATLASES/Schaefer2018_Parcellations/";

cen7.tbl <- read.csv(paste0(sch.path, "MNI/Centroid_coordinates/Schaefer2018_400Parcels_7Networks_order_FSLMNI152_1mm.Centroid_RAS.csv"));
cen17.tbl <- read.csv(paste0(sch.path, "MNI/Centroid_coordinates/Schaefer2018_400Parcels_17Networks_order_FSLMNI152_1mm.Centroid_RAS.csv"));

Next, make vectors for translating the 7Network number to the 17Network number (and the reverse):

x7to17 <- rep(NA, nrow(cen7.tbl));   # element i: the 17Networks number of 7Networks parcel i
x17to7 <- rep(NA, nrow(cen17.tbl));  # element i: the 7Networks number of 17Networks parcel i
for (i in 1:nrow(cen7.tbl)) { 
  x7to17[i] <- which(cen17.tbl$R == cen7.tbl$R[i] & cen17.tbl$A == cen7.tbl$A[i] & cen17.tbl$S == cen7.tbl$S[i]); 
  x17to7[i] <- which(cen7.tbl$R == cen17.tbl$R[i] & cen7.tbl$A == cen17.tbl$A[i] & cen7.tbl$S == cen17.tbl$S[i]); 
}
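Since each parcel has a unique centroid, the two vectors should be exact inverses of each other; a quick sanity check:

stopifnot(all(x17to7[x7to17] == 1:nrow(cen7.tbl)));  # each translation should undo the other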

Now the vectors can be used to translate parcel numbers: parcel #77 in the 7Networks ordering is parcel #126 in the 17Networks ordering.

x7to17[77]   # [1] 126

cen7.tbl[77,]
#   ROI.Label                     ROI.Name   R   A  S
#77        77 7Networks_LH_DorsAttn_Post_9 -33 -46 41

cen17.tbl[126,]
#    ROI.Label                  ROI.Name   R   A  S
#126       126 17Networks_LH_ContA_IPS_5 -33 -46 41

x17to7[126]    # [1] 77

Note that the parcel's R A S coordinates are the same, but its name and label (number) vary between the two network versions.
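These translation vectors can also be used to renumber an entire parcellation image, along the lines of the tutorial post above; a sketch, assuming p.img is the 400x7 volumetric parcellation read in with RNifti:

# relabel the 7Networks-ordered parcellation into the 17Networks ordering
new.img <- array(0, dim(p.img));  # blank "brain" of zeros, same size as p.img
for (i in 1:nrow(cen7.tbl)) { new.img[which(p.img == i)] <- x7to17[i]; }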

Friday, February 23, 2024

OHBM 2023 links

I was a co-organizer (with Paul Taylor, Daniel Glen, and Richard Reynolds, all of afni fame) of the "Making Quality Control Part of Your Analysis: Learning with the FMRI Open QC Project" course at OHBM 2023. FMRI Open QC Project participants described how they approached the Project datasets, and we had several general QC discussions. Thanks to all participants and attendees! I hope we can keep making progress towards a common vocabulary for fMRI QC, and encourage researchers to include QC in their analyses if they do not already.

The session was unfortunately not recorded live, though the speakers submitted recordings in advance. OHBM is supposed to make all of these recordings available; in the meantime, I've linked the material that is already public.

  • Brendan Williams, "Reproducible Decision Making for fMRI Quality Control"
  • Céline Provins, "Quality control in functional MRI studies with MRIQC and fMRIPrep"
  • Daniel Glen, "Quality control practices in FMRI analysis: philosophy, methods and examples using AFNI"
  • Dan Handwerker, "The art and science of using quality control to understand and improve fMRI data"  slides
  • Chao-Gan Yan, "Quality Control Procedures for fMRI in DPABI"
  • Rasmus Birn, "Quality control in resting-state functional connectivity: qualitative and quantitative measures"
  • Xin Di, "QC for resting-state and task fMRI in SPM"
  • Jo Etzel, "Efficient evaluation of the Open QC task fMRI dataset"  video 
  • Rebecca Lepping, "Quality Control in Resting-State fMRI: The Benefits of Visual Inspection"
  • Francesca Morfini, "Functional Connectivity MRI Quality Control Procedures in CONN"


I also presented a poster, "Which Acquisition? Choosing protocols for task fMRI studies", #700. Here's some of the introduction and conclusion from the abstract. The test data is already public; the code isn't written up properly, but I could share it if anyone is interested.

When planning a task fMRI study, one necessary choice is the acquisition sequence. Many are available, with recommendations varying with hardware, study population, brain areas of interest, task requirements, etc.; it is rare to have only one suitable option. Acquisition protocols for task studies can be difficult to evaluate, since metrics like tSNR are not specific to task-related activity. But task designs can make choosing easier, since there is a known effect to compare the candidate acquisition protocols against. 

The procedure illustrated here will rarely make the choice of acquisition completely unambiguous, but it can indicate which protocols to avoid, and give the experimenters confidence that the chosen sequence will produce usable data. After choosing the acquisition, more piloting should be performed with the study tasks to confirm that image quality and response clarity are sufficient and as expected.


... it's taken me so long to finish this post (started August 2023!) that I'm publishing it without adding a proper explanation of the protocol-choosing logic. Take a look at the poster pdf, and please ask questions or otherwise nudge me for more information if you're interested.

Thursday, June 8, 2023

US researchers: use extra caution with participant gender-related info

Last summer I wrote several times about pregnancy-related data suddenly becoming much more sensitive and potentially damaging to the participant if released. Unfortunately, now we must add transgender status, biological sex at birth (required, e.g., for GUID creation), gender identity, relationship status, and more to the list of data requiring extra care. 

The NIH Certificate of Confidentiality thankfully means that researchers can't be required to release data on something like a participant's abortion or transgender history for criminal or other proceedings. But we are responsible for ensuring that our data is handled and stored securely, so that it isn't accidentally (or purposely) shared in a way that could harm the participant. I suggest researchers review their data collection with an eye towards information that may be more sensitive now than it was in the past: is this information required? If so, consider how to store and use it securely and safely.

Also consider what you ask potential participants during screening (and how screening is done in general): people may not wish to answer screening questions about things like pregnancy or hormone treatments. If these possibly-sensitive questions must be asked, consider how to do so while minimizing potential discomfort or risk.

For example, one of our studies can't include pregnant people, so we must ask about pregnancy during the initial phone screenings. We used to ask potential participants about pregnancy separately, but then changed the script so that this sensitive question is part of a list, and the participant is asked whether any of the items apply. This way, the participant doesn't have to state explicitly that they are pregnant, and the researcher doesn't have to take any specific notes or respond in a specific way (e.g., we don't want them to say something like "sorry Jane Doe, but you can't be in our study now because you're pregnant").

Here's the relevant part of the new screening script:

In order to determine your eligibility for our study, I need to ask you some questions. This will take about 15 minutes. 

Before we collect your demographic information, I will read you a list of four exclusionary criteria. If any of these describe you, please answer yes after I read them all; if none apply, please answer no. You do not need to say which ones apply.
  • You are a non-native English speaker (you learned to speak English as an adult); 
  • You are over the age of 45 or under the age of 18; 
  • You are pregnant or breastfeeding; 
  • You were born prematurely (before 37 weeks, or if a twin, before 34 weeks)
 Do any of these describe you? Yes (I am sorry, you do not qualify to be in our study) or No (continue with questions)

We switched to this "any of the above" style screening script for pregnancy last summer, and it has been working well. We recently reviewed our procedures again, and confirmed that we do not ask questions about sex or gender status or history. But if we did, we'd be looking closely at how exactly the questions were asked and the responses recorded, with the aim of collecting the absolute minimum of information required.

Friday, March 31, 2023

bugfix for niftiPlottingFunctions.R, plot.volume()

If you use or adapted my plot.volume() function from niftiPlottingFunctions.R, please update your code with the version released 30 March 2023. This post illustrates the bug (reversed color scaling for negative values) below the jump; please read it if you've used the function, and contact me with any questions.

Huge thanks to Tan Nguyen for spotting and reporting the color discrepancy! 

Thursday, March 2, 2023

reasonable motion censoring thresholds?

Recent participants have gotten me thinking (yet again) about the different types of motion during fMRI, their causes and consequences. And, more immediately practical: which motion censoring thresholds might be reasonable for particular tasks and analyses.

I've long used (and recommended) FD > 0.9 as a censoring threshold for our event-related task fMRI studies (not functional correlation or resting state-type analyses). 0.9 is more lenient than many use for task fMRI; e.g., Siegel et al. (2014) advise FD 0.5 for adults (which we have) and 0.9 for kids or clinical populations. (I need to update a previous post; I'd misread Siegel's recommendations.)
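For concreteness, here is a minimal sketch of the Power-style FD calculation I have in mind, assuming motion is a frames-by-6 matrix of realignment parameters (translations in mm, rotations in radians), with the common 50 mm head-radius conversion for the rotations; the values below are random placeholders, not real data:

motion <- matrix(rnorm(562*6, sd=0.05), ncol=6);  # placeholder realignment parameters
tmp <- abs(apply(motion, 2, diff));  # frame-to-frame absolute differences
tmp[,4:6] <- tmp[,4:6] * 50;  # convert rotations from radians to mm at a 50 mm radius
fd <- c(0, rowSums(tmp));  # one FD value per frame; the first frame has no predecessor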

Consider the following motion plot of a run from a recent participant, using my usual conventions (grey lines at one-minute intervals, frames along the x-axis); see this QC demonstration paper, the DMCC55B dataset descriptor, etc. for more explanation (and code). The lower panel shows FD, with the red line at the FD 0.9 censoring threshold and red x's marking censored frames; the grey horizontal line is at FD 0.5.

I interpret this as a run with minimal overt head motion but pronounced apparent motion (from breathing, strongest in trans_y, consistent with the AP encoding direction). Zero frames are censored at an FD > 0.9 threshold (red line); 130 are censored at FD > 0.5 (grey line). There are 562 frames in the run, so 130/562 = 0.23 of the frames are censored at FD 0.5, and we would drop the run under our usual criterion of excluding runs with more than 20% of frames censored.
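Continuing the sketch above, the censored fraction at each threshold is then simply:

mean(fd > 0.9);  # proportion of frames censored at FD 0.9
mean(fd > 0.5);  # and at FD 0.5; compare against the 0.2 (20%) run-dropping criterion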

Contrast the above plot with the following (marked with a 2 in the upper left); the same task, scanner, etc., but a different participant:

I'd characterize plot 2 as having more pronounced overt than apparent motion; there are oscillations in trans_y, but these are dwarfed by the head motions, e.g., in the middle of the first minute. Looking at the censoring, 13 frames (corresponding to the largest overt head motions) are marked for censoring at FD 0.9; 40 are marked at FD 0.5, corresponding to more of the overt head motions. 40/562 = 0.07, well under the 20% dropping threshold.

musings

To my eye, the 0.5 FD threshold is pretty reasonable in the second case, since it censors more of the overt head motion irregularities, and only those spikes. But for the first plot the 0.5 FD threshold seems far too aggressive: censoring part of every few breaths, 23% of the total frames. What do you think?

I hope to do some proper analyses of the impact of different amounts of apparent vs. overt motion on statistical analyses, but it is not a trivial problem, particularly with task entrainment (synchronization of breathing to task timing).

As a final bit of food for thought, here are the tSNR and sd images for each of the two runs, without censoring (all 562 frames), after preprocessing, and with the same color scaling. The first strikes me as higher quality, despite the greater (> 0.5 FD) censoring. I believe apparent motion can have less of an impact on image quality than overt motion because the head is not actually moving, and so not creating the attendant magnetic disruptions; the differences are clear to the eye when viewing these types of runs as movies, but it's not clear how those differences translate to statistical analyses.


Wednesday, December 21, 2022

What happened in this fMRI run? ... happened again.

Back in July I posted about a strangely-failed fMRI run, and yesterday I discovered that we had another case not quite two weeks ago. This is the same study, scanner (3T Siemens Prisma), headcoil (32 channel), task, and acquisition protocol (CMRR MB4) as the July case, but a different participant. I've contacted our physicists, but we probably can't investigate properly until after the holidays, and are hampered by no longer having access to some of the intermediate files (evidently some of the more raw k-space/etc. files are overwritten every few days). 

I've asked our experimenters to be on the lookout, and while hopefully it won't happen again, if it does, I hope they can catch it during the session so all the files can be saved. If anyone has ideas for spotting this in real time, please let me know.

A possibly-relevant data point: the participant asked to have the earbuds adjusted after the first task run. The technician pulled the person out of the bore to fix the earbuds, but did not change the head position, and did not do a new set of localizers and spin echo fieldmaps before starting the second task run (the one with the problem). I've recommended that the localizers and spin echo fieldmaps be repeated whenever the person is moved out of the bore, whether they get up from the table or not, but the technician for this scan did not think it necessary. What are your protocols? Do you suggest repeating localizers? No one entered the scanning room before the problematic July run, so this (pulling the person out) might be a coincidence.

Here's what this most recent case looks like. First, the three functional runs' DICOMs (frame 250 of 562) opened in mango, with the scaling allowed to vary by run:


Then with scaling of 0 to 10000 in all three runs, showing how much darker run 2 is:


And finally the SBRef from run 2:

In July the thinking was that this was an RF frequency issue, possibly due to the FatSat RF getting set improperly, so that both fat and water were excited. But this seems hard to confirm from the DICOM header; this time, the Imaging Frequency DICOM field (0018,0084) is nearly identical in all three runs: 123.258928, 123.258924, 123.258924 (runs 1, 2, and 3, respectively), which is very similar to what it was in July (123.258803).
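For anyone wanting to look at this field in their own data, here is a minimal sketch of one way to read it in R, assuming the oro.dicom package and a hypothetical DICOM filename:

library(oro.dicom);  # one of several packages for reading DICOM headers
dcm <- readDICOMFile("run2_frame250.dcm");  # hypothetical filename
dcm$hdr$value[dcm$hdr$name == "ImagingFrequency"];  # DICOM tag (0018,0084)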