Friday, December 6, 2024

"crescent" artifacts revisited

Back in 2018 I posted twice about "crescent" artifacts (first, second), and have kept a casual eye out for them since, wondering how much they might affect fMRI analysis results. 

The artifact isn't totally consistent across people, but when present it is very stable over time (i.e., across sessions several years apart), and bright in temporal standard deviation QC images. The artifact's location varies with encoding direction: at the front of the brain with PA encoding, at the rear with AP; the PA versions tend to be much brighter and more obvious than the AP.

Below is an example of the artifact, pointed out by the pink arrows. This is the temporal standard deviation image for sub-f8570ui from the DMCC55B dataset (doi:10.18112/openneuro.ds003465.v1.0.6), the first (left, AP encoding) and second (right, PA encoding) runs of Sternberg baseline (ses-wave1bas_task-Stern), after fmriprep preprocessing:

These two runs were collected sequentially within the same session, but the artifact is only visible in the PA encoding run (right). (Briefly, DMCC used a 3T Siemens Prisma, 32-channel headcoil, CMRR MB4, no in-plane acceleration, 2.4 mm iso, 1.2 s TR, alternating AP and PA encoding runs; details at openneuro, dataset description paper, and OSF sites, plus DMCC-tagged posts on this blog.)

In the previous "crescent" posts we speculated that these could be N/2 ghosts or related to incomplete fat suppression; I am now leaning away from the Nyquist ghost idea, because the crescents don't appear to line up with the most visible ghosts. (Some ghosts are a bit visible in the above image; playing with the contrast and looking in other slices makes the usual multiband-related sets of ghosts obvious, but none clearly intersect with the artifact.) It also seems odd that ghosts would be so much brighter with one encoding direction and change their location with AP vs. PA; I am no physicist, though!

link to cortex volume?

This week I gave three lab members a set of temporal standard deviation images (similar to the pair above) for 115 participants from the DMCC dataset. The participant images were in random order, and I asked the lab members to rate their confidence that each participant did or did not show the "crescent" artifact. My raters agreed that 34 participants showed the artifact and 39 did not. (Ratings were mixed or less confident for the others; I asked them to decide quickly from single pairs of QC images, not to investigate closely.)

We didn't measure the participants' external head size, but freesurfer was run during preprocessing, so I used its CortexVol and eTIV statistics as proxies (perhaps a different statistic would be better?). The group my colleagues rated as having the artifact tended to have smaller brains than those without:

If the appearance of this artifact is indeed somewhat related to head size, then it's logical that it would (as I've observed) generally be stable over time. DMCC's population was younger adults; it'd be interesting to see if there's a relationship with a wider range of head sizes.
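For anyone wanting to try a similar comparison, here is a minimal sketch in R. The data frame rate.tbl and its values are hypothetical stand-ins for the rating and freesurfer output tables; only the group-comparison logic is the point.

 # hypothetical stand-in table: one row per clearly-rated participant  
 rate.tbl <- data.frame(sub.id=paste0("sub", 1:73),  
                        rating=rep(c("artifact", "none"), c(34, 39)),  
                        CortexVol=rnorm(73, mean=500000, sd=50000));   # placeholder volumes, mm^3  
   
 boxplot(CortexVol ~ rating, data=rate.tbl);       # visualize the two groups  
 wilcox.test(CortexVol ~ rating, data=rate.tbl);   # nonparametric comparison of brain size by rating  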

only with DMCC or its MB4 acquisition?

Is the artifact restricted to this particular acquisition or study? It is not specific to DMCC: I've checked a few DMCC participants with the artifact who later participated in other studies, and they show it (or not) consistently across all of the datasets.

To see if it's restricted to the MB4 acquisition, I looked at a few images from a different study, which also has adult participants, a 3T Prisma, 32-channel headcoil, 2.4 mm iso voxels, and CMRR sequences, but with MB6, a 0.8 s TR, and PA encoding for all runs. Below are standard deviation images for three different people from this MB6 study, one run each of the same task, after fmriprep preprocessing. (I chose these three because of their artifacts; not everyone's are so prominent.)

Since this study has all-PA runs I can't directly compare artifacts across encoding directions, but there are clearly some "crescents", and more sets of rings than are typical with MB4 (which makes sense for MB6). The rings are especially obvious in person #3; some appear to be full-brain ghosts. I suspect the artifacts would be much clearer in subject space; I haven't looked (I'm not directly involved in the study). But a substantial minority of these participants' standard deviation images resemble #1, whose artifacts strike me as quite similar to the "crescents" in some DMCC MB4 PA images.

but does it matter?

Not all artifacts that look strange in QC images actually change task-related BOLD statistics enough to be a serious concern. (Of course, how much is too much totally depends on the particular case!) I suspect that this artifact does matter for our analyses, though, both because of where it falls in the brain and because it affects BOLD enough to be visible by eye in some cases.

With PA runs the artifact's most prominent location is frontal, uncomfortably close to regions of interest for most of my colleagues, which is one reason I have advised shifting to all-AP encoding for new studies I'm involved with. Preprocessing, motion, transformation to standard space, and spatial smoothing blur the artifact across a wider area, hopefully diluting its effect. But the artifact's location is somewhat consistent across participants, and it is present in a sizable enough minority (a third, perhaps, in the datasets I've looked at) that it seems possible it could reduce signal quality in our target ROIs.

So far I mostly have qualitative impressions that it does indeed affect task-related BOLD enough to matter. For example, below left is the standard deviation image of one DMCC person's PA Sternberg run, with the cursors on the artifact. The right side is from averaging together (after voxel-wise normalizing and detrending) the frames after right-hand button presses. Squinting, the statistical image is brighter in sensible motor-related grey matter areas, marked with green, but the "crescent" may also be faintly visible, as pointed out in pink.
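For concreteness, here is a rough sketch of this sort of frame averaging in R. The file names and button-press frame numbers are hypothetical, and the normalizing and detrending are simple versions, not necessarily exactly what was used for the figure.

 library(RNifti);  
 func.img <- readNifti("func.nii.gz");   # hypothetical preprocessed 4d (x,y,z,time) image  
 button.trs <- c(20, 55, 90);   # hypothetical frame indices just after right-hand button presses  
   
 n.tr <- dim(func.img)[4];  
 func.mat <- matrix(func.img, ncol=n.tr);   # voxels in rows, frames in columns  
 func.mat <- t(scale(t(func.mat)));   # voxel-wise normalize; constant (e.g., background) voxels become NaN  
 tr.ids <- 1:n.tr;   # voxel-wise linear detrending; simple but slow  
 func.mat <- t(apply(func.mat, 1, function(v) { if (all(is.finite(v))) residuals(lm(v ~ tr.ids)) else v; }));  
 avg.img <- array(rowMeans(func.mat[, button.trs]), dim(func.img)[1:3]);   # average the post-button frames  
 writeNifti(avg.img, "buttonAvg.nii.gz", template="func.nii.gz");   # same header as the input image  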

I can imagine quantitative tests, such as comparing single-run GLM output images (keeping AP and PA encoding runs separate) between the groups of participants with and without the artifact. Differences in estimates in parcels/searchlights/areas overlapping the artifact would be suggestive, particularly if the estimates vary with encoding direction and participant subgroup (with-artifact or without).
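As a hypothetical sketch of one such test, with made-up numbers: given each participant's average GLM estimate within a parcel overlapping the artifact, compare the with-artifact and without-artifact groups within each encoding direction separately.

 # est.tbl is an assumed table: one row per participant and run type, with the artifact rating,  
 # run encoding direction, and the parcel-average GLM estimate (placeholder values here).  
 est.tbl <- data.frame(group=rep(c("artifact", "none"), each=20),  
                       encoding=rep(c("AP", "PA"), times=20),  
                       estimate=rnorm(40));  
   
 for (enc in c("AP", "PA")) {   # test each encoding direction separately  
   print(wilcox.test(estimate ~ group, data=subset(est.tbl, encoding == enc)));  
 }  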

thoughts?

I'm curious: have you seen similar? Do you think this artifact is from N/2 ghosting, incomplete fat suppression, or something else? (What should I call it? "Crescent" is visually descriptive, but not standard. 😅) Does it seem reasonable that it could be related to head size? And that it can meaningfully affect BOLD? Other reactions? Thanks! (And we can chat about this at my OHBM 2025 poster, which will be on this topic.)

Tuesday, June 25, 2024

tutorial: making new versions of NIfTI parcellation files

This post is a tutorial on making new versions of NIfTI files, particularly parcellation images. For example, suppose you ran an analysis using a set of parcels from the Schaefer 400x7 volumetric parcellation, and you want to make a figure showing just those parcels. One way to make such a figure is to write a new NIfTI image the same size and shape as the full parcellation, but with only the parcels you want to show, and then display it as an overlay in a visualization program or via code.

Or, suppose you want to make a parcel-average timecourse with afni's 3dROIstats function, but for a group of parcels together, not each parcel individually. If you make a new mask image assigning the same number to all the voxels in all the parcels you want treated as a group, 3dROIstats will average them together into a single timecourse (the sketch after the second code block below shows the same averaging directly in R).

The final example assigns a new value to each parcel, for example to show the t-values resulting from a statistical test run on each parcel individually.

The code below uses the same files as my NIfTI knitr tutorial: Schaefer2018_400x7_81x96x81.nii.gz and S1200_AverageT1w_81x96x81.nii.gz. To run the code, save the two files somewhere locally, and then change path to their location.

new NIfTI with one parcel only

This first block of code sets the variables for all examples, and then makes a new NIfTI file, p24.nii.gz, which has all the voxels in parcel 24 set to 1, and all the other voxels set to 0. 
 library(RNifti); # package for NIfTI image reading https://github.com/jonclayden/RNifti  
   
 rm(list=ls()); # clear memory  
 path <- "//storage1.ris.wustl.edu/tbraver/Active/MURI/BraverLab/JoWorking/tutorial/"; # path to input images  
 p.img <- readNifti(paste0(path, "Schaefer2018_400x7_81x96x81.nii.gz"));  # read the parcellation image  
   
 # make a new nifti with only parcel 24   
 new.img <- array(0, dim(p.img));  # blank "brain" of zeros, same size as p.img  
 new.img[which(p.img == 24)] <- 1;  # voxels with value 24 in p.img set to 1 
 writeNifti(new.img, paste0(path, "p24.nii.gz"), template=paste0(path, "Schaefer2018_400x7_81x96x81.nii.gz"));  
 # note the template= in the writeNifti function: it specifies that the new file should have the same  
 # header settings as the original parcellation image, so the two images align properly.  
Below are two MRIcron windows, both with S1200_AverageT1w_81x96x81.nii.gz as the anatomical underlay. At left the overlay is p24.nii.gz; the title bar text gives the value of the voxel under the crosshairs as 1. At right the overlay is Schaefer2018_400x7_81x96x81.nii.gz, and the voxel under the crosshairs has the value 24, as expected.


new NIfTI with several parcels given the same value

This code is very similar to the first, but instead of setting only the voxels in parcel 24 to 1, it sets the voxels in all the parcels listed in p.ids to 1 (effectively making a single new parcel out of the four):
 # make a new nifti with parcels 24, 26, 231, and 360 all given the value 1.  
 p.ids <- c(24, 26, 231, 360);    # parcel ids we want to combine  
 
 new.img <- array(0, dim(p.img));  # blank "brain" of zeros, same size as p.img  
 for (i in 1:length(p.ids)) { new.img[which(p.img == p.ids[i])] <- 1; }  # set voxels in p.ids to 1   
 writeNifti(new.img, paste0(path, "four.nii.gz"), template=paste0(path, "Schaefer2018_400x7_81x96x81.nii.gz"));  
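As mentioned in the introduction, a mask like four.nii.gz can be given to afni's 3dROIstats to extract one average timecourse for the combined parcel. The same averaging can be done directly in R; below is a minimal sketch, where func.nii.gz is a hypothetical 4d functional image on the same voxel grid as the parcellation.

 # average the timecourses of all voxels in the combined parcel, as 3dROIstats would  
 func.img <- readNifti(paste0(path, "func.nii.gz"));   # hypothetical 4d (x,y,z,time) image  
 mask.img <- readNifti(paste0(path, "four.nii.gz"));   # combined-parcel mask made above  
 vox.ids <- which(mask.img == 1);   # indices of the in-mask voxels  
   
 n.tr <- dim(func.img)[4];  
 tc <- rep(NA, n.tr);  
 for (i in 1:n.tr) { tmp <- func.img[,,,i]; tc[i] <- mean(tmp[vox.ids]); }   # mean across mask voxels each frame  
 plot(tc, type='l', xlab="frame", ylab="mean BOLD");   # the combined-parcel average timecourse  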
   

new NIfTI with unique numbers for each parcel

To give different numbers to each parcel, loop over all the parcels and assign the corresponding value:
 # assign a value to each parcel, such as from the results of a statistical test performed on each parcel individually.  
 # note: 400 is the total number of parcels in the example parcellation. It is usually best not to hard-code  
 # the number of parcels, but it is here to keep the code as short as possible.  
 stat.vals <- rnorm(400); # random numbers to plot, one for each parcel
   
 new.img <- array(0, dim(p.img));  # blank "brain" of zeros, same size as p.img  
 for (i in 1:400) { new.img[which(p.img == i)] <- stat.vals[i]; }  # set voxels in parcel i to stat.vals[i]  
 writeNifti(new.img, paste0(path, "stat.nii.gz"), template=paste0(path, "Schaefer2018_400x7_81x96x81.nii.gz"));  
Below are views of four.nii.gz (left) and stat.nii.gz (right). Since the vector of "statistics" for each parcel in the example code is random, it will be different each time; here the extreme positive and negative values are plotted in hot and cool colors, as is usual for fMRI statistical images.



Thursday, April 4, 2024

Corresponding Schaefer2018 400x7 and 400x17 atlas parcels by number

My "default" cortical parcellation is the 400 parcels by 7 networks version of Schaefer2018. I like these parcellations because they're available in all the spaces we use (volume, fsaverage, fsLR (HCP)) and are independent from our analyses; "all parcellations are wrong, but some are useful".

The Schaefer parcellations come in several versions at each resolution, however, and a parcel described by its 7Networks number likely has a different 17Networks number. Note that the parcel boundaries are the same for all same-resolution versions: there is only one set of 400 parcels, but which of those 400 parcels is #77 varies between the 7 and 17 network versions. This post describes how to translate parcel numbers between network versions, using a bit of (base) R code.

Logic: since there is only one set of parcels at each resolution, there is only one set of centroid coordinates at each resolution. Thus, we can match parcels across network orderings by centroids.

First, set sch.path to the location of your copy of the Schaefer2018 Parcellations directory and load the 7 and 17 network centroid files:

sch.path <- "/data/nil-bluearc/ccp-hcp/DMCC_ALL_BACKUPS/ATLASES/Schaefer2018_Parcellations/";

cen7.tbl <- read.csv(paste0(sch.path, "MNI/Centroid_coordinates/Schaefer2018_400Parcels_7Networks_order_FSLMNI152_1mm.Centroid_RAS.csv"));
cen17.tbl <- read.csv(paste0(sch.path, "MNI/Centroid_coordinates/Schaefer2018_400Parcels_17Networks_order_FSLMNI152_1mm.Centroid_RAS.csv"));

Next, make vectors for translating the 7Network number to the 17Network number (and the reverse):

x7to17 <- rep(NA, nrow(cen7.tbl));
x17to7 <- rep(NA, nrow(cen7.tbl));
for (i in 1:nrow(cen7.tbl)) { 
  x7to17[i] <- which(cen17.tbl$R == cen7.tbl$R[i] & cen17.tbl$A == cen7.tbl$A[i] & cen17.tbl$S == cen7.tbl$S[i]); 
  x17to7[i] <- which(cen7.tbl$R == cen17.tbl$R[i] & cen7.tbl$A == cen17.tbl$A[i] & cen7.tbl$S == cen17.tbl$S[i]); 
}
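
A quick sanity check I'd suggest adding: each parcel should be matched exactly once, so both vectors should be permutations of the parcel numbers.

stopifnot(sort(x7to17) == 1:nrow(cen7.tbl));   # error if any parcel was missed or matched twice
stopifnot(sort(x17to7) == 1:nrow(cen17.tbl));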

Now the vectors can be used to translate parcel numbers: parcel #77 in the 7Networks ordering is parcel #126 in the 17Networks ordering.

x7to17[77]   # [1] 126

cen7.tbl[77,]
#   ROI.Label                     ROI.Name   R   A  S
#77        77 7Networks_LH_DorsAttn_Post_9 -33 -46 41

cen17.tbl[126,]
#    ROI.Label                  ROI.Name   R   A  S
#126       126 17Networks_LH_ContA_IPS_5 -33 -46 41

x17to7[126]    # [1] 77
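
One use for these vectors, as a hypothetical example: reordering a per-parcel statistic computed under one network ordering so it can be written out or plotted under the other.

stat.7 <- rnorm(400);        # hypothetical per-parcel values, in 7Networks parcel order
stat.17 <- stat.7[x17to7];   # value for 17Networks parcel i is stat.7[x17to7[i]]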

Note that the parcel's R A S coordinates are the same, but its name and label (number) vary between the two network versions.

Friday, February 23, 2024

OHBM 2023 links

I was a co-organizer (with Paul Taylor, Daniel Glen, and Richard Reynolds, all of afni fame) of the "Making Quality Control Part of Your Analysis: Learning with the FMRI Open QC Project" course at OHBM 2023. FMRI Open QC Project participants described how they approached the Project datasets, and we had several general QC discussions. Thanks to all participants and attendees! I hope we can keep making progress towards a common vocabulary for fMRI QC, and encourage researchers to make QC part of their analyses if they do not already.

The session was unfortunately not recorded live, though the speakers submitted recordings in advance. OHBM is supposed to make all of these recordings available; in the meantime, I've linked material that is already public.

  • Brendan Williams, "Reproducible Decision Making for fMRI Quality Control"
  • Céline Provins, "Quality control in functional MRI studies with MRIQC and fMRIPrep"
  • Daniel Glen, "Quality control practices in FMRI analysis: philosophy, methods and examples using AFNI"
  • Dan Handwerker, "The art and science of using quality control to understand and improve fMRI data"  slides
  • Chao-Gan Yan, "Quality Control Procedures for fMRI in DPABI"
  • Rasmus Birn, "Quality control in resting-state functional connectivity: qualitative and quantitative measures"
  • Xin Di, "QC for resting-state and task fMRI in SPM"
  • Jo Etzel, "Efficient evaluation of the Open QC task fMRI dataset"  video 
  • Rebecca Lepping, "Quality Control in Resting-State fMRI: The Benefits of Visual Inspection"
  • Francesca Morfini, "Functional Connectivity MRI Quality Control Procedures in CONN"


I also presented a poster, "Which Acquisition? Choosing protocols for task fMRI studies", #700. Here are parts of the introduction and conclusion from the abstract. The test data is already public; the code isn't written up properly, but I could share it if anyone is interested.

When planning a task fMRI study, one necessary choice is the acquisition sequence. Many are available, with recommendations varying with hardware, study population, brain areas of interest, task requirements, etc.; it is rare to have only one suitable option. Acquisition protocols for task studies can be difficult to evaluate, since metrics like tSNR are not specific to task-related activity. But task designs can make choosing easier, since there is a known effect to compare the candidate acquisition protocols against. 

The procedure illustrated here will rarely make the choice of acquisition completely unambiguous, but it can indicate which sequences to avoid, and give the experimenters confidence that the chosen sequence will produce usable data. After choosing the acquisition, more piloting should be performed with the study tasks to confirm that image quality and response clarity are sufficient and as expected.


... it's taken me so long to finish this post (started August 2023!) that I'm publishing it without adding a proper explanation of the protocol-choosing logic. Take a look at the poster pdf, and please ask questions or otherwise nudge me for more information if you're interested.