Monday, January 13, 2020

working with surfaces: musings and a suggestion

First, some musings about surface datasets. I'm not convinced that surface analysis is the best strategy for human fMRI, particularly task-based fMRI. I don't dispute that the cortex is a sheet/ribbon/surface; what I'm skeptical about is whether fMRI resolution is (or could be) anywhere close to good enough to accurately keep track of the surface over hours of scanning. (More grey matter musings here.)

I also have concerns about how exactly the surface interpolation is done: MRI collects volumes, from which surfaces must be interpolated, and there are multiple algorithms for finding the surface (in the anatomical images) and performing the transformation (including transforming the corresponding fMRI images). None of these are remotely trivial procedures, and I have a general bias against adding complexity (and experimenter degrees of freedom) to preprocessing pipelines/analyses.

As a practical matter, quality control and interpreting results are more difficult on the surface than in the volume. I prefer omitting masking during volumetric analysis, which allows checking whether the "blobs" fall in anatomically sensible places (e.g., are the results stronger in the grey matter or the ventricles?). This type of judgment is not possible on the surface, because it only includes grey matter. For a more extreme example, in the image below the top row is from buggy code that randomized the vertex assignments, while the second row shows results with a weak effect but correct assignments (ignore the different underlay surfaces). The two images don't look all that different on the surface, but a parallel error in the volume would produce salt-and-pepper speckling throughout the 3D frame: something clearly "not brain-shaped" and so easier to spot as a mistake.

Even with those reservations, I often need to use surface images (and the benchmark GLM results look reasonable, so surface analysis does seem to work ok); here are some suggestions if you need to work with them as well.

Key: surfaces are surfaces (made up of vertices) and volumes are volumes (made up of voxels); do not transform back and forth between them. If you want surface GLM results, conduct the GLMs on vertex timeseries; for volume GLM results, conduct the GLMs on voxel timeseries.

It is possible to quickly make a surface from a volume (e.g., with a couple of wb_command -volume-to-surface-mapping commands), but that is a very rough process, and should be used only for making an illustration (e.g., for a poster, though even that's not great - see below), NOT for data that will be analyzed further. Instead, generating the surfaces should be done during preprocessing (e.g., with fmriprep), so that you have a pair of gifti timeseries files (one for each hemisphere) in your chosen output space (e.g., subject space, fsaverage5) along with the preprocessed 4d NIfTI file (e.g., subject space, MNI). (Note: I have found it easiest to work with surfaces as gifti files; cifti files can be separated into a pair of giftis and a NIfTI.)
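For concreteness, here is roughly what those two operations look like when called from R with system(). This is just a sketch: it assumes wb_command is installed and on the path, and all of the file names are placeholders to swap for your own. The first call is the quick-and-rough, illustration-only mapping; the second separates a cifti timeseries into one gifti per hemisphere plus a subcortical NIfTI.

      # quick-and-rough mapping of a volumetric image onto a surface, for illustration ONLY;
      # wb_command must be on the path, and the file names are placeholders.
      system(paste("wb_command -volume-to-surface-mapping stats.nii.gz",
                   "sub1.L.midthickness.surf.gii stats.L.func.gii -trilinear"));

      # separate a cifti timeseries into its gifti (cortical) and NIfTI (subcortical) parts
      system(paste("wb_command -cifti-separate sub1_task.dtseries.nii COLUMN",
                   "-metric CORTEX_LEFT sub1_task.L.func.gii",
                   "-metric CORTEX_RIGHT sub1_task.R.func.gii",
                   "-volume-all sub1_task_subcortical.nii.gz"));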

Why this emphasis on keeping surfaces as surfaces and volumes as volumes? The main reason is related to the fuzziness and complexity in creating the surfaces. In both the fmriprep and HCP surface preprocessing pipelines, vertices and voxels do not correspond at the end: you can't find a voxel timeseries to match each vertex timeseries (or the reverse). Instead, each vertex timeseries is a weighted mix of the voxels determined to correspond to that bit of cortical ribbon, perhaps followed by parcel-constrained (or other) smoothing. It may be theoretically possible to undo all that weighting and interpolation, but as far as I know no documented method or program currently exists. More generally, information is lost whenever volumes are transformed to surfaces (or the reverse), so it should be done as rarely as possible, and the "rarest" possible is once (during preprocessing).

What can you do with giftis? Pretty much everything you can do with NIfTIs (plotting, extracting stats, running GLMs, etc.), though a bit of tweaking may be needed. For example, we use afni for GLMs, and many of its programs can now take gifti images. I use R for nearly all gifti plotting and manipulation, as in this tutorial, which includes my plotting function and examples of turning vertex vectors into surface images.
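As a small example of the "extracting stats" case, here is a sketch of calculating a parcel-average from a surface statistic image using nothing but readGIfTI and base R. The file names and the parcel key (17) are made up: it assumes a one-column statistic gifti (e.g., a beta map) plus a matching parcellation label gifti with an integer key at each vertex, both with the same number of vertices.

      library(gifti);   # for readGIfTI

      # file names are placeholders: a statistic gifti and a parcellation (label) gifti
      stat.vec <- readGIfTI("betas.L.func.gii")$data[[1]];    # one value per vertex
      key.vec <- readGIfTI("atlas.L.label.gii")$data[[1]];    # integer parcel key per vertex

      mean(stat.vec[which(key.vec == 17)]);   # average statistic over the vertices in parcel 17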

The snippet below illustrates working with giftis in R (calculating vertex-wise mean, standard deviation, and tsnr for QC): in.fname is the path to a gifti timeseries (such as produced by fmriprep), and out.fname the path for the output text file. The image is read with readGIfTI from the gifti library, then rearranged into a matrix with one row per timepoint and one column per vertex. Once in that form it's trivial to calculate vertex-wise statistics (applying over the columns); this code writes them out as plain text, but they could also be plotted directly.

      library(gifti);   # provides readGIfTI; in.fname and out.fname are file paths set elsewhere  
   
      in.img <- readGIfTI(in.fname)$data;  # just save the vertex timeseries part  
      in.tbl <- matrix(unlist(in.img), nrow=length(in.img), byrow=TRUE); # now timepoints in rows, vertices in columns  
   
      out.tbl <- array(NA, c(ncol(in.tbl), 3));  # one row for each vertex  
      colnames(out.tbl) <- c("sd", "mean", "tsnr");  
   
      out.tbl[,1] <- apply(in.tbl, 2, "sd");    # standard deviation of each vertex's timeseries  
      out.tbl[,2] <- apply(in.tbl, 2, "mean");  # mean of each vertex's timeseries  
      out.tbl[,3] <- out.tbl[,2]/out.tbl[,1];   # tsnr: mean/sd  
      write.table(out.tbl, out.fname);  
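
To run this on more than one file I usually wrap the calculation in a little function and loop over hemispheres (and runs, tasks, etc.). The BIDS-style file names in this sketch are made up, so adjust them to whatever your fmriprep output is actually named.

      library(gifti);

      vertex.qc <- function(in.fname, out.fname) {   # same calculation as above, as a function
        in.img <- readGIfTI(in.fname)$data;
        in.tbl <- matrix(unlist(in.img), nrow=length(in.img), byrow=TRUE);  # timepoints x vertices

        out.tbl <- array(NA, c(ncol(in.tbl), 3));   # one row for each vertex
        colnames(out.tbl) <- c("sd", "mean", "tsnr");
        out.tbl[,1] <- apply(in.tbl, 2, "sd");
        out.tbl[,2] <- apply(in.tbl, 2, "mean");
        out.tbl[,3] <- out.tbl[,2]/out.tbl[,1];
        write.table(out.tbl, out.fname);
      }

      for (hemi in c("L", "R")) {   # one gifti timeseries per hemisphere; file names are placeholders
        vertex.qc(paste0("sub-1_task-rest_hemi-", hemi, "_space-fsaverage5_bold.func.gii"),
                  paste0("sub-1_task-rest_hemi-", hemi, "_QC.txt"));
      }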

I try to carry this "surfaces as surfaces and volumes as volumes" strategy throughout the process, to keep the distinction obvious. For example, if a parcel-average timecourse was derived from surface data, show it next to a surface illustration of the parcel. If a GLM was run in the volume, show the beta maps on volumetric slices. Many people (myself included!) have made surface versions of volume data for a final presentation because they look "prettier". This was perhaps acceptable when only volumetric analyses were possible - we knew the surface image was just an illustration. But now that entire analyses can be carried out in the surface or volume I think we should stop this habit and show the data as it actually is - volumes can be "pretty", too!