Monday, July 27, 2020

overlaying in Connectome Workbench 1.4.2

This post answers a question posted on my Workbench introductory tutorial: Rui asked how to visualize the regions common to two overlays. I've found two ways of showing the overlap.

The first is to reduce the Opacity (circled in red) of the top-most Layer. In this screen capture I have a bluish ROI on top of a yellow one. (I scribbled the colors over their corresponding rows in the Layers toolbox; blue is listed above - over - the yellow. Toggle the Layers On and off if you're not sure which is which.) Compare the appearance of the ROIs when the Opacity is set to 1.0 for both (left) versus 0.7 for the top, blue, one (right): the borders of both ROIs are visible on the right side, but the blue is less opaque everywhere (not just in the area of overlap).

The second method is to set one (or both) ROIs to Outline (border) mode. Here, I set the top (blue) ROI to "Outline Only" in the Overlay and Map Settings dialog box (click the little wrench button to bring up the dialog).

Tuesday, June 16, 2020

tip: running FSL commands from within R

I very much believe that analyses should use existing/established programs/packages/functions as much as possible - preferably the original version of the program, used in a transparent way. But I also want to preserve a record of the analysis and be able to rerun it if needed, while minimizing the chance of difficult-to-identify errors like clicking the wrong setting in a GUI.

I'm increasingly convinced that a good strategy is to do the file management-type tasks in the script, build the needed (FSL, afni, etc.) command as a string, then send that command directly to the target program, rather than using a specialized interface package. There are at least two big advantages to building the actual commands: first, few (if any) dependencies or new syntax are introduced; second, the executed commands are immediately and transparently recoverable. Note that this advice is for analysis scripts, not full-scale pipelines or software.

For example, I needed to run FSL flirt to register around twenty images. This is the only step requiring FSL, and my script is in R. I have a few options: the FLIRT GUI, generating a text file of the flirt commands and running it at the command line, typing the commands directly at the command line, or calling fsl from within R through wrapper functions (e.g., the fslr package).

This code shows the solution I'm advocating: using R functions for the file-wrangling and command-building parts, then using system2 to "send" the command to fsl for running. This strategy can be used not only with fsl, but also with afni, wb_command, and other command-line programs.

 fsl.path <- "/usr/local/pkg/fsl6.0/bin/";   # path to fsl flirt
 in.path <- "/data/";  # path to the input images 
 ref.path <- "/data/";  # path to the reference images  
 out.path <- "/scratch3/registrations/";   # where to write files  
 sub.ids <- c(102008, 107321, 814649, 849971);  # subject IDs
 
 for (sid in 1:length(sub.ids)) {   # sid <- 1;   
  # make the file names
  fname.in <- paste0(in.path, sub.ids[sid], "/T1w/T1w_acpc_dc_restore.nii.gz");   
  fname.ref <- paste0(ref.path, sub.ids[sid], "/T1w/T1w_acpc_dc_restore.nii.gz");    
  fname.out <- paste0(out.path, sub.ids[sid], "_T1w.nii.gz");  
  fname.mat <- paste0(out.path, sub.ids[sid], "_T1w.txt");  
    
  # check if the input files exist (and output files do not), then run fsl if so
  if (file.exists(fname.in) & file.exists(fname.ref) & !file.exists(fname.out) & !file.exists(fname.mat)) {  
   system2(paste0(fsl.path, "flirt"), args=paste("-in", fname.in, "-ref", fname.ref, "-out", fname.out, "-omat", fname.mat, "-dof 6"),  
       stdout=TRUE, env="FSLOUTPUTTYPE=NIFTI_GZ");  
  }  
 }  


The system2 function "sends" the command to fsl and can be used to check what was done:
  • paste0(fsl.path, "flirt") shows which program was run and its location on disk.
  • args= has the options sent to the flirt program, as a string. Copying out just the paste(...) part and running it in R will print it for inspection (example below).
  • stdout=TRUE tells R to print fsl's messages as it runs (useful for spotting errors).
  • env="FSLOUTPUTTYPE=NIFTI_GZ" sets the fsl environment variable FSLOUTPUTTYPE to NIFTI_GZ. Depending on your system and fsl installation you may need to set more (or no) environment variables.
Specifically, for the first participant (sid 1), the code generates this command, which could be run at the command line without R:
/usr/local/pkg/fsl6.0/bin/flirt -in /data/102008/T1w/T1w_acpc_dc_restore.nii.gz -ref /data/102008/T1w/T1w_acpc_dc_restore.nii.gz -out /scratch3/registrations/102008_T1w.nii.gz -omat /scratch3/registrations/102008_T1w.txt -dof 6
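
If you also want a file recording exactly which commands were run (in the spirit of keeping the analysis recoverable), one option is to build the argument string once, append the full command to a log, then pass the same string to system2. Here is a minimal sketch that could replace the system2 call inside the loop above; the flirt_commands.txt log name is just an example:

 # build the argument string once, so the logged and executed commands can't diverge
 args.str <- paste("-in", fname.in, "-ref", fname.ref, "-out", fname.out, "-omat", fname.mat, "-dof 6");
 cat(paste0(fsl.path, "flirt ", args.str, "\n"), file=paste0(out.path, "flirt_commands.txt"), append=TRUE);  # example log file name
 system2(paste0(fsl.path, "flirt"), args=args.str, stdout=TRUE, env="FSLOUTPUTTYPE=NIFTI_GZ");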

Wednesday, May 13, 2020

"Pattern similarity analyses of frontoparietal task coding" update

A recently-published paper of ours (preprint; "Pattern Similarity Analyses of FrontoParietal Task Coding: Individual Variation and Genetic Influences") uses correlational MVPA/RSA-type methods to look at heritability effects in HCP task fMRI data. A primary motivation was methods development for the DMCC, which we're analyzing for patterns in individuals and twin pairs across time.

Since publishing, the HCP has updated their list of subjects with major issues, including some in our analyses. I wondered if these problematic subjects could have affected the results, so I reran the analyses without them (advertisement: this sort of update is drastically easier if you use knitr or another report-generating system for your results). I'm pleased to say that omitting these flagged subjects had a negligible impact on the results - it turns out that I'd already excluded most of them from the key analyses because of missing behavioral data.

Both the published and updated versions of the supplemental are on the osf site for the paper. Versions of my favorite figure before and after omitting the flagged participants are after the jump.


Wednesday, April 22, 2020

observation: variability from running fmriprep multiple times

UPDATE 13 May 2020: I'm now quite sure that the high variability wasn't due to the multiple fmriprep runs, but rather the way I picked the training/testing examples. I'm leaving this post here, but put most of it "below the fold" since my initial interpretation was incorrect. I will eventually release code and more details.

UPDATE 24 April 2020: I realized that there was at least one other possible big contributing factor to the difference between the runs: randomness in how I chose the not-button examples. I'll fix this and see what happens to the SVM results. I previously found extremely similar GLM results between fmriprep versions 1.1.2 and 1.3.2, so the variability here is very possibly only from my randomness in setting up the classification.
 

Thursday, March 26, 2020

Getting started with Connectome Workbench 1.4.2

Here is an updated version of my introductory tutorial for the Connectome Workbench, written for Workbench 1.4.2. This new version of Workbench has some nice features (only a few of which are described here!), so I suggest you try this one rather than a previous version. I have a few other posts on Workbench and HCP-related topics, mostly linked from the "Connectome Workbench: 1st steps" post.

downloading the program

Connectome Workbench has versions for Windows, Mac OS, and Linux; just click the download link for your OS (Linux types can also install Workbench via NeuroDebian).

On Windows and Mac you don't "install" Workbench; just unzip the download, then double-click the executable. On my Windows box I put the download into d:\Workbench\, unzipped it, and got a bunch of subdirectories. Navigate through them until you find wb_view; in my case it's at D:\Workbench\workbench-windows64-v1.4.2\bin_windows64\wb_view.exe. Double-click wb_view to start the program. If you don't want to navigate to this directory each time you start Workbench, make a shortcut to wb_view.exe and put it on your desktop or in a handy menu.

Aside: wb_command.exe is in the same directory as wb_view.exe. wb_command.exe is not a GUI program (nothing much will happen if you double-click it!), but is useful for various functions; see this post and this documentation for more.
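
For instance, wb_command's -file-information subcommand prints header details for an image. Here's a minimal sketch of calling it from R via system2 (the same strategy as in the FSL post above); the wb.path value follows my install location, so adjust it, and the file name, for your system:

 # sketch: run wb_command from R; adjust wb.path to match your installation
 wb.path <- "D:/Workbench/workbench-windows64-v1.4.2/bin_windows64/";
 system2(paste0(wb.path, "wb_command"), args=c("-file-information", "S1200_AverageT1w_81x96x81.nii.gz"), stdout=TRUE);  # give the full path to the image if it isn't in the working directory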

getting images to plot

Workbench doesn't come with any images, so this tutorial will use ones from my knitr tutorials, available at https://osf.io/w7zkc/. Download all of the files in the "surface (GIFTI) brain plotting" and "volumetric (NIfTI) brain plotting" subdirectories under "knitr tutorials". (The .rnw files should be kept in separate directories if you're going to compile the knitr tutorials.)

Aside: Only the files from the osf site are needed for this tutorial, but you will likely want more anatomic underlays than these. I suggest underlays from the HCP S1200 release for volumetric (MNI) images and fsLR surfaces; I converted the fsaverage5 surfaces from our local FreeSurfer installation, but FreeSurfer provides many more than the pair I put on osf.

seeing blank brains

Open the Workbench GUI (e.g., by double-clicking wb_view.exe). A command window will open (just ignore it), as well as an interactive box prompting you to open a spec or scene file. Click the Skip button to load the main program. Note: spec and scene files are very useful, and a big reason to use Workbench, because they let you save collections of images and visualizations, which can save a massive amount of time. I won't cover them in this tutorial, though.

Since we skipped loading anything, Workbench opens with a blank screen. We want to first open images to use as underlays: a NIfTI volumetric underlay to plot volumetric blobs on, and GIFTI surface files to plot surface blobs on (see this post for a bit more about file types).

Select File -> Open File from the top menus to open a standard file selection dialog. Navigate to where you put the images from "surface (GIFTI) brain plotting", and change the Files of type drop-down to "Surface Files (*.surf.gii)", then select and open the four fsaverage5 .surf.gii files (two hemispheres * two inflations). Open the Open File dialog again, navigate to where you put the files from "volumetric (NIfTI) brain plotting", and set the Files of type drop-down to "Volume Files (*.nii *.nii.gz)", then select and open S1200_AverageT1w_81x96x81.nii.gz.

Aside: Connectome Workbench works with both fsLR and fsaverage5 surfaces (the two types in the gifti tutorial files), but not at the same time.

All of the underlay images are now loaded in Workbench, but we need to tell it to display them like we want: let's put the surfaces on the first tab and volumes on the second.



The above images show the settings to display the volumes and surfaces (click to enlarge). The first tab was probably already set for multiple surfaces ("Montage"), and likely now shows four wrinkly (pial) brains. Since both pial and inflated fsaverage5 surfaces were loaded, which surfaces are shown where can be adjusted with the Montage Selection part of the toolbar (highlighted in red). Try adjusting how many and which surfaces are viewed by changing these settings. You can click and drag the hemispheres around with the mouse; use the buttons in the Orientation part of the menu to reset.

Click on the second tab (probably labeled "All"), then choose Volume in the View part of the menu (circled in red above). The top menus and tab title should change, and a single slice of the volume should be displayed. Try adjusting the number and spacing of the volume images. For example, show more slices by clicking the Montage On button (right red arrow), and adjust the image arrangement with the values in the Montage menu boxes (left of the vertical red line). The height of the displayed slices is changed in the A: (axial) box of the Slice Indices/Coords menu (right of the vertical red line). The crosshairs can be turned on and off with the button at the bottom left of the Slice Plane menu (left red arrow); the labels, with the adjacent LARP and XYZ buttons.

adding overlays

Now that we have arranged underlay images, let's add something on top. Overlays are opened in the same way as the underlay images, via File -> Open File. The Open File window is probably still in the nifti tutorial directory and set to Files of type "Volume", so select the two images in that directory that are not already loaded: continuousOverlay.nii.gz and Schaefer2018_400x7_81x96x81.nii.gz. Workbench won't show the overlays right away, but don't worry - they were opened.

To load the surface overlays use the File -> Open File menu option again (it doesn't matter which Workbench tab you're on), navigate to the directory with the gifti tutorial files, and change the Files of Type: box to "Metric Files (*.func.gii *.shape.gii)". Select the two fsaverage5 files (fsaverage5_STATS_L.func.gii and fsaverage5_STATS_R.func.gii) and click Open.

Workbench will pop up a query box like this, one for each hemisphere. Select the matching Cortex for each file, and click OK. As with the volume overlays the appearance of Workbench won't change, but the files were read.

To see the images we just loaded, we need to turn the proper overlay Layers On in the surface and volume tabs. The Overlay ToolBox settings control the loading and appearance of the overlay images, and work similarly for surfaces and volumes.

On the surface (Montage) tab select the two overlays (one for Left and one for Right; order does not matter) in the File boxes, then click the rows On, as pointed out by red arrows. The color scaling is rather mysterious, but just leave it for now.

These STATS overlays were generated by an afni GLM, and contain multiple named slots. Which statistic is displayed on each hemisphere is controlled by the Map boxes highlighted in green (note that this numbering is 1-based while afni's is 0-based, so each map's number here is one more than its sub-brick number in 3dinfo). While Workbench lets you show different maps (statistics, in this case) on each hemisphere, it's more usual to want the same map on both. The Yoke options (between the red and green arrows) link the two hemispheres together: switch both from "Off" to "I", and when you change one hemisphere's map the other should change as well.

On the Volume tab, note that the last row in the Overlay ToolBox is S1200_AverageT1w_81x96x81.nii.gz, the anatomic underlay. This is good - we want to have the anatomy under the overlays. Select one of the volumetric overlay files in each of the other rows using the File dropdown menu (red arrows), and click them On and off.

The two volume overlays in the tutorial dataset are 3d (a single image each, rather than 4d: 3d plus time or statistics), so the Map and Yoke options are disabled. Workbench shows the overlays in greyscale, but if you look closely and click the layers On and off you can see that the layers stack in the order of the Overlay ToolBox File rows: Schaefer (parcel mask) on top, then the continuous image (speckly), then the anatomy. Change which file is listed in each row to change the stacking.

changing the color scaling


Schaefer2018_400x7_81x96x81.nii.gz is the Schaefer 400 parcel x 7 network parcellation; let's add some colors to the parcels. On the Volume tab, click the continuousOverlay.nii.gz layer off, so that only the Schaefer parcels are shown on the anatomy. Now click the wrench button in the Schaefer layer's row (green arrow below); the Overlay and Map Settings window should appear.
Change the settings in the window to match this image and see how the histogram and brains change. The histogram at the right shows the range of the data (1:400 for the Schaefer 400x7), how many voxels have each value, and what color they're assigned by the current color palette and scaling settings. In this case we can see that lower-numbered parcels are given greens, and plotted on the left hemisphere. Switch through the different Palettes and range options to see how the appearance shifts.

What if we like this coloring, but only want to show the right hemisphere parcels? The settings below are one way to accomplish it; try switching the Show Data Inside Thresholds and Show Data Outside Thresholds selection to show the left hemisphere.
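
An alternative to thresholding in the GUI is to make a right-hemisphere-only copy of the file itself in R. This is a minimal sketch, assuming (as in the standard Schaefer 400x7 labels) that parcels 1-200 are in the left hemisphere and 201-400 in the right; it uses the RNifti package, and the output file name is just an example:

 library(RNifti);   # provides readNifti() and writeNifti()
 img <- readNifti("Schaefer2018_400x7_81x96x81.nii.gz");
 img[img > 0 & img <= 200] <- 0;   # zero out the (assumed) left-hemisphere parcels
 writeNifti(img, "Schaefer2018_400x7_RHonly.nii.gz");   # load this file as an overlay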

Click the Close button at the lower right to close the volume's Overlay and Map Settings window, and switch to the surface tab. Show Map 3 ("ON_BLOCKS#0_Tstat") on each hemisphere, then click the wrench in one of the overlay's rows to open the Overlay and Map Settings window again (I clicked the one for the right hemisphere). You'll see that the histogram looks quite different: the t statistics are both positive and negative, roughly normal, with a lot of values around 0.

Can you make it show only values above 1 and below -1, using a typical scheme of warm colors for positive and cool colors for negative, with red and blue corresponding to 1 and -1, respectively? Below is one solution.
Note that only the right hemisphere has the new coloring scheme, since I started by clicking the wrench in the right hemisphere's row. I could close this Overlay and Map Settings window, then click the wrench in the left hemisphere's row and set everything again, but there's an easier way: click the Apply to Files button (green arrow) to open the Copy Palette Color ... window, then click both fsaverage5_STATS Metrics on. Click OK, then Close. Both hemispheres should now have the same coloring scheme.
Finally, switch through the different surface Maps - they will all have the new coloring scheme. This happened because the Apply to All Maps option (just above the green arrow in the previous picture) was checked in the Overlay and Map Settings window. If you uncheck this option, changes will only affect the Map that was shown when the wrench was clicked.

parting comments

This tutorial introduces some basic Workbench functionality, but it has many, many more features. The Workbench tutorial is a good next step to see what else it can do. Good luck!

Friday, March 13, 2020

volume and surface brain plotting knitr tutorial

Here is the second of my pair of posts introducing my updated knitr brain plotting tutorials. The first tutorial describes setting up RStudio for knitr compilation, with base R graphics examples - start there. This post adds a pair of brain plotting tutorials, one for volumetric images and the other for surfaces. The source (knitrIntro_NIfTI.rnw, knitrIntro_gifti.rnw) and image files needed for compilation can be downloaded from the blog osf site, "knitr tutorials" section. The compiled pdfs are included as well, at knitrIntro_NIfTI.pdf and knitrIntro_gifti.pdf.

The surface and volume examples are roughly parallel, covering plotting both continuous statistical overlays and parcellations. The example code includes assigning values to parcels (e.g., to show the results of an analysis), adjusting the plot appearance, and doing math with the images in R. Please read both the text in the pdf and the code comments.


The brain images in the knitrs (a few of which are above) are added with a pair of functions I wrote: plot.volume() and plot.surface(). These are updated versions of the functions in previous posts on this blog, with changes to improve the appearance of the surface images and make the color scaling parameters for volumes more similar to those for surfaces.

My intention is that the plotting functions and usage examples in these knitrs will enable beginners to quickly start making useful (and attractive!) knitr documents to summarize, explore, and describe their own analyses. Please let me know if you encounter any bugs or have suggestions for improvements or new features.

Monday, March 9, 2020

introductory knitr tutorial

This is a new introductory knitr tutorial. I have posted two previous knitr tutorials, but this supersedes the first: I have now split the NIfTI image plotting into its own (forthcoming) tutorial. This post describes setting up RStudio and knitr and compiling the tutorial .rnw, which can be downloaded from the (new!) blog osf site.

I'm a big fan of R and knitr; I now use knitr to create nearly all of my analysis-summary documents, even those with "brain blob" images, figures, and tables. The files can be complex, such as this supplemental information.

This post contains a knitr tutorial in the form of an example knitr-created document, and the source needed to recreate it. My intention is that this will be a "starter kit", containing examples of the basic formatting needed to quickly start using knitr. The file is also a starter kit for base R graphics ... I use base R for nearly everything, rather than ggplot2 or the tidyverse.



What does knitr do? Yihui has many demonstrations on his web site. I use knitr to create pdf files presenting, summarizing, and interpreting analysis results. Part of the demo pdf is in the image at left to give the idea: I have several paragraphs of explanatory text above a figure. This entire pdf was created from a knitr .rnw source file, which contains LaTeX text and R code blocks.
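
To give a flavor of the source format, here's a minimal sketch of a knitr .rnw file (not the tutorial file itself, just an illustration): ordinary LaTeX text, with R code between the <<>>= and @ chunk delimiters.

 \documentclass{article}
 \begin{document}
 Some explanatory text, then a figure made by the R code chunk below.
 <<demo-plot, echo=FALSE, fig.height=3>>=
 plot(rnorm(20), type='b', main="a quick demo");
 @
 \end{document}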

Previously, I'd make Word documents describing an analysis, copy-pasting figures and screenshots as needed, and manually formatting tables. Besides time, a big drawback of this system is human memory ... "how exactly did I calculate these figures?" I tried including links to the source R files and notes about thresholds, etc., but often missed some key detail, which I'd then have to reverse-engineer. knitr avoids that problem: I can look at the document's .rnw source code and immediately see which NIfTI image is displayed, which directory contains the plotted data, etc.

In addition to (human) memory and reproducibility benefits, the time saved by using knitr instead of Word for analysis summary documents is substantial. Need to change a parameter and rerun an analysis? With knitr there's no need to spend hours updating the images: just change the file names and parameters in the knitr document and recompile. Similarly, the color scaling or displayed slices can be changed easily.

Using knitr is relatively painless if you use RStudio. There is still a bit of a learning curve, especially if you want fancy formatting in the text parts of the document, since those use LaTeX syntax (this tutorial contains enough to get going with, though). But RStudio takes care of all of the interconnections: simply click the "Compile PDF" button (blue arrow) ... and it does!


to run the demo

If you don't already have them, first install RStudio, then install LaTeX compilers. If you'll only be using LaTeX with R I suggest TinyTeX; I also needed to install MiKTeX (set it to "Always install missing packages on-the-fly"). Within R, you'll probably want the knitr package (plus tinytex, if you're using TinyTeX); use the RStudio GUI or install.packages(), as shown below.
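
For example, from the R console (the second line is only needed if you're going the TinyTeX route):

 install.packages(c("knitr", "tinytex"));   # from CRAN
 tinytex::install_tinytex();   # downloads and installs the TinyTeX LaTeX distribution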

RStudio defaults .rnw files to Sweave, but this tutorial's .rnw is in knitr, so you MUST change the RStudio setting. To do this, go to Tools then Global Options in the top RStudio menus to bring up the Options dialog box, as shown here. Click on the Sweave icon, then tell it to Weave Rnw files using knitr (marked with the yellow arrow). Then click OK to close the dialog box.



Next, download knitrIntro_baseRgraphics.rnw from the osf site, save it locally into its own directory, and open it in RStudio. The RStudio GUI tab menu should look like the screenshot above, complete with a Compile PDF button. Click the Compile PDF button and RStudio should switch to a running Compile PDF log, finishing by opening the pdf in a separate window. A little reload pdf button also appears to the right of the Compile PDF button; if the pdf viewer doesn't open by itself, try clicking this button to reload. The compiled pdf will be created in the same directory, and with the same file name, as the source you saved (in this case, knitrIntro_baseRgraphics.rnw).
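
If you'd rather compile from the console than the button, knitr can also build the pdf directly; a one-liner sketch, assuming the .rnw is in the working directory:

 library(knitr);
 knit2pdf("knitrIntro_baseRgraphics.rnw");   # knits to .tex, then runs pdflatex to make the pdf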


UPDATE 7 April 2020: added the note about needing MiKTeX as well as TinyTeX.