Thursday, March 26, 2020

Getting started with Connectome Workbench 1.4.2

Here is an updated version of my introductory tutorial for the Connectome Workbench, written for Workbench 1.4.2. This new version of Workbench has some nice features (only a few of which are described here!), so I suggest you try this one rather than a previous version. I have a few other posts on Workbench and HCP-related topics, mostly linked from the "Connectome Workbench: 1st steps" post.

downloading the program

Connectome Workbench has versions for Windows, Mac OS, and Linux; just click the download link for your OS (Linux types can also install Workbench via NeuroDebian).

On Windows and Mac you don't "install" Workbench, but just unzip the download then double-click the executable. On my Windows box I put the download into d:\Workbench\, unzipped it, and got a bunch of subdirectories. Navigate through until you find wb_view; in my case it's at D:\Workbench\workbench-windows64-v1.4.2\bin_windows64\wb_view.exe. Double-click wb_view to start the program. If you don't want to navigate to this directory each time to start Workbench, make a shortcut to wb_view.exe and put it on your desktop or in a handy menu.

Aside: wb_command.exe is in the same directory as wb_view.exe. wb_command.exe is not a GUI program (nothing much will happen if you double-click it!), but is useful for various functions; see this post and this documentation for more.
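
If you script your analyses in R (as in the knitr tutorials used below), wb_command can also be run from R with system(). Here is a minimal sketch using the -file-information subcommand, which prints a summary of a file; the paths are hypothetical and need to be changed to match your installation and data:

      # hypothetical paths; change to match your Workbench installation and data location
      wb.path <- "D:/Workbench/workbench-windows64-v1.4.2/bin_windows64/wb_command.exe";
      img.path <- "D:/temp/S1200_AverageT1w_81x96x81.nii.gz";

      system(paste(wb.path, "-file-information", img.path));  # print information (dimensions, data type, etc.) about the file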

getting images to plot

Workbench doesn't come with any images, so this tutorial will use ones from my knitr tutorials, available at https://osf.io/w7zkc/. Download all of the files under "knitr tutorials", in the "surface (GIFTI) brain plotting" and "volumetric (NIfTI) brain plotting" subdirectories. (The .rnw files should be kept in separate directories if you're going to compile the knitr tutorials.)

Aside: Only the files from the osf site are needed for this tutorial, but you will likely want more anatomic underlays than these. I suggest underlays from the HCP S1200 release for volumetric (MNI) and fsLR surfaces; I converted the fsaverage5 surfaces from our local FreeSurfer installation, but FreeSurfer provides many more than the pair I put into osf.

seeing blank brains

Open the Workbench GUI (e.g., by double-clicking wb_view.exe). A command window will open (just ignore it), as well as an interactive box prompting you to open a spec or scene file. Click the Skip button to load the main program. Note: spec and scene files are very useful, and a big reason to use Workbench, because they let you save collections of images and visualizations, which can save a massive amount of time. I won't cover them in this tutorial, though.

Since we skipped loading anything, Workbench opens with a blank screen. We want to first open images to use as underlays: a NIfTI volumetric underlay to plot volumetric blobs on, and GIFTI surface files to plot surface blobs on (see this post for a bit more about file types).

Select File -> Open File from the top menus to open a standard file selection dialog. Navigate to where you put the images from "surface (GIFTI) brain plotting", and change the Files of type drop-down to "Surface Files (*.surf.gii)", then select and open the four fsaverage5 .surf.gii files (two hemispheres * two inflations). Open the Open File dialog again, navigate to where you put the files from "volumetric (NIfTI) brain plotting", and set the Files of type drop-down to "Volume Files (*.nii *.nii.gz)", then select and open S1200_AverageT1w_81x96x81.nii.gz.

Aside: Connectome Workbench works with both fsLR and fsaverage5 surfaces (the two types in the gifti tutorial files), but not at the same time.

All of the underlay images are now loaded in Workbench, but we need to tell it to display them like we want: let's put the surfaces on the first tab and volumes on the second.



The above images show the settings to display the volumes and surfaces (click to enlarge). The first tab was probably already set for multiple surfaces ("Montage"), and likely now shows four wrinkly (pial) brains. Since both pial and inflated fsaverage5 surfaces were loaded, which surfaces are shown where can be adjusted with the Montage Selection part of the toolbar (highlighted in red). Try adjusting how many and which surfaces are shown by changing these settings. You can click and drag the hemispheres around with the mouse; use the buttons in the Orientation part of the menu to reset the view.

Click on the second tab (probably labeled "All"), then choose Volume in the View part of the menu (circled in red above). The top menus and tab title should change, and a single slice of the volume should be displayed. Try adjusting the number and spacing of the volume images. For example, show more slices by clicking the Montage On button (right red arrow), and adjust the image arrangement with the values in the Montage menu boxes (left of the vertical red line). The height of the slices is changed in the A: (axial) box of the Slice Indices/Coords menu (right of the vertical red line). The crosshairs can be turned on and off with the button at the bottom left of the Slice Plane menu (left red arrow); the labels turned on and off with the adjacent LARP and XYZ buttons.

adding overlays

Now that we have arranged underlay images, let's add something on top. Overlays are opened in the same way as the underlay images, via File -> Open File. The Open File window is probably still in the nifti tutorial directory and set to Files of type "Volume", so select the two images in that directory that are not already loaded: continuousOverlay.nii.gz and Schaefer2018_400x7_81x96x81.nii.gz. Workbench won't show the overlays right away, but don't worry - they were opened.

To load the surface overlays use the File -> Open File menu option again (it doesn't matter which Workbench tab you're on), navigate to the directory with the gifti tutorial files, and change the Files of Type: box to "Metric Files (*.func.gii *.shape.gii)". Select the two fsaverage5 files (fsaverage5_STATS_L.func.gii and fsaverage5_STATS_R.func.gii) and click Open.

Workbench will pop up a query box like this, one for each hemisphere. Select the matching Cortex for each file, and click OK. As with the volume overlays the appearance of Workbench won't change, but the files were read.

To see the images we just loaded, we need to turn the proper overlay layers On in the surface and volume tabs. The Overlay ToolBox settings control the loading and appearance of the overlay images, and work similarly for surfaces and volumes.

On the surface (Montage) tab select the two overlays (one for Left and one for Right; order does not matter) in the File boxes, then click the rows On, as pointed out by red arrows. The color scaling is rather mysterious, but just leave it for now.

These STATS overlays were generated by an afni GLM, and contain multiple named slots. Which statistic is displayed on each hemisphere can be controlled by the Map boxes highlighted in green (note that this number is 1-based while afni uses 0-based, so the numbers shown with 3dinfo are one less). While Workbench lets you show different maps (statistics, in this case) on each hemisphere, it's more usual to want to see the same map on both hemispheres. The Yoke options (between the red and green arrows) will link the two hemispheres together: switch both from "Off" to "I", and when you change one hemisphere's map the other should as well.

On the Volume tab, note that the last row in the Overlay ToolBox is S1200_AverageT1w_81x96x81.nii.gz, the anatomic underlay. This is good - we want to have the anatomy under the overlays. Select one of the volumetric overlay files in each of the other rows using the File dropdown menu (red arrows), and click them On and off.

The two volume overlays in the tutorial dataset are 3d - only one image, rather than 4d (3d plus time or statistics) - so the Map and Yoke options are disabled. Workbench used greyscale for the overlays, but if you look closely and click the layers On and off you can see that the layers are in the order of the Overlay ToolBox File rows: Schaefer (parcel mask) on top, then the continuous image (speckly), then the anatomy. Change which file is listed in each row to change the stacking.

change the color scaling


Schaefer2018_400x7_81x96x81.nii.gz is the Schaefer 400 parcel x 7 network parcellation; let's add some colors to the parcels. On the Volume tab, click the continuousOverlay.nii.gz layer off, so that only the Schaefer parcels are shown on the anatomy. Now click the wrench button in the Schaefer layer's row (green arrow below); the Overlay and Map Settings window should appear.
Change the settings in the window to match this image and see how the histogram and brains change. The histogram at the right shows the range of the data (1:400 for the Schaefer 400x7), how many voxels have each value, and what color they're assigned with the current color palette and scaling settings. In this case we can see that lower-numbered parcels are given greens, and plotted on the left hemisphere. Switch through the different Palettes and range options to see how the appearance shifts.

What if we like this coloring, but only want to show the right hemisphere parcels? The settings below are one way to accomplish it; try switching the Show Data Inside Thresholds and Show Data Outside Thresholds selection to show the left hemisphere.

Click the Close button at the lower right to close the volume's Overlay and Map Settings window, and switch to the surface tab. Show Map 3 ("ON_BLOCKS#0_Tstat") on each hemisphere, then click the wrench in one of the overlay's rows to open the Overlay and Map Settings window again (I clicked the one for the right hemisphere). You'll see that the histogram looks quite different: the t statistics are both positive and negative, roughly normal, with a lot of values around 0.

Can you make it show only values above 1 and below -1, using a typical warm colors for positive, cool for negative scheme, with red and blue corresponding to 1 and -1, respectively? Below is one solution.
Note that only the right hemisphere has the coloring scheme - since I started by clicking the wrench in the right hemisphere's row. I could close this Overlay and Map Settings window, then click the wrench in the left hemisphere's row and set everything again, but there's an easier way: click the Apply to Files button (green arrow) to open the Copy Palette Color ... window, then click both fsaverage5_STATS Metrics on. Click OK, then Close. Both hemispheres should now have the same coloring scheme.
Finally, switch through the different surface Maps - they will all have the new coloring scheme. This happened because the Apply to All Maps option (just above the green arrow in the previous picture) was checked in the Overlay and Map Settings window. If you uncheck this option, changes will only affect the Map that was shown when the wrench was clicked.

parting comments

This tutorial introduces some basic Workbench functionality, but it has many, many more features. The Workbench tutorial is a good next step to see what else it can do. Good luck!

Friday, March 13, 2020

volume and surface brain plotting knitr tutorial

Here is the second of my pair of posts introducing my updated knitr brain plotting tutorials. The first tutorial describes setting up RStudio for knitr compilation, with base R graphics examples - start there. This post adds a pair of brain plotting tutorials, one for volumetric images and the other for surfaces. The source (knitrIntro_NIfTI.rnw, knitrIntro_gifti.rnw) and image files needed for compilation can be downloaded from the blog osf site, "knitr tutorials" section. The compiled pdfs are included as well, at knitrIntro_NIfTI.pdf and knitrIntro_gifti.pdf.

The surface and volume examples are roughly parallel, covering plotting both continuous statistical overlays and parcellations. The example code includes assigning values to parcels (e.g., to show the results of an analysis), adjusting the plot appearance, and doing math with the images in R. Please read both the text in the pdf and the code comments.
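
As a taste of the parcel-assignment idea, here is a minimal sketch (not the tutorial code itself), assuming the RNifti package for reading and writing NIfTIs and using the Schaefer parcellation image from the tutorial files; stat.vals is a made-up statistic, one value per parcel:

      library(RNifti);   # assumed NIfTI package; the tutorial uses its own plotting functions

      parc.img <- readNifti("Schaefer2018_400x7_81x96x81.nii.gz");  # Schaefer 400x7 parcellation from the tutorial files
      stat.vals <- rnorm(400);    # hypothetical statistic, one value per parcel

      out.img <- array(0, dim(parc.img));   # blank image, same size as the parcellation
      for (p in 1:400) { out.img[which(parc.img == p)] <- stat.vals[p]; }  # give each parcel's voxels its value

      writeNifti(out.img, "parcelStats.nii.gz", template=parc.img);  # template keeps the parcellation's header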


The brain images in the knitrs (a few of which are above) are added with a pair of functions I wrote: plot.volume() and plot.surface(). These are updated versions of the functions in previous posts on this blog, with changes to improve the appearance of the surface images and make the color scaling parameters for volumes more similar to those for surfaces.

My intention is that the plotting functions and usage examples in these knitrs will enable beginners to quickly start making useful (and attractive!) knitr documents to summarize, explore, and describe their own analyses. Please let me know if you encounter any bugs, have a suggestion for an improvement, or have an idea for a new feature.

Monday, March 9, 2020

introductory knitr tutorial

This is a new introductory knitr tutorial. I have posted two previous knitr tutorials, but this supersedes the first: I have now split the NIfTI image plotting into its own (forthcoming) tutorial. This post describes setting up RStudio and knitr and compiling the tutorial .rnw, which can be downloaded from the (new!) blog osf site.

I'm a big fan of R and knitr; I now use knitr to create nearly all of my analysis-summary documents, even those with "brain blob" images, figures, and tables. The files can be complex, such as this supplemental information.

This post contains a knitr tutorial in the form of an example knitr-created document, and the source needed to recreate it. My intention is that this will be a "starter kit", containing examples of the basic formatting needed to quickly start using knitr. The file is also a starter kit for base R graphics ... I use base R for nearly everything, rather than ggplot2 or the tidyverse.



What does knitr do? Yihui has many demonstrations on his web site. I use knitr to create pdf files presenting, summarizing, and interpreting analysis results. Part of the demo pdf is in the image at left to give the idea: I have several paragraphs of explanatory text above a figure. This entire pdf was created from a knitr .rnw source file, which contains LaTeX text and R code blocks.
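
To give the flavor, here is a minimal sketch of an .rnw source (not the tutorial file itself). The LaTeX text is written as usual, and R code goes in chunks between the <<>>= and @ markers; each chunk is run at compile time and replaced by its output (here, a figure):

      \documentclass{article}
      \begin{document}

      Explanatory text is written in \LaTeX\ syntax, with R code in chunks like the one below.

      <<demo-figure, echo=FALSE, fig.height=3, fig.width=4>>=
      # this chunk runs when the pdf is compiled; its figure appears at this spot in the document
      plot(1:10, (1:10)^2, type='b', xlab="x", ylab="x squared");
      @

      \end{document}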

Previously, I'd make Word documents describing an analysis, copy-pasting figures and screenshots as needed, and manually formatting tables. Besides time, a big drawback of this system is human memory ... "how exactly did I calculate these figures?" I tried including links to the source R files and notes about thresholds, etc., but often missed some key detail, which I'd then have to reverse-engineer. knitr avoids that problem: I can look at the document's .rnw source code and immediately see which NIfTI image is displayed, which directory contains the plotted data, etc.

In addition to (human) memory and reproducibility benefits, the time saved by using knitr instead of Word for analysis summary documents is substantial. Need to change a parameter and rerun an analysis? With knitr there's no need to spend hours updating the images: just change the file names and parameters in the knitr document and recompile. Similarly, the color scaling or displayed slices can be changed easily.

Using knitr is relatively painless if you use RStudio. There is still a bit of a learning curve, especially if you want fancy formatting in the text parts of the document, since it uses LaTeX syntax (this tutorial contains enough to get going with, though). But RStudio takes care of all of the interconnections: simply click the "Compile PDF" button (blue arrow) ... and it does!


to run the demo

If you don't already have them, first install RStudio, then install a LaTeX compiler (if you'll only be using LaTeX with R I suggest using TinyTeX). Within R, you'll probably want the knitr package (plus tinytex, if using); use the GUI or type install.packages("knitr").
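
For example, at the R console (the tinytex line is only needed if you're using TinyTeX as the LaTeX compiler):

      install.packages(c("knitr", "tinytex"));   # install the R packages
      tinytex::install_tinytex();                # download and install the TinyTeX LaTeX distribution itself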

RStudio defaults .rnw files to Sweave, but this tutorial .rnw is in knitr, so you MUST change the RStudio setting. To do this, go through Tools then Global Options in the top RStudio menus to bring up the Options dialog box, as shown here. Click on the Sweave icon, then tell it to Weave Rnw files using knitr (marked with yellow arrow). Then click Ok to close the dialog box.



Next, download knitrIntro_baseRgraphics.rnw from the osf site, save it locally into its own directory, and open it in RStudio. The RStudio GUI tab menu should look like the screenshot above, complete with a Compile PDF button. Click the Compile PDF button and RStudio should switch to a running Compile PDF log, finishing with opening the pdf in a separate window. A little reload pdf button also appears to the right of the Compile PDF button; if the pdf viewer doesn't open by itself, try clicking this button to reload. The compiled pdf will be created in the same directory, and with the same file name, as you saved the source (in this case, knitrIntro_baseRgraphics.rnw).

Wednesday, February 26, 2020

when making QC (or most any) images, don't mask the brain ...

Some preprocessing pipelines/programs mask out the space around the brain, while others don't. Masking makes the files a bit smaller and a lot neater looking, but also makes it harder to spot artifacts or other errors, as was highlighted for me just now when comparing results across pipelines.

These two images are from button-pressing (positive control; should have a hot "blob" of left hand motor-ish activity) GLMs for the same person, two different tasks (Axcpt and Cuedts), three different sessions (B, P, R). All tasks are performed within each session, but each session is collected on a different day - the two B from the same day, the two P, etc. The two sets of images show the same GLMs, but from two different pipelines:


Spot the error? There was a coil error on the day the P ("proactive") session was acquired, which is much easier to spot in the second (non-masked fmriprep) output. Those were GLM results; here's how it looks in the standard deviation images (standard deviation of each voxel over all the frames in each run), from each pipeline, for the (problematic) proactive session:



It's clear in both versions that something went wrong, but MUCH easier to see the artifact when the brain has not been masked.
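
Per-voxel standard deviation images like these only take a few lines of R to make; here is a minimal sketch, assuming the RNifti package and a hypothetical file name for one preprocessed run:

      library(RNifti);   # assumed NIfTI package

      run.img <- readNifti("run1_preprocessed.nii.gz");   # hypothetical 4d (x, y, z, frame) preprocessed run
      sd.img <- apply(run.img, 1:3, sd);                  # standard deviation of each voxel over all frames
      writeNifti(sd.img, "run1_sd.nii.gz", template=run.img);  # write out for viewing; template supplies the spatial header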

For completeness, here's the surface version of the standard deviation from fmriprep preprocessing; I don't have the HCP pipeline version handy at the moment but expect it to look similar. Brain masking is of course not optional for surfaces so we can't look "around" the brain, but the "striping" pattern is abnormal in this image (reflecting the artifact instead of the brain structure).


I've often recommended against masking the brain in working files to make artifacts and oddities easier to spot, but the difference masking makes in this case is especially striking. (And this session will not be included in any real analyses!)

Monday, January 13, 2020

working with surfaces: musings and a suggestion

First, some musings about surface datasets. I'm not convinced that surface analysis is the best strategy for human fMRI, particularly task-based fMRI. I don't dispute that the cortex is a sheet/ribbon/surface; what I'm skeptical about is whether fMRI resolution is (or could be) anywhere close to good enough to accurately keep track of the surface over hours of scanning. (More grey matter musings here.)

I also have concerns about how exactly to do the surface interpolation: MRI collects volumes, from which surfaces must be interpolated, and there are multiple algorithms for finding the surface (in the anatomical images) and performing the transformation (including of the corresponding fMRI images). None of these are remotely trivial procedures, and I have a general bias against adding complexity (and experimenter degrees of freedom) to preprocessing pipelines/analyses.

For a practical issue, quality control and interpreting results on the surface are more difficult than in the volume. I prefer omitting masking during volumetric analysis, which allows checking if the "blobs" fall in an anatomically-sensible way (e.g., are the results stronger in the grey matter or the ventricles?). This type of judgment is not possible on the surface, because it only includes grey matter. For a more extreme example, in the image below the top row is from buggy code that randomized the vertex assignments, while the second row shows results with a weak effect but correct assignments (ignore the different underlay surfaces). The two images don't look all that different on the surface, but a parallel error in the volume would make salt-and-pepper noise all over the 3d frame: something clearly "not brain-shaped" and so easier to spot as a mistake.

Even with those reservations, I often need to use surface images (and the benchmark GLM results look reasonable, so it does seem to work ok); here are some suggestions if you need to as well.

Key: surfaces are surfaces (made up of vertices) and volumes are volumes (made up of voxels), do not transform back and forth. If you want surface GLM results, conduct GLMs on vertex timeseries; for volume GLM results, conduct GLMs on voxel timeseries.

It is possible to quickly make a surface from a volume (e.g., with a couple of wb_command -volume-to-surface-mapping commands), but that is a very rough process, and should be used only for making an illustration (e.g., for a poster, though even that's not great - see below), NOT for data that will be analyzed further. Instead, generating the surfaces should be done during preprocessing (e.g., with fmriprep), so that you have a pair of gifti timeseries files (one for each hemisphere) in your chosen output space (e.g., subject space, fsaverage5) along with the preprocessed 4d NIfTI file (e.g., subject space, MNI). (Note: I have found it easiest to work with surfaces as gifti files; cifti files can be separated into a pair of giftis and a NIfTI.)

Why this emphasis on keeping surfaces as surfaces and volumes as volumes? The main reason is related to the fuzziness and complexity in creating the surfaces. In both the fmriprep and HCP surface preprocessing pipelines vertices and voxels do not correspond at the end: you can't find a voxel timeseries to match each vertex timeseries (or the reverse). Instead, the vertex timeseries are a weighted mix of voxels determined to correspond to each bit of cortical ribbon, perhaps followed by parcel-constrained (or other) smoothing. It may be theoretically possible to undo all that weighting and interpolation, but no documented method or program currently exists, as far as I know. More generally, information is lost when transforming volumes to surfaces (and the reverse), so it should be done as rarely as possible, and the "rarest" possible is once (during preprocessing).

What can you do with giftis? Pretty much everything you can do with NIfTIs (plotting, extracting stats, running GLMs, etc.), though a bit of tweaking may be needed. For example, we use afni for GLMs, and many of its functions can now take gifti images. I use R for nearly all gifti plotting and manipulation, as in this tutorial, which includes my plotting function, and turning vertex vectors into surface images.

This snippet of code illustrates working with giftis in R (calculating vertex-wise mean, standard deviation, and tsnr for QC): in.fname is the path to a gifti timeseries (such as produced by fmriprep). It is read with readGIfTI from the gifti library, then rearranged into a matrix with one row per timepoint and one column per vertex. Once in that form it's trivial to calculate vertex-wise statistics, which this code writes out in plain text; they could also be plotted directly.

      library(gifti);   # provides readGIfTI; in.fname and out.fname are the input gifti and output text file paths

      in.img <- readGIfTI(in.fname)$data;  # just save the vertex timeseries part (a list, one element per timepoint)
      in.tbl <- matrix(unlist(in.img), nrow=length(in.img), byrow=TRUE); # now timepoints in rows, vertices in columns

      out.tbl <- array(NA, c(ncol(in.tbl), 3));  # one row for each vertex
      colnames(out.tbl) <- c("sd", "mean", "tsnr");

      out.tbl[,1] <- apply(in.tbl, 2, sd);    # standard deviation of each vertex's timeseries
      out.tbl[,2] <- apply(in.tbl, 2, mean);  # mean of each vertex's timeseries
      out.tbl[,3] <- out.tbl[,2]/out.tbl[,1]; # tsnr: mean/sd
      write.table(out.tbl, out.fname);

I try to carry this "surfaces as surfaces and volumes as volumes" strategy throughout the process, to keep the distinction obvious. For example, if a parcel-average timecourse was derived from surface data, show it next to a surface illustration of the parcel. If a GLM was run in the volume, show the beta maps on volumetric slices. Many people (myself included!) have made surface versions of volume data for a final presentation because they look "prettier". This was perhaps acceptable when only volumetric analyses were possible - we knew the surface image was just an illustration. But now that entire analyses can be carried out in the surface or volume I think we should stop this habit and show the data as it actually is - volumes can be "pretty", too!

Tuesday, December 3, 2019

comparing fMRIPrep and HCP Pipelines: Resting state matrix correlation

In the previous post I described comparing resting state pipelines by asking blinded people to match functional connectivity matrices by participant. As described in that post, while no one correctly matched all matrices (i.e., pairing up the hcp-Siegel and fmriprep-xcp versions for the 13 people in DMCC13benchmark), most correctly matched many of them, reassuring me that the two pipelines are producing qualitatively similar functional connectivity matrices.

In this post I show the results of correlating the functional connectivity matrices, on the suggestion of Max Bertolero (@max_bertolero, thanks!). Specifically, I turned the lower triangles of each connectivity matrix into a vector then correlated them pairwise. (In other words, "unwrapping" each functional connectivity matrix into a vector then calculating all possible pairwise correlations between the 26 vectors.)
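
In R the "unwrapping" and correlating takes only a couple of lines; a minimal sketch, where fc.mats is a hypothetical list holding the 26 functional connectivity matrices (each 400 x 400):

      vecs <- sapply(fc.mats, function(m) { m[lower.tri(m)] });  # one column per matrix: its lower triangle as a vector
      cor.tbl <- cor(vecs);    # 26 x 26 matrix of all pairwise correlations between the vectors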

The results are in the figure below. To walk you through it: participants are along the x-axis, matrix-to-matrix correlations on the y-axis. Each person is given a unique plotting symbol, which is shown in black. The first column is for person 150423, whose plotting symbol is a circle. The correlation of 150423's hcp-Siegel and fmriprep-xcp matrices is about 0.75 (location of the black circle). The four colored sets of symbols are the correlation of 150423's hcp or fmriprep matrix with everyone else's hcp or fmriprep matrices, as listed along the top. For example, the bright blue column is listed as "hcp w/others' fp", and the highest symbol is a plus. Looking at the black symbols, the plus sign is 171330, so this means that the correlation of 150423's hcp-Siegel matrix with 171330's fmriprep-xcp matrix is about 0.63.
There are several interesting parts of the figure. First, the dark blue and dark red points tend to be a bit higher than the bright blue and pink: matrices derived from the same pipeline tend to have higher correlation than those from different pipelines, even from different people.

Next, most people's fmriprep and hcp matrices are more correlated than either are to anyone else's - the black symbols are usually above all of the colored ones. This is not the case for 3 people: 178950, DMCC8033964, and DMCC9478705. 203418 is also interesting, since their self-correlation is a bit lower than most, but still higher than the correlation with all the others.

Finally, I compared the impressions from this graph with those from the blinded matching (summarized in the table below), and they are fairly consistent, particularly for the hardest-to-match pairs: no one correctly matched 178950's or DMCC9478705's matrices, two of the people just listed as having their self-correlations in the midst of the others. Only one person correctly matched the third (DMCC8033964).

While the hardest-to-match pairs are easy to see in this graph, the ones most often correctly matched are more difficult to predict. For example, from the height of the self-correlation and distance to the other points I'd predict that 171330 and 393550 would be relatively hard to pair, but they were the most often correctly matched. 203418 has the fourth-lowest self-correlation, but was correctly matched by more than half the testers.

Note: the "correct matches" column is how many of the 9 people doing the blinded matching test correctly matched the person's matrices.
subject ID      correct pairing (hcp, fmriprep)   correlation of this person's matrices   correct matches
150423          h,q                               0.7518                                  4
155938          v,i                               0.7256                                  6
171330          l,y                               0.7643                                  4
178950          b,r                               0.6069                                  0
203418          x,d                               0.651                                   5
346945          w,j                               0.7553                                  4
393550          u,n                               0.7772                                  7
601127          c,m                               0.7152                                  1
849971          k,t                               0.7269                                  1
DMCC5775387     s,f                               0.767                                   6
DMCC6705371     a,p                               0.733                                   6
DMCC8033964     g,z                               0.5938                                  1
DMCC9478705     o,e                               0.5428                                  0

Together, both the blinded matching and correlation results reassure me that the two pipelines are giving qualitatively similar matrices, but clearly not identical, nor equally similar for all of the test people.


UPDATE 4 December 2019: Checking further, missing runs have a sensible impact on these results. These functional connectivity matrices were not calculated from the DMCC baseline session only (which are the runs included in DMCC13benchmark), but rather by concatenating all resting state runs for the person in the scanning wave. Concatenating gives 30 minutes of resting state total when no runs are missing and no frames are censored.

Two subjects had missing resting state runs: DMCC8033964 is missing one (25 minutes total instead of 30), and DMCC9478705 is missing two (20 minutes). These are two of the lowest-correlation people, which is consistent with the expectation that longer scans make more stable functional connectivity matrices. It looks like the two pipelines diverge more when there are fewer input images, which isn't too surprising. 178950 didn't have missing runs but perhaps had more censoring or something; I don't immediately know how to extract those numbers from the output, but expect the pipelines to vary in many details.

Wednesday, November 6, 2019

comparing fMRIPrep and HCP Pipelines: Resting state blinded matching

The previous post has a bit of a literature review and my initial thoughts on how to compare resting state preprocessing pipeline output. We ended up trying something a bit different, which I will describe here. I actually think this turned out fairly well for our purposes, and has given me sufficient confidence that the two pipelines have qualitatively similar output.

First, the question. As described in previous (task-centric) posts, we began preprocessing the DMCC dataset with the HCP pipelines, followed by code adapted from Josh Siegel (2016) for the resting state runs. We are now switching to fmriprep for preprocessing, followed by the xcpEngine (fc-36p) for resting state runs. [xcp only supports volumes for now, so we used the HCP pipeline volumes as well.] Our fundamental concern is the impact of this processing switch: do the two pipelines give the "same" results? We could design clear benchmark tests for the task runs, but how to test the resting state analysis output is much less clear, so I settled on a qualitative test.

We're using the same DMCC13benchmark dataset as in the task comparisons, which has 13 subjects. Processing each with both pipelines gives 26 total functional connectivity matrices. Plotting the two matrices side-by-side for each person (file linked below) makes similarities easy to spot: it looks like the two pipelines made similar matrices. But we are very good at spotting similarities in paired images; are they actually similar?

The test: would blinded observers match up the 26 matrices by person (pairing the two matrices for each subject) or by something else (such as pipeline)? If observers can match most of the matrices by person, we have reason to think that the two pipelines are really producing similar output. (Side note: these sorts of tests are sometimes called Visual Statistical Inference and can work well for hard-to-quantify differences.)

For details, here's a functional connectivity matrix for one of the DMCC13benchmark subjects. There are 400 rows and columns in the matrix, since these were run within the Schaefer (2018) 400-parcels 7-network ordering parcellation. This parcellation spans both hemispheres, the first 200 parcels on the left and the second 200 on the right. Both hemispheres are in the matrix figures (labeled and separated by black lines). The number of parcels in each network is not perfectly matched between hemispheres, so only the quadrants along the diagonal have square networks. The dotted lines separate the 7 networks: Visual, SomMot, DorsAttn, SalVentAttn, Limbic, Cont, Default (ordering and labels from Schaefer (2018)). Fisher r-to-z transformed correlations are plotted, with the color range from -1.9 (darkest blue) to 1.9 (darkest red); 0 is white.
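
The matrices themselves are simple to compute once you have parcel-average timecourses; a minimal sketch in R (not the actual pipeline code), where parc.tbl is a hypothetical frames x 400 matrix of parcel-average timecourses for one person and pipeline:

      fc.mat <- atanh(cor(parc.tbl));   # correlate the parcel timecourses, then Fisher r-to-z transform
      diag(fc.mat) <- 0;                # atanh(1) is infinite, so zero the diagonal for plotting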
 
Interested in trying to match the matrices yourself? This pdf has the matrices in a random order, labeled with letters. I encourage you to print the pdf, cut the matrices apart, then try to pair them into the 13 people (we started our tests with the matrices in this alphabetical order). I didn't use a set instructional script or time limit, but encouraged people not to spend more than 15-20 minutes or so, and explained the aim similarly to this post. If you do try it, please note in the comments/email me your pairings or number correct (as well as your strategy and previous familiarity with these sorts of matrices) and I'll update the tally. The answers are listed on the last page of this pdf and below the fold on this post.

Several of us had looked at the matrices for each person side-by-side before trying the blind matching, and on that basis thought that it would be much easier to pair the people than it actually was. Matching wasn't impossible, though: one lab member successfully matched 9 of the 13 people (the most so far). At the low end, two other lab members only successfully matched 2 of the 13 people; three members matched 4, one 5, one 7, and one 8.

That it was possible for anyone to match the matrices for most participants reassures me that they are indeed more similar by person than pipeline: the two pipelines are producing qualitatively similar matrices when given the same input data. For our purposes, I find this sufficient: we were not attempting to determine if one version is "better" than the other, just if they are comparable.

What do you think? Agree that this "test" suggests similarity, or would you like to see something else? pdfs with the matrices blinded and not are linked here; let me know if you're interested in the underlying numbers or plotting code and I can share that as well; unpreprocessed images are already on OpenNeuro as DMCC13benchmark.

Assigned pairings (accurate and not) after the jump.

UPDATE 3 December 2019: Correlating the functional connectivity matrices is now described in a separate post.