Wednesday, November 6, 2019

comparing fMRIPrep and HCP Pipelines: Resting state blinded matching

The previous post includes a short literature review and my initial thoughts on how to compare resting state preprocessing pipeline output. We ended up trying something a bit different, which I will describe here. I think this turned out fairly well for our purposes, and it has given me sufficient confidence that the two pipelines have qualitatively similar output.

First, the question. As described in previous (task-centric) posts, we began preprocessing the DMCC dataset with the HCP pipelines, followed by code adapted from Josh Siegel (2016) for the resting state runs. We are now switching to fMRIPrep for preprocessing, followed by xcpEngine (fc-36p) for the resting state runs. [xcpEngine only supports volumes for now, so we used the HCP pipeline volumes as well.] Our fundamental concern is the impact of this processing switch: do the two pipelines give the "same" results? We could design clear benchmark tests for the task runs, but how to test the resting state analysis output is much less clear, so I settled on a qualitative test.

We're using the same DMCC13benchmark dataset as in the task comparisons, which has 13 subjects. Processing each with both pipelines gives 26 total functional connectivity matrices. Plotting the two matrices side-by-side for each person (file linked below) makes similarities easy to spot: it looks like the two pipelines made similar matrices. But humans are very good at spotting similarities in paired images; are the matrices actually similar?

The test: would blinded observers match up the 26 matrices by person (pairing the two matrices for each subject) or by something else (such as pipeline)? If observers can match most of the matrices by person, we have reason to think that the two pipelines are really producing similar output. (Side note: these sorts of tests are sometimes called Visual Statistical Inference and can work well for hard-to-quantify differences.)

For details, here's a functional connectivity matrix for one of the DMCC13benchmark subjects. The matrix has 400 rows and columns, since the analyses used the Schaefer (2018) 400-parcel, 7-network-order parcellation. The parcellation spans both hemispheres: the first 200 parcels are on the left, the second 200 on the right. Both hemispheres are in the matrix figures (labeled and separated by black lines). The number of parcels in each network is not perfectly matched between hemispheres, so only the quadrants along the diagonal have square networks. The dotted lines separate the 7 networks: Visual, SomMot, DorsAttn, SalVentAttn, Limbic, Cont, Default (ordering and labels from Schaefer (2018)). Fisher r-to-z transformed correlations are plotted, with the color range from -1.9 (darkest blue) to 1.9 (darkest red); 0 is white.
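For readers unfamiliar with these matrices, here is a minimal sketch (not our actual analysis code) of how such a matrix is typically built: given parcel-average time series, correlate every parcel with every other, then apply the Fisher r-to-z transform. The array sizes and random data below are placeholders standing in for real parcel time series.

```python
import numpy as np

def fc_matrix(ts):
    """Fisher r-to-z functional connectivity from a (timepoints x parcels) array."""
    r = np.corrcoef(ts, rowvar=False)  # parcel-by-parcel Pearson correlations
    np.fill_diagonal(r, 0.0)           # zero the diagonal so arctanh stays finite
    return np.arctanh(r)               # Fisher r-to-z transform

# placeholder data: e.g., 200 timepoints for the 400 Schaefer parcels
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 400))
z = fc_matrix(ts)                      # symmetric 400 x 400 matrix
```

For plotting, the resulting `z` values were mapped to a diverging colormap clipped at -1.9 and 1.9, with 0 as white.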
 
Interested in trying to match the matrices yourself? This pdf has the matrices in a random order, labeled with letters. I encourage you to print the pdf, cut the matrices apart, then try to pair them into the 13 people (we started our tests with the matrices in this alphabetical order). I didn't use a set instructional script or time limit, but encouraged people not to spend more than 15-20 minutes or so, and explained the aim similarly to this post. If you do try it, please note in the comments/email me your pairings or number correct (as well as your strategy and previous familiarity with these sorts of matrices) and I'll update the tally. The answers are listed on the last page of this pdf and below the fold on this post.

Several of us had looked at the matrices for each person side-by-side before trying the blind matching, and on that basis thought that it would be much easier to pair the people than it actually was. Matching wasn't impossible, though: one lab member successfully matched 9 of the 13 people (the most so far). At the low end, two other lab members only successfully matched 2 of the 13 people; three members matched 4, one 5, one 7, and one 8.

That it was possible for anyone to match the matrices for most participants reassures me that they are indeed more similar by person than pipeline: the two pipelines are producing qualitatively similar matrices when given the same input data. For our purposes, I find this sufficient: we were not attempting to determine if one version is "better" than the other, just if they are comparable.

What do you think? Agree that this "test" suggests similarity, or would you like to see something else? pdfs with the matrices, blinded and unblinded, are linked here; let me know if you're interested in the underlying numbers or plotting code and I can share them as well; the unpreprocessed images are already on OpenNeuro as DMCC13benchmark.

Assigned pairings (accurate and not) after the jump.

UPDATE 3 December 2019: Correlating the functional connectivity matrices is now described in a separate post.

UPDATE 4 January 2021: Corrected DMCC13benchmark openneuro links.

correct pairings: h,q;  v,i;  l,y;  b,r;  x,d;  w,j;  u,n;  c,m;  k,t;  s,f;  a,p;  g,z;  o,e
which correspond to DMCC13 ids: 150423 155938 171330 178950 203418 346945 393550 601127 849971 DMCC5775387 DMCC6705371 DMCC8033964 DMCC9478705

pairings assigned by blinded testers (correct in green text):

C:  e,n; u,v; g,t; f,l; m,z; s,b; c,h; x,d; i,j; a,p; o,k; q,y; w,r   - 2 correct
D: a,i; k,x; j,l; f,s; p,b; c,z; w,g; q,r; h,y; u,n; e,m; t,o; v,d    - 2 correct
B: a,y;  b,w;  c,t;  d,e;  f,s;  g,z;  h,r;  i,v;  j,m;  k,o;  l,p;  n,u;  q,x   - 4 correct
Al:  a,p;  b,g;  c,x;  d,i;  e,m;  f,v;  h,q;  j,r;  k,o;  l,y;  n,u;  s,w;  t,z    - 4 correct
J:  f,x;  k,d;  e,c;  n,u;  v,i;  o,z;  t,y;  m,r;  q,h;  p,a;  s,j;  b,g;  l,w    - 4 correct
Ax:  d,x;  b,l;  j,w;  g,y;  h,r;  a,q;  s,f;  c,t;  e,k;  u,n;  z,m;  i,v;  o,p    - 5 correct
Me: s,f;  l,y;  j,w;  h,q;  i,v;  b,g;  m,t;  c,r;   o,u;  a,p;  d,x;  n,z;  e,k     - 7 correct
K:  a,p;  b,g;  c,k;  d,x;  e,r;  f,s;  h,q;  i,v;  j,w;  l,y;  m,t;  n,u;  o,z    - 8 correct
Ma:  b,o;  e,z;  a,p;  k,t;  n,u;  i,v;  g,q;  d,x;  f,s;  w,j;  c,m;  y,l;  r,h   - 9 correct

It looks like no one matched b,r or o,e. The most common correct matches were u,n; v,i;  s,f;  x,d;  and a,p.
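The tallies above can be double-checked with a small script (a sketch for illustration, not anything we actually used): treat each pairing as an unordered pair and count how many appear in the answer key. Tester K's pairings are used as the example input.

```python
# Answer key: the 13 correct pairings from above, as unordered pairs.
correct = "h,q v,i l,y b,r x,d w,j u,n c,m k,t s,f a,p g,z o,e"
key = {frozenset(p.split(",")) for p in correct.split()}

def n_correct(pairings):
    """pairings: a string like 'a,p; b,g; ...'; order within a pair doesn't matter."""
    guesses = {frozenset(p.strip().split(",")) for p in pairings.split(";")}
    return len(key & guesses)

# Tester K's pairings from the table above:
print(n_correct("a,p; b,g; c,k; d,x; e,r; f,s; h,q; i,v; j,w; l,y; m,t; n,u; o,z"))  # 8
```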


1 comment:

  1. I think this is a really interesting experiment, thank you very much for this post!
