Thursday, May 9, 2013

Schapiro 2013: "Neural representations of events arise from temporal community structure"

While not a methods-focused paper, this intriguing and well-written paper includes an interesting application of searchlight analysis, which I'll explore a bit here. I'm only going to describe some of the searchlight-related analyses; you really should take a look at the full paper.

First, though: they used cubical searchlights! I have an informal collection of searchlight shapes, and suspect that the authors' use of cubical searchlights comes from Francisco's legacy, though I couldn't find a mention of which software/scripts they used for the MVPA. (I don't mean to imply cubes are bad, just that they're a less-common choice.)

a bit of background

Here's a little bit about the design relevant to the searchlight analysis; check the paper for the theoretical motivation and protocols. Briefly, the design is summarized in their Figure 1: subjects watched long sequences of images (c). There were 15 images, shown not in random order but in orders chosen by either random walks or Hamiltonian paths on the network in (a). I superimposed unique numbers on the nodes to make them easier to refer to later; my node "1" was not necessarily associated with image "1" (though it could have been).

Subjects didn't see the graph structure (a), just long (1,400-image) sequences of images (c). When each image appeared they indicated whether it was rotated from its 'proper' orientation. The experiment wasn't about the orientation, however, but about the sequences: would subjects learn the underlying community structure?

The searchlight analysis was not a classification but rather something quite similar to RSA (representational similarity analysis), though the authors didn't mention RSA. In their words,
"Thus, another way to test our prediction that items in the same community are represented more similarly is to examine whether the multivoxel response patterns evoked by each item come to be clustered by community. We examined these patterns over local searchlights throughout the entire brain, using Pearson correlation to determine whether activation patterns were more similar for pairs of items from the same community than for pairs from different communities."
Using Figure 1, the analysis asks whether a green node (e.g. node 2) is more similar to other green nodes than to purple or orange nodes. It's not just a matter of taking all of the images and sorting them by node color, though; there are quite a few complications.

setting up the searchlight analysis

The fMRI session had 5 runs, each of which had 160 image presentations, during which the image orders alternated between random walks and Hamiltonian paths. They only wanted to include the Hamiltonian paths in the searchlight analysis (for theoretical reasons, see the paper), which I think works out to around 5 eligible path-traversals per run (160 presentations / 15 images per traversal ≈ 10.7 traversals, about half of which were Hamiltonian, so ≈ 5); each node/image would thus have about 5 presentations per run. They didn't include images appearing at the beginning of a path-traversal, so I think there would be somewhat fewer than 25 total eligible presentations of each image to include in the analyses.
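Here's my back-of-the-envelope version of those counts; the presentation and run numbers are from the paper, but the rounding down to whole traversals is my assumption:

```python
# Rough counts for the Hamiltonian-path presentations (my reading of the
# design; rounding down to whole traversals is my assumption).
presentations_per_run = 160
items_per_traversal = 15   # one pass through all 15 images
runs = 5

traversals_per_run = presentations_per_run / items_per_traversal  # ~10.7
hamiltonian_per_run = int(traversals_per_run // 2)  # alternating orders -> ~5
presentations_per_item = hamiltonian_per_run * runs  # ~25, before excluding
                                                     # path-initial images
print(hamiltonian_per_run, presentations_per_item)
```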

Hamiltonian paths in the graph mean that not all node orderings are possible: nodes of the same color will necessarily be visited sequentially (with the starting node's color potentially split between the beginning and end of the path). For example, one path follows the nodes in the order of the numbers I gave them above: starting at 1 and ending at 15. Another path could be (1:5, 6,8,7,9,10, 11:15). But (1:5, 6,8,10,7,9, 11:15) is NOT possible: we'd have to go through 10 again to get out of the purple nodes, and Hamiltonian paths only visit each node once. Rephrased, once we reach one of the light-colored boundary nodes (1,5,6,10,11,15) we need to visit all the dark-colored nodes of that color before visiting the other boundary node of the same color.
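A little sketch can make the path constraint concrete. This assumes the usual reading of their Figure 1a under my node numbering: within each community of five, all pairs are connected except the two boundary nodes, and each boundary node connects to one boundary node of the adjacent community. The specific edge lists are my reconstruction, not from the paper.

```python
from itertools import combinations

# My node numbering: communities {1-5}, {6-10}, {11-15}, with light-colored
# boundary nodes 1,5 / 6,10 / 11,15. Edge structure is my reconstruction.
communities = [set(range(1, 6)), set(range(6, 11)), set(range(11, 16))]
boundary_pairs = [(1, 5), (6, 10), (11, 15)]   # NOT connected within a community
bridges = [(5, 6), (10, 11), (15, 1)]          # connect adjacent communities

edges = set()
for comm in communities:
    edges.update(frozenset(p) for p in combinations(comm, 2))
edges.difference_update(frozenset(p) for p in boundary_pairs)
edges.update(frozenset(p) for p in bridges)

def is_hamiltonian_path(order):
    """True if `order` visits all 15 nodes exactly once, moving only along edges."""
    return (sorted(order) == list(range(1, 16)) and
            all(frozenset(step) in edges for step in zip(order, order[1:])))

print(is_hamiltonian_path(list(range(1, 16))))                             # valid
print(is_hamiltonian_path([1,2,3,4,5, 6,8,7,9,10, 11,12,13,14,15]))        # valid
print(is_hamiltonian_path([1,2,3,4,5, 6,8,10,7,9, 11,12,13,14,15]))        # invalid: no 9-11 edge
```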

This linking of order and group makes the searchlight analysis more difficult: they only want to capture same-cluster/different-cluster similarity differences due to cluster membership, not differences arising because the different-cluster images appeared separated by more time than the same-cluster images (since fMRI volumes collected closer together in time will generally be more similar to each other than fMRI volumes collected farther apart in time). They tried to compensate by calculating similarities only for pairs of images within each path that were separated by the same number of steps (but see question 1 below).

For example, there are three possible step-length-1 pairs for node 1: 15-1-2; 15-1-3; 15-1-4. The dark-colored nodes (2,3,4; 7,8,9; 12,13,14) can't be the "center" for any step-length-1 pairs, since it takes at least 2 steps to reach the next cluster. Every node could be the "center" for a step-length-2 pair, but there are many more valid pairings for the dark-colored nodes than for the light-colored ones.
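The step-matched triplets (center item, same-cluster item, different-cluster item) can be enumerated for a given path. This sketch uses my node numbering, and assumes consecutive path traversals are concatenated end-to-end (so the item before node 1 in a 1-to-15 path can be node 15 from the previous traversal, as in the 15-1-2 example above):

```python
def community(node):
    return (node - 1) // 5   # communities {1-5}, {6-10}, {11-15} in my numbering

def matched_triplets(order, k):
    """For each center item, pair the items k steps back and k steps forward
    when exactly one of them shares the center's community. Indexing is
    circular, assuming consecutive traversals are concatenated end-to-end."""
    n = len(order)
    out = []
    for i in range(n):
        c, a, b = order[i], order[(i - k) % n], order[(i + k) % n]
        same = [x for x in (a, b) if community(x) == community(c)]
        diff = [x for x in (a, b) if community(x) != community(c)]
        if len(same) == 1 and len(diff) == 1:
            out.append((c, same[0], diff[0]))
    return out

# In the path 1,2,...,15 only the boundary nodes can be step-length-1 centers:
print(matched_triplets(list(range(1, 16)), 1))
```

Running this on the 1-to-15 path returns triplets only for the six light-colored boundary nodes, matching the observation above that dark-colored nodes can't be step-length-1 centers.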

The authors say that "Across these steps, each item participated in exactly four within-cluster pair correlations and exactly four across-cluster pair correlations," but it's not clear to me whether this count means one correlation at each step-length or that only four pairings went into each average. It seems like there would be many more than four possible pairings at each step-length.

Once the pairings for each person have been defined, calculating the statistic for each pairing in each searchlight would be relatively straightforward: get the three 27-voxel vectors corresponding to the item presentation, its same-cluster paired presentation, and its different-cluster paired presentation. Then calculate the correlation between the item and each of the other two, Fisher-transform both correlations, and subtract. We'd then have a set of differences for each searchlight (one per pairing), which are averaged, with the average assigned to the center voxel.
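That per-searchlight calculation might look something like this sketch (random numbers stand in for the real 27-voxel activation patterns; the function names are mine):

```python
import numpy as np

def pairing_difference(item, same_item, diff_item):
    """Fisher-transformed correlation difference for one pairing: how much
    more similar the item's pattern is to its same-cluster partner than to
    its different-cluster partner, within one 27-voxel (3x3x3) searchlight."""
    r_same = np.corrcoef(item, same_item)[0, 1]
    r_diff = np.corrcoef(item, diff_item)[0, 1]
    return np.arctanh(r_same) - np.arctanh(r_diff)   # arctanh = Fisher z

# Toy illustration: random "patterns" in place of real activation vectors.
rng = np.random.default_rng(4)
pairings = [tuple(rng.standard_normal(27) for _ in range(3)) for _ in range(8)]

# Average over the pairings; this is the value assigned to the center voxel.
center_value = np.mean([pairing_difference(*p) for p in pairings])
print(center_value)
```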

I think this is an interesting analysis, and hopefully if you've read this far you found my description useful. I believe my description is accurate, but I had to guess in a few places, and still have a few questions:
  1. Were the item pairs matched for time as well as number of steps? In other words, with the interstimulus interval jittering two steps could be as few as 3 seconds (1img + 1iti + 1img + 1iti + 1img) or as many as 11 seconds (1img + 5iti + 1img + 5iti + 1img).
  2. How many correlation differences went into each average? Were these counts equal across step-lengths and across subjects?
  3. How was the group analysis done? The authors describe using fsl's randomise on the difference maps; I guess a voxel-wise one-sample t-test for difference != 0? What was permuted?
I'll gladly post any answers, comments, or clarifications I receive.

UPDATE, 24 May: Comments from Anna Schapiro are here. Schapiro, A., Rogers, T., Cordova, N., Turk-Browne, N., & Botvinick, M. (2013). Neural representations of events arise from temporal community structure. Nature Neuroscience, 16(4), 486-492. DOI: 10.1038/nn.3331
