
Thursday, May 9, 2013

Schapiro 2013: "Neural representations of events arise from temporal community structure"

While not a methods-focused paper, this intriguing and well-written paper includes an interesting application of searchlight analysis, which I'll explore a bit here. I'm only going to describe the searchlight-related analyses; you really should take a look at the full paper.

First, though, they used cubical searchlights! I have an informal collection of searchlight shapes, and suspect that the cubical searchlights are a legacy of Francisco's, though I couldn't find a mention of which software/scripts they used for the MVPA. (I don't mean to imply cubes are bad, just that they're a less common choice.)

a bit of background

Here's a bit about the design relevant to the searchlight analysis; check the paper for the theoretical motivation and protocols. Briefly, the design is summarized in their Figure 1: subjects watched long sequences of images (c). There were 15 images, shown not in random order but rather in orders chosen by either random walks or Hamiltonian paths on the network in (a). I superimposed unique numbers on the nodes to make them easier to refer to later; my node "1" was not necessarily associated with image "1" (though it could have been).

Subjects didn't see the graph structure (a), just long (1,400-image) sequences of images (c). When each image appeared, they indicated whether it was rotated from its 'proper' orientation. The experiment wasn't about the orientation, however, but rather about the sequences: would subjects learn the underlying community structure?


The searchlight analysis was not a classification but rather something quite similar to RSA (representational similarity analysis), though they didn't mention RSA. In their words,
"Thus, another way to test our prediction that items in the same community are represented more similarly is to examine whether the multivoxel response patterns evoked by each item come to be clustered by community. We examined these patterns over local searchlights throughout the entire brain, using Pearson correlation to determine whether activation patterns were more similar for pairs of items from the same community than for pairs from different communities."
In terms of Figure 1, the analysis is asking whether a green node (e.g. number 2) is more similar to other green nodes than to purple or orange nodes. It's not just a matter of taking all of the images and sorting them by node color, though - there are quite a few complications.

setting up the searchlight analysis

The fMRI session had 5 runs, each of which had 160 image presentations, during which the image orders alternated between random walks and Hamiltonian paths. They only wanted to include the Hamiltonian paths in the searchlight analysis (for theoretical reasons, see the paper), which I think works out to around 5 eligible path-traversals per run (160/15 is about 10.7 traversals per run, about half of which were Hamiltonian, so about 5); each node/image would thus have about 5 presentations per run. They didn't include images appearing at the beginning of a path-traversal, so I think there would be something less than 25 usable presentations of each image (5 per run x 5 runs) to include in the analyses.

Hamiltonian paths in the graph mean that not all node orderings are possible: nodes of the same color will necessarily be visited sequentially (with the starting point's color potentially split between the beginning and end of the path). For example, one path is to follow the nodes in the order of the numbers I gave them above, starting at 1 and ending at 15. Another path could be (1:5, 6,8,7,9,10, 11:15). But (1:5, 6,8,10,7,9, 11:15) is NOT possible - we'd have to go through 10 again to get out of the purple nodes, and Hamiltonian paths only visit each node once. Rephrased, once we reach one of the light-colored boundary nodes (1,5,6,10,11,15) we need to visit all the dark-colored nodes of that color before visiting the other boundary node of the same color.

This linking of order and group makes the searchlight analysis more difficult: they only want to capture same-cluster/different-cluster similarity differences due to cluster membership, not differences due to the different-cluster images simply appearing farther apart in time than the same-cluster images (since fMRI volumes collected closer together in time will generally be more similar to each other than volumes collected farther apart in time). They tried to compensate by calculating similarities for pairs of images within each path that were separated by the same number of steps (but see question 1 below).

For example, there are three possible step-length-1 pairs for node 1: 15-1-2; 15-1-3; 15-1-4. The dark-colored nodes (2,3,4; 7,8,9; 12,13,14) can't be the "center" for any step-length-1 pairs since it takes at least 2 steps to reach the next cluster. Every node could be the "center" for a step-length-2 pair, but there are many more valid pairings for the dark-colored nodes than the light-colored ones.
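
To make the counting concrete, here's a small R sketch of the step-length-1 pairings. The adjacency matrix is just my reading of the graph in (a) with my node numbering (not anything from the authors' code), so treat it as illustration only:

  community <- rep(1:3, each=5);     # communities: nodes 1-5, 6-10, 11-15
  adj <- matrix(FALSE, 15, 15);      # graph adjacency matrix
  for (cc in 1:3) { adj[community == cc, community == cc] <- TRUE; }   # fully connect each community ...
  diag(adj) <- FALSE;                # ... except self-links,
  adj[1,5] <- adj[5,1] <- adj[6,10] <- adj[10,6] <- adj[11,15] <- adj[15,11] <- FALSE;  # and the two boundary nodes within each community
  adj[5,6] <- adj[6,5] <- adj[10,11] <- adj[11,10] <- adj[15,1] <- adj[1,15] <- TRUE;   # boundary-to-boundary links between communities

  # step-length-1 pairings for each center node: one neighbor in the same community
  # paired with one neighbor in a different community (e.g. 15-1-2 for node 1).
  for (ctr in 1:15) {
    nbrs <- which(adj[ctr, ]);
    n.same <- sum(community[nbrs] == community[ctr]);
    n.diff <- sum(community[nbrs] != community[ctr]);
    cat("node", ctr, ":", n.same * n.diff, "step-length-1 pairings\n");
  }
  # prints 3 for each light-colored boundary node and 0 for each dark-colored node.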

The authors say that "Across these steps, each item participated in exactly four within-cluster pair correlations and exactly four across-cluster pair correlations," but it's not clear to me whether this count means one correlation at each step-length or that only four pairings went into each average. It seems like there would be many more than four possible pairings at each step-length.

Once the pairings for each person have been defined, calculating the statistic for each pairing in each searchlight would be relatively straightforward: get the three 27-voxel vectors corresponding to the item presentation, its same-cluster item presentation, and its different-cluster item presentation. Then calculate the correlations between the item and its same-cluster and different-cluster items, Fisher-transform both, and subtract. We'd then have a set of differences for each searchlight (one for each of the pairings), which are averaged, with the average assigned to the center voxel.
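
Here's a minimal R sketch of that per-pairing statistic as I understand it (my guess at the computation, not the authors' code); the three vectors would be the 27-voxel searchlight patterns for one pairing:

  # Fisher-transformed correlation difference for one pairing in one searchlight
  pairing.difference <- function(ctr.vec, same.vec, diff.vec) {
    atanh(cor(ctr.vec, same.vec)) - atanh(cor(ctr.vec, diff.vec));
  }

  # toy example: three random "patterns" from a 27-voxel cubical searchlight
  set.seed(42);
  pairing.difference(rnorm(27), rnorm(27), rnorm(27));

  # in the real analysis this would be calculated for every pairing in a person,
  # then the differences averaged and the mean assigned to the center voxel.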

I think this is an interesting analysis, and hopefully if you've read this far you found my description useful. I think my description is accurate, but I had to guess in a few places, and still have a few questions:
  1. Were the item pairs matched for time as well as number of steps? In other words, with the interstimulus interval jittering, two steps could span as little as 3 seconds (1img + 1iti + 1img + 1iti + 1img) or as much as 11 seconds (1img + 5iti + 1img + 5iti + 1img).
  2. How many correlation differences went into each average? Were these counts equal across step-lengths and across subjects?
  3. How was the group analysis done? The authors describe using FSL's randomise on the difference maps; I guess a voxel-wise one-sample t-test for difference != 0? What was permuted?
I'll gladly post any answers, comments, or clarifications I receive.

UPDATE, 24 May: Comments from Anna Schapiro are here.

Schapiro, A., Rogers, T., Cordova, N., Turk-Browne, N., & Botvinick, M. (2013). Neural representations of events arise from temporal community structure. Nature Neuroscience, 16(4), 486-492. DOI: 10.1038/nn.3331

Thursday, October 25, 2012

searchlight shapes: Stelzer

This is the first of what will likely be a series of posts on a paper in press at NeuroImage:

Stelzer, J., et al., Statistical inference and multiple testing correction in classification-based multi-voxel pattern analysis (MVPA). NeuroImage (2012), http://dx.doi.org/10.1016/j.neuroimage.2012.09.063

There is a lot in this paper, touching on several of my favorite topics (permutation testing, using the binomial, searchlight analysis, Malin's 'random' searchlights).
But in this post I'll just highlight the searchlight shapes used in the paper. They're given in this sentence: "The searchlight volumes to these diameters were 19 (D=3), 57 (D=5), 171 (D=7), 365 (D=9), and 691 (D=11) voxels, respectively." The authors don't list the software they used; I suspect it was custom Matlab code.

Here I'll translate the first few sizes to match the convention I used in the other searchlight shape posts:
diameter 3 (radius 1): 18 surrounding voxels. This looks like my 'edges or faces touch' searchlight.
diameter 5 (radius 2): 56 surrounding voxels. This has more voxels than the 'default' searchlight, but fewer than my two-voxel radius searchlight. Squinting at Figure 1 in the text, I came up with the shape below.


Here's the searchlight from Figure 1, and my blown-up version for a two-voxel radius searchlight.
It looks like they added plus signs to the outer faces of a three-by-three-by-three cube, which gives the right count (27 + 6 x 5 = 57 voxels). This doesn't follow any of my iterative rules, but perhaps it would result from fitting a particular sphere-type rule.
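
As a quick sanity check on that guess (my few lines of R, nothing from the paper), a three-by-three-by-three cube with a 5-voxel plus centered on each of its six faces does come to 57 voxels, matching the D=5 size quoted above:

  offs <- abs(as.matrix(expand.grid(x=-2:2, y=-2:2, z=-2:2)));   # absolute offsets in a 5x5x5 box
  in.cube <- apply(offs, 1, max) <= 1;                           # the 3x3x3 cube: 27 voxels
  on.plus <- apply(offs, 1, function(v) { sum(v == 2) == 1 & sum(v >= 1) <= 2 });   # a plus on each face: 6*5 = 30 voxels
  sum(in.cube | on.plus);   # 57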

Monday, October 1, 2012

searchlight shapes: BrainVoyager

Rainer Goebel kindly provided a description and images of the searchlight creation used in BrainVoyager:

"BrainVoyager uses the "sphere" approach (as in our original PNAS paper Kriegeskorte, Goebel, Bandettini 2006), i.e. voxels are considered in a cube neighborhood defined by the radius and in this neighborhood only those voxels are included in the "sphere" that have an Euclidean distance from the center of less than (or equal to) the radius of the sphere. From your blog, I think the resulting shapes in BrainVoyager are the same as for pyMVPA.

Note, however, that in BrainVoyager the radius is a float value (not an integer) and this allows to create "spheres" where the center layer has a single element on each side at cardinal axes (e.g. with radius 1, 2, 3, 4... voxels, see snapshot below) but also "compact" spheres as you seem to have used by setting the radius, e.g. to 1.6, 1.8, 2.6, 2.8, 3.6...). "

At right is an image Rainer generated showing radius 1.0 and 2.0 searchlights created in BrainVoyager.

I am intrigued by Rainer's comment that using non-integer radii will make more "compact" spheres; the non-integer radii also underscore the need to be explicit in describing searchlight shape in methods sections. It appears that pyMVPA requires integer radii, but the Princeton MVPA Toolbox does not.
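
To illustrate the rule Rainer describes, here's a little R sketch (mine, not BrainVoyager code): take the cube neighborhood defined by the radius, then keep the voxels whose Euclidean distance from the center is at most the radius. Integer and non-integer radii then give the two families of shapes he mentions:

  sphere.size <- function(radius) {
    span <- floor(radius);
    offs <- expand.grid(x=-span:span, y=-span:span, z=-span:span);
    sum(sqrt(offs$x^2 + offs$y^2 + offs$z^2) <= radius);   # voxels in the "sphere", center included
  }
  sapply(c(1, 1.6, 2, 2.6), sphere.size);   # 7, 19, 33, 81 voxels
  # radius 1 keeps just the center and its 6 face neighbors, while radius 1.6 gives
  # the "compact" 19-voxel version (the center plus its 18 edge-or-face neighbors).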

Monday, September 24, 2012

searchlight shapes: searchmight

The searchmight toolbox was developed by Francisco Pereira and the Botvinick Lab and introduced in "Information mapping with pattern classifiers: A comparative study" (full citation below). The toolbox is specialized for searchlight analysis, streamlining and speeding up the process of trying different classification algorithms on the same datasets.

Francisco said that, by default, the toolbox uses cubical searchlights (other shapes can be manually defined). In the image of different one-voxel radius searchlights, this is the one that I labeled "edge, face, or corner": 26 surrounding voxels for a one-voxel radius searchlight. Larger radii are also cubes, which presumably makes a radius-r searchlight (2r+1)^3 voxels in total: 27 at radius 1, 125 at radius 2, and so on.


Pereira, F., & Botvinick, M. (2011). Information mapping with pattern classifiers: A comparative study. NeuroImage. DOI: 10.1016/j.neuroimage.2010.05.026

Wednesday, September 19, 2012

pyMVPA searchlight shape: solved

Michael Hanke posted further description of the pyMVPA searchlight: it is the same as the Princeton MVPA toolbox searchlight I described earlier.

As Michael points out, it is possible to define other searchlight shapes in pyMVPA; this is the default for a sphere, not the sole option.

searchlight shapes: a collection

The topic of searchlight shapes is more complex than I'd originally expected. I'd like to compile a collection of what people are using. I added a "searchlight shapes" label to the relevant posts: clicking that label should bring up all of the entries.

I have contacted some people directly in the hopes of obtaining sufficient details to add their searchlight shapes to the collection. If you have details of a searchlight that isn't included, please send them and I'll add them (or arrange for you to post them). My initial goal is to describe the main (i.e. default) spherical searchlights produced by the most widely used code/programs, but I also plan to include other types of searchlights, like the random ones created by Malin Björnsdotter.

Wednesday, September 12, 2012

more searchlights: the Princeton MVPA toolbox

MS Al-Rawi kindly sent me coordinates of the searchlights created by the Princeton MVPA toolbox, which I made into this image. As before, the center voxel is black, the one-voxel-radius searchlight red, the two-voxel-radius searchlight purple, and the three-voxel-radius searchlight blue.

The number of voxels in each is:
1-voxel radius: 6 surrounding voxels (7 total)
2-voxel radius: 32 surrounding voxels (33 total)
3-voxel radius: 122 surrounding voxels (123 total)
4-voxel radius: 256 surrounding voxels (257 total)

From a quick look at the code generating the coordinates, it appears to "draw" a sphere around the center voxel, then retrieve the voxels falling within the sphere. (anyone disagree with this characterization?)

This "draw a sphere first" strategy explains why I couldn't come up with this number of voxels for a three-voxel-radius searchlight if the shape follows additive rules (e.g. "add voxels that share a face with the next-size-smaller radius"). It's another example of how much it can vary when different people actually implement a description: to me, it was more natural to think of searchlights of different radii as growing iteratively, a rather different solution than the one used in the Princeton MVPA toolbox.

Tuesday, September 11, 2012

my searchlight, and some (final?) thoughts

Since I've been thinking about searchlight shapes, here's a diagram of the two-voxel radius searchlight I'm currently using in a few projects. As in the previous post, the center voxel is black, the one-voxel radius surrounds red, and the two-voxel radius surrounds purple.

This two-voxel radius searchlight has 13 + 21 + 24 + 21 + 13 = 92 voxels in the surround, which makes it larger than the three-voxel radius faces-must-touch searchlight (at 86 surrounding voxels).

So how should we grow searchlights: edge-face-corner, edge-face, faces-only? Beats me. I can imagine comparing the results of a few different shapes in a particular dataset, but I doubt there's a single shape that is always best. But the searchlight shape should be added to our list of details to include when describing a searchlight analysis.

PS - I use R code to generate lists of adjacent voxels for running searchlight analyses; contact me if you'd like a copy/details.
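
For anyone curious, here's a minimal sketch of the iterative-growing idea in R (a simplified illustration, not my actual analysis code): start from the 18 edge-or-face neighbors, then grow each additional radius by re-applying that neighborhood around every voxel already in the searchlight.

  neighbors.18 <- function(voxel) {   # a voxel plus its 18 edge-or-face neighbors
    offs <- expand.grid(x=-1:1, y=-1:1, z=-1:1);
    offs <- offs[rowSums(offs != 0) <= 2, ];   # drop the 8 corner offsets
    t(apply(offs, 1, function(o) { voxel + o }));
  }
  grow <- function(radius) {   # number of surrounding voxels at this radius
    vox <- matrix(c(0,0,0), nrow=1);   # start from the center voxel
    for (i in seq_len(radius)) {   # treat every current voxel as a center and take the union
      vox <- unique(do.call(rbind, lapply(seq_len(nrow(vox)), function(r) { neighbors.18(vox[r, ]) })));
    }
    nrow(vox) - 1;   # don't count the center itself
  }
  grow(1);   # 18
  grow(2);   # 92, the searchlight described above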

pyMVPA searchlight shapes

Michael Hanke listed the number of voxels for different searchlight radii in pyMVPA. He also wrote that pyMVPA considers neighboring voxels to be those that share two edges (i.e., a face).

Here's a diagram of following the faces-touching rule. The center voxel is black, the one-voxel radius surrounds red, the two-voxel radius surrounds purple, and the three-voxel radius surrounds blue.
This gives 1 + 9 + 21 + 24 + 21 + 9 + 1 = 86 surrounding voxels for the three-voxel radius.

But Michael wrote that a three-voxel radius searchlight in pyMVPA has 123 voxels (counting the center). If so, at left is how I could come by that count: 13 + 28 + 40 + 28 + 13 = 122 surrounding voxels (123 with the center).

This shape is not quite symmetrical: there are four voxels to the left, right, front, and back of the center voxel but only two above and below.

Anyone know if this is what pyMVPA does?

UPDATE: Neither of these sketches is correct; see "pyMVPA searchlight shape: solved".

Monday, September 10, 2012

searchlight shapes

How exactly the searchlight is shaped is one of those details that usually isn't mentioned in searchlight analysis papers, but it isn't obvious, mostly because voxels are cubes: there's more than one reasonable way to approximate a sphere with cubes.

Nikolaus Kriegeskorte (the father of searchlight analysis) uses the illustration on the left.

Expanded, I think this would look like the picture at left: there are 32 voxels surrounding the center, with the center voxel in black and the surrounding voxels in blue.


But there are other ways of thinking of a searchlight. Here are three ways of defining a one-voxel radius searchlight.

Personally, I implemented the middle version (edges and faces, not corners: 18 voxels) as the one-voxel radius searchlight in my code. Larger radii are calculated iteratively (so for a two-voxel radius, combine the one-voxel radius searchlights defined by treating each of the 18 surrounding voxels in the one-voxel radius searchlight as a center). I don't think any one way of defining the surround is necessarily better than the others, but we do need to specify which voxels we are including.
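
If it's useful, here's a tiny R illustration (mine, not from any particular toolbox) of the three one-voxel radius definitions, classifying each of the 26 surrounding voxels by how it touches the center:

  offs <- expand.grid(x=-1:1, y=-1:1, z=-1:1);
  offs <- offs[rowSums(offs != 0) > 0, ];   # the 26 voxels around the center
  n.nonzero <- rowSums(offs != 0);          # 1: shares a face; 2: shares an edge; 3: shares only a corner
  sum(n.nonzero == 1);    # 6  - faces must touch
  sum(n.nonzero <= 2);    # 18 - edges or faces touch (the version I use)
  sum(n.nonzero <= 3);    # 26 - edge, face, or corner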


update 11 September: I changed the caption of the searchlight shape images for clarity.