Saturday, October 19, 2013

nice doughnuts: pretty searchlight quirks

More than a year ago I posted an explanation of how strange ring-type edge effects can show up in searchlight analysis information maps, but this morning I saw the most vivid example of the phenomenon I've ever run across in real data.

These images show slices from two leave-one-subject-out searchlight analyses (linear SVM). The subjects were viewing all sorts of movie stimuli, and as a positive control I classified whether the stimuli showed people moving or people standing still. This analysis should work very well, reporting massive information (high classification accuracy, in this case) in motor, somatosensory, and visual areas.

At right are a few slices of the information map that results when the examples are normalized within each searchlight (i.e. scaled to a mean of 0 and standard deviation of 1). The highest-classifying searchlights are in the proper brain areas (and with suitably high accuracies), but form strange, hollow shapes.


And here's what it looks like when the examples are not normalized within each searchlight. The same areas have accurate searchlights, but now they make solid "blobs", with the most highly accurate searchlights (yellow) in the centers - exactly where the normalized searchlights performed at chance.


Read that previous post for a full explanation. This is the sort of data we'd expect to show the strongest edge/center effects: focal areas with high information, large enough for entire searchlights to fit inside the middle of the informative regions. It's also likely that the activations in these areas all go in the same direction (higher activity during the moving stimuli than the still stimuli), creating a consistent difference in the overall amount of BOLD signal throughout the areas. Normalizing each example within each searchlight removes this consistent BOLD difference, so searchlights sitting entirely inside such an area lose their signal and classification fails, while searchlights straddling the edge still retain a spatial pattern.
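To make the mechanism concrete, here's a minimal numerical sketch (all numbers invented; a toy mean-activity rule stands in for the linear SVM): both classes share the same spatial pattern within a searchlight, differing only by a uniform BOLD offset. Per-example normalization removes the offset, and the classes become essentially inseparable.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_per_class = 27, 20  # e.g. a 3x3x3 searchlight

# Both classes share the same spatial pattern; the "moving" class just has
# uniformly higher activity across the whole searchlight (a BOLD offset).
pattern = rng.normal(size=n_voxels)
moving = pattern + 1.0 + 0.1 * rng.normal(size=(n_per_class, n_voxels))
still = pattern + 0.1 * rng.normal(size=(n_per_class, n_voxels))

def zscore_rows(x):
    """Normalize each example across the searchlight's voxels (mean 0, sd 1)."""
    return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

def mean_activity_accuracy(a, b):
    """Toy stand-in for the SVM: classify each example by its overall activity."""
    thr = 0.5 * (a.mean() + b.mean())
    correct = np.concatenate([a.mean(axis=1) > thr, b.mean(axis=1) <= thr])
    return correct.mean()

acc_raw = mean_activity_accuracy(moving, still)  # offset survives: separable
acc_norm = mean_activity_accuracy(zscore_rows(moving),  # offset removed by
                                  zscore_rows(still))   # z-scoring: collapses
print(f"raw accuracy: {acc_raw:.2f}, normalized accuracy: {acc_norm:.2f}")
```

The toy classifier only sees overall activity, but the same logic holds for any classifier: after z-scoring, the two classes' patterns are numerically near-identical, so nothing is left to separate them.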

5 comments:

  1. Can't you just use a constraint that the center voxel has to have the highest weight? That makes more sense than normalizing.

    ReplyDelete
  2. Yes, you could add constraints (though just requiring the center voxel to have the highest weight could be problematic, e.g. because fMRI data tends to be highly redundant). But for some hypotheses, normalizing within searchlights is sensible.

    People have come up with variations on the searchlight algorithm; the first two that come to mind are Bjornsdotter2011 (doi:10.1016/j.neuroimage.2010.07.044) and Zhang2011 (doi:10.1109/TMI.2011.2114362), and there are others. Unfortunately, I haven't yet been able to play around with these algorithms. They strike me as quite promising, though.

    ReplyDelete
  3. If you do not have any constraints on the weights there can be "bleeding" artifacts. In a 3 x 3 x 3 neighbourhood, for example, one corner voxel can contain a lot of information, and the resulting classification accuracy is then saved at the center voxel...

    ReplyDelete
    Replies
    1. Yep, exactly. I talk about that a lot in my recent searchlight analysis paper (doi: 10.1016/j.neuroimage.2013.03.04), and Viswanathan has a great demonstration in arXiv:1210.6317 [q-bio.NC].

      The problem is what to do about it: just looking at the weights is probably not enough.
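A quick toy illustration of that bleeding (invented numbers; leave-one-out nearest-class-mean stands in for a real classifier): only a single corner voxel differs between the classes, yet the accuracy computed over the whole searchlight, which gets mapped to the center voxel, is high, while the center voxel on its own classifies at chance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_per_class = 20
a = rng.normal(size=(n_per_class, 27))  # 3x3x3 searchlight, class A
b = rng.normal(size=(n_per_class, 27))  # class B
a[:, 0] += 4.0  # only a corner voxel (index 0) is informative

def loo_accuracy(a, b):
    """Leave-one-out nearest-class-mean classification accuracy."""
    correct = 0
    for data, other in ((a, b), (b, a)):
        for i in range(len(data)):
            train_mean = np.delete(data, i, axis=0).mean(axis=0)
            correct += (np.linalg.norm(data[i] - train_mean)
                        < np.linalg.norm(data[i] - other.mean(axis=0)))
    return correct / (len(a) + len(b))

acc_all = loo_accuracy(a, b)  # high: driven entirely by the corner voxel,
                              # but stored at the CENTER in the information map
acc_center = loo_accuracy(a[:, 13:14], b[:, 13:14])  # center voxel alone: chance
```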

      Delete
    2. This paper has a rather long discussion of different approaches. It uses canonical correlation analysis (CCA), but the ideas should apply to classification-based analyses as well.

      http://www.hindawi.com/journals/ijbi/2012/738283/

      Delete