Each searchlight is a small ROI. Doing mean-subtraction or row-scaling within each searchlight (or using a metric insensitive to magnitude, like correlation) can reduce the likelihood that uniform differences are responsible for the classification (but can cause edge effects). Performing the scaling on the entire brain (or anything bigger than the searchlight) does not eliminate this possibility (and could introduce artifacts, as described in the previous post). Things are never clean in real situations ...
This example illustrates some edge effects that can happen with scaling in searchlights.
Say this is a 2D set of voxels in which we're doing a searchlight analysis (the yellow square is one searchlight). The reddish squares are voxels that differ across the conditions: activation in class 'b' equals activation in class 'a' + 1 (a uniform difference, like in the scaling examples).
Here the "informative" voxels from the searchlight analysis are shown in light green, if we don't do scaling within each searchlight. (I colored all voxels for which the searchlight contains at least one of the reddish voxels).
And here are the "informative" voxels from the searchlight analysis if we do row-scaling (or mean-subtraction) within each searchlight: the left-side blob is now a doughnut: the center reddish voxels are not included as informative voxels. This happens because the activation difference is removed by the scaling in searchlights completely contained within the blob, but not in ones that contain only some of the blob.
This "doughnut" effect can be trouble if we want to detect all of the reddish voxels: we're missing the ones in the center of the blob, which presumably would have the strongest effect and be most likely to spatially overlap across subjects. But it can also be trouble if we don't want to detect voxels with a mass-univariate difference, as pointed out in an example by Jesse Rissman on the mvpa-toolbox list.