Wednesday, January 29, 2014

demo: R code to perform a searchlight analysis

Here is an R script for performing a single-subject searchlight analysis. You are welcome to adapt the code for your own use, so long as I am cited. The code is a demo that performs an entire single-subject searchlight analysis on the sample dataset, but it will need to be edited for other datasets. It is an R script, not a package, because it is not "plug-and-play", though I hope it is straightforward and clear.

The demo takes a 4d NIfTI image as input and creates a 3d NIfTI output image in which the value at each voxel is the classification accuracy of that voxel's searchlight. The code creates iterative-shell-shaped searchlights, but can be altered for other shapes.
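For concreteness, here is a minimal sketch of that input/output convention. I use the oro.nifti package and the stand-in file name "demo_4d.nii.gz"; the demo script itself may handle the NIfTI reading and writing somewhat differently.

    library(oro.nifti)

    # 4d input: x, y, z, volume; one volume per classification example
    func.img <- readNIfTI("demo_4d.nii.gz", reorient=FALSE)

    # 3d output: same spatial grid, one accuracy value per voxel
    acc.map <- array(0, dim=dim(func.img)[1:3])

    # ... the searchlight loop fills acc.map (sketched below) ...

    writeNIfTI(nifti(acc.map), "searchlight_accuracy")   # writes searchlight_accuracy.nii.gz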

The code performs two preliminary steps before doing the searchlight analysis itself. These only need to be done once per dataset (e.g. if multiple subjects have been spatially normalized to the same space, they can all use the same lookup and 3d files). The first step creates a 3d.rds file, which assigns an integer label to each brain voxel and serves as the bridge between each voxel's 3d coordinates and its list of searchlight voxels, which is stored in a lookup table. Each row of the lookup table corresponds to one of the brain-voxel integer labels stored in 3d.rds; the columns list the integer labels of the voxels making up that row's voxel's searchlight.

The searchlight analysis then proceeds by going through each voxel in turn, looking up its searchlight voxels in the lookup table, pulling the values for those voxels from the 4d input image, and writing the classification accuracy into the voxel's place in the 3d output image, as in the sketch below.
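Here is a schematic sketch of that logic, not the demo's exact code: I assume the oro.nifti package, a stand-in "lookup.rds" file name, integer voxel labels running 1 to N, generic class labels, and a placeholder classify.searchlight() function (the demo's actual classifier and cross-validation scheme differ).

    library(oro.nifti)

    func.arr   <- readNIfTI("demo_4d.nii.gz", reorient=FALSE)@.Data   # 4d data array
    vox.ids    <- readRDS("3d.rds")      # integer label per brain voxel, 0 outside the brain
    lookup.tbl <- readRDS("lookup.rds")  # row v: ids of the voxels in voxel v's searchlight, NA-padded
    labels <- rep(c("classA", "classB"), length.out=dim(func.arr)[4])  # one class label per volume

    # coordinates of each brain voxel, reordered so that row v holds the
    # coordinates of voxel id v (assumes the labels run 1 to N)
    coords <- which(vox.ids > 0, arr.ind=TRUE)
    coords <- coords[order(vox.ids[vox.ids > 0]), ]

    # placeholder classifier: leave-one-volume-out nearest-class-mean accuracy.
    # substitute whatever cross-validated classifier you actually want to use.
    classify.searchlight <- function(dat, lbls) {
      hits <- sapply(seq_len(nrow(dat)), function(i) {
        train <- dat[-i, , drop=FALSE]
        m <- rowsum(train, lbls[-i]) / as.vector(table(lbls[-i]))   # class means
        d <- apply(m, 1, function(mu) sum((dat[i, ] - mu)^2))       # distance to each mean
        names(which.min(d)) == lbls[i]
      })
      mean(hits)
    }

    acc.map <- array(0, dim=dim(vox.ids))
    for (v in 1:nrow(coords)) {    # each brain voxel in turn
      sl.ids <- na.omit(unlist(lookup.tbl[v, ]))   # this voxel's searchlight voxels
      # data matrix: one row per volume, one column per searchlight voxel
      sl.data <- sapply(sl.ids, function(id) func.arr[coords[id,1], coords[id,2], coords[id,3], ])
      acc.map[coords[v,1], coords[v,2], coords[v,3]] <- classify.searchlight(sl.data, labels)
    }
    writeNIfTI(nifti(acc.map), "searchlight_accuracy")   # 3d accuracy map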

I hope that this code and/or logic is useful for people setting up their own searchlight analyses, whether in R or another language. I believe that most searchlight analysis implementations use a similar logic, though some others are probably more optimized for speed! If you give it a try, please let me know how it goes.

I created the demo dataset from a small part of a real dataset, labeling the volumes so that they have strong information in motor and visual areas. This is what my final searchlight accuracy map looked like, plotted on the ch2bet template in MRIcron and scaled to show accuracies from 0.5 to 1.



UPDATE 13 May 2015

The searchlight demo code above creates searchlights shaped as iterative shells. It is now perhaps most common to use "spherical" searchlights, so this updated searchlight demo has code to create searchlights of the same shape as those used by pyMVPA, BrainVoyager, and the Princeton toolbox. I also cleaned up a few comments, and changed the lookup table and 3d image from .rds to plain text and NIfTI, so that they can be read by multiple programs, not just R.
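For reference, here is one way to generate the offsets for such "spherical" searchlights, keeping every voxel whose center lies within a Euclidean distance of r voxels from the center voxel. make.sphere.offsets() is my illustrative name, not necessarily how the demo code is organized.

    make.sphere.offsets <- function(r) {
      g <- expand.grid(x=-r:r, y=-r:r, z=-r:r)   # cube of candidate offsets
      g[sqrt(g$x^2 + g$y^2 + g$z^2) <= r, ]      # keep offsets within the sphere
    }

    nrow(make.sphere.offsets(2))   # 33 voxels at radius 2, counting the center voxel

Adding these offsets to each brain voxel's coordinates (and dropping any that fall outside the brain) gives that voxel's row of the lookup table.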

This shows the accuracy maps produced from this little demo dataset using 2-voxel-radius searchlights, shaped as iterative shells (right) and "spheres" (left). The areas with the highest accuracies are the same in both, as we'd hoped, but the accuracies tend to be a bit higher on the right (shells), likely because a 2-voxel-radius spherical searchlight is smaller (has fewer voxels) than a 2-voxel-radius iterative-shell searchlight. (This doesn't mean one shape is better than the other; they're simply different shapes. For consistency's sake, though, it's probably best that we all use the same shape, which means spheres.)




