baseline
This example is like the one in the previous post, except that only the
first 10 voxels in class b have 1 added to the corresponding class a values; the other
voxels are the same as in class a. Here is the baseline data:
This toy dataset is 25 voxels in two classes (a and b) and two runs, with two examples of each. The first two columns of the class b images are darker than the corresponding class a images (since 1 was added to the voxel values), while the other 15 voxels are identical in both classes. These baseline images are classified perfectly by a linear SVM: the classifiers had no problem finding the ten informative voxels.
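For concreteness, here is a minimal sketch of how a dataset like this could be built in R (the actual code is in the R file linked at the end; the variable names here are just for illustration):

  set.seed(42);   # hypothetical seed, for reproducibility
  a.imgs <- matrix(rnorm(4*25), nrow=4, ncol=25);   # four class a images (2 runs x 2 examples), 25 voxels each
  b.imgs <- a.imgs;   # class b images start out identical to the class a ones
  b.imgs[,1:10] <- b.imgs[,1:10] + 1;   # add 1 to the first 10 voxels only
  data.tbl <- rbind(a.imgs, b.imgs);   # 8 images x 25 voxels
  class.lbls <- rep(c("a","b"), each=4);
  run.lbls <- rep(c(1,1,2,2), times=2);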
run-column scaling
Next I normalize each voxel individually (each column of the dataset), separately within each run.
The colors changed since the voxel values changed, but the informative voxels are still distinguishable: the first two columns of voxels differ between the two classes while the last three columns are identical. This set of images is still classified perfectly by the linear SVM.
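In R, this run-then-column normalizing could be written along these lines (a sketch, using the illustrative names from above; scale() z-scores each column by default):

  for (run.id in unique(run.lbls)) {   # normalize each voxel (column) within each run
    data.tbl[run.lbls == run.id,] <- scale(data.tbl[run.lbls == run.id,]);
  }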
row-scaling
Here I normalize each image separately (across all the voxels in each image).
The key thing to notice here is that the first two columns are no longer the only informative voxels: all of the voxels now differ between the two classes, even though I created the dataset with information in only the first ten voxels. The images are still classified perfectly.
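One way to do this row-scaling in R (again a sketch, with the same illustrative names):

  data.row <- t(apply(data.tbl, 1, scale));   # z-score each image (row) across its 25 voxels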
row-wise mean-subtraction
Subtracting the mean row-wise (across all voxels in each image separately) also adds information to the uninformative voxels, and the images are yet again classified perfectly.
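The row-wise mean-subtraction is a one-liner in R (same illustrative names):

  data.ms <- sweep(data.tbl, 1, rowMeans(data.tbl));   # subtract each image's mean from all of its voxels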
discussion
The images were classified perfectly in all cases: none of the scaling methods used here (the most common ones for MVPA) removed the signal. In this example I added a constant to some of the voxels, while in the previous post I added a constant to all of the voxels. In general, a constant difference (e.g. uniformly more BOLD in one condition than another) can be removed by row-scaling or row-wise mean-subtraction when it is present equally in all voxels, but not when only some of the voxels are affected.
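A tiny numeric illustration of this point, using a hypothetical three-voxel image:

  a <- c(2, 3, 4);             # a class a image
  b.all <- a + 1;              # 1 added to ALL voxels
  b.some <- a + c(1, 0, 0);    # 1 added to the first voxel only
  a - mean(a);                 # -1  0  1
  b.all - mean(b.all);         # -1  0  1 : identical to class a; the signal is gone
  b.some - mean(b.some);       # -0.33 -0.33  0.67 : differs from class a in every voxel

After mean-subtraction the all-voxel constant vanishes, but the some-voxel constant not only survives, it spreads into voxels 2 and 3, which were identical across the classes before.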
Notice the "leaking" of information across voxels during row-scaling and row-wise mean-subtraction: voxels that were NOT informative before scaling WERE informative afterwards. This is not a problem if you only make inferences about the group of voxels as a whole and don't try to determine which voxels within the ROI contributed to the classification. But it could introduce distortions if an ROI was row-scaled and then a searchlight analysis was performed: some voxels could look informative that were not before the scaling.
Code for this example is in the same R file as the previous example.