These images show slices from two leave-one-subject-out searchlight analyses (linear SVM). The subjects were viewing all sorts of movie stimuli, and as a positive control I classified whether the stimuli showed moving or still people. This analysis should work very well, reporting massive information (here, high classification accuracy) in motor, somatosensory, and visual areas.
At right are a few slices of the information map that results when the examples are normalized within each searchlight (i.e., each example scaled to a mean of 0 and standard deviation of 1). The highest-classifying searchlights are in the proper brain areas (and with properly high accuracies), but in strange, hollow shapes.
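For concreteness, here's a minimal sketch of that normalization step (Python/NumPy; the function name and toy dimensions are mine, not from the actual analysis): each row of the examples-by-voxels matrix for a searchlight is rescaled to mean 0 and standard deviation 1.

```python
import numpy as np

def normalize_within_searchlight(examples):
    """Scale each example (row) to mean 0 and standard deviation 1
    across the voxels in one searchlight."""
    means = examples.mean(axis=1, keepdims=True)
    sds = examples.std(axis=1, keepdims=True)
    return (examples - means) / sds

# examples: one row per example, one column per searchlight voxel
rng = np.random.default_rng(0)
examples = rng.normal(size=(20, 33))   # e.g. 20 examples, 33-voxel searchlight
normed = normalize_within_searchlight(examples)
print(normed.mean(axis=1).round(6))    # all ~0
print(normed.std(axis=1).round(6))     # all 1
```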

Read that previous post for a full explanation. This is the sort of data we'd expect to show the strongest edge/center effects: focal, highly informative areas, large enough for searchlights to fit entirely inside them. It's also likely that the activations in these areas all go in the same direction (higher activity during the moving stimuli than the non-moving stimuli), creating a consistent difference in the amount of BOLD throughout each area. Normalizing each example within each searchlight removes this consistent BOLD difference, and so classification fails.
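To see why, here's a toy demonstration (simulated data, not the actual dataset; I use plain 5-fold cross-validation rather than leave-one-subject-out for simplicity): two classes of searchlight patterns that differ only by a uniform BOLD offset. A linear SVM separates the raw examples nearly perfectly, but after each row is normalized to mean 0, SD 1, the offset is gone and accuracy drops to chance.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_class, n_voxels = 50, 33
X = rng.normal(size=(2 * n_per_class, n_voxels))

# class 1 ("moving") has uniformly higher BOLD than class 0 ("still"):
# the only signal is a constant offset added to every voxel
X[n_per_class:] += 1.0
y = np.repeat([0, 1], n_per_class)

def row_normalize(X):
    """Scale each row to mean 0, standard deviation 1."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

for label, data in [("raw", X), ("row-normalized", row_normalize(X))]:
    acc = cross_val_score(SVC(kernel="linear"), data, y, cv=5).mean()
    print(f"{label}: mean accuracy = {acc:.2f}")
# raw examples classify nearly perfectly; row-normalized ones are at chance,
# because (x + c - mean(x + c)) / std(x + c) == (x - mean(x)) / std(x):
# the constant offset c cancels out exactly
```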