"
Choosing the Rules: Distinct and Overlapping Frontoparietal Representations of Task Rules for Perceptual Decisions" has interesting combinations of ROI-based and searchlight analyses. I think I figured out a lot of the methods, but all of the steps and choices were not clear. Frustratingly, why some of the thresholds were set as they were (e.g. for ROI definition) was not explained. Were these the only values tried? I tend to keep an eye out for explanations of why particular choices were made, especially when not "standard", aware of the
ongoing discussions of problems caused by too many experimenter degrees of freedom. Few choices can really be considered standard, but I must say I look for more of an explanation when reading something like "the top 120 voxels were selected in each ROI" than "we used a linear SVM, c=1, libsvm package".
task
As usual, I'm going to focus on analysis details rather than the theoretical motivation and interpretation, but
here's a bit of introduction.
The task design is shown at right (Figure 1 from the paper). The top shows what the subject saw (and when): first a screen with two task cues (identical or not), then a delay, then a moving-dot stimulus; finally, the subject responded and indicated which cue (rule) they'd used to make the response.
The moving-dot stimuli were always similar, but the response the subjects were supposed to make (which button to push) depended on the rule associated with each of the three cues (respond to dot size, color, or movement direction). Additionally, two cues were always presented simultaneously: sometimes (the "specified context") the same cue was shown twice, in which case the participant was to perform that rule, while other times (the "chosen context") two different cues were shown, in which case the participant was to pick which of the two rules they wanted to follow.
To summarize, we're interested in distinguishing three rule cues (motion, color, size), across two contexts (specified and chosen). The tasks were in eight runs, with 30 trials in each run. The cue and contexts were in random order, but with full balance within each run (5 repeats of each of the 2 contexts * 3 cues).
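To make that balancing concrete, here's a minimal sketch (mine, not the authors') of generating one run's trial order; the condition names and seed are placeholders:

```python
# Generate one run of 30 trials: every context-by-cue pair appears 5 times,
# in random order (balanced within the run, as described in the paper).
import itertools
import random

contexts = ["specified", "chosen"]
cues = ["motion", "color", "size"]

def make_run_order(n_repeats=5, seed=None):
    rng = random.Random(seed)
    trials = list(itertools.product(contexts, cues)) * n_repeats  # 6 pairs * 5 = 30
    rng.shuffle(trials)  # random order, but fully balanced within the run
    return trials

run = make_run_order(seed=1)
print(len(run), run[:3])
```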
preprocessing/general
- SPM8 for motion and slice-time correction. MVPA in subject space (normalized afterwards for group-level analysis).
- They created PEIs in SPM for the MVPA, one per trial (so 240: 8 runs * 30 trials/run), with the timing matched to the length of the delay periods. { note: "PEIs" are parameter estimate images, made by fitting an HRF-aware linear model to the voxel timecourses, then doing MVPA on the beta weights produced by the model. }
- Classification was with linear SVMs (c=1), Princeton MVPA toolbox (and libsvm), partitioning on the runs, within-subjects (each person classified separately, then results combined across subjects to get the group-level results). Rough sketches of the PEI and classification steps follow this list.
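For anyone who hasn't made PEIs before, here's roughly how one beta map per trial can be estimated, sketched with nilearn rather than the SPM pipeline the authors used; the file name, TR, onsets, and durations are placeholders, not values from the paper:

```python
# Trial-wise GLM (one regressor, and so one beta map, per trial), with each
# trial's duration matched to its delay period.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

trial_onsets = [10.0, 25.0, 40.0]   # placeholder onsets, in seconds
delay_durations = [4.0, 6.0, 4.0]   # placeholder delay lengths, one per trial

events = pd.DataFrame({
    "onset": trial_onsets,
    "duration": delay_durations,    # timing matched to the delay periods
    "trial_type": [f"trial_{i:03d}" for i in range(len(trial_onsets))],
})

glm = FirstLevelModel(t_r=2.0, hrf_model="spm")   # TR is a guess
glm = glm.fit("run1_bold.nii.gz", events=events)  # placeholder file name

# one parameter estimate image (PEI) per trial
peis = [glm.compute_contrast(f"trial_{i:03d}", output_type="effect_size")
        for i in range(len(trial_onsets))]
```

And the classification scheme as I understand it, translated to scikit-learn (whose SVC, like the Princeton MVPA toolbox setup here, wraps libsvm); the data are fake:

```python
# Linear SVM (c=1) with leave-one-run-out cross-validation, within one subject.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((240, 120))        # 240 trials x 120 voxels (fake data)
y = np.tile(np.repeat([0, 1, 2], 10), 8)   # rule labels: 10 of each rule per run
runs = np.repeat(np.arange(8), 30)         # 8 runs * 30 trials/run

clf = SVC(kernel="linear", C=1.0)
accs = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(accs.mean())                         # chance is 1/3 for three rules
```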
The classifications were of three main types, each of which was carried out both in ROIs and searchlights:
- Rule classification, using either the chosen context only, the specified context only, or both (question #1 below). "Representation of chosen and specified rules: multivariate analysis" results section.
- Rule cross-classification, in which classifiers were trained to distinguish rules with trials from one context, then tested with trials from the other context (the general train-on-one-set, test-on-the-other scheme is sketched after this list). "Generalization and consistency of rule representation across contexts" results section.
- Stage cross-classification, in which classifiers were trained to distinguish rules using the "maintenance stage" and tested with images from the "execution stage", each context separately. This is classifying across parts of each trial, but it is not clear to me which images were classified for this analysis (question #2 below). "Invariant rule representation during maintenance and execution stages" results section.
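The cross-classification logic shared by the last two analyses is simple: fit on one subset, test on the other (the two sets are already disjoint, so no run partitioning is strictly needed, though whether the authors also partitioned on runs here isn't clear to me). A minimal sketch on fake data, with placeholder context labels:

```python
# Train classifiers in one context (or stage), test in the other.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((240, 120))        # fake trials x voxels
y = np.tile(np.repeat([0, 1, 2], 10), 8)   # rule labels
context = np.tile([0, 1], 120)             # fake labels: 0=specified, 1=chosen

def cross_classify(X_train, y_train, X_test, y_test):
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(X_train, y_train)              # train with trials from one context ...
    return clf.score(X_test, y_test)       # ... test with trials from the other

print(cross_classify(X[context == 0], y[context == 0],
                     X[context == 1], y[context == 1]))
```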
ROI creation
NOTE: see the update to this section here.
I was intrigued by how they defined the ROIs (other than BA17, which was from an anatomical atlas). Here's my guess at what they did; I'm not sure I figured out all the steps.
1. Do a mass-univariate analysis, finding voxels with greater activity during the chosen than the specified context. ("Establishing and maintaining the neural representations of rules under different contexts: univariate analysis" section)
2. Threshold the group map from #1 at p < 0.001 uncorrected, cluster size 35 voxels. Nine blobs pass this threshold, listed in Table 1 and shown in Figure 3A. (I think; two different thresholds seem to be used when describing the ROIs.)
3. Use Freesurfer to make cortical surface maps of the group ROIs, then transform them back into each person's 3d space. (I think; is this to eliminate non-grey-matter voxels? Why go back and forth between cortical-surface and volumetric space?)
4. Feature selection: for each person, ROI, and training set, pick the 120 voxels with the largest "difference between conditions" from a one-way ANOVA. It's not clear what is meant by "conditions" in this paragraph: context, rule, or both? (One reading of this step is sketched after the list.)
5. Normalize each example across the 120 voxels (for each person, ROI, and training set).
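Here's one reading of steps 4 and 5, on fake data; note that feeding the rule labels to the ANOVA (y_train below) is only a guess at what "conditions" means:

```python
# ANOVA-based feature selection on the training set only, then row-wise
# normalization of each example across the 120 selected voxels.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X_train = rng.standard_normal((210, 1200))     # 7 training runs; 1200 is a made-up ROI size
y_train = np.tile(np.repeat([0, 1, 2], 10), 7)
X_test = rng.standard_normal((30, 1200))       # the held-out run

selector = SelectKBest(f_classif, k=120).fit(X_train, y_train)  # one-way ANOVA F per voxel
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)        # the same 120 voxels applied to the test run

def row_normalize(X):
    # normalize each example across the selected voxels
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

X_train_sel = row_normalize(X_train_sel)
X_test_sel = row_normalize(X_test_sel)
```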
I could find no mention of the stability of these final 120 voxels, nor of what proportion of the starting ROI voxels 120 represents, nor of why these particular thresholds were chosen (or whether any others were tried).
It is proper to do the feature selection on the training set only, but interpretability might be affected if stability is low. In this case, they did leave-one-run-out cross-validation with eight runs, making eight versions of each ROI for each person. If these eight versions are nearly identical, it makes sense to consider them the same ROI. But if 120 voxels is something like a tenth of the starting voxels, and a different tenth is chosen in each of the eight versions, shouldn't we think of them as eight different ROIs? Unfortunately, stability is not addressed in the manuscript, as far as I can tell.
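The check itself is easy: collect the 120-voxel set chosen in each of the eight folds and summarize the pairwise overlap. A sketch, with fake voxel sets:

```python
# Mean pairwise Jaccard overlap of the selected-voxel sets across folds.
import itertools
import numpy as np

def mean_jaccard(voxel_sets):
    jac = [len(a & b) / len(a | b)
           for a, b in itertools.combinations(voxel_sets, 2)]
    return float(np.mean(jac))

# in practice, voxel_sets would be set(selector.get_support(indices=True))
# from each fold; fake random sets here, for illustration:
rng = np.random.default_rng(0)
voxel_sets = [set(rng.choice(1200, size=120, replace=False)) for _ in range(8)]
print(mean_jaccard(voxel_sets))   # ~0.05 for random selection; 1.0 if identical
```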
a few other comments
- I liked that they discussed both searchlight and ROI-based results (I've argued that searchlight analyses can be more powerful when combined with ROI-type analyses), but found the combination here quite difficult to interpret. I would have liked to see direct comparisons between the two analyses: how much do the searchlight 'blobs' (such as in Figure 5B) overlap with the 9 ROIs? Each section of the results had a phrase along the lines of "The searchlight analysis largely confirmed the ROI results (Fig. 5A), with significant rule coding in the PMv, ..." I don't see how they quantified these claims of similarity, however; one simple way is sketched after this list.
- Significance testing for the ROI-based analyses was with permutation testing, following what I'd call a fold-wise scheme, with both the training and testing sets relabeled (my reading of this scheme is also sketched after the list). How the group-level null distribution was calculated is not described.
- Significance testing for the searchlight analyses was by spatially normalizing and then smoothing each individual's accuracy map, then doing a voxel-wise t-test for accuracy greater than chance (also sketched below).
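On the first point, here's the sort of direct comparison I mean: the proportion of each ROI's voxels falling inside the thresholded searchlight map, assuming both masks are on the same grid (fake volumes below):

```python
# Fraction of ROI voxels that are also significant searchlight voxels.
import numpy as np

def roi_searchlight_overlap(roi_mask, searchlight_mask):
    roi = roi_mask.astype(bool)
    return (roi & searchlight_mask.astype(bool)).sum() / roi.sum()

rng = np.random.default_rng(0)
roi_mask = rng.random((40, 48, 40)) > 0.99         # fake binary volumes
searchlight_mask = rng.random((40, 48, 40)) > 0.95
print(roi_searchlight_overlap(roi_mask, searchlight_mask))
```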
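On the second point, here's my reading of a fold-wise scheme with both sets relabeled, on fake data; the group-level step is omitted since the paper doesn't describe it:

```python
# Fold-wise permutation test: within each fold, relabel both the training and
# the testing set, then average the fold accuracies into one null value.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((240, 120))        # fake trials x voxels, as before
y = np.tile(np.repeat([0, 1, 2], 10), 8)
runs = np.repeat(np.arange(8), 30)

def foldwise_null(X, y, runs, n_perms=100, seed=0):
    rng = np.random.default_rng(seed)
    clf = SVC(kernel="linear", C=1.0)
    null = np.empty(n_perms)
    for i in range(n_perms):
        fold_accs = []
        for test_run in np.unique(runs):
            train, test = runs != test_run, runs == test_run
            y_tr = rng.permutation(y[train])   # relabel the training set ...
            y_te = rng.permutation(y[test])    # ... and the testing set
            fold_accs.append(clf.fit(X[train], y_tr).score(X[test], y_te))
        null[i] = np.mean(fold_accs)
    return null   # p-value: proportion of null accuracies >= the true accuracy

print(foldwise_null(X, y, runs).mean())
```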
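The searchlight group test, as described, then reduces to a voxel-wise one-sample t-test against chance, once each subject's accuracy map has been normalized and smoothed (the fake maps below are treated as already normalized and smoothed):

```python
# Voxel-wise one-sample t-test of accuracy against chance (1/3 for three rules).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
acc_maps = rng.normal(1/3, 0.05, size=(20, 40, 48, 40))  # 20 subjects' maps

t, p = stats.ttest_1samp(acc_maps, popmean=1/3, axis=0)
sig = (t > 0) & (p / 2 < 0.001)   # one-tailed: accuracy above chance
print(sig.sum(), "voxels above threshold")
```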
questions
- in the "Representation of chosen and specified rules: multivariate analysis" section it says that one analysis was after "... pooling data from the chosen and specified conditions ...". Were separate PEIs made for this analysis (i.e. fitting all trials of each rule, regardless of context), or were PEIs made for the specified and chosen trials separately, then both used for this analysis (so that there were twice as many examples in the pooled analysis as either context alone)?
- Were new PEIs made for the cross-stage classification, were individual volumes classified, or something else? "Therefore, we constructed fMRI patterns of rule execution from activations at the onset of the RDK stimuli, and examined whether classifiers trained on patterns from maintenance stage could discriminate patterns from execution stage."
- Are my guesses for the steps of the ROI creation technique correct? Why were these thresholds chosen? Any other techniques for ROI creation tried? How stable were the ROIs after feature selection?
Update (22 August 2013): Jiaxiang Zhang answered my questions and corrected a few parts of this post here.
Zhang J, Kriegeskorte N, Carlin JD, & Rowe JB (2013). Choosing the rules: distinct and overlapping frontoparietal representations of task rules for perceptual decisions. The Journal of Neuroscience, 33(29), 11852-11862. PMID: 23864675