Briefly, the dataset is from a cognitive control task-switching paradigm: during fMRI scanning, people saw images of a human face with a word superimposed. Their response to each stimulus depended on the preceding cue: in the Word task they indicated whether the word part of the stimulus had two syllables or not; in the Face task they indicated whether the face was of a man or a woman. Figure 1 from the paper (below) schematically shows the trial timing and structure. The MVPA aimed to isolate the activity associated with the cue part of each trial.
Participants performed the task on two separate scanning days: first the Baseline session, then the Incentive session. During the Incentive session incentives were introduced: on some trials people had a chance to earn extra money for responding quickly and accurately.
The analyses in the paper are aimed at understanding the effects of incentive: people perform a bit better when given an incentive (i.e., when they are more motivated) to perform well. We tested the idea that this improvement in performance occurs because the (voxel-level) brain activity patterns encoding the task are better formed with incentive: sharper, more distinct, less noisy task-related patterns on trials with an incentive than on trials without one.
How to quantify "better formed"? There's no simple test, so we got at it in three ways:
First, cross-session task classification accuracy (train on the Baseline session, test on the Incentive session) was higher on Incentive trials than on No-Incentive trials, suggesting that the Incentive trials are "cleaner" (less noisy, and so easier to classify). Further, the MVPA classification accuracy is a statistical mediator of behavioral performance accuracy (how many trials each person responded to correctly): people with a larger incentive-related increase in MVPA classification accuracy also tended to have a larger incentive-related increase in behavioral performance accuracy.
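To make the cross-session scheme concrete, here's a minimal sketch in Python with scikit-learn. The arrays below are random placeholders (not the paper's ROI voxel patterns), and all variable names are mine; it just shows the train-on-Baseline, test-on-Incentive-session logic.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder data: rows are trials (cue-period patterns), columns are voxels.
baseline_X = rng.normal(size=(80, 200))    # Baseline-session training patterns
baseline_y = rng.integers(0, 2, size=80)   # task labels: 0 = Word, 1 = Face
incentive_X = rng.normal(size=(80, 200))   # Incentive-session test patterns
incentive_y = rng.integers(0, 2, size=80)

clf = SVC(kernel="linear", C=1.0)          # linear SVM
clf.fit(baseline_X, baseline_y)            # train on all Baseline-session examples

# Cross-session test accuracy; in the paper this is computed separately for
# Incentive and No-Incentive test trials and the two are compared.
print("cross-session accuracy:", clf.score(incentive_X, incentive_y))
```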
At left is Figure 4 from the paper, showing the correlation between classification and performance accuracy differences; each circle is a participant. It's nice to see this correlation between MVPA accuracy and behavior; there are still relatively few studies tying them together.
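Purely to illustrate the kind of across-participant relationship shown in Figure 4: for each person, take the Incentive-minus-No-Incentive difference in MVPA accuracy and in behavioral accuracy, then correlate the two difference scores. The numbers below are simulated placeholders, not the paper's data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_subjects = 25                                              # placeholder sample size
mvpa_diff = rng.normal(scale=0.05, size=n_subjects)          # incentive-related change in MVPA accuracy
behav_diff = 0.5 * mvpa_diff + rng.normal(scale=0.03, size=n_subjects)  # change in behavioral accuracy

r, p = pearsonr(mvpa_diff, behav_diff)                       # one point per participant
print(f"r = {r:.2f}, p = {p:.3f}")
```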
Second, we found that the Incentive test set examples tended to be further from the SVM hyperplane than the No-Incentive test set examples, suggesting that the classifier was more "confident" when classifying the Incentive examples. Since we used cross-session classification, there was only one hyperplane per person (from the linear SVM trained on all Baseline-session examples), so the distances of the test set examples to the hyperplane can be compared directly.
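Continuing the placeholder sketch above (same clf and incentive_X): for a linear SVM, the geometric distance of a test pattern to the hyperplane is its decision value divided by the norm of the weight vector.

```python
# Run after the earlier sketch; clf is the linear SVM fit on Baseline examples.
w_norm = np.linalg.norm(clf.coef_)                        # ||w|| of the linear SVM
distances = np.abs(clf.decision_function(incentive_X)) / w_norm

# In the paper, these distances are compared between Incentive and No-Incentive
# test examples (larger distance ~ more "confident" classification).
print("mean distance to hyperplane:", distances.mean())
```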
Third, we found a higher likelihood of distance concentration in the No-Incentive examples, suggesting that these examples are less structured (have higher intrinsic dimensionality) than the Incentive examples. The distance concentration calculation doesn't rely on the SVM hyperplane, and so provides an independent line of evidence.
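For intuition only: one common way to quantify distance concentration is the relative spread of pairwise distances, which shrinks as intrinsic dimensionality rises. This is a generic statistic, not necessarily the exact measure used in the paper; the data below are again random placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist

def relative_spread(X):
    """Std of pairwise Euclidean distances divided by their mean; smaller
    values indicate stronger distance concentration (less structure)."""
    d = pdist(X)
    return d.std() / d.mean()

rng = np.random.default_rng(1)
# Placeholder patterns standing in for one person's test examples (trials x voxels).
incentive_X = rng.normal(size=(40, 200))
no_incentive_X = rng.normal(size=(40, 200))

print("Incentive:   ", relative_spread(incentive_X))
print("No-Incentive:", relative_spread(no_incentive_X))
```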
There's (of course!) lots more detail and cool methods in the main paper; hope you enjoy! As always, please let me know what you think of this (and any questions), in comments, email, or in person.
UPDATE (24 March 2015): I have put some of the code and input images for this project online at the Open Science Framework.
Etzel JA, Cole MW, Zacks JM, Kay KN, & Braver TS (2015). Reward Motivation Enhances Task Coding in Frontoparietal Cortex. Cerebral Cortex. PMID: 25601237