Wednesday, January 28, 2015

pointer: "Reward Motivation Enhances Task Coding in Frontoparietal Cortex"

I'm pleased to announce that a long-in-the-works paper of mine is now online: "Reward Motivation Enhances Task Coding in Frontoparietal Cortex". The supplement is online at the publisher's now, or you can download it here. This is the work I spoke about at ICON last summer (July 2014). As the title indicates, this is not a straight methodology paper, though it has some neat methodological aspects, which I'll highlight here.

Briefly, the dataset is from a cognitive control task-switching paradigm: during fMRI scanning, people saw images of a human face with a word superimposed. But their response to the stimuli varied according to the preceding cue: in the Word task they responded whether the word part of the stimulus had two syllables or not; in the Face task they responded whether the image was of a man or woman. Figure 1 from the paper (below) schematically shows the timing and trial parts. The MVPA tried to isolate the activity associated with the cue part of the trial.


Participants did this task on two separate scanning days: first the Baseline session, then the Incentive session. In the Incentive session, people had a chance to earn extra money on some trials for responding quickly and accurately.

The analyses in the paper are aimed at understanding the effects of incentive: people perform a bit better when given an incentive (i.e., when more motivated). We tested the idea that this improvement in performance occurs because the (voxel-level) brain activity patterns encoding the task are better formed with incentive: sharper, more distinct, and less noisy task-related patterns on trials with an incentive than on trials without.

How to quantify "better formed"? There's no simple test, so we got at it three ways:

First, cross-session task classification accuracy (train on the Baseline session, test on the Incentive session) was higher on Incentive trials than on No-Incentive trials, suggesting that the Incentive trials are "cleaner" (less noisy, so easier to classify). Further, the MVPA classification accuracy is a statistical mediator of performance accuracy (how many trials each person responded to correctly): people with a larger incentive-related increase in MVPA classification accuracy also tended to have a larger incentive-related increase in behavioral performance accuracy.
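(To make the setup concrete, here is a minimal sketch of cross-session classification with a linear SVM. It is not the paper's actual pipeline; the per-trial voxel-pattern arrays and labels are hypothetical stand-ins.)

```python
# minimal sketch of cross-session classification (not the paper's code);
# rows are trials, columns are voxels, labels are the cued task (Word vs. Face)
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_voxels = 200
baseline_X = rng.normal(size=(80, n_voxels))    # hypothetical Baseline-session trials
baseline_y = np.tile([0, 1], 40)
incentive_X = rng.normal(size=(40, n_voxels))   # hypothetical Incentive-session test trials
incentive_y = np.tile([0, 1], 20)

# train once per person on all Baseline-session examples, test on the Incentive session
clf = SVC(kernel="linear", C=1.0).fit(baseline_X, baseline_y)
acc = accuracy_score(incentive_y, clf.predict(incentive_X))
print(f"cross-session accuracy: {acc:.2f}")
# in the paper's logic, accuracy would be computed separately for the
# Incentive and No-Incentive test trials, then compared
```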

At left is Figure 4 from the paper, showing the correlation between classification and performance accuracy differences; each circle is a participant. It's nice to see this correlation between MVPA accuracy and behavior; there are still relatively few studies tying them together.

Second, we found that the Incentive test set examples tended to be further from the SVM hyperplane than the No-Incentive test set examples, which suggests that the classifier was more "confident" when classifying the Incentive examples. Since we used cross-session classification there was only one hyperplane for each person (the (linear) SVM trained on all baseline session examples), so it's possible to directly compare the distance of the test set examples to the hyperplane.
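With a linear SVM, the (signed) distance of each test example to the hyperplane is easy to extract; here's a minimal sketch using scikit-learn, with hypothetical arrays (the paper's actual implementation may differ):

```python
# sketch: distance of test examples to a linear-SVM hyperplane (hypothetical data)
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
train_X = rng.normal(size=(80, 200))
train_y = np.tile([0, 1], 40)
test_X = rng.normal(size=(40, 200))

clf = SVC(kernel="linear", C=1.0).fit(train_X, train_y)

# decision_function returns signed values scaled by ||w||; dividing by ||w||
# gives the geometric distance, and the absolute value ignores which side of
# the hyperplane the example falls on
dist = np.abs(clf.decision_function(test_X)) / np.linalg.norm(clf.coef_)
print(dist.mean())   # compare this mean for Incentive vs. No-Incentive test examples
```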

Third, we found a higher likelihood of distance concentration in the No-Incentive examples, suggesting that the No-Incentive examples are less structured (have higher intrinsic dimensionality) than the Incentive examples. The distance concentration calculation doesn't rely on the SVM hyperplane, and so gives another line of evidence.
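(One generic way to index distance concentration is the relative variance of the pairwise distances: when distances concentrate, their spread shrinks relative to their mean. This sketch is only meant to convey the idea, not to reproduce the paper's exact calculation.)

```python
# sketch: a simple distance-concentration index (not the paper's exact method)
import numpy as np
from scipy.spatial.distance import pdist

def relative_variance(X):
    """std/mean of all pairwise Euclidean distances; values near 0 mean the
    distances have concentrated, as expected for unstructured data with high
    intrinsic dimensionality."""
    d = pdist(X)
    return d.std() / d.mean()

rng = np.random.default_rng(2)
unstructured = rng.normal(size=(40, 200))               # pure noise
signal = np.outer(np.tile([0, 3], 20), np.ones(200))    # add a two-class offset
print(relative_variance(unstructured), relative_variance(unstructured + signal))
```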

There's (of course!) lots more detail and cool methods in the main paper; hope you enjoy! As always, please let me know what you think of this (and any questions), in comments, email, or in person.

UPDATE (24 March 2015): I have put some of the code and input images for this project online at the Open Science Framework.


Etzel JA, Cole MW, Zacks JM, Kay KN, & Braver TS (2015). Reward Motivation Enhances Task Coding in Frontoparietal Cortex. Cerebral Cortex. PMID: 25601237

Wednesday, January 21, 2015

research blogging: "Exceeding chance level by chance"

Neuroskeptic made me aware of a new paper by Combrisson & Jerbi entitled "Exceeding chance level by chance: The caveat of theoretical chance levels in brain signal classification and statistical assessment of decoding accuracy"; full citation below. Neuroskeptic's post has comments and a summary of the article, which I suggest you check out, along with its comment thread. 

My first reaction reading the article was confusion: are they suggesting we shouldn't test against chance (0.5 for two classes), but some other value? But no, they are arguing that it is necessary to do a test against chance ... to which I say, yes, of course it is necessary to do a statistical test to see if the accuracy you obtained is significantly above chance. The authors are arguing against a claim ("the accuracy is 0.6! 0.6 is higher than 0.5, so it's significant!") that I don't think I've seen in an MVPA paper, and would certainly question if I did. Those of us doing MVPA debate how exactly best to do a permutation test (a favorite topic of mine!), and whether the binomial or t-test is appropriate in particular situations, but everyone agrees that a statistical test is needed to support a claim that an accuracy is significant. In short, I agree with the authors that a statistical test is necessary; I just don't think that point is in dispute.

What about the results of the paper's analyses? Basically, they strike me as unsurprising. For example, the authors note that smaller datasets are less stable (eg it's quite easy to get accuracies above 0.7 in noise data when there are only 5 examples of each class), and that smaller test set sizes (eg leave-1-out vs. leave-20-out cross-validation with 100 examples) tend to have higher variance across the cross-validation folds (and so make it harder to reach significance). At right is Figure 1e, showing the accuracies they obtained from classifying many (Gaussian random) noise datasets of different sizes. What I immediately noticed is how nice and symmetrical around chance the spread of dots appears: this is the sort of figure we expect to see when doing a permutation test. Eyeballing the graph (and assuming the permutation test was done properly), we'd probably end up with accuracies above 0.7 being significant at small sample sizes, and around 0.6 for larger datasets, which strikes me as reasonable.
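It's easy to get a feel for this spread yourself by classifying pure-noise datasets and looking at the accuracies; here's a quick sketch in that spirit (my own toy setup, not Combrisson & Jerbi's code or parameters):

```python
# sketch: how far above 0.5 accuracies can land in pure Gaussian noise
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(3)
for n_per_class in (5, 10, 50):
    accs = []
    for _ in range(100):                           # 100 independent noise datasets
        X = rng.normal(size=(2 * n_per_class, 20))
        y = np.tile([0, 1], n_per_class)
        accs.append(cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut()).mean())
    accs = np.array(accs)
    print(f"{n_per_class:3d} per class: max {accs.max():.2f}, "
          f"95th percentile {np.percentile(accs, 95):.2f}")
```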

I'm not a particular fan of using the binomial for significance in neuroimaging datasets, especially when the datasets have any sort of complex structure (eg multiple fMRI scanning runs, cross-validation, more than one person), which they almost always have. Unless your data is structured exactly like Combrisson & Jerbi's (and they did the permutation test properly, which they might not have; see Martin Hebart's comments), Table 1 strikes me as inadequate for establishing significance: I'd want to see a test taking into account the variance in your actual dataset (and the claims being made).
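For reference, binomial-style thresholds of the sort such a lookup table encodes boil down to a one-liner (assuming independent trials and exactly 0.5 chance, which is precisely what's questionable for structured, cross-validated data):

```python
# sketch: accuracy needed for p < .05 under a naive binomial model of chance
from scipy.stats import binom

for n_trials in (20, 50, 100, 500):
    # smallest k with P(X >= k | n_trials, p=0.5) < .05
    k = int(binom.ppf(0.95, n_trials, 0.5)) + 1
    print(f"{n_trials:4d} trials: accuracy >= {k / n_trials:.3f}")
```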

Perhaps my concluding comment should be that proper statistical testing can be hard, and is usually time consuming, but is absolutely necessary. Neuroimaging datasets are nearly always structured (eg sources of variance, patterns of dependency and interaction) far differently from the assumptions of quick statistical tests, and we are asking questions of them not covered by one-line descriptions. Don't look for a quick fix; instead, focus on your dataset and the claims you want to make, and a suitable method for establishing significance is nearly always possible to find.
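As one example of a dataset-tailored approach, here is a bare-bones label-permutation test for cross-validated accuracy; it's only a sketch, ignoring the real-world complications (runs, sessions, multiple people) that the permutation scheme would need to respect:

```python
# sketch: label-permutation test for cross-validated classification accuracy
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 20))        # hypothetical examples (rows) x features
y = np.tile([0, 1], 30)

cv = StratifiedKFold(n_splits=5)
true_acc = cross_val_score(SVC(kernel="linear"), X, y, cv=cv).mean()

null_accs = []                       # relabel-and-reclassify null distribution
for _ in range(1000):
    null_accs.append(cross_val_score(SVC(kernel="linear"), X,
                                     rng.permutation(y), cv=cv).mean())

p = (np.sum(np.array(null_accs) >= true_acc) + 1) / (len(null_accs) + 1)
print(f"accuracy {true_acc:.2f}, permutation p = {p:.3f}")
```

(scikit-learn's permutation_test_score wraps the same relabel-and-reclassify idea, though a real fMRI analysis usually needs a custom permutation scheme.)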


Combrisson, E., & Jerbi, K. (2015). Exceeding chance level by chance: The caveat of theoretical chance levels in brain signal classification and statistical assessment of decoding accuracy. Journal of Neuroscience Methods. DOI: 10.1016/j.jneumeth.2015.01.010

Thursday, January 8, 2015

connectome workbench: montages of volumes

This tutorial describes working with montages of volumetric images in the Connectome Workbench. Workbench calls displays with more than one slice "Montages"; these have other names in other programs, such as "MultiSlice" in MRIcroN. I've written a series of tutorials about the Workbench; check this post for comments about getting started, and see other posts labeled workbench.

When you first open a volumetric image in Workbench, the Volume tab doesn't display a montage, but rather a single slice, like in the image at left (which is my fakeBrain.nii.gz demo file superimposed on the conte69 anatomy).

Workbench opens an axial (A) view by default, as in this screenshot. The little push buttons in the Slice Plane section (marked with a red arrow in the screenshot) change the view to the parasagittal (P) (often called the sagittal) or coronal (C) plane instead. Whichever view is selected by the Slice Plane buttons will be shown in the montage - montages can be made of axial slices (as is most common), but just as easily of coronal or sagittal slices. (The All button displays all three planes at once, which can be useful, but isn't really relevant for montages.)

To change the single displayed slice, put the mouse cursor in the Slice Indices/Coords section (marked with a red arrow in the screenshot) corresponding to the plane you're viewing, and use the up and down arrows to scroll (or click the little up and down arrow buttons, or type in a new number). In the screenshot, I'm viewing axial slice 109, at 37.0 mm.


Now, viewing more than one slice: a montage. The On button in the Montage section (arrow in screenshot at left) puts Workbench into montage mode: click it so that it sticks down to work with montages; click it again to get out of montage mode.

Workbench doesn't let you create an arbitrary assortment of slices in montage mode, but rather a display of images with the number of rows (Rows) and columns (Cols) specified in the Montage section boxes. The number of slices between each of the images filling up those rows and columns is given in the Step box of the Montage section, and the slice specified in the Slice Indices/Coords section is towards the middle of the montage. Thus, this screenshot shows images in four rows and three columns, with the displayed slices separated by 12 mm.
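As a rough way to think about which slices end up in the grid, here is my own illustration of the rows x columns x step logic, assuming the selected slice lands near the middle of the grid (Workbench's exact placement rule may differ):

```python
# sketch: which slice indices a rows x cols montage with a given step might show,
# assuming the selected slice sits near the middle of the grid (an assumption)
def montage_slices(center, rows, cols, step):
    n = rows * cols
    first = center - (n // 2) * step
    return [first + i * step for i in range(n)]

print(montage_slices(center=109, rows=4, cols=3, step=12))
```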

Customizing the montage view requires fiddling: adjusting the window size, number of rows and columns, step between slices, and center slice (in the Slice Indices/Coords section) to get the desired collection of slices. On my computer, I can adjust the zoom level (the size of the individual montage slice images) with a "scroll" gesture; I haven't found a keyboard or menu option to similarly adjust the zoom - anyone know of one? With a mouse, hold the control key, click the left button in the black area near the slice images, and drag to change the zoom (thanks, Tim!), or use the scroll dial (if the mouse has one).

Several useful montage-relevant options are not on the main Volume tab, but rather in the Preferences (bring it up with the Preferences option in the File dropdown menu in the main program toolbar), as shown at left. Setting the Volume Montage Slice Coord option to Off hides the Z=X mm labels, which can be useful. Setting the Volume Axes Crosshairs option to Off hides the crosshairs; experiment with the options to see their effect.

I haven't found ways of controlling all aspects of the montage; for publication-quality images I ended up using an image editor to get full control, such as changing the slice label font.