This R code demonstrates how to carry out a permutation test at the group level when using within-subjects cross-validation. I describe permutation schemes in a pair of PRNI conference papers; see DOI:10.1109/PRNI.2013.44 for an introduction and the single-subject case, and this one for the group level, including the fully balanced within-subjects approach used in the demo. A blog post from a few years ago also describes some of the issues, using examples structured quite similarly to this demo.
For this demo I used part of the dataset from doi:10.1093/cercor/bhu327, which can be downloaded from the OSF site. I did a lot of temporal compression with this dataset, which is useful for the demo, since only about 18 MB of files need to be downloaded. Unfortunately, none of the analyses we did for the paper are quite suitable for demonstrating simple permutation testing with within-subjects cross-validation, so this demo performs a new analysis. The demo analysis is valid, just not really sensible for the paper's hypotheses (so don't be confused when you can't find it in the paper!).
The above figure is generated by the demo code, and shows the results of the test. The demo uses 18 subjects' data, and their null distributions are shown as blue histograms. The true-labeled accuracy for each person is plotted as a red line, and listed in the title, along with its p-value, calculated from the shown null distribution (the best-possible p-value, 1/2906, rounds to 0).
The dataset used for the demo has no missing data: each person has six runs, with four examples (two of each class) in each run. Thus, I can use a single set of labels for the permutation test, carrying out the relabelings and classifications within each person individually (since it's within-subjects cross-validation), but building the null distribution for each person from the same relabelings. Using the same relabelings in each person allows the group-level null distribution (green, in the image above) to be built from the across-subjects average accuracy for each relabeling. In a previous post I called this fully-balanced strategy "single corresponding stripes", illustrated with the image below; see that post (or the demo code) for more detail.
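As a minimal sketch of this scheme in R (assuming a hypothetical function classify.one.person(subject.id, labels) that returns one subject's cross-validated accuracy under the supplied labels, plus true.labels and run.ids vectors in the same order for every person; the demo code enumerates every possible relabeling, while this sketch just draws a random subset):

n.subjects <- 18;
n.perms <- 1000;    # the demo runs all possible relabelings; a random subset here, for brevity

relabel.once <- function(labels, run.ids) {
  # shuffle the labels within each run, so the two-per-class balance is kept in every run
  new.labels <- labels;
  for (r in unique(run.ids)) { new.labels[run.ids == r] <- sample(labels[run.ids == r]); }
  new.labels;
}

set.seed(2906);   # arbitrary seed, just so the sketch is reproducible
perm.labels <- t(replicate(n.perms, relabel.once(true.labels, run.ids)));   # one relabeling per row

null.accs <- matrix(NA, nrow=n.perms, ncol=n.subjects);   # per-subject null accuracies
for (p in 1:n.perms) {        # the SAME relabeling is applied to every person ...
  for (s in 1:n.subjects) {   # ... but classification is done within each person separately
    null.accs[p, s] <- classify.one.person(s, perm.labels[p,]);
  }
}
group.null <- rowMeans(null.accs);   # group-level null: across-subjects mean accuracy for each relabeling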
The histogram of the across-subjects means (green histogram; black column) is narrower than the individual subjects' histograms. This is sensible: for any particular permutation relabeling, one person might have a high accuracy and another a low accuracy; averaging the values together gives a value closer to chance. Rephrased, each individual has at least one permutation with very low (0.2) accuracy (as can be seen in the blue histograms), but different relabelings produced those low accuracies in different people, so the lowest group-mean accuracy was 0.4.
The true-labeled group mean of 0.69 was higher than all the permuted-label group means, giving a p-value of 0.0003 = 1/2906 (all 2906 possible permutation relabelings were run). The equivalent t-test is shown in the last panel, and also produces a highly significant p-value.
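A minimal sketch of the p-value calculations, assuming group.null and null.accs from the sketch above, and true.accs, a vector holding each person's true-labeled accuracy. The (count + 1)/(total + 1) convention below counts the true labeling itself, so the smallest possible p-value is one over the number of labelings; the demo's exact convention may differ slightly.

true.group.mean <- mean(true.accs);   # 0.69 in the demo

# group-level p-value: how often a permuted-label group mean was at least as large as the true one
group.p <- (sum(group.null >= true.group.mean) + 1) / (length(group.null) + 1);

# single-subject p-values, each against that person's own null distribution (the blue histograms)
subject.p <- sapply(1:ncol(null.accs), function(s)
  (sum(null.accs[,s] >= true.accs[s]) + 1) / (nrow(null.accs) + 1));

# the "equivalent" parametric test: one-sample t-test of the true accuracies against chance (0.5)
t.test(true.accs, mu=0.5, alternative="greater");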
Tuesday, August 18, 2015
Friday, August 14, 2015
start planning: PRNI 2016
Start planning: the 6th International Workshop on Pattern Recognition in Neuroimaging will be held at the University of Trento (Italy), probably 21-24 June 2016. PRNI is an OHBM satellite conference, generally held several days before and somewhat geographically near the OHBM conference, which next year is in Geneva, Switzerland, 26-30 June 2016. Once the hosting is sorted out, the website will be (as for past meetings) linked from www.prni.org.
The paper deadline will be mid-March 2016, perhaps with a later deadline for short abstract-only poster submissions. Previous workshops' papers were published by IEEE; search for "Pattern Recognition in Neuroimaging". Conference papers are standard in many engineering-related fields, but might be unfamiliar for people from a psychology or neuroscience background. Basically, PRNI papers are "real" publications: they are peer-reviewed, published in proceedings, indexed, and cite-able. The paper guidelines aren't settled yet, but will likely be similar to those of previous years. MVPA methods papers (applications, statistical testing, ...) are a good fit for PRNI; not just fMRI, but any neuroimaging modality is welcome.
UPDATE 10 November 2015: prni.org now has information about PRNI 2016 (including the call for papers), and we'll be updating the website as time goes on. You can also follow on twitter @PRNI2016 or facebook.
Monday, August 3, 2015
R? python? MATLAB?
I've been asked a few times why I use R for MVPA, and what I think people just getting into MVPA should use. I don't think that there is a universally "best" package for MVPA (or neuroimaging, or statistics), but here are some musings.
The question as to why I personally started using R for MVPA is easy: I started before MVPA packages were available, so I had to write my own scripts, and I prefer scripting in R (then and now). Whether to keep using my own scripts or switch to pyMVPA (or some other package) is something I reconsider occasionally.
A very big reason to use an established package is that it's a known quantity: coding bugs have hopefully been caught, and analyses can be reproduced. Some packages are more open (and have more stringent tests) than others, but in general, the more eyes that have studied the code and tried the routines, the better. This need for openness was one of my motivations for starting this blog: to post bits of code and detailed methods descriptions. I think the more code and details we share (blog, OSF, github, whatever), the better, regardless of what software we use (and I wish code was hosted by journals, but that's another issue).
I'm a very, very big fan of using R for statistical analyses, and of knitr (sweave and RMarkdown are also viable options in R) for documenting the various analyses, results, impressions, and decisions as the research progresses (see my demo here). My usual workflow is to switch to knitr once an analysis reaches the "what happened?" stage, regardless of the program that generated the data being analyzed (e.g., I have knitr files summarizing the motivation and procedure of, and calculating results from, cvMANOVA analyses run in MATLAB). Python has the IPython Notebook, which is sort of similar to knitr (I don't think as aesthetically pleasing, but that's a matter of taste); I don't think MATLAB has anything equivalent. Update 12 August 2015 (Thanks, Dov!): MATLAB comes with Publishing Markup, which (at a quick glance) looks similar to RMarkdown.
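For concreteness, here's a minimal sketch of the sort of knitr chunk I mean (.Rnw style; the file and column names are hypothetical placeholders, not from a real analysis): the surrounding text records what was done and why, and the chunk loads and summarizes results written out by code run elsewhere.

\subsection*{cvMANOVA results, first pass}
A quick look at the group results before deciding on follow-up analyses.
<<first-look, echo=FALSE>>=
res <- read.csv("cvmanova_results.csv");   # results file written by the MATLAB analysis code
summary(res$statistic);
hist(res$statistic, main="cvMANOVA statistic, all subjects", xlab="statistic");
@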
All neuroimaging (and psychology, neuroscience, ...) graduate students should expect to learn a proper statistical analysis language, by which I mostly mean R, with MATLAB and python as secondary options. In practice, if you have proficiency in one of these programs you can use the others as needed (the syntax isn't that different), or have them work together (e.g., calling MATLAB routines from R, or R functions from python). The exact same MVPA can be scripted in all three languages (e.g., read in NIfTI images, fit a linear SVM, write the results to a file), and I don't see that any one of the three is clearly best or worst. MATLAB has serious licensing issues (and expense), and python dependencies can be a major headache, but which program is favored seems to go more with field (engineers favor MATLAB, statisticians R) and personal preference than with intrinsic qualities.
So, what should a person getting started with MVPA use? I'd say an R, python, or MATLAB-based package or set of scripts, with the exact choice depending on (probably most important!) what your colleagues are using, personal preference and experience (e.g., if you know python inside and out, try pyMVPA), and what software you're using for image preprocessing (e.g., if SPM, try PRoNTo). Post-MVPA (and non-MVPA) investigations will likely involve R at some point (e.g., for fitting mixed models or making publication-quality graphs), since it has the most comprehensive set of statistical functions (statisticians favor R), but that doesn't mean everything needs to be run in R.
But don't start from scratch; use existing scripts/programs/functions as much as possible. You should mostly be writing code for analysis-specific things (e.g., the cross-validation scheme, which subjects are patients, which ROIs to include), not general things (like reading NIfTI images, training an SVM, or fitting a linear model). Well-validated functions exist for those general things (e.g., oro.nifti, libsvm); use them.
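A minimal sketch of what that division of labor looks like in R: the package names are real, but the file names, mask, labels, and train/test indices are placeholders you would supply from your own design.

library(oro.nifti);   # well-validated NIfTI reading and writing
library(e1071);       # svm() wraps libsvm

mask <- readNIfTI("roi_mask.nii.gz", reorient=FALSE);        # hypothetical ROI mask image
img  <- readNIfTI("examples_4d.nii.gz", reorient=FALSE);     # hypothetical 4d image, one example per volume
mask.idx <- which(mask@.Data > 0);                           # linear indices of the within-mask voxels
dat <- t(apply(img@.Data, 4, function(vol) vol[mask.idx]));  # examples in rows, ROI voxels in columns

# labels, train.idx, and test.idx come from your experimental design (placeholders here)
fit <- svm(x=dat[train.idx,], y=as.factor(labels[train.idx]),
           kernel="linear", cost=1, scale=FALSE);            # linear SVM, c=1 (a common MVPA default)
preds <- predict(fit, dat[test.idx,]);
mean(preds == labels[test.idx]);                             # proportion correct on the test set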