Friday, August 17, 2012

what MVPA detects

MVPA is not a universal solution for fMRI; it is always necessary to think carefully about which types of activity changes you want to detect.

For example, suppose these are the timecourses for two people completing a task with two types of trials. It's clear that in both people there is a very strong effect of trial type: for person #1 the BOLD goes up during trial type A and down during trial type B; the reverse is true for person #2.

"Standard" MVPA (e.g., a linear SVM) will detect both of these patterns equally well: there is a consistent difference between trial types A and B in both people. In addition, the difference in direction is usually not reflected in the analysis: often only each subject's accuracy is taken to the second level.
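To make that concrete, here is a toy sketch with invented numbers (a simple nearest-class-mean classifier with leave-one-out cross-validation standing in for an SVM, and a single "voxel" per person): both people classify perfectly despite opposite-direction effects, so once only accuracies go to the second level, the sign of the effect is gone.

```python
# Toy illustration (invented numbers, not real fMRI data): two "subjects"
# with opposite-direction effects both yield perfect classification
# accuracy, so direction is lost when only accuracy reaches the second level.

def nearest_mean_accuracy(a_trials, b_trials):
    """Leave-one-out accuracy of a nearest-class-mean classifier
    on single-voxel trial values."""
    trials = [(x, "A") for x in a_trials] + [(x, "B") for x in b_trials]
    correct = 0
    for i, (x, label) in enumerate(trials):
        rest = trials[:i] + trials[i + 1:]          # leave one trial out
        mean_a = sum(v for v, l in rest if l == "A") / sum(1 for _, l in rest if l == "A")
        mean_b = sum(v for v, l in rest if l == "B") / sum(1 for _, l in rest if l == "B")
        guess = "A" if abs(x - mean_a) < abs(x - mean_b) else "B"
        correct += guess == label
    return correct / len(trials)

# person #1: BOLD goes up during trial type A, down during B
p1_a, p1_b = [1.0, 1.2, 0.9, 1.1], [-1.0, -0.8, -1.1, -0.9]
# person #2: the reverse
p2_a, p2_b = [-1.0, -1.2, -0.9, -1.1], [1.0, 0.8, 1.1, 0.9]

acc1 = nearest_mean_accuracy(p1_a, p1_b)    # perfect for person #1
acc2 = nearest_mean_accuracy(p2_a, p2_b)    # perfect for person #2

# an "amount of BOLD" (A minus B) contrast taken to the second level:
d1 = sum(p1_a) / 4 - sum(p1_b) / 4          # strongly positive
d2 = sum(p2_a) / 4 - sum(p2_b) / 4          # strongly negative
group_effect = (d1 + d2) / 2                # near zero: directions cancel
```

Both subjects contribute an accuracy of 1.0, while the averaged directional contrast is essentially zero.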

This can be a feature or a bug, depending on the hypothesis: if you want to identify regions with consistent differences in activation in each person, regardless of what those differences are, it's a feature. If you want to identify regions with a particular sort of difference in activation, it can be a bug.

My suggestion is usually that if what you want to detect is a difference in the amount of BOLD (e.g., "there will be more BOLD in this brain region during this condition than during that condition"), then it's probably best to use some sort of mass-univariate/GLM/SPM analysis. But if you want to detect consistent differences regardless of the amount or direction of BOLD change (e.g., "the BOLD in this brain region will differ between my conditions"), then MVPA is more suitable.

Note also that a linear SVM is perfectly happy to detect areas in which adjacent voxels have opposite changes in BOLD: the two timecourses above could fall within the same ROI, and that ROI would still be detected quite well as an informative area. As before, this can be a feature or a bug. So, again, if you want to detect consistent regional differences in the overall amount of BOLD, you probably don't want to use "standard" MVPA.
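A toy sketch of that last point (invented two-voxel numbers, with a fixed linear rule standing in for the weights an SVM would learn): the ROI-average signal barely differs between conditions, so a univariate test on the regional mean sees almost nothing, yet the two-voxel pattern separates the conditions perfectly.

```python
# Hypothetical two-voxel "ROI": voxel 1 increases for condition A while
# its neighbor voxel 2 decreases, and vice versa for condition B.

trials_a = [(1.0, -1.0), (1.1, -0.9), (0.9, -1.2)]   # (voxel1, voxel2)
trials_b = [(-1.0, 1.0), (-1.1, 0.9), (-0.9, 1.2)]

# univariate view: average over the ROI's voxels, per trial -- near zero
# for both conditions, so the regional mean carries almost no signal
roi_mean_a = [sum(t) / 2 for t in trials_a]
roi_mean_b = [sum(t) / 2 for t in trials_b]

# multivariate view: the linear rule w = (1, -1) (the sort of weighting a
# linear SVM could learn here) separates the conditions perfectly
def decision(trial, w=(1.0, -1.0)):
    return sum(x * wi for x, wi in zip(trial, w))

all_a_positive = all(decision(t) > 0 for t in trials_a)
all_b_negative = all(decision(t) < 0 for t in trials_b)
```

The opposite-direction voxels cancel in the regional average but reinforce each other in the weighted pattern, which is exactly why such an ROI looks "informative" to MVPA while showing no overall BOLD difference.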

Thursday, August 2, 2012

Nipype: comments from Yaroslav

Yaroslav Halchenko was kind enough to provide some detailed feedback on my previous post, which he allowed me to excerpt here.
"a decent understanding of both nipype and the programs it's calling (from fsl or whatever)."
Yaroslav: Yes and no. As with any software, a decent understanding is desirable but not strictly necessary. nipype comes with "pre-cooked" workflows for some standard analyses, so you would just need to tune up the script/parameters to input your data/design.
"as most useful to a skilled programmer wanting to run several versions of an analysis" 
Yaroslav: That is one of the cases. I could argue that the most useful is efficient utilization of computing resources, i.e., running many computations in parallel (e.g., per functional run, per subject, etc.).
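The flavor of per-subject parallelism he describes, which nipype manages automatically (including on clusters via its execution plugins), can be sketched with just the Python standard library; `preprocess` here is a hypothetical stand-in for a real step, and real pipelines would use process- or cluster-level parallelism rather than threads.

```python
# Sketch only: fan one preprocessing step out across subjects in
# parallel, the kind of scheduling nipype handles for you.
from concurrent.futures import ThreadPoolExecutor

def preprocess(subject_id):
    # placeholder for per-subject work (e.g., motion-correcting one run)
    return f"{subject_id}: done"

subjects = ["sub01", "sub02", "sub03", "sub04"]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(preprocess, subjects))
```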

Also -- no one runs their analysis once with all parameters decided a priori and without "tuning them up" -- that is also where nipype comes in useful, since it is smart enough to recompute only what needs to be recomputed, thus possibly helping to avoid human errors.
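That "recompute only what needs to be recomputed" behavior comes, roughly, from nipype hashing each node's inputs and rerunning the node only when the hash changes. A minimal standalone sketch of the idea (hypothetical `fwhm` smoothing parameter; this is not nipype's actual code):

```python
# Cache a step's output keyed by a hash of its inputs, so rerunning
# with unchanged parameters skips the expensive work.
import hashlib
import json

_cache = {}
run_count = 0   # counts how many times the "expensive" work actually ran

def cached_step(params):
    global run_count
    key = hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()
    if key not in _cache:
        run_count += 1                                   # expensive work happens here
        _cache[key] = f"smoothed with fwhm={params['fwhm']}"
    return _cache[key]

cached_step({"fwhm": 6})   # computed
cached_step({"fwhm": 6})   # same inputs: served from cache
cached_step({"fwhm": 8})   # parameter changed: recomputed
```

After those three calls the expensive work has run only twice, which is the property that lets you re-tune one parameter without redoing the whole pipeline.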

As for a "skilled programmer" -- well... somewhat. Scripting nipype could perhaps be easier, but altogether it is not that bad, actually. Just as with any scripting/hacking, with practice comes (greater) efficiency.

Jo: I saw mentions of using it in parallel. Our computer support people had a devil of a time (and basically failed) with python on our shared systems in the past. But we're also having problems with matlab/spm scripts tying up far too much of the systems and not getting properly managed. So it might be worth a closer look for us.
"It's often necessary to check initial output before moving to the next step in the pipeline; these errors would need to be caught somehow (go back and check intermediate files?)" 
Yaroslav: You could indeed code your "pipeline" incrementally and check all the intermediate results... or just run it once, then check them, and rerun the pipeline if you need to change any of the parameters. I do not see it being much different from what you would get with other software -- once again, it boils down to how particular/anal you want to be about checking your data/results. Also, you could code/assemble pieces which would provide you with reports of the specific processing steps, so you could later glance over them rapidly.
"Nipype will need to be updated any time one of the underlying programs is updated." 
Yaroslav: Again -- yes and no -- nipype (interfaces) would need to be updated only if the corresponding program's command-line interface changes in some non-backward-compatible fashion. Most software suites avoid such evil actions; but yes -- nipype would need to be (manually) updated to accommodate new command-line options, and at times to remove some if they were removed from the original software.

Jo: And I would think the defaults, too. It'd be troublesome if, say, a default Nipype motion-correction routine used different algorithm settings than the GUI version.

Yaroslav: Well -- they are trying to match the defaults, and as long as they don't change and you do not mix GUI/non-GUI -- how would you know that the GUI's are different/better?

Jo: I don't know that the GUI options are better. But practically speaking, many people (at least with spm) use the default options for most choices, so I was thinking we'd need to confirm that the options are the same whether run through the GUI or Nipype, just for consistency.
"I've had terrible troubles before with left/right image flipping and shifting centerpoints when changing from one software package to another (and that's not even thinking about changing from one atlas to another). Trying to make packages that have different standard spaces play nicely together strikes me as very non-trivial..."
Yaroslav: RIGHT ON -- that is indeed one of the problems I have heard about (mixing AFNI and FSL, IIRC), and to tell the truth I am not sure how far nipype could go toward assuring correct orientation... if that is possible at all -- it would be a good question for the nipype folks, I guess. BUT many tools do play nicely together without doing all kinds of crazy flips, so, although it is a valid concern, it is manageable (run the pipeline and verify correct orientation by inspecting results through the steps) and might be limited to a small set of abusers.

"... Second, would there be a benefit to using Nipype if the pipeline only includes one program (e.g. why Nipype instead of spm8 batching)?"
Yaroslav: Well -- not everyone is using spm8, and those tools might lack batch processing/dependency tracking, etc... and even with spm8 -- you might eventually like to add a few additional steps provided by other toolkits, and would be ready to do so with ease (given you also check for possible flips).