tag:blogger.com,1999:blog-57378749590058525522024-03-14T01:17:22.819-05:00MVPA Meanderingsmusings on fMRI analysis, quality control, science, ...Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.comBlogger221125tag:blogger.com,1999:blog-5737874959005852552.post-6385589912230516892024-02-23T16:56:00.001-06:002024-02-23T16:56:34.447-06:00OHBM 2023 linksI was a co-organizer (with Paul Taylor, Daniel Glen, and Richard Reynolds, <a href="https://afni.nimh.nih.gov/Staff" target="_blank">all of afni fame</a>) of the "<a href="https://ww6.aievolution.com/hbm2301/index.cfm?do=ev.viewEv&ev=1201" target="_blank">Making Quality Control Part of Your Analysis: Learning with the FMRI Open QC Project</a>" course at <a href="https://www.humanbrainmapping.org/i4a/pages/index.cfm?pageid=4114" target="_blank">OHBM 2023</a>. <a href="https://osf.io/qaesm/" target="_blank">FMRI Open QC Project</a> participants described how they approached the Project datasets, and we had several general QC discussions. Thanks to all participants and attendees! I hope we can keep making progress towards a common vocabulary for fMRI QC and encourage researchers to include it if they do not already.<div><br /></div><div>The session was unfortunately not recorded live, though speakers submitted recordings in advance. OHBM is supposed to make all of these recordings available; in the meantime I've linked material already public.<p></p><ul style="text-align: left;"><li><a href="https://neurobren.com/" target="_blank">Brendan Williams</a>, "Reproducible Decision Making for fMRI Quality Control"</li><li><a href="https://orcid.org/0000-0002-1668-9629" target="_blank">Céline Provins</a>, "Quality control in functional MRI studies with MRIQC and fMRIPrep"</li><li><a href="https://orcid.org/0000-0001-8456-5647" target="_blank">Daniel Glen</a>, "Quality control practices in FMRI analysis: philosophy, methods and examples using AFNI"</li><li><a href="https://fim.nimh.nih.gov/profiles/daniel-handwerker-phd" target="_blank">Dan Handwerker</a>, "The art and science of using quality control to understand and improve fMRI data" <a href="https://fim.nimh.nih.gov/sites/default/files/2023-07/Handwerker_QCEducationSession_small.pdf" target="_blank">slides</a></li><li><a href="http://yanlab.psych.ac.cn/" target="_blank">Chao-Gan Yan</a>, "Quality Control Procedures for fMRI in DPABI"</li><li><a href="https://www.psychiatry.wisc.edu/staff/birn-rasmus/" target="_blank">Rasmus Birn</a>, "Quality control in resting-state functional connectivity: qualitative and quantitative measures"</li><li><a href="https://www.dixin.info/" target="_blank">Xin Di</a>, "QC for resting-state and task fMRI in SPM"</li><li>Jo Etzel, "Efficient evaluation of the Open QC task fMRI dataset" <a href="https://osf.io/k36dg" target="_blank">video</a> </li><li><a href="https://www.kumc.edu/rchambers.html" target="_blank">Rebecca Lepping</a>, "Quality Control in Resting-State fMRI: The Benefits of Visual Inspection"</li><li><a href="https://fmorfini.github.io/" target="_blank">Francesca Morfini</a>, "Functional Connectivity MRI Quality Control Procedures in CONN"</li></ul><p></p><p><br /></p><p>I also presented a poster, "<a href="https://osf.io/hj97t" target="_blank">Which Acquisition? Choosing protocols for task fMRI studies</a>", #700. Here's some of the introduction and conclusion for an abstract. 
The <a href="https://openneuro.org/datasets/ds001399/versions/2.0.0" target="_blank">test data is already public</a>; the code isn't written up properly, but I could share if anyone is interested.</p><blockquote>
When planning a task fMRI study, one necessary choice is the acquisition sequence. Many are available, with recommendations varying with hardware, study population, brain areas of interest, task requirements, etc.; it is rare to have only one suitable option. Acquisition protocols for task studies can be difficult to evaluate, since metrics like tSNR are not specific to task-related activity. But task designs can make choosing easier, since there is a known effect to compare the candidate acquisition protocols against. <div><br /></div><div>The procedure illustrated here will rarely make the choice of acquisition completely unambiguous, but can indicate which to avoid, and give the experimenters confidence that the chosen sequence will produce usable data. After choosing the acquisition, more piloting should be performed with the study tasks to confirm that image quality and response clarity are sufficient and as expected. </div></blockquote><p><br /></p><p>... it's taken me so long to finish this post (started August 2023!) that I'm publishing it without adding a proper explanation of the protocol-choosing logic. Take a look at the poster pdf, and please ask questions or otherwise nudge me for more information if you're interested.</p><div></div></div>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-12992423097770099892023-06-08T12:29:00.003-05:002023-06-08T12:29:37.203-05:00US researchers: use extra caution with participant gender-related info<p>Last summer <a href="https://mvpa.blogspot.com/2022/09/update-us-researchers-can-guarantee.html" target="_blank">I wrote several times</a> about pregnancy-related data suddenly becoming much more sensitive and potentially damaging to the participant if released. Unfortunately, now we must add transgender status, biological sex at birth (required, e.g., for <a href="https://nda.nih.gov/nda/nda-tools.html" target="_blank">GUID creation</a>), gender identity, relationship status, and more to the list of data requiring extra care. </p><p>The <a href="https://grants.nih.gov/grants/policy/nihgps/HTML5/section_4/4.1.4_confidentiality.htm#Certificates" target="_blank">NIH Certificate of Confidentiality</a> thankfully means that researchers can't be required to release data on something like someone's abortion or transgender history for criminal or other proceedings. We are responsible for ensuring that our data is handled and stored securely, so that it isn't accidentally (or purposely) shared, possibly causing the participant harm. I suggest researchers review their data collection with an eye towards information that may be more sensitive now than it was in the past; is this information required? If so, consider how to store and use it securely and safely.</p><p>Also consider what you ask potential participants during screening (and how screening is done in general): people may not wish to answer screening questions about things like pregnancy or hormone treatments. If these possibly-sensitive questions must be asked, consider how to do so while minimizing potential discomfort or risk.</p><p>For example, one of our studies can't include pregnant people, so we must ask about pregnancy during the initial phone screenings. We used to ask potential participants about pregnancy separately, but then changed the script so that this sensitive question was in a list, and the participant is asked if any apply. 
This way, the participant doesn't have to state explicitly that they are pregnant, and the researcher doesn't have to make any specific notes or respond in a specific way (e.g., we don't want them to say something like "sorry Jane Doe, but you can't be in our study now because you're pregnant").</p><p>Here's the relevant part of the new screening script:</p><div></div><blockquote><div>In order to determine your eligibility for our study, I need to ask you some questions. This will take about 15 minutes. </div><div><br /></div><div>Before we collect your demographic information, I will read you a list of four exclusionary criteria. If any of these describe you, please answer yes after I read them all; if none apply, please answer no. You do not need to say which ones apply. </div><div><ul style="text-align: left;"><li>You are a non-native English speaker (you learned to speak English as an adult); </li><li>You are over the age of 45 or under the age of 18; </li><li>You are pregnant or breastfeeding; </li><li>You were born prematurely (before 37 weeks, or if twin, before 34 weeks) </li></ul></div><div> Do any of these describe you?
Yes (I am sorry, you do not qualify to be in our study) or No (continue with questions)
</div></blockquote><div></div><div><br /></div><div>We switched to this "any of the above" style screening script for pregnancy last summer, and it has been working well. We recently reviewed our procedures again, and confirmed that we do not ask questions about sex or gender status or history. But if we did, we'd be looking closely at how exactly the questions were asked and responses recorded, with the aim of collecting the absolute minimum of information required.</div>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-30144466415612919292023-03-31T11:16:00.001-05:002023-03-31T11:16:31.270-05:00bugfix for niftiPlottingFunctions.R, plot.volume()<p>If you use or adapted my plot.volume() function from <a href="https://osf.io/k8u2c" target="_blank">niftiPlottingFunctions.R</a>, please update your code with the version released 30 March 2023. This post illustrates the bug (reversed color scaling for negative values) below the jump; please read it if you've used the function, and contact me with any questions. </p><p><b>Huge thanks </b>to <a href="https://dcl.wustl.edu/people/tan-nguyen/" target="_blank">Tan Nguyen</a> for spotting and reporting the color discrepancy! </p><span><a name='more'></a></span><p>These images show the effect of the bug. These are volume and surface versions of t-statistics for a GLM regressor; a strong visual response is expected. The data was preprocessed to both surfaces and volumes, and separate GLMs run for each, so we expect minor (but only minor) differences between the two versions.</p><p>Here is the regressor plotted with the same color scaling used for both volume and surface, using the old (buggy) version of plot.volume():</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsvAhHvmh2E3wpm2jZdknEm_95Ty6mFQI0k-P1-b5w09bkMpc1BD2RqG6_XOuT1jSai0iI0HkeggGWEC3AnNlEdXL2HxTzhfLFQ32mOs2CPGqlAj0IdzW0bqKWZOKzWpS1en-6t2dMkMZvfagVhaZGazohP8Mo_oG6qlE4dHs-5vH_iCX_dAgmCmN2/s796/bug.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="320" data-original-width="796" height="129" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsvAhHvmh2E3wpm2jZdknEm_95Ty6mFQI0k-P1-b5w09bkMpc1BD2RqG6_XOuT1jSai0iI0HkeggGWEC3AnNlEdXL2HxTzhfLFQ32mOs2CPGqlAj0IdzW0bqKWZOKzWpS1en-6t2dMkMZvfagVhaZGazohP8Mo_oG6qlE4dHs-5vH_iCX_dAgmCmN2/s320/bug.jpg" width="320" /></a></div><div>Notice that the hot (positive t-values) colors are of more-or-less the same intensity and in the same areas in both the volume and surface versions. But the negative values are not: the surface has mostly dark blue for negative values, but the volume has mostly light blue. The surface version is correct: most of the negative voxels/vertices are just barely past the -3 plotting threshold, and so <i>should </i>be dark blue. 
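<p>To make the intended behavior concrete, here is a minimal R sketch of the thresholded color lookup (illustration only, not the actual plot.volume() code; the function and argument names are invented). Each suprathreshold value's magnitude is binned between the plotting threshold and maximum, and that bin indexes a dark-to-bright color ramp; the bug behaved as if the cold (negative) ramp was indexed in reverse.</p><pre>
# illustration only: NOT the code from niftiPlottingFunctions.R.
# hot and cold ramps are ordered dark (just past threshold) to bright (at/past maximum).
map.colors <- function(vals, threshold=3, maximum=8, n.cols=20) {
  hot.cols <- colorRampPalette(c("darkred", "red", "yellow"))(n.cols);
  cold.cols <- colorRampPalette(c("darkblue", "blue", "cyan"))(n.cols);
  out.cols <- rep(NA_character_, length(vals));   # NA: subthreshold, not plotted

  pos.ids <- which(vals >= threshold);
  neg.ids <- which(vals <= -threshold);

  # bin |value| into 1..n.cols; values at or past the maximum get the top (brightest) bin
  to.bin <- function(x) { pmax(pmin(ceiling(n.cols*(abs(x)-threshold)/(maximum-threshold)), n.cols), 1); }

  out.cols[pos.ids] <- hot.cols[to.bin(vals[pos.ids])];
  out.cols[neg.ids] <- cold.cols[to.bin(vals[neg.ids])];   # the buggy version effectively indexed this ramp backwards
  out.cols;
}
map.colors(c(-3.01, -5, -8));   # should return dark blue, mid blue, cyan
</pre>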
</div><div><br /></div><div>Below is the same image using the corrected function; now the volume and surface versions are both mostly dark blue:</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibLsxKBobU_9nXtB85Cagsp0mFIe5jNbY5jYthGCOuVomw_A2xVB5cLfyq2svkytoLoz3jkxQZuV50KoeUQZWmp4RExx4Wzc--xums7KHcG_meixOHRasCEdXefRZGvWs6gjh1KSldqe-Pgj0CtDvLYJDWEnGPRUtr-xrRHnrmqQ-y83UQY9Bn9q-t/s804/fixed.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="320" data-original-width="804" height="127" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibLsxKBobU_9nXtB85Cagsp0mFIe5jNbY5jYthGCOuVomw_A2xVB5cLfyq2svkytoLoz3jkxQZuV50KoeUQZWmp4RExx4Wzc--xums7KHcG_meixOHRasCEdXefRZGvWs6gjh1KSldqe-Pgj0CtDvLYJDWEnGPRUtr-xrRHnrmqQ-y83UQY9Bn9q-t/s320/fixed.jpg" width="320" /></a></div><p>The plot.volume() bug was that the color scaling was reversed for negative values (only); positive values were not affected, nor were my surface plotting functions. The reversal caused the incorrect shade of blue to be used, but did not affect the thresholding: if a voxel had a value too close to zero to be colored, it was not, even with the bug. But, with the bug, a voxel just barely over the threshold (for this example, -3.01) was plotted with the strong-effect color (teal), not the expected barely-over-threshold color (dark blue).</p><p>The negative-color error is obvious now when I look at the images in this post ... but I missed it (for literally years) until Tan pointed it out. I think I missed it partly because I don't routinely view surface and volume versions side-by-side as above, but even more so because in our analyses we are pretty much always more interested in positive than negative values; I focused on the hot colors and didn't notice the illogically-bright shades of blue. </p><p>I'm pointing out this bug and my oversight since the code is public and may be used by others, but also to illustrate that everyone occasionally writes buggy code and misses weird output. More people using/reviewing code is good because it helps to catch bugs/errors/weirdness; everyone has blind spots on occasion. Especially with scientific work, if an outcome/plot/result/whatever seems strange, please ask about it - that's the way many errors are found, and errors that aren't found can't be corrected.</p>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-26001113421595257482023-03-02T12:07:00.000-06:002023-03-02T12:07:05.004-06:00reasonable motion censoring thresholds?<p>Recent participants have gotten me thinking (yet again) about the different types of motion during fMRI; causes and consequences. And more immediately practical: which motion censoring thresholds might be reasonable for particular tasks and analyses. </p><p>I've long used (and <a href="https://mvpa.blogspot.com/2017/05/task-fmri-motion-censoring-scrubbing-2.html" target="_blank">recommended</a>) FD > 0.9 as a censoring threshold for our event-related task fMRI studies (not functional correlation or resting state-type analyses). 0.9 is more lenient than many use for task fMRI; e.g., <a href="https://www.ncbi.nlm.nih.gov/pubmed/23861343" target="_blank">Siegel 2014</a> advises 0.5 FD for adults (our population), 0.9 for kids or clinical populations. 
(I need to update <a href="https://mvpa.blogspot.com/2017/05/task-fmri-motion-censoring-scrubbing-2.html" target="_blank">a previous post</a>; I'd misread Siegel's recommendations.)</p><p>Consider the following motion plot of a run from a recent participant, using my usual conventions (grey lines at one-minute intervals, frames along the x-axis); see <a href="https://www.frontiersin.org/articles/10.3389/fnimg.2023.1070274/full" target="_blank">this QC demonstration paper</a>, the <a href="https://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial-basic.html" target="_blank">DMCC55B dataset descriptor</a>, etc. for more explanation (and code). The lower panel shows FD, with the red line at the FD 0.9 censoring threshold, and red x marking censored frames; the grey horizontal line is at FD 0.5. </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkhLmhtJCQxBuPtEoNKYB9_7Go1hpCBSDHIPlGmkgWBAOFZVhqneiNmKiObIsYUb_S0jWtDwruEB-G6c4Ps3zWLSb0mQhvdsCjFfuMqBZ3M4WsIPyLWsCU0jbECt3ecYNpQaIgkEE1t_wFpxUlxMTWEHVtk0sgoIAKVvwF3VLheZwvljH6NoQ3D-PD/s1008/motion.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="401" data-original-width="1008" height="127" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkhLmhtJCQxBuPtEoNKYB9_7Go1hpCBSDHIPlGmkgWBAOFZVhqneiNmKiObIsYUb_S0jWtDwruEB-G6c4Ps3zWLSb0mQhvdsCjFfuMqBZ3M4WsIPyLWsCU0jbECt3ecYNpQaIgkEE1t_wFpxUlxMTWEHVtk0sgoIAKVvwF3VLheZwvljH6NoQ3D-PD/s320/motion.JPG" width="320" /></a></div><div>I interpret this as a run with minimal overt head motion, but pronounced <a href="https://mvpa.blogspot.com/2017/09/yet-more-with-respiration-and-motion.html" target="_blank">apparent motion</a> (from breathing, strongest in trans_y, consistent with the AP encoding direction). Zero frames are censored at an FD > 0.9 threshold (red line); 130 are censored at FD > 0.5 (grey line). There are 562 frames in the run, so 130/562 = 0.23 of the frames censored at FD 0.5, and we would drop the run at our usual criterion of < 20% censored frames.</div><p>Contrast the above plot with the following (marked with a 2 in the upper left); the same task, scanner, etc., but a different participant:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_Y3qb0XyCobNQIqVmwzonl0y171USWvAJU-EtVpXav9BntQO5O3lqb2OBb45771TbD-ynb2udXWGByBOvr-5KKMkLLz6V9dHFtPHIphu2oqxb7MwwPXk68awCi4l1vlaxGuK-yLVilORs4ug8ty5hlAtk5XLdTpOMCe49E3n-Skwcbtm16kWGL71d/s1008/motion2.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="422" data-original-width="1008" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_Y3qb0XyCobNQIqVmwzonl0y171USWvAJU-EtVpXav9BntQO5O3lqb2OBb45771TbD-ynb2udXWGByBOvr-5KKMkLLz6V9dHFtPHIphu2oqxb7MwwPXk68awCi4l1vlaxGuK-yLVilORs4ug8ty5hlAtk5XLdTpOMCe49E3n-Skwcbtm16kWGL71d/s320/motion2.JPG" width="320" /></a></div><p>I'd characterize plot 2 as having more pronounced overt than apparent motion; there are oscillations in the trans_y, but these are dwarfed by the head motions, e.g., in the middle of the first minute. Looking at the censoring, 13 frames (corresponding to the largest overt head motions) are marked for censoring with FD 0.9; 40 are marked with FD 0.5, corresponding to more of the overt head motions. 
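<p>(An aside for anyone wanting to reproduce this sort of bookkeeping: below is a minimal R sketch, assuming fmriprep-style confounds files and their framewise_displacement column; the file name in the commented example is hypothetical, and other pipelines may compute FD a bit differently.)</p><pre>
# count and proportion of frames exceeding an FD censoring threshold, plus whether
# the run would be dropped; assumes an fmriprep-style confounds .tsv file.
fd.summary <- function(confounds.path, fd.threshold=0.9, max.censored=0.2) {
  conf.tbl <- read.delim(confounds.path, na.strings="n/a");   # fmriprep writes "n/a" for frame 1
  fd <- conf.tbl$framewise_displacement;
  censored <- which(fd > fd.threshold);    # frames to censor
  data.frame(n.frames=length(fd), n.censored=length(censored),
             prop.censored=round(length(censored)/length(fd), 3),
             drop.run=(length(censored)/length(fd)) > max.censored);
}
# example call (hypothetical file name):
# fd.summary("sub-01_task-Axcpt_run-1_desc-confounds_timeseries.tsv", fd.threshold=0.5)
</pre>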
40/562 = 0.07, well under the 20% censoring dropping threshold.</p><h4 style="text-align: left;">musings</h4><p>To my eye, the 0.5 FD threshold is pretty reasonable in the second case, since it censors more of the overt head motion irregularities, and only those spikes. But for the first plot the 0.5 FD threshold seems far too aggressive: censoring part of every few breaths, 23% of the total frames. What do you think?</p><p>I hope to do some proper analyses of the impact of different amounts of apparent vs. overt motion on statistical analyses, but it is not a trivial problem, particularly with task entrainment. (Synchronizing breathing to task timing.)</p><p>As a final bit of food for thought, here are the <a href="https://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial-basic_18.html" target="_blank">tSNR and sd</a> images for each of the two runs, without censoring (all 562 frames), after preprocessing, and with the same color scaling. The first strikes me as higher quality, despite the greater (> 0.5 FD) censoring. I believe apparent motion could have less of an impact on image quality than overt because the head is not actually moving, and so not creating the attendant <a href="https://mvpa.blogspot.com/2021/09/yes-im-still-glad-were-censoring-our.html" target="_blank">magnetic disruptions</a>; the differences are clear to the eye when viewing these types of runs as movies, but it's not clear how those differences translate to statistical analyses.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikk4M1Y_bF9Dhe9absuFR432cxEn2GwhK6amu8Av-bpUbvVGNSBuBH55BFW55WWI7td7-ZhxCGOKzoxk1yrn1eE6OZd9efp8tw3UvQZiMt3oYAQIQrh81h2KlS1a2e_wWIdcGHDUZkYrEez8wLd30mellrtwb6scNDt168zC28TFnOCNO1_Aj3S4WI/s937/tsnr.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="397" data-original-width="937" height="136" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikk4M1Y_bF9Dhe9absuFR432cxEn2GwhK6amu8Av-bpUbvVGNSBuBH55BFW55WWI7td7-ZhxCGOKzoxk1yrn1eE6OZd9efp8tw3UvQZiMt3oYAQIQrh81h2KlS1a2e_wWIdcGHDUZkYrEez8wLd30mellrtwb6scNDt168zC28TFnOCNO1_Aj3S4WI/s320/tsnr.JPG" width="320" /></a></div><p><br /></p>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-70696861327188785992022-12-21T14:42:00.000-06:002022-12-21T14:42:32.392-06:00What happened in this fMRI run? ... happened again.<p>Back in July I <a href="https://mvpa.blogspot.com/2022/07/what-happened-in-this-fmri-run.html" target="_blank">posted about a strangely-failed fMRI run</a>, and yesterday I discovered that we had another case not quite two weeks ago. This is the same study, scanner (3T Siemens Prisma), headcoil (32 channel), task, and acquisition protocol (CMRR MB4) as the July case, but a different participant. I've contacted our physicists, but we probably can't investigate properly until after the holidays, and are hampered by no longer having access to some of the intermediate files (evidently some of the more raw k-space/etc. files are overwritten every few days). </p><p>I've asked our experimenters to be on the lookout, and while hopefully it won't happen again, if it does, I hope they can catch it during the session so all the files can be saved. 
If anyone has ideas for spotting this in real time, please let me know.</p><p><b>A possibly-relevant data point:</b> the participant asked to have the earbuds adjusted after the first task run. The technician pulled the person out of the bore to fix the earbuds, but did not change the head position, and did not do a new set of localizers and spin echo fieldmaps before starting the second task run (the one with the problem). I've recommended that the localizers and spin echo fieldmaps be repeated whenever the person is moved out of the bore, whether they get up from the table or not, but the technician for this scan did not think it necessary. What are your protocols? Do you suggest repeating localizers? No one entered the scanning room before the problematic July run, so this (pulling the person out) might be a coincidence.</p><p>Here's what the this most recent case looks like. First, the three functional runs' DICOMs (frame 250 of 562) open in <a href="https://www.nitrc.org/projects/mango/" target="_blank">mango</a>, first with scaling allowed to vary with run:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrshXm06_MiXgcIGhw3PfqwCBqejHfSiYXspJqpvEBLrDKjr7TYwbZAJWYh-cA-3UOX8mOg8c8W9XFQKnky7c01FdnecwEo6G4lXzVdBL_2Rc6a79D3r1aPWVlwi7MQlZO_69nxeWDUV2qKUmCvcqsC7NkCaq261f83c-5zOWiHnmvABv16UsF11u3/s1550/autoscale.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="742" data-original-width="1550" height="153" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrshXm06_MiXgcIGhw3PfqwCBqejHfSiYXspJqpvEBLrDKjr7TYwbZAJWYh-cA-3UOX8mOg8c8W9XFQKnky7c01FdnecwEo6G4lXzVdBL_2Rc6a79D3r1aPWVlwi7MQlZO_69nxeWDUV2qKUmCvcqsC7NkCaq261f83c-5zOWiHnmvABv16UsF11u3/s320/autoscale.JPG" width="320" /></a></div><div><br /></div><div>Then with scaling of 0 to 10000 in all three runs, showing how much darker run 2 is:</div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioP-SMgiekIjkmMLFZVS5dfTDZpQtc5ymLtiAYlnyOS8_kyznrBYaMYDrt9StwEY6Lx-OnoCAO-CnT4n-gLdfsciQ4l66ilqNotiHsHXBY9SSf5tXxozE0Cw5Dh5ZtIviOSP2NAFPscy_y5F7Lg7w_eBj0zHqSs7FLzpL8CdrCFoCifWgHxdNBlNh_/s1548/fixed.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="731" data-original-width="1548" height="151" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioP-SMgiekIjkmMLFZVS5dfTDZpQtc5ymLtiAYlnyOS8_kyznrBYaMYDrt9StwEY6Lx-OnoCAO-CnT4n-gLdfsciQ4l66ilqNotiHsHXBY9SSf5tXxozE0Cw5Dh5ZtIviOSP2NAFPscy_y5F7Lg7w_eBj0zHqSs7FLzpL8CdrCFoCifWgHxdNBlNh_/s320/fixed.JPG" width="320" /></a></div><div><br /></div>And finally the SBRef from run 2:<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxfSqBHZkskD2Mwm97f0wWmnAxgpl64bVWfFJFNYXMbpfmujAv4rFG0rX0mM8PeTvCL58jQQ1x042enur7-AeTAGo0TMa-K5ElGA1jP8y19xJq8T6u695cwgtr8dZq5YjXh1JWBGOTL9KSGJOy3zgUTbANvLecX-3ypCrNU0j4toVJfsURziVytttp/s723/sbref.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="723" data-original-width="500" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxfSqBHZkskD2Mwm97f0wWmnAxgpl64bVWfFJFNYXMbpfmujAv4rFG0rX0mM8PeTvCL58jQQ1x042enur7-AeTAGo0TMa-K5ElGA1jP8y19xJq8T6u695cwgtr8dZq5YjXh1JWBGOTL9KSGJOy3zgUTbANvLecX-3ypCrNU0j4toVJfsURziVytttp/s320/sbref.JPG" 
width="221" /></a></div><br /><p>In July the thinking was that this is an RF frequency issue, possibly due to the FatSat RF getting set improperly, so that both fat and water were excited. But this seems hard to confirm from the DICOM header; this time, the Imaging Frequency DICOM field (0018,0084) is nearly identical in all three runs: 123.258928, 123.258924, 123.258924 (runs 1, 2, and 3 respectively), which is very similar to what it was in July (123.258803).</p>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-67688965863182191932022-11-11T10:27:00.000-06:002022-11-11T10:27:18.236-06:00mastodon <p>Like many, I've started a <a href="https://joinmastodon.org/" target="_blank">mastodon </a>account: <a href="https://fediscience.org/@JosetAEtzel" rel="me" target="_blank">@JosetAEtzel@FediScience.org</a>. My twitter account (also @JosetAEtzel) is still active, but I plan to be more on mastodon than twitter going forward. This blog is staying around, too ... who knows, maybe I'll get some of my backlog of posts written!
</p>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-5881672813104161802022-09-09T10:18:00.009-05:002023-06-07T16:48:02.265-05:00Update: US researchers CAN guarantee privacy post-Dobbs <p>The two <a href="https://mvpa.blogspot.com/2022/07/research-in-united-states-after-fall-of.html" target="_blank">previous posts</a> described my concerns about the <a href="https://grants.nih.gov/policy/humansubjects/coc/information-institutional-responsibilities.htm" target="_blank">NIH Certificate of Confidentiality</a> exceptions post-Dobbs; the vagueness of the "<a href="https://grants.nih.gov/policy/humansubjects/coc/information-institutional-responsibilities.htm" target="_blank">federal, state, or local laws" "limited circumstances" formulation</a> is troubling, since it seems that it could apply to something like a state-level prosecution for pregnancy termination.</p><p>I am happy to relay that the "federal, state, or local laws" exemption is clarified in the "<b>When can Information or Biospecimens Protected by a Certificate of Confidentiality be Disclosed?</b>" section of the <a href="https://grants.nih.gov/policy/humansubjects/coc/what-is.htm" target="_blank">What is a Certificate of Confidentiality? | grants.nih.gov</a> site: </p><p>[update 7 June 2023: the NIH website has changed a bit, but the key text below is still present, now under section <a href="https://grants.nih.gov/grants/policy/nihgps/HTML5/section_4/4.1.4_confidentiality.htm" target="_blank">4.1.4 Confidentiality</a> > <a href="https://grants.nih.gov/grants/policy/nihgps/HTML5/section_4/4.1.4_confidentiality.htm#Certificates" target="_blank">4.1.4.1 Certificates of Confidentiality</a>]</p><blockquote><p>"Disclosure is permitted only when: </p><p></p><ul style="text-align: left;"><li>Required by Federal, State, or local laws (e.g., as required by the Federal Food, Drug, and Cosmetic Act, or state laws requiring the reporting of communicable diseases to State and local health departments), <span style="background-color: #fcff01;">excluding instances of disclosure in any Federal, State, or local civil, criminal, administrative, legislative, or other proceeding</span>; [emphasis mine]</li><li>Necessary for the medical treatment of the individual to whom the information, document, or biospecimen pertains and made with the consent of such individual; </li><li>Made with the consent of the individual to whom the information, document, or biospecimen pertains; or </li><li>Made for the purposes of other scientific research that is in compliance with applicable Federal regulations governing the protection of human subjects in research."</li></ul><p></p></blockquote><p style="text-align: left;">The highlighted clause is the key clarification: the "federal, state, or local laws" exemption would <b>not </b>apply to something like a state-level prosecution for pregnancy termination, because that would be a criminal proceeding. 
And our data isn't only protected from criminal proceedings, but from civil, administrative, legislative, and others as well.</p><p style="text-align: left;">I am relieved by this exclusion, and encourage all universities and groups covered by the Certificate to include it, not only the <a href="https://grants.nih.gov/policy/humansubjects/coc/information-institutional-responsibilities.htm" target="_blank">"*Disclosure of identifiable, sensitive information (i.e., information, physical documents, or biospecimens) protected by a Certificate of Confidentiality must be done when such disclosure is required by other applicable Federal, State, or local laws." formulation</a>. </p><p style="text-align: left;">While I am relieved by this exclusion and find it sufficient guarantee that our participants' data is protected from disclosure, we will continue to minimize the amount of pregnancy-related information we collect, and use <a href="https://twitter.com/JosetAEtzel/status/1549873885067214850" target="_blank">indirect phrasing in our screening questions</a> whenever possible. Privacy and sensitivity are always important, but are especially critical now in the United States and when reproduction is involved.</p><p style="text-align: left;"><br /></p><p style="text-align: left;"><b>UPDATE </b>16 September 2022: Many universities already use the longer (with the exclusion) explanation on their HRPO websites when describing the Certificate of Confidentiality protections. A google search for "excluding instances of disclosure in any Federal, State" found many, including <a href="https://hrpp.msu.edu/help/topics/coc.html" target="_blank">Michigan State University</a>, the <a href="https://www.hrpo.pitt.edu/certificate-confidentiality" target="_blank">University of Pittsburgh</a>, the <a href="https://www.washington.edu/research/myresearch-lifecycle/manage/compliance-requirements-non-financial/information-privacy-and-security/" target="_blank">University of Washington</a>, <a href="https://research.vcu.edu/media/office-of-research-and-innovation/documents/coc_assurance_template.docx" target="_blank">Virginia Commonwealth University</a>, and <a href="https://www.ndsu.edu/research/for_researchers/research_integrity_and_compliance/institutional_review_board_irb/resources/" target="_blank">North Dakota State University</a>. Hopefully these examples can serve as templates for other institutions.</p><p style="text-align: left;">The <a href="https://grants.nih.gov/policy/humansubjects/coc/helpful-resources/suggested-consent.htm" target="_blank">NIH's example consent language</a> also includes it: "This research is covered by a Certificate of Confidentiality from the National Institutes of Health. This means that the researchers cannot release or use information, documents, or samples that may identify you in any action or suit unless you say it is okay. They also cannot provide them as evidence unless you have agreed. This protection includes federal, state, or local civil, criminal, administrative, legislative, or other proceedings. An example would be a court subpoena."</p><p style="text-align: left;"><b>UPDATE</b> 28 September 2022: Washington University in St. 
Louis changed the <a href="https://research.wustl.edu/cocs/" target="_blank">Certificate of Confidentiality description</a> to include the exclusion.</p>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-57843147272725271362022-08-05T12:33:00.002-05:002022-09-09T10:36:20.818-05:00Tracking US universities' post-Dobbs research privacy guarantees<p>UPDATE <a href="https://mvpa.blogspot.com/2022/09/update-us-researchers-can-guarantee.html" target="_blank">9 September 2022: Good news!</a> The "federal, state, or local laws" exemption is clarified in the "When can Information or Biospecimens Protected by a Certificate of Confidentiality be Disclosed?" section of the <a href="https://grants.nih.gov/policy/humansubjects/coc/what-is.htm" target="_blank">What is a Certificate of Confidentiality? | grants.nih.gov</a> site. </p><p>This post is now less relevant, so I put it below the "jump".</p><span><a name='more'></a></span><p>Many types of human-subjects research collect information related to pregnancy (e.g., date of last menstrual cycle for a circadian rhythm study; pregnancy test before imaging; questionnaires during high-risk pregnancies). After Dobbs v. Jackson Women's Health Organization, this data could expose participants to legal risk if they were charged with something like obtaining an abortion or endangering a pregnancy.</p><div><br /></div><div><a href="https://grants.nih.gov/policy/humansubjects/coc/information-institutional-responsibilities.htm" target="_blank">NIH Certificates of Confidentiality</a> protect participants' information from disclosure, but have exceptions in "limited circumstances":</div><div><blockquote>*Disclosure of identifiable, sensitive information (i.e., information, physical documents, or biospecimens) protected by a Certificate of Confidentiality <b><i>must </i></b>be done when such disclosure is required by other applicable Federal, State, or local laws. [emphasis in original]</blockquote></div><div><br /></div><div>The question, then, is whether abortion-related lawsuits would fall under the Certificate of Confidentiality's limited circumstances; could researchers be required to disclose data? If so, participants must be informed of the risk during consent, and researchers must consider whether some data can be ethically collected.</div><div><br /></div><div>These issues apply to all US researchers, but I am not aware of any official NIH-level guidance, nor universities that have issued a formal opinion. Many universities want to avoid abortion-related publicity and so are not making public statements, but at the same time are trying to quietly reassure faculty/staff/students that medical treatment and privacy are the same as they were pre-Dobbs. </div><div><br /></div><div>I believe that this public/private message disconnect cannot and should not continue indefinitely, and that proactive privacy guarantees are less potentially harmful to participants than a lawsuit or court-forced data release. My motivation for this page is that openness is generally good, and since universities and other institutions tend to adopt each other's policies, if a few take action, others will likely follow.</div><div><br /></div><div><span style="font-size: large;">This page</span> is meant to track what researchers are told about the privacy of their pregnancy-related data. Have you asked whether your university would require release of data in the case of an abortion-related lawsuit? 
If so, what was the response? Was the response formal and cite-able, such as a memo, HRPO or IRB guidance? </div><div><br /></div><div>I started the table with my own understanding of the situation at my university. Please send me (via email, twitter, or a comment on this post) what is happening at your institution. If there was a formal communication, please include its URL. Notes such as "asked HRPO 15 July, no response yet" are also welcome. I promise not to include your name/contact info unless you explicitly request otherwise (blog comments can be made anonymously).</div><div><br /></div><div>Thank You!</div><div><br /><table style="width: 100%;"><tbody><tr><th style="border: 1px solid black; width: 10%;">State</th><th style="border: 1px solid black; width: 15%;">Institution</th><th style="border: 1px solid black; width: 15%;">Date</th><th style="border: 1px solid black; width: 10%;">Formal?</th><th style="border: 1px solid black; width: 50%;">Status/Notes</th></tr><tr><td>MO</td><td>Washington University in St. Louis</td><td>2 Aug 2022</td><td>no</td><td>verbal communication that pregnancy-related research records will not be released in
an abortion lawsuit, regardless of whether the NIH Certificate of
Confidentiality is sufficient protection.</td></tr><tr><td>-</td><td>-</td><td>-</td><td>-</td></tr></tbody></table><br /><div><br /></div></div>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-52393959146397239852022-07-15T21:08:00.005-05:002022-09-09T10:34:44.923-05:00research in the United States after the fall of Roe v. Wade<p>UPDATE <a href="https://mvpa.blogspot.com/2022/09/update-us-researchers-can-guarantee.html" target="_blank">9 September 2022: Good news!</a> The "federal, state, or local laws" exemption is clarified in the "When can Information or Biospecimens Protected by a Certificate of Confidentiality be Disclosed?" section of the <a href="https://grants.nih.gov/policy/humansubjects/coc/what-is.htm" target="_blank">What is a Certificate of Confidentiality? | grants.nih.gov</a> site. </p><p>The NIH Certificate of Confidentiality <b>is </b>sufficient to protect researchers from being forced to release data if one of our participants is charged with abortion, which is great news. However, there are still many ethical concerns about collecting sensitive data unnecessarily, and I believe it is prudent to be extra aware of how pregnancy-related questions are being asked (e.g., in a phone screen), and minimize direct questions whenever possible.</p><p><br /></p><p><span style="font-size: medium;">Previous post:</span></p><p>This post is an essay-style, expanded version of messages
I've <a href="https://twitter.com/JosetAEtzel/status/1542269439940431873" target="_blank">posted on twitter</a> (@JosetAEtzel) the last few weeks, responding to the Dobbs
v. Jackson Women's Health Organization decision overturning Roe v. Wade in the
United States, and Missouri's subsequent trigger law <a href="https://news.stlpublicradio.org/health-science-environment/2022-07-03/missouri-doctors-fear-vague-emergency-exception-to-abortion-ban-puts-patients-at-risk" target="_blank">outlawing abortion except in dire emergency</a>. I hoped these issues would rapidly become outdated, but unfortunately that is
not the case; if anything they are compounding, and I very much fear no end is
in sight. I am not willing to be silent on the topic of protecting
participants, or university ethics more generally.</p>
<p class="MsoNormal">I am a staff scientist at Washington University in St. Louis, Missouri, USA, and have been here twelve years now. It's been a good place to
do research, and I have great colleagues. I work with data collected on humans, mostly task fMRI. I generally spend my time at work on analysis and hunting for
missing or weird images in our datasets, but the last few weeks I've spent a substantial amount of time hunting for pregnancy-related information in our
procedures and datasets, and seeking answers to how the legal changes affect us and our participants.</p>
<p class="MsoNormal">Our fMRI consenting protocols require the use of <b>screening
forms that ask if currently pregnant</b>; high-risk studies (PET-MR) <b>require a
pregnancy test</b> be performed immediately before the scan. These signed and dated
screening forms are retained indefinitely by the imaging center at the hospital
and/or our lab. Imaging studies routinely include pregnancy questions in the phone
screening to determine eligibility. </p>
<p class="MsoNormal">An additional source of pregnancy information in our datasets
is via studies using <b>passive sensing</b> data collection (e.g., via an app
installed on participants' phones). These can include GPS and other forms of tracking,
which could, e.g., show whether the participant spent time at a place where
abortions are possible or searched for abortion information. Previous data
breaches have happened with this type of research software, and the collection
of any GPS or other tracking information <a href="https://www.cnn.com/2022/06/24/tech/abortion-laws-data-privacy/index.html" target="_blank">raises serious privacy concerns</a>,
but my focus here is the security of this data after it is in the researchers' hands. </p>
<p class="MsoNormal">We need guarantees that we will never be asked to release this
data, even in the (appalling but <a href="https://www.acluaz.org/en/news/digital-privacy-and-abortion" target="_blank">not totally unprecedented</a>)
case that someone is charged with abortion and we are asked by a court to disclose whether the participant said that yes, they were pregnant on a particular date. </p>
<p class="MsoNormal"><a href="https://grants.nih.gov/policy/humansubjects/coc/information-institutional-responsibilities.htm" target="_blank">NIH Certificates of Confidentiality</a> protect participants' information from disclosure, but have exceptions in "limited circumstances". "*Disclosure of identifiable, sensitive information (i.e., information, physical documents, or biospecimens) protected by a Certificate of Confidentiality must be done when such disclosure is required by other applicable Federal, State, or local laws." At Washington University in St. Louis (as of 11 July 2022) we are being told it <b>might not be sufficient to rely upon the Certificate of Confidentiality</b>; that it is not "bulletproof" for state-level abortion-related lawsuits. University counsel here is still investigating, as I assume are those elsewhere.</p>
<p class="MsoNormal">I have been hoping that Washington University in St. Louis
and other research universities would promise to protect participant (and
patient) pregnancy-related information; announcing that they would <b>fight
attempts to force disclosure</b> in any abortion-related lawsuits. So far, this has not occurred. Universities
often have strong law departments and a pronounced influence on their
communities, both as large employers and venerable, respected institutions. Ethics-based
statements that some laws will not be complied with could have an outsized
influence, and serve as a brake on those pushing enforcement and passing of
ever more extreme abortion-related laws.</p>
<p class="MsoNormal">Since we currently <b>lack pregnancy-related data confidentiality guarantees</b>,
in our group we have begun efforts to lessen the chances of our participants
incurring extra risks from being in our studies - or even from being *asked* to
be in our studies. Reducing our collection of potentially sensitive information
to the absolute minimum is one step: even if subpoenaed or otherwise requested,
we will not have potentially harmful records to disclose. Concretely, we <strike>have
submitted</strike> <a href="https://twitter.com/JosetAEtzel/status/1549873885067214850" target="_blank">changed our screening procedures</a>, such that the participant is asked if any of a group of several exclusion criteria apply, only one of
which is pregnancy (rather than asking about pregnancy separately). The
participant then does not have to verbally state that they are pregnant, nor
does the experimenter have to note which of the exclusion criteria was met.</p>
<p class="MsoNormal">Participants will still need to complete the screening form immediately before scanning, but presumably anyone that reaches this stage will
respond that they are not pregnant; if they are pregnant, the scan is cancelled and the screening form destroyed. This procedure reduces risk if we assume that
recording "no, not pregnant" on a particular date has less potential legal trouble for the participant than a "yes, pregnant" response, which hopefully is the
case. However, it is not unimaginable that an abortion lawsuit could have proof from elsewhere that the participant was pregnant on a particular date before
the experiment, in which case their statement (or test result, in the case of studies requiring one) of not being pregnant on the experiment date could be
relevant and damaging. At this time we can't avoid using the forms with the pregnancy questions, but may start warning participants in advance that they
will have to respond to a pregnancy question, and that <b>we can't guarantee</b> their response will be kept private and only used in the context of the experiment.</p>
<p class="MsoNormal">The impact of the Dobbs decision (and in our case, Missouri state
abortion trigger laws) on non-reproduction-related human subjects research is
only a small subset of the harm from these laws, of course, but it is a new risk US-based researchers should consider. <b>Human subjects protections are not trivial and must not be
brushed aside</b>, even if we hope no more abortion-related legal actions will occur.
As scientists, our ethics, honor, and integrity require us to follow not just
the letter but also the spirit of guidelines like the Declaration of Helsinki; we
must work towards the absolutely best and strongest participant protections. </p>
<p class="MsoNormal">I hope that this essay has caused you to consider what data you are collecting, whether it puts your participants at new legal risk, and
what you can do to minimize such risk in the short and long terms. Immediate actions such as changing how pregnancy is asked about or stopping collection of
especially sensitive information seem to me the minimum ethically appropriate action; stronger, legally-binding guarantees of confidentiality may be needed
soon for many types of human subjects research to continue responsibly in the United States.</p><p class="MsoNormal"><br /></p>
<p class="MsoNormal"><b>UPDATE </b>3 August 2022: Our screening changes were approved, so I edited the relevant text and added a link to <a href="https://twitter.com/JosetAEtzel/status/1549873885067214850" target="_blank">the tweet</a> showing the approved version.</p>
<p class="MsoNormal">Yesterday I <a href="https://twitter.com/JosetAEtzel/status/1554544035955621889" target="_blank">tweeted that</a> a source I trust (and in a position to know) told me that Washington University in St. Louis counsel/administration told them that pregnancy-related research records will not be released in an abortion-related lawsuit, regardless of whether the NIH Certificate of Confidentiality is sufficient protection. That is good news, but I am troubled that it came via word of mouth; my source said I shouldn't "hold out" for an official statement. It is hard to be confident without something concrete; even a technically-phrased memo or HRPO website note would be encouraging. It seems that we are being asked to act as if nothing has changed post-Dobbs, and trust that everything will be fine, but that's an awfully big ask for issues this consequential.</p>
<p class="MsoNormal"><b>UPDATE </b>17 August 2022: Last month <a href="https://news.bloomberglaw.com/pharma-and-life-sciences/nih-privacy-certificates-shield-reproductive-research-post-roe" target="_blank">Jeannie Baumann wrote an article at Bloomberg Law</a> discussing questions about Certificates of Confidentiality protections, including, "It's unclear how the state law versus certificates would play out because it hasn't been tested in court."</p>
<p class="MsoNormal"><b>UPDATE</b> 26 August 2022: <a href="https://www.columbiapsychiatry.org/research-labs/sussman-lab" target="_blank">Tamara J. Sussman</a> and <a href="https://davidpagliaccio.com/">David Pagliaccio</a> published <a href="https://doi.org/10.1016/j.bpsc.2022.08.006" target="_blank">Pregnancy testing before MRI for neuroimaging research: Balancing risks to fetuses with risks to youth and adult participants</a> </p>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-89522842217188756072022-07-12T15:01:00.002-05:002022-07-13T12:16:07.186-05:00What happened in this fMRI run? <p>This is one of those occurrences (artifacts?) that is difficult to google, but perhaps someone will recognize it or have a guess.</p><p>This run is from a session in which a person completed four fMRI runs of a task sequentially. They did not get out of the scanner between these runs, nothing was changed in the protocol, no one entered the scanner room. Later participants (with the same protocol, scanner, etc.), have been fine. This study uses CMRR MB4 acquisitions, so we have an SBRef image for each run; the artifact is the same in the SBRef and functional run.</p><p>Runs 1 (not shown), 2, and 4 are normal, but run 3 is much darker than the others and has an obvious ghost-ish artifact, here are the DICOMs from each run's SBRef, allowing <a href="https://www.nitrc.org/projects/mango/" target="_blank">mango</a> to adjust the contrast in each:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnR_wHIJXT4QP045mLiVGSM5HyPNagRR6oudV4C8DtNwGsYZDrn0fhwd8qJhjVU_8zU9ch2Q70S69EpKJq_AtLUvXkOaH3E3g2qX2NytH4a-FYiQfuE65rP1LSXcyWn4shLvoh9hojgFeMioNRvYe-ts0SOiOEPUCB_Ic6ZpAkxcTsULr-T3if3CRI/s1549/sbrefs1.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="742" data-original-width="1549" height="153" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnR_wHIJXT4QP045mLiVGSM5HyPNagRR6oudV4C8DtNwGsYZDrn0fhwd8qJhjVU_8zU9ch2Q70S69EpKJq_AtLUvXkOaH3E3g2qX2NytH4a-FYiQfuE65rP1LSXcyWn4shLvoh9hojgFeMioNRvYe-ts0SOiOEPUCB_Ic6ZpAkxcTsULr-T3if3CRI/s320/sbrefs1.JPG" width="320" /></a></div><br /><p>And here they are again, with contrast set to 1-15000 in all three images:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJ6yoZ1rprTiVIJJXoCmnE8seG5qEjvnPM1XsZ5LDXh3sfFCy79PvalKHydBb2ORmYD6V3UfLwdsJRXgAWZwkh2j0YGQA-SymbBl17QOBmO3lDe3wavRh1wZ2luYRbva6L6WIgyHEO4zvQO5LRN-N4XXDzbzYllTupEybyDNuM57NC3SvdOOSyp3Ns/s1550/sbrefs2.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="754" data-original-width="1550" height="156" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJ6yoZ1rprTiVIJJXoCmnE8seG5qEjvnPM1XsZ5LDXh3sfFCy79PvalKHydBb2ORmYD6V3UfLwdsJRXgAWZwkh2j0YGQA-SymbBl17QOBmO3lDe3wavRh1wZ2luYRbva6L6WIgyHEO4zvQO5LRN-N4XXDzbzYllTupEybyDNuM57NC3SvdOOSyp3Ns/s320/sbrefs2.JPG" width="320" /></a></div><br /><p>The functional run's DICOMs are also dark and have the prominent artifact; here's a frame:</p><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNGmbQcCvWXGhKKb8AMCAHHzVwStO0lH_RiLRKedza0FRvlOjv1BdaamOOM0lrLTVqDe8lJBspU2aPlZpmSAm2F30eLgxXr1n0k6IuSqpFbk1ncuqpmg3djHV7oqgzxeVaj1syEahNJWvUJffIPzyjyuo9kKS92C0dIQkCvmp13SVChNTDnuL54oy5/s738/run3.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="738" data-original-width="505" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNGmbQcCvWXGhKKb8AMCAHHzVwStO0lH_RiLRKedza0FRvlOjv1BdaamOOM0lrLTVqDe8lJBspU2aPlZpmSAm2F30eLgxXr1n0k6IuSqpFbk1ncuqpmg3djHV7oqgzxeVaj1syEahNJWvUJffIPzyjyuo9kKS92C0dIQkCvmp13SVChNTDnuL54oy5/s320/run3.JPG" width="219" /></a></div><br /><p>When the run is viewed as a movie in mango the blood flow, small head movements, etc. are plainly and typically visible. The artifact does not appreciably shift or change over the course of the run, other than appearing to follow the (small) overt head motions (when the head nodded a bit, the artifact shifted in approximately the same way). The two surrounding runs (2 & 4) are typical in all frames (no sign of the artifact).</p><p>Given that this artifact is in the DICOMs, it's not introduced by preprocessing, and I am assuming this run is unusable. I'd like an explanation, though, if nothing else, so we can take any steps to reduce the chance of a recurrence. Our best guess at this time is some sort of transient machine fault, but that's not an especially satisfactory explanation. </p><p>Any ideas? Thanks!</p><p><br /></p><p>update 13 July 2022:</p><p>In response to Ben and Renzo's suggestions, I skimmed through the DICOM headers for fields with large differences between the three runs; if there are particular fields to look for, please let me know (this is a Siemens Prisma); I am not fluent in DICOM header! 
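<p>(In case it helps anyone do the same sort of skim: below is a rough R sketch of how header fields can be pulled and compared across runs, assuming the oro.dicom package; the file names are examples only, and Siemens private fields may not all be parsed this way.)</p><pre>
# list the DICOM header fields whose values differ between two runs' frames;
# a rough sketch using the oro.dicom package (file names are hypothetical).
library(oro.dicom);
get.hdr <- function(fname) {
  hdr <- readDICOMFile(fname)$hdr;       # header as a data.frame with name and value columns
  setNames(as.character(hdr$value), hdr$name);
}
h2 <- get.hdr("run2_frame250.dcm");      # a typical run
h3 <- get.hdr("run3_frame250.dcm");      # the dark/artifact run
shared <- intersect(names(h2), names(h3));
shared[h2[shared] != h3[shared]];        # names of fields with differing values
</pre>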
The most obvious are these, which I believe are related to color intensity, but I'm not sure if it's reporting a setting or something determined from the image after it was acquired.</p><p>run 2 (typical)</p><p></p><blockquote><p>(0028,0107) <span style="white-space: pre;"> </span>Largest Image Pixel Value <span style="white-space: pre;"> </span>32238</p><p>(0028,1050) <span style="white-space: pre;"> </span>Window Center <span style="white-space: pre;"> </span>7579</p><p>(0028,1051) <span style="white-space: pre;"> </span>Window Width <span style="white-space: pre;"> </span>16269</p><p>(0028,1055) <span style="white-space: pre;"> </span>Window Center & Width Explanation <span style="white-space: pre;"> </span>Algo1</p></blockquote><p></p><div><br /></div><p>run 3 (dark/artifact)</p><p></p><blockquote><p>(0028,0107) <span style="white-space: pre;"> </span>Largest Image Pixel Value <span style="white-space: pre;"> </span>3229</p><p>(0028,1050) <span style="white-space: pre;"> </span>Window Center <span style="white-space: pre;"> </span>1218</p><p>(0028,1051) <span style="white-space: pre;"> </span>Window Width <span style="white-space: pre;"> </span>3298</p><p>(0028,1055) <span style="white-space: pre;"> </span>Window Center & Width Explanation <span style="white-space: pre;"> </span>Algo1</p></blockquote><p></p><p><br /></p><p>run 4 (typical)</p><p></p><blockquote><p>(0028,0107) <span style="white-space: pre;"> </span>Largest Image Pixel Value <span style="white-space: pre;"> </span>31787</p><p>(0028,1050) <span style="white-space: pre;"> </span>Window Center <span style="white-space: pre;"> </span>7423</p><p>(0028,1051) <span style="white-space: pre;"> </span>Window Width <span style="white-space: pre;"> </span>15912</p><p>(0028,1055) <span style="white-space: pre;"> </span>Window Center & Width Explanation <span style="white-space: pre;"> </span>Algo1 </p></blockquote><p> </p><p>And here's yet another view from the three functional runs, in which I played with the contrast a bit. There's definitely a difference in which structures are brightest between the three.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiEkQ83QRAdHUgQwwK66YWa8_0woK2CKaCWUPIFECwgMNbowAG0or3PsbSuzWCXT9WiJAiCGSQHXRMyF1SkNZ8b6ZhwlJOcPGVhYf0hrGwqm6KLGfdFYTzKsepYxYl5QU_TcVzgRCA34huDUFujnX7-7XI4HVIIECUgs-LBjyrpYIxaQ8rLVfhLxSV/s1548/runs.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="755" data-original-width="1548" height="156" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiEkQ83QRAdHUgQwwK66YWa8_0woK2CKaCWUPIFECwgMNbowAG0or3PsbSuzWCXT9WiJAiCGSQHXRMyF1SkNZ8b6ZhwlJOcPGVhYf0hrGwqm6KLGfdFYTzKsepYxYl5QU_TcVzgRCA34huDUFujnX7-7XI4HVIIECUgs-LBjyrpYIxaQ8rLVfhLxSV/s320/runs.JPG" width="320" /></a></div><br /> <p></p><p></p>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com7tag:blogger.com,1999:blog-5737874959005852552.post-18805682544814312452022-06-13T13:39:00.002-05:002022-06-13T13:39:31.917-05:00now available: DMCC55B rejected structurals and ratings<div>We received several requests for the <b>rejected</b> <a href="https://mvpa.blogspot.com/2021/06/an-introduction-to-dmcc55b.html" target="_blank">DMCC55B</a> structural images, since comparing images of different quality can be useful for training. 
Thanks to the assistance of <a href="https://sites.wustl.edu/ccplab/people/rachel-brough/" target="_blank">Rachel Brough</a>, we have now released T1 and T2 images for the 13 DMCC55B people whose initial structural scans we rated as poor quality, as well as our ratings for both sets of images (the initial rejected images and better repeated ones). </div><div><br /></div><div>The rejected structural images (with session name "doNotUse") and ratings are in a new sub-component (<a href="https://osf.io/a7w39/" target="_blank">postPublication_rejectedStructurals</a>) of the <a href="https://osf.io/vqe92/" target="_blank">DMCC55B supplemental site</a>, rather than with the released dataset on <a href="https://openneuro.org/datasets/ds003465/" target="_blank">openneuro</a>, to avoid confusion about which images should be used for processing (use the previously-released ones available at <a href="https://openneuro.org/datasets/ds003465/" target="_blank">openneuro</a>).</div>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-3056056133508647832022-06-08T16:03:00.004-05:002022-06-08T16:07:52.486-05:00troubleshooting run-level failed preprocessing<p>Sometimes preprocessed images for a few runs are just ... wrong. (These failures can be hard to find without <b>looking</b>, one of the reasons I strongly suggest <a href="https://www.nipreps.org/qc-book/dataset-qc/Example2.html" target="_blank">including visual summaries</a> in your QC procedures; make sure you have one that works for your study population and run types.) </p><p>Here's an example of a run with "just wrong" preprocessing that I <a href="https://neurostars.org/t/repeated-fmriprep-distortion-perhaps-from-susceptibility-distortion/21519" target="_blank">posted on neurostars</a>: the images are all of the same person and imaging session, but one of the runs came out of the preprocessing seriously distorted.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://neurostars.org/uploads/default/original/2X/b/bd546bb8fc1a0ea826b41c714b4652f980cd07db.jpeg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="341" data-original-width="800" height="136" src="https://neurostars.org/uploads/default/original/2X/b/bd546bb8fc1a0ea826b41c714b4652f980cd07db.jpeg" width="320" /></a></div><br /><p>And here's another: the images from the run at the lower right are clearly tilted and squashed compared to the others:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvGsKtcngpV5YmOCjJYwCqBSC81JiM3hqL0WgtTCWt-Biph6kTHqH-Vrj0lvQzfA4R94tMlbb7DnjbVEFz8wd-DWUoNoGy60FZEF-XV6CHobnooA_wUZ-aIDI0Bm1d1NP8-E1GO5vrwBoMBO51N7UbIVSi2TS_Oxko1kh7omiWGKAPofQhmfFajBv1/s829/Capture.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="361" data-original-width="829" height="139" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvGsKtcngpV5YmOCjJYwCqBSC81JiM3hqL0WgtTCWt-Biph6kTHqH-Vrj0lvQzfA4R94tMlbb7DnjbVEFz8wd-DWUoNoGy60FZEF-XV6CHobnooA_wUZ-aIDI0Bm1d1NP8-E1GO5vrwBoMBO51N7UbIVSi2TS_Oxko1kh7omiWGKAPofQhmfFajBv1/s320/Capture.JPG" width="320" /></a></div><p>The above images are temporal means made after <a href="https://fmriprep.org/en/stable/" target="_blank">fmriprep</a> preprocessing, including transformation to the MNI template anatomy (i.e., <span style="font-family: 
courier;">_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz</span>); <a href="https://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial-basic_18.html" target="_blank">see this post</a> for more details and code. </p><h3 style="text-align: left;">How to troubleshoot this type of partial failed preprocessing?</h3><p>First, note that this is not due to the entire preprocessing pipeline failing: we have the expected set of derivative images for the participant, and everything looks fine for most runs. This suggests that the problem is with not with the pipeline, or that something is unusual about the participant.</p><p>fmriprep (and I think most preprocessing pipelines?) is <a href="https://neurostars.org/t/is-fmriprep-deterministic/5394" target="_blank">not fully deterministic</a>: if you run the same script twice with the same input files and settings the output images will not be exactly the same. They should be quite similar, but not identical. We have found that sometimes simply rerunning the preprocessing will correct the type of sporadic single-run normalization/alignment failures shown above.</p><p>If repeating the preprocessing doesn't fix the failure, or you want to investigate more before rerunning, I suggest checking the "<b>raw</b>" (before preprocessing; as close to the images coming off the scanner as practicable) images for oddities. Simply looking at <b>all</b> the images, comparing those from the run with failed preprocessing and the other (successful) runs from the same session can often make the problem apparent.</p><p>Look at the actual functional images of the problematic run (e.g., by loading the DICOMs into <a href="https://ric.uthscsa.edu/mango/" target="_blank">mango</a> and viewing as a movie): do you see anything strange? If the problematic run seems to have more-or-less the same orientation, movement, visible blood vessels, etc. as the non-problematic runs for the person, it is unlikely that the source of the problem is the functional run itself and you should keep looking. (If the functional run itself is clearly unusual/full of artifacts, it is most likely simply unusable and should be marked as missing.)</p><p>If the functional run seems fine, look at all of the <b>other </b>images used in preprocessing, especially any fieldmaps and single-band reference (SBRef) images. Depending on your acquisition you may have one or more fieldmaps and SBRef images per run, or per set of runs. For the <a href="https://sites.wustl.edu/dualmechanisms/" target="_blank">DMCC</a> we use <a href="https://www.cmrr.umn.edu/multiband/" target="_blank">CMRR multiband</a> sequences, so have an SBRef image for every functional run, plus a fieldmap each session. Both the fieldmaps and SBRef images are used in preprocessing, but differently, and if either has artifacts they will affect the preprocessing.</p><p>How an artifact in the fieldmap or SBRef affects the preprocessing can be difficult to predict; both can cause similar-looking failures. 
In the two examples above, the first was due to <a href="https://neurostars.org/t/repeated-fmriprep-distortion-perhaps-from-susceptibility-distortion/21519/8" target="_blank">artifacts in the fieldmaps</a>, the second in the SBRef.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLyKhQgb2jHxOLvLQe7FHc-3xTMWdCuUHsfTo0d7KnBjRKrFYvYWm6z0XORCVLRPiMgEtCr_B-JyyU4OmY-gQvMhJ52SEZsr02Xx5VgjLfV-wFsQnkjEJLosezu7x7TpvAsaZlgfo_YQ-NovW7kmOeAeALjhsxnk2nhAH4i3ldvciFjXVj-oNhUB46/s1429/badSBRefs.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="504" data-original-width="1429" height="113" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLyKhQgb2jHxOLvLQe7FHc-3xTMWdCuUHsfTo0d7KnBjRKrFYvYWm6z0XORCVLRPiMgEtCr_B-JyyU4OmY-gQvMhJ52SEZsr02Xx5VgjLfV-wFsQnkjEJLosezu7x7TpvAsaZlgfo_YQ-NovW7kmOeAeALjhsxnk2nhAH4i3ldvciFjXVj-oNhUB46/s320/badSBRefs.JPG" width="320" /></a></div><p>This is a screen capture of three SBRef images from the session shown in the second example. The numbers identify the scans and are in temporal order; image 41 is the SBRef for the affected run (a "tilted" brain); 23 for the correctly-preprocessed run above it. There are dark bands in parts of scan 41 (red), and it looks a bit warped compared to the other two; below is how they look in coronal slices:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqVkhdJJsjo7mnbjqntmwjYB7VrPh5KjROafNp6xDCzIzH1yy1sva1u4MrPyZuBKUj1PIfjEMnGOlWRLYCKgh2yJz982Fa-ZIo4R6aF03TqwM0_qAKXfN3lV_ioQXdH1j-78cyYy72tp_eQBlz1nWtXvjO06oybNPIWv3eSdDmNPx4H0UzHxg2aCiK/s1533/badSBRef.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="536" data-original-width="1533" height="112" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqVkhdJJsjo7mnbjqntmwjYB7VrPh5KjROafNp6xDCzIzH1yy1sva1u4MrPyZuBKUj1PIfjEMnGOlWRLYCKgh2yJz982Fa-ZIo4R6aF03TqwM0_qAKXfN3lV_ioQXdH1j-78cyYy72tp_eQBlz1nWtXvjO06oybNPIWv3eSdDmNPx4H0UzHxg2aCiK/s320/badSBRef.JPG" width="320" /></a></div><div><br /></div><div>All SBRef images look rather odd (here, there's some expected ear dropout (yellow) and encoding direction stretching), but the key is to compare the images for runs in which preprocessing was successful (23 and 32) with those for which it failed (41). The SBRef for 41 is obviously different (red): black striping in the front, and extra stretching in the lowest slices. This striping and stretching in the SBRef (probably from movement) translated to tilting in the preprocessed output above.<div><h3 style="text-align: left;">What to do about it?</h3><p>Ideally, you won't collect SBRef or fmaps with strange artifacts; the scanner will work properly and participants will be still. If the images are checked during acquisition it is often possible to repeat problematic scans or fix incorrect settings. This is of course the best solution!</p><p>However, sometimes an artifact or movement is not apparent during data collection, the scan can't be repeated, or you are working with an existing dataset and so are stuck with problematic scans. In these cases, I suggest doing something like in this post: look at all of the scans from the session (and other sessions if relevant) and try to determine the extent and source of the problem. 
</p><p>In the case of <a href="https://neurostars.org/t/repeated-fmriprep-distortion-perhaps-from-susceptibility-distortion/21519/8" target="_blank">the fieldmap artifacts</a>, every fieldmap from the scanning session was affected, but fieldmaps for the same person from two other sessions were fine. We "fixed" the failed preprocessing by building a new BIDS dataset, swapping out bad fieldmaps for good ones and changing the filenames accordingly. Before trying this I checked that the person's head position, distortion, etc. were quite similar between the runs. I do not really recommend this type of image swapping; things can go badly wrong. But it is something to consider if you have similar artifact-filled and good images from the same person and acquisition. With the SBRef image we have another option: SBRef images are not required, so we could delete poor ones (here, scan 41) and repeat the preprocessing without them. </p><p>Neither workaround (swapping or deleting) should be used lightly or often. But it has produced adequate quality images for us in a few cases. To evaluate I look very closely at the resulting preprocessed images and BOLD timecourses, both anatomy and comparing the <a href="https://www.nipreps.org/qc-book/dataset-qc/Example3.html" target="_blank">positive control analysis</a> (<a href="https://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial_29.html" target="_blank">see also</a>) results for runs/sessions with normal and workaround preprocessing. For example, confirm that the changes in BOLD timed with your stimuli in particular visual parcels are essentially the same in runs with the different preprocessing. If different visual parcels show the stimulus-caused changes in BOLD depending on preprocessing, the workaround was not successful.</p></div></div>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-22277282269224532722021-09-15T11:27:00.001-05:002021-09-15T11:27:21.243-05:00"The Dual Mechanisms of Cognitive Control Project": post-publication analyses<p>The recently-published "The Dual Mechanisms of Cognitive Control Project" paper (<a href="https://www.biorxiv.org/content/10.1101/2020.09.18.304402v1.full" target="_blank">preprint</a>; <a href="https://doi.org/10.1162/jocn_a_01768" target="_blank"> publisher</a>) describes the motivations and components of the project as a whole, but also contains several analyses of its task fMRI data. The<a href="https://osf.io/xvzrf/" target="_blank"> supplemental information</a> for the manuscript has the <a href="https://mvpa.blogspot.com/2020/03/introductory-knitr-tutorial.html" target="_blank">R (knitr)</a> code and input files needed to generate the results and figures in the manuscript. </p><p>The <a href="https://osf.io/xvzrf/" target="_blank">supplemental information</a><span class="scripted" data-bind="if: hasDoi()" style="display: inline;"><span data-bind="text: doi"> </span></span>now has more results than those included in the paper, however: versions of the analyses using different sets of participants, parcels, and estimating the HDR curves (GLMs), which I will briefly describe here. </p><p>For example, <a href="https://direct.mit.edu/jocn/article/33/9/1990/106990/The-Dual-Mechanisms-of-Cognitive-Control-Project" target="_blank">Figure 5 (below) shows</a> the estimated HDR curves for each task (high - low control demand) in the Baseline (blue, BAS) and Reactive (green, REA) sessions. 
The grey shading marks the target; when we expect the control demand difference to be greatest. Of interest is that the estimates are greater in the target window for all tasks and sessions, with the Baseline session estimates larger than those from the Reactive session (see manuscript for more explanation and framing).</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-cPweuUfQSmI/YUISCEkfSWI/AAAAAAAAB44/yS6to0bdI8YC5YNyFsOYZTdVDNgFZZlUgCLcBGAsYHQ/s1370/origFig5.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="350" data-original-width="1370" height="82" src="https://1.bp.blogspot.com/-cPweuUfQSmI/YUISCEkfSWI/AAAAAAAAB44/yS6to0bdI8YC5YNyFsOYZTdVDNgFZZlUgCLcBGAsYHQ/s320/origFig5.JPG" width="320" /></a></div><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-r4PUQYP7MD4/YUISCK62UlI/AAAAAAAAB48/9fQe94KIwIYE7r-ysYdG8YR1SqD-nXsKQCLcBGAsYHQ/s1180/fig5b.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="304" data-original-width="1180" height="82" src="https://1.bp.blogspot.com/-r4PUQYP7MD4/YUISCK62UlI/AAAAAAAAB48/9fQe94KIwIYE7r-ysYdG8YR1SqD-nXsKQCLcBGAsYHQ/s320/fig5b.JPG" width="320" /></a></div><p>The top version (<a href="https://osf.io/yrnbe/" target="_blank">supplemental file</a>) includes 80 participants in the estimates (some related), averages the estimates from a set of 35 parcels (from the <a href="https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/brain_parcellation/Schaefer2018_LocalGlobal" target="_blank">Schaefer 400x7 parcellation</a>) found to be particularly sensitive to DMCC tasks, and uses GLMs estimating <a href="https://mvpa.blogspot.com/2020/11/dmcc-glms-afni-tentzero-knots-and-hdr.html" target="_blank">one knot for every 2 TRs</a>.</p><p>The second version (<a href="https://osf.io/s84yt/" target="_blank">supplemental file</a>) shares the theoretically interesting aspects: curves mostly peak in the target (grey) area, blue curves mostly above green. There are many differences, though: the second graph is from a post-publication analysis using the <a href="https://openneuro.org/datasets/ds003465" target="_blank">DMCC55B</a> participants (55 unrelated people; 48 of whom are in both 55B and the 80-participant set), the <a href="https://mvpa.blogspot.com/2021/09/approximately-matching-different.html" target="_blank">set of 32 Schaefer 400x7 parcels approximating</a> the <a href="https://doi.org/10.1093/cercor/bhaa023" target="_blank">Core Multiple Demand network</a> (12 parcels are in both sets), and GLMs estimating one knot for every TR.<br /></p><p>It is reassuring to see that the analysis results are generally consistent despite these fairly substantial changes to its inputs. Sometimes results can look great, but are due to a statistical fluke or overfitting; in these cases small changes to the analysis that shouldn't matter (e.g., removing or replacing several participants) often make large changes in the results. The opposite occurred here: fairly substantial changes to the parcels, participants, and (to a lesser extent) GLMs led to generally matching results.</p><p>The <a href="https://osf.io/xvzrf/" target="_blank">paper's osf site</a> now contains results files for all the different ways to set up the analyses, within the "postPublication_coreMD" and "postPublication_1TRpK" subdirectories. 
The variations:<br /></p><ul style="text-align: left;"><li>80 or 55 participants. Files for analyses using the DMCC55B participants have a "DMCC55B" suffix; files for the original set of 80 participants have either no suffix or "DMCC80".</li><li>35 or 32 parcels. The set of 35 parcels identified via DMCC data are referred to as the 35-megaparcel or "parcels35"; the 32 parcels approximating the core MD are referred to as "core32".</li><li>GLMs with 1 or 2 knots per TR. The original analyses all used GLMs with 2 TRs per knot ("2TRpK"). The 1 TR per knot GLMs are abbreviated "1TRpK", including in the file names.<br /></li></ul>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-89026820597311878212021-09-07T12:03:00.002-05:002021-09-07T12:10:23.999-05:00approximately matching different parcellations<p>We want to use the core MD (multiple demand) regions described in <a href="https://doi.org/10.1093/cercor/bhaa023" target="_blank">Assem et al., 2020</a> in some analyses but ran into a difficulty: Assem2020's core MD regions are defined in terms of <a href="https://balsa.wustl.edu/study/show/RVVG" target="_blank">HCP MMP1.0 (Multimodal Parcellation, MMP)</a> parcels and the <a href="https://balsa.wustl.edu/reference/show/pkXDZ" target="_blank">fsLR (HCP, fsLR32K</a> surface), but for the project we wanted to use the <a href="https://doi.org/10.1093/cercor/bhx179" target="_blank">Schaefer et al., 2018</a> 400x7 (<a href="https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/brain_parcellation/Schaefer2018_LocalGlobal" target="_blank">400 parcels by 7 networks</a>) parcellation and the <a href="https://surfer.nmr.mgh.harvard.edu/fswiki/FreeSurferWiki" target="_blank">fsaverage5</a> surface. Is it possible to approximate the core MD regions with a set of Schaefer 400x7 parcels? How can we determine which set produces the best approximation? This post describes our answer, as well as the resulting set of Schaefer 400x7 parcels; its logic <a href="https://osf.io/wv7ze/" target="_blank">and code</a> should be adaptable to other parcellations.</p><div></div><div>Surface calculations can be complex because vertices do not correspond to a fixed area in the same way that voxels do (e.g., a cubical "searchlight" of eight voxels will have the same volume no matter which voxel it's centered on, but the surface area covered by the circle of a vertex's neighbors will vary across the brain according to the degree of folding at that vertex). I decided to work with parcellations defined in the same space (here, fsLR), and match at the <b>vertex level</b>. Matching at the vertex level has some implications, including that all vertices are equally important for determining the degree of correspondence between the parcellations; vertices are not weighted by the surface area of their neighborhood. This has the advantage of being independent of factors like surface inflation, but may not be sensible in all cases.</div><div><br /></div><div>The approximation procedure is iterative, and uses the Dice coefficient to quantify how well two collections of vertices match; a larger Dice coefficient is better. 
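As a quick illustration, for two binary (0/1) membership vectors the calculation is simply the following (a minimal R sketch with toy vectors, not the released code):<br />
<pre style="font-family: courier;">
# Dice coefficient for two binary vectors of the same length; e.g., 1 = vertex in the
# core MD set, versus 1 = vertex in the candidate set of Schaefer parcels.
dice.coef = function(a, b) { 2*sum(a == 1 & b == 1)/(sum(a == 1) + sum(b == 1)); }

v1 = c(1,1,1,0,0,0);   # toy vectors; the real ones have one entry per fsLR vertex (32,492)
v2 = c(1,1,0,0,0,1);
dice.coef(v1, v2);     # 2*2/(3+3) = 0.667
</pre>
The iterative procedure described next repeatedly recomputes this coefficient, dropping parcels whose removal increases it. 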
This use was inspired by the parcellation comparisons in <a href="https://doi.org/10.1038/s41597-021-00849-3" target="_blank">Lawrence et al., 2021</a>, and I adapted <a href="https://github.com/neurodata/neuroparc/blob/master/scripts/dice_correlation.py" target="_blank">their calculation code</a>. The MMP parcellation is symmetric but the Schaefer is not, so each hemisphere was run separately.</div><div><br /></div><div><b>Start: </b>Make a vector (length 32,492 since fsLR) with 1s in vertices belonging to a core MD MMP
parcel and 0 elsewhere. This vector does not change. </div><div>List all Schaefer 400x7 parcels with one or more vertex overlapping a core MD MMP parcel. (This starts with the most complete possible coverage of core MD vertices with Schaefer parcels.)</div><div><br /></div><div><b>Iterative Steps:</b></div><div><b>Step 1: </b>Make a vector (also length 32,492) with 1s in the vertices of all the listed Schaefer parcels and 0 elsewhere. Calculate the Dice coefficient between the two vectors (core MD and Schaefer). </div><div><br /></div><div>For each listed Schaefer parcel, make a vector with 1s in the vertices of all BUT ONE of the listed Schaefer parcels and 0 elsewhere. Calculate the Dice coefficient between these two vectors (core MD and Schaefer-1 subset). </div><div><br /></div><div><b>Step 2:</b> Compare the Dice coefficient of each subset to that of the entire list. Form a new list of Schaefer parcels, keeping only those whose removal made the fit worse (i.e., drop the parcel from the list if the Dice coefficient was higher without the parcel).</div><div><br /></div><div><b>Repeat</b> Steps 1 and 2 until removing any additional Schaefer
parcel makes the fit worse. </div><div><br /></div><div><br /></div><div>Using this procedure, we obtained a set of 32 Schaefer 400x7 parcels (IDs 99, 127, 129, 130, 131, 132, 137, 140, 141, 142, 148, 163, 165, 182, 186, 300, 332, 333, 334, 335, 336, 337, 340, 345, 349, 350, 351, 352, 354, 361, 365, 387) as best approximating the core MD described by Assem, et al. 2020 (colors are just to show boundaries):</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-TPNMiytP55g/YTeXOYtkn5I/AAAAAAAAB4w/cH7qcBmG7yM0aLNteoUrLBXWEvcu7e3owCLcBGAsYHQ/s1067/coreMDschaefer.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="626" data-original-width="1067" height="188" src="https://1.bp.blogspot.com/-TPNMiytP55g/YTeXOYtkn5I/AAAAAAAAB4w/cH7qcBmG7yM0aLNteoUrLBXWEvcu7e3owCLcBGAsYHQ/s320/coreMDschaefer.JPG" width="320" /></a></div><div><br /></div><div>The approximation seems reasonable, and we plan to use this set of parcels (and the procedure that found them) in future analyses. </div><div><br /></div><div>The R (<a href="https://mvpa.blogspot.com/2020/03/introductory-knitr-tutorial.html" target="_blank">knitr</a>) code to create the above figure, calculate Dice, and find the parcel set is in <a href="https://osf.io/wv7ze/" target="_blank">Schaefer400x7approxAssem2020.rnw</a>, along with the <a href="https://osf.io/8psvq/" target="_blank">compiled version</a>.</div>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-58150916974884734832021-09-01T13:08:00.007-05:002021-09-02T11:19:23.127-05:00I'm glad we're censoring our task fMRI timecourses for motion ...<p>Some years ago I wrote <a href="https://mvpa.blogspot.com/2017/04/task-fmri-motion-censoring-scrubbing-1.html" target="_blank">a series of posts</a> describing how we settled on the FD > 0.9 censoring threshold we're using for the <a href="https://mvpa.blogspot.com/2021/06/an-introduction-to-dmcc55b.html" target="_blank">DMCC </a>and some other task fMRI studies. While doing some routine checks yesterday a funny bit caught my eye, which I'll share here as an example of what happens with overt head motion during fMRI, and why censoring affected frames is a good idea.</p><p>The first thing that caught my eye was strange, non-anatomical, lines in the temporal standard deviation images for a single run:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-GnjlxoCQmXE/YTD0sGsG4nI/AAAAAAAAB34/nzhd8aWKsy8XJu6m8QONtomLIPIAYqhzQCLcBGAsYHQ/s616/volumeSD.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="616" data-original-width="511" height="320" src="https://1.bp.blogspot.com/-GnjlxoCQmXE/YTD0sGsG4nI/AAAAAAAAB34/nzhd8aWKsy8XJu6m8QONtomLIPIAYqhzQCLcBGAsYHQ/s320/volumeSD.jpg" width="265" /></a></div><p></p><p>(The above image shows slices through the temporal standard deviation image for three runs from the same participant and session; each row is a separate run. <a href="https://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial-basic_18.html" target="_blank">See this post </a>for more explanation and code links.)</p><p>The stripy run's standard deviation images are bright and fuzzy around the edges, suggesting that head motion is a on the high side. 
I looked at the realignment parameters to investigate further (<a href="https://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial-basic.html" target="_blank">see this post</a> for code and more explanation):</p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-EEnft545I7g/YTD1IlT4QRI/AAAAAAAAB4A/Jj2Xn8UhzlseAeZsJuCqJlb05MN2C9Q5ACLcBGAsYHQ/s965/realignment.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="393" data-original-width="965" height="130" src="https://1.bp.blogspot.com/-EEnft545I7g/YTD1IlT4QRI/AAAAAAAAB4A/Jj2Xn8UhzlseAeZsJuCqJlb05MN2C9Q5ACLcBGAsYHQ/s320/realignment.jpg" width="320" /></a></div><p></p><p>The overt head motion was quite low after the first minute and a half of the run (vertical grey lines are at one-minute intervals), but there were some big jumps in the realignment parameters (and red xs - frames marked for censoring for FD > 0.9) in the first two minutes, particularly around frame 100. (TR is 1.2 s.)</p><p>I opened the preprocessed image (via <a href="https://fmriprep.org/en/stable/" target="_blank">fmriprep</a>; the image's suffix is "_acq-mb4PA_run-2_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz") in <a href="http://ric.uthscsa.edu/mango/mango.html" target="_blank">Mango</a> and stepped through the frames to see the motion directly. </p><p>Most of the run looks nice and stable; blood flow is clearly visible in the large vessels. But there's a large distortion in frame 98 (19 in the subset), as shown below (<a href="https://osf.io/r275f/" target="_blank">motionExample.nii.gz</a> on the <a href="https://osf.io/w7zkc/" target="_blank">blog osf site</a> is frames 80:110 from the run, for a better view). </p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-sB9oW1gZRH0/YTD41w3AkkI/AAAAAAAAB4I/aiDYCBQbtp4GbXp7LCqnvv69UWbFsQZcgCLcBGAsYHQ/s1120/frames.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="594" data-original-width="1120" height="170" src="https://1.bp.blogspot.com/-sB9oW1gZRH0/YTD41w3AkkI/AAAAAAAAB4I/aiDYCBQbtp4GbXp7LCqnvv69UWbFsQZcgCLcBGAsYHQ/s320/frames.JPG" width="320" /></a></div><p>The distorted appearance of frame 98 (center) is expected: head motion changes how
the brain is positioned in the magnetic field, and it takes time for the BOLD signal to restabilize. (Frame 98 is the most obviously affected, but the adjacent frames also have some distortion.)</p>
<p>It would be bad to include frame 98 in our analyses: the brain areas are not where they should be. This type of error/artifact/weirdness is especially hard to identify in timeseries analysis: we can extract a complete timecourse for every voxel in this run, but its values in frame 98 do not correspond to the same sort of brain activity/BOLD as the rest of the run. Strange values in just one frame can skew the statistics summarizing the entire run, as happened here in the image at the top of this post: the stripes in the voxelwise standard deviation come from the stripes in frame 98. <br /></p>
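<p>As a toy illustration of how much a single aberrant frame can matter (made-up numbers, not the real data), one distorted frame is enough to noticeably inflate a voxel's temporal standard deviation:</p>
<pre style="font-family: courier;">
tc = rnorm(500, mean=1000, sd=5);   # a well-behaved voxel timecourse, 500 frames
sd(tc);                             # about 5
tc[98] = 1300;                      # one distorted frame, like frame 98 here
sd(tc);                             # roughly three times larger, driven by that single frame
</pre>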
<p>But, all is actually fine: frame 98 and several surrounding frames are marked for censoring, and so will not be included in statistical analyses. The <a href="https://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial-basic_18.html" target="_blank">DMCC QC template code</a> that generated the temporal standard deviation images ignores censoring, however, so it included the distorted frames, which made the stripes.</p>
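<p>For reference, here is a minimal sketch of how frames like these can be flagged from the fmriprep confounds file (the file name is made up, and the column name may differ across fmriprep versions):</p>
<pre style="font-family: courier;">
fname = "sub-xxxx_task-Stroop_acq-mb4PA_run-2_desc-confounds_timeseries.tsv";  # hypothetical path
conf = read.delim(fname, na.strings="n/a");        # fmriprep writes missing values as "n/a"
which(conf$framewise_displacement > 0.9);          # frames exceeding the FD censoring threshold
</pre>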
<p>To summarize, this post is not describing some sort of bug or error: it is well known that head motion disrupts fMRI signal, and that such frames should not be included in analyses; I am reassured that our censoring procedure properly identified the affected frames. Instead, this post is intended as an example of how a very short head motion can have a large effect on run-level summary statistics. In this case, no harm was done: the run-level statistic is for QC and purposely ignored censoring. But if this frame was included in real analyses the true task-related activity could be lost, shifted, or otherwise distorted; many fMRI phenomena have larger impact on BOLD than does task performance.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-NdEjq6SS-QA/YTDzryOPznI/AAAAAAAAB3w/xs8vbc0xS5w50oAq6alTzXh8wIwVc7SVQCLcBGAsYHQ/s938/surface.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="682" data-original-width="938" height="233" src="https://1.bp.blogspot.com/-NdEjq6SS-QA/YTDzryOPznI/AAAAAAAAB3w/xs8vbc0xS5w50oAq6alTzXh8wIwVc7SVQCLcBGAsYHQ/s320/surface.jpg" width="320" /></a></div><p>Finally, here's how the same three frames look on the surface. If you look closely you can see that the central sulcus disappears in frame 98 (marked with blue in frame 99 for contrast), though, as usual, I think strange images are easier to spot in the volume. (also via fmriprep, suffix acq-mb4PA_run-2_space-fsaverage5_hemi-L.func.gii,
<a href="http://mvpa.blogspot.com/2020/03/volume-and-surface-brain-plotting-knitr.html" target="_blank">plotted
in R</a>.)</p>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-3868874550182411422021-06-29T16:59:00.001-05:002021-06-29T16:59:08.307-05:00DMCC55B supplemental as tutorial: positive control "ONs"-style timecourses<p>This is the sixth (and I believe final) post in a series describing the <a href="https://osf.io/vqe92/" target="_blank">DMCC55B supplemental files</a>. The first <a href="http://mvpa.blogspot.com/2021/06/an-introduction-to-dmcc55b.html" target="_blank">introduces the dataset</a>, the second <a href="http://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial.html" target="_blank">questionnaire data</a>, the <a href="http://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial-basic.html">third motion</a>, <a href="http://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial-basic_18.html">the fourth</a> creating and checking temporal mean, standard deviation, and tSNR images, and the fifth a <a href="http://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial_24.html">parcel-wise button-press</a> classification analysis.</p><p>As discussed in the <a href="https://www.biorxiv.org/content/10.1101/2021.05.28.446178v1.full" target="_blank">manuscript</a> and the introduction to the <a href="http://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial_24.html">previous post</a>, "ONs" is a positive control analysis: one that is not of experimental interest, but has a very predictable outcome; so predictable that if the expected result is not found, further analysis should stop until the problem is resolved. "ONs" is shorthand for "on-task": contrasting BOLD during all task trials against baseline..</p><p>The example DMCC55B ONs analysis uses the Stroop task and direct examination of the parcel-average timecourses for <a href="https://en.wikibooks.org/w/index.php?title=SPM/Haemodynamic_Response_Function&oldid=3556838" target="_blank">HRF</a>-like shape, rather than the GLMs we use for internal QC. This has the advantage of being very close to the data with few dependencies, but is only perhaps practical when there are many repetitions of short trials (such as the DMCC Stroop). The <a href="https://sites.wustl.edu/dualmechanisms/stroop-task/" target="_blank">DMCC Stroop task</a> is a color-word version, with spoken responses. Contrasting task (all trials, not e.g., by congruency) against baseline should thus show activity in visual, language, and motor (especially speaking) areas; the typical "task positive" and "task negative" areas. </p><p>As for the "buttons", both a <a href="https://osf.io/9ngwq/" target="_blank">surface</a> and <a href="https://osf.io/pshjq/" target="_blank">volume</a> version of the analysis are included, and the results are similar. The first step of the analysis is the same as for <a href="http://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial_24.html">buttons</a>: normalize and detrend each vertex/voxel's timecourse using <a href="https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dDetrend.html" target="_blank">3dDetrend</a> (the first code block of <a href="https://osf.io/f79p6/" target="_blank">controlAnalysis_prep.R</a> and <a href="https://osf.io/gfkpq/" target="_blank">controlAnalysis_prep_volume.R</a> processes both Sternberg and Stroop). 
The rest of the steps are brief enough to be in the <a href="https://yihui.org/knitr/" target="_blank">knitr</a> code chunks: make parcel-average timecourses with <a href="https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dROIstats.html" target="_blank">3dROIstats</a>, use the _events.tsv files to find the onset of each trial for each person, and average the timecourse across those events. </p><p>For example, here are the timecourses for two motor parcels (113 and 116 in the <a href="https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/brain_parcellation/Schaefer2018_LocalGlobal/Parcellations/FreeSurfer5.3" target="_blank">1000-parcel, 17-network Schaefer parcellation</a>), surface (top) and volume (bottom):</p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-hQkoqsFOxmE/YNuSlI_EMYI/AAAAAAAAB20/BkRMP1Cz8x8y23Cxg-h9mr21O3MbnCPAgCLcBGAsYHQ/s511/timecourses.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="443" data-original-width="511" src="https://1.bp.blogspot.com/-hQkoqsFOxmE/YNuSlI_EMYI/AAAAAAAAB20/BkRMP1Cz8x8y23Cxg-h9mr21O3MbnCPAgCLcBGAsYHQ/s320/timecourses.JPG" width="320" /></a></div><div><p>Each grey line is one participant, with the across-participants mean in blue. The x-axis is TRs (1.2 second TR; event onset at 0), and the y-axis is BOLD (parcel-average after the normalizing and detrending, averaged across all events for the person). As expected, the curves approximate the canonical HRF. Interestingly, the surface timecourses are taller than the volume, though the same parcels tend to show task-related activity (or not).</p><p>To summarize further and show all the parcels at once, I averaged the BOLD across the peak timepoints (yellowish band), which makes the areas clear in the group means:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-88B_b9CcxYA/YNuWhRtrNtI/AAAAAAAAB3E/yb3pvmLvZrIAOuORHx2cCo86PFTlQ0uvwCLcBGAsYHQ/s876/Capture.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="319" data-original-width="876" src="https://1.bp.blogspot.com/-88B_b9CcxYA/YNuWhRtrNtI/AAAAAAAAB3E/yb3pvmLvZrIAOuORHx2cCo86PFTlQ0uvwCLcBGAsYHQ/s320/Capture.JPG" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><p></p></div>Please let me know if you explore the <a href="https://openneuro.org/datasets/ds003465/" target="_blank">DMCC55B </a>dataset and/or this supplemental; I hope you find these files useful!Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-86957864626580790122021-06-24T14:44:00.002-05:002021-06-24T14:44:38.130-05:00DMCC55B supplemental as tutorial: positive control "buttons" classification analysis<p>This is the fifth post in a series describing the <a href="https://osf.io/vqe92/" target="_blank">DMCC55B supplemental files</a>. 
The first <a href="http://mvpa.blogspot.com/2021/06/an-introduction-to-dmcc55b.html" target="_blank">introduces the dataset</a>, the second <a href="http://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial.html" target="_blank">questionnaire data</a>, the <a href="http://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial-basic.html">third motion</a>, and <a href="http://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial-basic_18.html">the fourth</a> creating and
checking temporal mean, standard deviation, and tSNR images.</p><p>I very strongly recommend that "positive control" analyses be performed as part of every fMRI (really, most any) analysis. The idea is that these analyses check for the existence of effects that, if the dataset is valid, <i>must </i>be present (and so, if they are not detected, something is most likely wrong in the dataset or analysis, and analysis should not proceed until the issues are resolved). (<a href="https://www.carleton.edu/perception-lab/open-science/error-tight/" target="_blank">Julia Strand's Error Tight</a> site <a href="https://doi.org/10.31234/osf.io/rsn5y" target="_blank">collects additional suggestions</a> for minimizing errors in research and analysis.)</p><p>The DMCC55B supplemental includes two of my favorite positive control analyses, "buttons" and "ONs", which can be adapted to a wide variety of task paradigms. The "buttons" name is shorthand for analyzing the activity associated with the task responses (which are often moving fingers to press buttons). Task responses like button presses are excellent targets for positive controls because the occurrence of movement can be objectively verified (unlike e.g., psychological states, whose existence is necessarily inferred), and hand motor activity is generally strong, focal, and located in a <a href="https://doi.org/10.1016/j.neuroimage.2021.117965" target="_blank">low g-factor area</a> (i.e., with better fMRI SNR). Further, it is nearly always possible to design an analysis around the responses that is not tied to the experimental hypotheses. (To avoid circularity, control analyses must be independent of experimental hypotheses and have high face validity.)</p><p>The DMCC55B <a href="https://osf.io/rjzb2/" target="_blank">"buttons" example</a> uses the Sternberg task. In the DMCC Sternberg task (<a href="https://www.biorxiv.org/content/10.1101/2021.05.28.446178v1.full" target="_blank">Figure 3</a>) responses are made with the right hand, pressing the button under either the first or second finger to indicate whether the Probe word was a member of the current list or not. The target hypotheses for Sternberg involve aspects such as response time, whether the response was correct, and brain activity changes during List memorization, while the hypothesis for the buttons control analysis is simply that brain activity in somatomotor areas should change due to the finger motion necessary to press the response button. Rephrased, the contrast of button presses against baseline should show somatomotor activation. </p><p>The example DMCC55B Sternberg buttons positive control analysis was implemented as ROI-based classification MVPA (linear svm, c=1, <a href="https://cran.r-project.org/package=e1071" target="_blank">e1071 R libsvm</a> interface), with averaging (not GLMs) for the temporal compression. I ran this on the surface (fsaverage5 giftis produced by fmriprep preprocessing), within each ROI defined by the <a href="https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/brain_parcellation/Schaefer2018_LocalGlobal/Parcellations/FreeSurfer5.3" target="_blank">1000-parcel, 17-network Schaefer parcellation</a>, with leave-one-subject-out cross-validation (55-fold). I do not generally advise leave-one-subject-out CV, especially with this many people, but used it here for simplicity; a more reasonable 11-fold CV version is also <a href="https://osf.io/f79p6/" target="_blank">in the code</a>.</p><p>The analysis code is split between two files. 
The first file, <a href="https://osf.io/f79p6/" target="_blank">controlAnalysis_prep.R</a>, is made up of consecutive code blocks to perform the analysis, while the second file, <a href="https://osf.io/d8xku/" target="_blank">controlAnalysis_buttons.rnw</a>, displays the results in tables and on brains (compiled <a href="https://osf.io/rjzb2/" target="_blank">controlAnalysis_buttons.pdf</a>). (Aside: I will sometimes include analysis code in knitr documents (e.g., <a href="https://osf.io/b5cv3/" target="_blank">QC_SD_surface.rnw</a>) if I judge the code is short and straightforward enough that the benefit of having everything in one file is greater than the drawback of increased code length and mixing of analysis in with results.) My intention is that the two controlAnalysis_buttons files together will serve as a "starter kit" for classification analysis; be adaptable to many other datasets and applications. </p><p>The analysis starts at the top of <a href="https://osf.io/f79p6/" target="_blank">controlAnalysis_prep.R</a>; the files produced by this script are used to make the results shown in <a href="https://osf.io/d8xku/" target="_blank">controlAnalysis_buttons.rnw</a>. The code blocks should be run in sequence, as later blocks depend on output from earlier blocks.</p><p></p><ul style="text-align: left;"><li>The first code block uses AFNI's <a href="https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dDetrend.html" target="_blank">3dDetrend</a> to normalize and detrend each vertex's timecourse. At the time of writing, 3dDetrend does not accept gifti image inputs, so the script uses <a href="https://CRAN.R-project.org/package=gifti" target="_blank">readGIfTI </a>to read the files and write 3dDetrend-friendly "1D" text files. Aside: it's possible to avoid AFNI, implementing the <a href="https://mvpa.blogspot.com/2018/06/detrending-and-normalizing-timecourses.html">normalize and detrend</a> steps entirely in R. I prefer the AFNI function, however, to avoid introducing errors, and for clarity; using established functions and programs whenever possible is generally advisable.</li><li>The second code block reads the _events.tsv files, and finds matching sets of frames corresponding to button press and "not" (no button-press) events. Balance is very important to avoid biasing the classification: each "event" should be the same duration, and each run should have equal numbers of events of each type. Windows for each event (i.e., when event-related brain activity should be present in the BOLD) were set to start 3 seconds after the event, and end 8 seconds after. This 3-8 second window is roughly the peak of the canonical HRF given the very short button press events; longer windows would likely be better for longer duration events.</li><li>The third code block performs the temporal compression, writing one file for each person, run, class, and hemisphere. The files have one row for every vertex, and one column for each example (average of the 3:8 second windows found in the second code block), plus a column with the mean across examples. </li><li>The final two code blocks run the classification, with leave-one-subject-out (fourth block) and leave-five-subjects-out (11-fold, fifth block) cross-validation. Each version writes a results file with one row per parcel, and a column for the accuracy of each cross-validation fold, plus the mean accuracy over folds. 
These are the files read by <a href="https://osf.io/d8xku/" target="_blank">controlAnalysis_buttons.rnw</a> and shown on brains for <a href="https://www.biorxiv.org/content/10.1101/2021.05.28.446178v1.full" target="_blank">Figure 9</a>.</li></ul><div class="separator" style="clear: both; text-align: center;"><a href="https://lh3.googleusercontent.com/-w4LV_ubUx5A/YNI9REq0f_I/AAAAAAAAB2E/d1duvP5WS9IA9cQUicCYd-6bFwOZaEgkwCLcBGAsYHQ/image.png" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="342" data-original-width="1065" height="103" src="https://lh3.googleusercontent.com/-w4LV_ubUx5A/YNI9REq0f_I/AAAAAAAAB2E/d1duvP5WS9IA9cQUicCYd-6bFwOZaEgkwCLcBGAsYHQ/image.png" width="320" /></a></div><h3 style="text-align: left;">Implementation notes:</h3><p>This code does MVPA with surface data, using text (vertices in rows) for the intermediate files. It is straightforward to identify which rows/vertices correspond to which parcels since the parcellation is of the same type (fsaverage5, in this case) as the functional data (see code in the fourth block, plus the preceding comments). Surface searchlight analyses are far less straightforward than ROI/parcel-based ones, and I don't think generally advisable, due to the varying distances between vertices.</p><p>Relatively few changes are needed to perform this same analysis with volumes. The purpose and steps within each code block are the same, though the functions to retrieve the timeseries vary, and there is only one volumetric image per run instead of the surface pair (one per hemisphere). The volumetric version of the buttons classification is divided between <a href="https://osf.io/gfkpq/" target="_blank">controlAnalysis_prep_volume.R</a> and <a href="https://osf.io/2bfjd/" target="_blank">controlAnalysis_buttons_volume.rnw</a>. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://lh3.googleusercontent.com/-lUymHmI5a3I/YNTZ1EsO1XI/AAAAAAAAB2M/I5PmJ2720r4Ay61Y-G_UCC1VlMUI8ORVQCLcBGAsYHQ/image.png" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="173" data-original-width="1067" height="52" src="https://lh3.googleusercontent.com/-lUymHmI5a3I/YNTZ1EsO1XI/AAAAAAAAB2M/I5PmJ2720r4Ay61Y-G_UCC1VlMUI8ORVQCLcBGAsYHQ/image.png" width="320" /></a></div>As we would hope (since the same original data are used for both), largely the same parcels have the highest accuracy in the surface and volume versions of the MVPA, and the parcel-wise accuracies are positively correlated. 
The parcel-wise accuracies from the surface version are often, though not uniformly a bit higher.<div><div class="separator" style="clear: both; text-align: center;"><a href="https://lh3.googleusercontent.com/-vWpOVVHDdsE/YNTgHPp_01I/AAAAAAAAB2U/2M0ytoLAYrw6ActiCu0TwabBV0QGfNKoACLcBGAsYHQ/image.png" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="420" data-original-width="883" height="152" src="https://lh3.googleusercontent.com/-vWpOVVHDdsE/YNTgHPp_01I/AAAAAAAAB2U/2M0ytoLAYrw6ActiCu0TwabBV0QGfNKoACLcBGAsYHQ/image.png" width="320" /></a></div><br /><br /><p><br /></p></div>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-78612818369512799732021-06-18T14:05:00.005-05:002021-06-18T14:05:46.924-05:00DMCC55B supplemental as tutorial: basic fMRI QC: temporal mean, SD, and tSNR<p>This is the fourth post in a series describing the <a href="https://osf.io/vqe92/" target="_blank">DMCC55B supplemental files</a>. The first <a href="http://mvpa.blogspot.com/2021/06/an-introduction-to-dmcc55b.html" target="_blank">introduces the dataset</a>, the second <a href="http://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial.html" target="_blank">questionnaire data</a>, and the <a href="http://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial-basic.html">third motion</a>. Here I'll highlight another basic fMRI QC practice: creating and checking temporal mean, standard deviation, and tSNR images. For background on these types of QC images I suggest starting with <a href="https://mvpa.blogspot.com/2017/09/voxelwise-standard-deviation-at.html" target="_blank">this post</a>, especially the multiple links in the first sentence.</p><p>For <a href="https://openneuro.org/datasets/ds003465/" target="_blank">DMCC55B</a>, we preprocessed the images as both volume and surface, so the QC was also both volume and surface. I generally find QC easier to judge with <a href="https://mvpa.blogspot.com/2020/02/when-making-qc-or-most-any-images-dont.html" target="_blank">unmasked volumes</a>, but if <a href="https://mvpa.blogspot.com/2020/01/working-with-surfaces-musings-and.html" target="_blank">surface analysis is planned</a>, the surface representations must be checked as well.</p><p>During DMCC data collection we create a QC summary file for each individual participant (<a href="https://osf.io/z62s5/" target="_blank">example here</a>), with volumes, surfaces, and motion (realignment parameters) for every run. A first QC check for (unmasked) volume images is straightforward: do they look like brains? This ("got brains?") check doesn't apply to surfaces, since any valid gifti image will be brain-shaped. Instead, our first QC pass for the surface images is to check the temporal mean for "tiger stripes" around the central sulcus. Below is the vertex-wise means for Bas Stern AP from the <a href="https://osf.io/z62s5/" target="_blank">example</a>, with the blue lines pointing to the stripes. 
If these stripes are not visible, something likely went wrong with the preprocessing.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-T89Fo8f_ytU/YMzdURgnXkI/AAAAAAAAB1g/SH-0gJxnxncMfZoVjCN-NcVPVZUVeL-6QCLcBGAsYHQ/s633/surfaceMeanQC.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="137" data-original-width="633" src="https://1.bp.blogspot.com/-T89Fo8f_ytU/YMzdURgnXkI/AAAAAAAAB1g/SH-0gJxnxncMfZoVjCN-NcVPVZUVeL-6QCLcBGAsYHQ/s320/surfaceMeanQC.JPG" width="320" /></a></div><br /><p>For ease of group comparison, the DMCC55B QC supplemental files are split by type rather than participant: <a href="https://osf.io/kd54v/" target="_blank">QC_SD_surface.pdf</a> for vertex-wise temporal standard deviation, <a href="https://osf.io/9dpja/" target="_blank">QC_tSNR_surface.pdf</a> for vertex-wise tSNR; <a href="https://osf.io/xchwm/" target="_blank">QC_SD_volume.pdf</a> and <a href="https://osf.io/4j8uv/" target="_blank">QC_tSNR_volume.pdf</a> for the corresponding voxel-wise images. With eight runs for each of 55 participants these QC summary files are still rather long, but the consistent scaling (same intensity used for all participants when plotting) makes it possible to scroll through and spot trends and oddities. Such group review can be very beneficial, especially before embarking on statistical analyses.</p><p>For example, the SD images for some participants/runs are much brighter and fuzzier than others (compare f2157me and f2499cq on page 8). Broadly, the more the temporal SD images resemble maps of <a href="https://doi.org/10.3389/fnana.2016.00012" target="_blank">brain vessel density</a>, the better the quality, though of course the resolution will vary with fMRI acquisition parameters (e.g., in the 2.4 mm isotropic DMCC images the Circle of Willis should be clearly visible). Artifacts can also be spotted, such as the ghostly "<a href="https://mvpa.blogspot.com/2018/01/holy-crescents-batman.html" target="_blank">crescents</a>" in some participant's PA images (e.g., f5001ob).</p><h3 style="text-align: left;">Implementation notes:</h3><p>The .rnw file with each (<a href="https://yihui.org/knitr/" target="_blank">knitr</a>) pdf has the R code to create the temporal statistic images, as well as produce the summary .pdfs. The <span style="font-family: courier;">startup </span>code chunk of each document has the code to make the images (from the 4d niftis and giftis produced by fmriprep), while the later chunk plots the images. 
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-HKeAUcl52ZY/YMzq_pCk0GI/AAAAAAAAB1o/YqvRGBf7ZL0plDrsw-xam_i7S8cXeLNTwCLcBGAsYHQ/s921/Capture.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="366" data-original-width="921" src="https://1.bp.blogspot.com/-HKeAUcl52ZY/YMzq_pCk0GI/AAAAAAAAB1o/YqvRGBf7ZL0plDrsw-xam_i7S8cXeLNTwCLcBGAsYHQ/s320/Capture.JPG" width="320" /></a></div><br /><p>The volumetric code uses the AFNI <a href="https://afni.nimh.nih.gov/afni/doc/help/3dTstat.html" target="_blank">3dTstat</a> and <a href="https://afni.nimh.nih.gov/afni/doc/help/3dcalc.html" target="_blank">3dcalc</a> functions (called from R using <a href="https://stat.ethz.ch/R-manual/R-devel/library/base/html/system2.html" target="_blank"><span style="font-family: courier;">system2()</span></a>) to make the temporal mean, standard deviation, and tSNR images, which are written out as NIfTI images. Saving the statistical images is useful for compilation speed, but more importantly, so that they can be examined in 3d if closer checking is needed. I strongly suggest using existing functions from a well-established program (such as AFNI) whenever possible, for clarity and to avoid introducing bugs.</p><p>The volumetric images are plotted with base R<a href="https://stat.ethz.ch/R-manual/R-patched/library/graphics/html/image.html" target="_blank"> image()</a> rather than with my <a href="https://osf.io/k8u2c/" target="_blank">volumetric plotting function</a>, because I wanted to plot one slice through each plane. Thus, the _volume 55B supplementals can also serve as an example of simple NIfTI image plotting. (<a href="https://osf.io/zc56w/" target="_blank">This is</a> my general volume-plotting knitr tutorial.) </p><p>Finally, there is a comment in the volume .rnw files that the input images are already in LPI orientation. I strongly suggest transforming NIfTI images to LPI immediately after preprocessing (e.g., with <a href="https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dresample.html" target="_blank">3dresample</a>), if they are not already, as this often avoids left/right flipping. </p><p><br /></p><p>
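Returning to the image creation step described above, here is a minimal hypothetical sketch of calling AFNI from R via system2() for these temporal statistics (not the actual chunk, which loops over participants and runs; the paths are made up):</p>
<pre style="font-family: courier;">
in.img = "/data/sub-xxxx_task-Stroop_acq-mb4PA_run-1_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz";
out.pre = "/data/QC/sub-xxxx_Stroop_mb4PA_run1";    # prefix for the statistic images
system2("3dTstat", paste0("-mean -prefix ", out.pre, "_mean.nii.gz ", in.img));
system2("3dTstat", paste0("-stdev -prefix ", out.pre, "_sd.nii.gz ", in.img));
system2("3dcalc", paste0("-a ", out.pre, "_mean.nii.gz -b ", out.pre, "_sd.nii.gz -expr 'a/b' -prefix ", out.pre, "_tSNR.nii.gz"));   # tSNR = mean/sd
</pre><p>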
</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://lh3.googleusercontent.com/-7b8oDPVM6dY/YMzt4ZrigxI/AAAAAAAAB1w/1hXX5Nhq7R8nOPfK4P2cFt1BJcsXHeLewCLcBGAsYHQ/image.png" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="323" data-original-width="919" height="112" src="https://lh3.googleusercontent.com/-7b8oDPVM6dY/YMzt4ZrigxI/AAAAAAAAB1w/1hXX5Nhq7R8nOPfK4P2cFt1BJcsXHeLewCLcBGAsYHQ/image.png" width="320" /></a></div><br /><p></p><p>Unlike volumes, the surface QC code <a href="https://mvpa.blogspot.com/2020/01/working-with-surfaces-musings-and.html">calculates the vertex-wise mean and standard deviation in R</a>, and uses <a href="https://osf.io/u9e82/" target="_blank">my gifti plotting functions</a> for display. Many AFNI functions can work with giftis, but these calculations (mean, standard deviation) are so simple, and plotting surfaces from a vector so straightforward, that I decided to save the images as text rather than gifti.</p>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-30653737664665497712021-06-04T16:31:00.002-05:002021-06-04T16:31:45.211-05:00DMCC55B supplemental as tutorial: basic fMRI QC: motion<p>This is the third post in a series describing the <a href="https://osf.io/vqe92/" target="_blank">DMCC55B supplemental files</a>. The first <a href="http://mvpa.blogspot.com/2021/06/an-introduction-to-dmcc55b.html" target="_blank">introduces the dataset</a>, and the second the <a href="http://mvpa.blogspot.com/2021/06/dmcc55b-supplemental-as-tutorial.html" target="_blank">questionnaire data</a>. Here I'll highlight the first in the set of basic fMRI QC files: looking at the realignment parameters (head motion). </p><p>While we use an <a href="http://mvpa.blogspot.com/2017/05/task-fmri-motion-censoring-scrubbing-2.html">FD threshold for censoring</a> during the GLMs, I think plots with <a href="http://mvpa.blogspot.com/2017/04/task-fmri-motion-censoring-scrubbing-1.html">the six separate parameters</a> are more useful for QC. These plots make up the bulk of <a href="https://osf.io/st5z4/" target="_blank">QC_motion.pdf</a>, one for each functional run included in DMCC55B. 
Chunk <span style="font-family: courier;">code5</span> of <a href="https://osf.io/9xspj/" target="_blank">QC_motion.rnw</a> creates these plots directly from the <span style="font-family: courier;">_desc-confounds_regressors.tsv</span> produced during fmriprep preprocessing (note: <a href="https://neurostars.org/t/naming-change-confounds-regressors-to-confounds-timeseries/17637" target="_blank">newer versions name</a> this file <span style="font-family: courier;">_desc-confounds_timeseries.tsv</span>, but the columns used here are unchanged), and is relatively straightforward to adapt to other datasets; the code chunks between <span style="font-family: courier;">startup</span> and <span style="font-family: courier;">code5</span> produce group summary statistics and can be omitted.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://lh3.googleusercontent.com/-sCN95O5JkY4/YLf6RF1ptUI/AAAAAAAAB00/u_b9XlvsC4wCLjxU-_Li0TGD8WFWT1yRgCLcBGAsYHQ/image.png" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="596" data-original-width="922" height="207" src="https://lh3.googleusercontent.com/-sCN95O5JkY4/YLf6RF1ptUI/AAAAAAAAB00/u_b9XlvsC4wCLjxU-_Li0TGD8WFWT1yRgCLcBGAsYHQ/image.png" width="320" /></a></div><br />Above is the top of page 5 of <a href="https://osf.io/st5z4/" target="_blank">the compiled file</a>, including the motion plot for the first run. This shows very little <a href="http://mvpa.blogspot.com/2017/09/yet-more-with-respiration-and-motion.html">overt or apparent motion</a>, just a slight wiggle around frame 400 (8 minutes; grey lines are 1 minute intervals), which suggests they shifted during the break between the second and third task blocks (green horizontal lines). This run would be given an "A" (top) grade in our qualitative assessment.<div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-UwIledJiOf8/YLqW-Xk9koI/AAAAAAAAB08/C2WME2by09smNS3Xwx0_TNozWoUNw9k5wCLcBGAsYHQ/s1298/Capture.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="549" data-original-width="1298" src="https://1.bp.blogspot.com/-UwIledJiOf8/YLqW-Xk9koI/AAAAAAAAB08/C2WME2by09smNS3Xwx0_TNozWoUNw9k5wCLcBGAsYHQ/s320/Capture.JPG" width="320" /></a></div><br /><div>f1659oa has extremely low overt head motion across this run, but clear apparent motion (breathing), primarily in the dark blue trans_y line.</div><div><br /></div><div><b>R note: </b>These motion (and the other) plots are made with <a href="https://mvpa.blogspot.com/2020/03/introductory-knitr-tutorial.html">base R graphics in a knitr</a> document. The "stacked" 6-parameter and FD plots are made by using three <span style="font-family: courier;"><a href="https://stat.ethz.ch/R-manual/R-devel/library/graphics/html/par.html" target="_blank">par()</a></span> commands within each iteration of the loop (chunk <span style="font-family: courier;">code5</span>): first the size and margins for the upper plot; second the size and margins of the lower plot, with <span style="font-family: courier;">new=TRUE</span>; finally, setting <span style="font-family: courier;">new=FALSE</span> to finish and reset for the next run. 
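In skeleton form (a hypothetical, self-contained sketch with random numbers standing in for the realignment parameters and FD; not the actual QC_motion.rnw chunk):
<pre style="font-family: courier;">
rps = matrix(rnorm(500*6, sd=0.2), ncol=6);      # stand-in for the six realignment parameters
fd = abs(rnorm(500, sd=0.3));                    # stand-in framewise displacement
par(fig=c(0,1,0.4,1), mar=c(1,4,2,1));           # first par(): region and margins for the upper plot
matplot(rps, type='l', lty=1, xlab="", ylab="mm or degrees", main="example run");
par(fig=c(0,1,0,0.4), mar=c(4,4,0,1), new=TRUE); # second par(): lower plot, drawn into the same device
plot(fd, type='l', xlab="frame", ylab="FD");
par(new=FALSE);                                  # third par(): reset before the next run's plots
</pre>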
</div></div>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-24962342830107369742021-06-02T15:30:00.000-05:002021-06-02T15:30:16.318-05:00DMCC55B supplemental as tutorial: questionnaire data<p>The <a href="http://mvpa.blogspot.com/2021/06/an-introduction-to-dmcc55b.html">previous post</a> introduces the DMCC55B dataset, with pointers to the dataset, documentation, and why you might want to work with it. As mentioned at the end of that post, the <a href="https://doi.org/10.1101/2021.05.28.446178 " target="_blank">description manuscript</a> is accompanied by files intended to be both an introduction to working with DMCC55B data specifically, and more general tutorials for some common analyses. This series of posts will describe the <a href="https://osf.io/vqe92/" target="_blank">DMCC55B supplemental files</a> in their tutorial aspect and methodological details, starting with the <b>questionnaire data</b>.</p><p>Separately from the fMRI sessions, DMCC participants complete up to 28 individual difference questionnaires (described in the "Behavioral Session" section of the manuscript and the <a href="https://nda.nih.gov/edit_collection.html?id=2970" target="_blank">DMCC NDA site</a>). Sharing questionnaire data is difficult, since different groups often use different versions of the same questionnaires, and critical details like response ranges and scoring algorithm may not be stated. Several projects are working on standard formats for questionnaire data; as an NIH-funded project the DMCC is using the format developed by the <a href="https://nda.nih.gov/" target="_blank">NIMH Data Archive</a> (NDA). For DMCC55B, the questionnaire data is released under <a href="https://openneuro.org/datasets/ds003465" target="_blank">derivatives/questionnaires/</a>, with one (NDA-format) csv file per questionnaire.</p><p>The <a href="https://osf.io/yv56d/" target="_blank">behavioralSession_individualDifferences.rnw</a> file contains (knitr) code to read these NDA-format csv files, then calculate and display both group and single-subject summary statistics for each. This code should be useful for anyone reading data from NDA-format questionnaire data files, though with the warning that frequent consultation with the <a href="https://nda.nih.gov/general-query.html?q=query=data-structure%20~and~%20orderBy=shortName%20~and~%20orderDirection=Ascending%20~and~%20resultsView=table-view" target="_blank">NDA Data Dictionary</a> is required. There is unfortunately quite a bit of variability between the structure definitions, even in fundamentals such as the code used for missings. <a href="https://osf.io/yv56d/" target="_blank">behavioralSession_individualDifferences.rnw</a> does not calculate all possible summary statistics, only those currently used in the DMCC. </p><p>Here is an example of the summaries in the compiled <a href="https://osf.io/8teup/" target="_blank">behavioralSession_individualDifferences.pdf</a>, for the <a href="https://sites.google.com/a/decisionsciences.columbia.edu/dospert/" target="_blank">DOSPERT (Domain-Specific Risk-Taking)</a> questionnaire. The NDA names this questionnaire <a href="https://nda.nih.gov/data_structure.html?short_name=dospert01" target="_blank">dospert01</a>, so that name is also used for the DMCC55B derivative files. In the case of dospert01, we were able to match our questions with those in the Dictionary, so both individual item responses and summary scores are included. 
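Reading one of these NDA-format files into R is straightforward (a minimal sketch; it assumes the usual NDA layout with the structure name and version on the first line and the column names on the second, so adjust skip if the released file differs):
<pre style="font-family: courier;">
in.tbl = read.csv("dospert01_DMCC55B.csv", skip=1, stringsAsFactors=FALSE);   # hypothetical path
dim(in.tbl);          # one row per participant, one column per Dictionary element
table(in.tbl$rt_1);   # responses to the first risk-taking item (1-7 likelihood scale)
</pre>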
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-h0Hp-lZD5ws/YLfcTV2syDI/AAAAAAAAB0c/yPzPhzd30FkCPpDuCsseMFXDrAUY9okhQCLcBGAsYHQ/s543/dospert1.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="153" data-original-width="543" height="56" src="https://1.bp.blogspot.com/-h0Hp-lZD5ws/YLfcTV2syDI/AAAAAAAAB0c/yPzPhzd30FkCPpDuCsseMFXDrAUY9okhQCLcBGAsYHQ/w200-h56/dospert1.JPG" width="200" /></a><a href="https://lh3.googleusercontent.com/-arxn3IKZwYo/YLfhchIMdLI/AAAAAAAAB0k/43gIlravV7k6imwGq3cG3tcXIhKuyHXagCLcBGAsYHQ/image.png" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="411" data-original-width="1103" height="74" src="https://lh3.googleusercontent.com/-arxn3IKZwYo/YLfhchIMdLI/AAAAAAAAB0k/43gIlravV7k6imwGq3cG3tcXIhKuyHXagCLcBGAsYHQ/w200-h74/image.png" width="200" /></a></div><p style="text-align: left;">Above left is a small bit of the released dospert01_DMCC55B.csv file. The contents are in accord with the <a href="https://nda.nih.gov/data_structure.html?short_name=dospert01" target="_blank">NDA Data Dictionary</a> (above right), except for the first few columns: GUIDs, HCP IDs, sex, and other restricted/sensitive information is omitted. We can use the Dictionary definitions to interpret the other columns, however; for example, that the first participant's answer of 7 to rt_1 corresponds to an answer of "Extremely Likely" to the question of whether they would "Admitting that your tastes are different from those of a friend.".</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-4JFqcFlrf3E/YLfjpFhjmfI/AAAAAAAAB0s/eSMij4fdl40fDGzbJVe2T1Tx3IgATWrcgCLcBGAsYHQ/s765/Capture.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="215" data-original-width="765" src="https://1.bp.blogspot.com/-4JFqcFlrf3E/YLfjpFhjmfI/AAAAAAAAB0s/eSMij4fdl40fDGzbJVe2T1Tx3IgATWrcgCLcBGAsYHQ/s320/Capture.JPG" width="320" /></a></div><p>Scoring the DOSPERT produces five measures, which are reported as group means (above left) and for individual participants (above right); <a href="https://osf.io/8teup/" target="_blank">behavioralSession_individualDifferences.pdf</a> presents similar summaries for each scored questionnaire.</p><p><b>Caution:</b> <a href="https://osf.io/yv56d/" target="_blank">behavioralSession_individualDifferences.rnw</a> has code to parse NDA-format csv files and produce group score summaries. However, its scoring code and the Dictionary definitions and should be carefully reviewed before using with non-DMCC datasets, to make sure the desired calculations are being performed. </p>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-83894739493949728472021-06-02T11:36:00.005-05:002021-09-13T13:47:40.431-05:00an introduction to DMCC55BIt's hard to believe this is my first post of 2021! 
I've done a few updates to existing posts during the last few months, but not the detailed new ones I've been planning; hopefully this will be the first of a small "flurry" of new posts.<div><br /></div><div>Here, I'm happy to announce the first big public release of <a href="https://sites.wustl.edu/dualmechanisms/" target="_blank">Dual Mechanisms of Cognitive Control</a> (DMCC) project data, <b>"<a href="https://openneuro.org/datasets/ds003465" target="_blank">DMCC55B</a>"</b>, together with a <a href="https://doi.org/10.1101/2021.05.28.446178" target="_blank">detailed description</a> and <a href="https://osf.io/vqe92/" target="_blank">example analyses</a>. We've previously released smaller parts of the DMCC (<a href="https://openneuro.org/datasets/ds003452" target="_blank">DMCC13benchmark</a>), which <a href="https://mvpa.blogspot.com/search?q=dmcc13benchmark" target="_blank">has been very useful</a> for methods exploration, but isn't really enough data for detailed analysis of the task-related activity or individual differences. A wide range of analyses are possible with DMCC55B, and we hope the documentation is both clear and detailed enough to make its use practical.</div><div><br /></div><div>A few highlights: DMCC55B is data from 55 unrelated young adults. Each participant performed four cognitive control tasks (Stroop, AX-CPT, Cued Task-Switching, and Sternberg Working Memory) while undergoing moderately high-resolution fMRI scanning (MB4, 2.4 mm isotropic voxels, 1.2 s TR). There are two runs of each task (one with AP phase encoding, the other PA), each approximately 12 minutes long (about 1.5 hours of task fMRI per person). There are also scores for the participants on 28 state and trait self-report questionnaires, as well as finger photoplethysmograph and respiration belt recordings collected during scanning.</div><div><br /></div><div><a href="https://doi.org/10.1101/2021.05.28.446178 " target="_blank">This manuscript</a> is intended to be the practical introduction to DMCC55B, providing details such as the file format of questionnaire data, the order in which task stimuli were presented, fMRI preprocessing (<a href="https://openneuro.org/datasets/ds003465" target="_blank">fmriprep output is included</a>), and how Stroop responses were collected and scored. The manuscript also contains links and references to materials used for the main DMCC project (e.g., <a href="https://sites.wustl.edu/dualmechanisms/tasks/" target="_blank">eprime task presentation scripts</a>, code to <a href="https://github.com/ccplabwustl/dualmechanisms/tree/master/preparationsAndConversions/audio" target="_blank">extract reaction times</a> from the Stroop audio recordings), which may be of interest or use to others in some cases, but likely only in a few specialized instances. A separate manuscript (<a href="https://www.biorxiv.org/content/10.1101/2020.09.18.304402v1.full" target="_blank">preprint</a>; <a href=" https://doi.org/10.1162/jocn_a_01768" target="_blank">accepted version</a>) describes the wider DMCC project, including the theoretical basis for the task design. If some information is confusing or missing, please let me know!</div><div><br /></div><div>Last, but most definitely not least, I want to highlight the <a href="https://osf.io/vqe92/" target="_blank">"supplemental" accompanying DMCC55B</a>. 
This is designed to perform and summarize some standard behavioral and quality control analyses, with the intention of the files serving both as an introduction to working with DMCC data (e.g., how do I obtain the onset time of all AX trials?) and as analysis tutorials (e.g., of a <a href="https://osf.io/rjzb2/" target="_blank">parcel-wise classification MVPA with surface data</a>). Currently, the best introduction to this material is in the DMCC55B manuscript and the files themselves. The supplemental files are primarily <a href="https://yihui.org/knitr/" target="_blank">knitr</a> (<a href="https://mvpa.blogspot.com/2020/03/introductory-knitr-tutorial.html">R and LaTeX</a>); they call <a href="https://afni.nimh.nih.gov/" target="_blank">afni</a> functions (e.g., 3dTstat) directly when needed, and are entirely "untidy" base R, including base R graphics. (The last bit refers to the different "flavors" of R programming; I only rarely have need to visit the Tidyverse.)</div><div> </div><div> UPDATE 13 September 2021: Added a link to the published version of the <a href=" https://doi.org/10.1162/jocn_a_01768" target="_blank">DMCC overview manuscript</a>.<br /></div>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-18152187569919421912020-12-15T14:18:00.005-06:002021-01-22T13:45:26.988-06:00DMCC13benchmark: public again<p>Apologies: I've had to temporarily delete DMCC13benchmark from <a href="https://openneuro.org/" target="_blank">OpenNeuro</a>. We plan to upload a new version ASAP. The images and associated data in the new version will be the same, and I will keep the same dataset name.<br /></p><p>The problem was with the subject ID codes: some of the <a href="https://sites.wustl.edu/dualmechanisms/" target="_blank">DMCC</a> participants were members of the <a href="https://www.humanconnectome.org/study/hcp-young-adult" target="_blank">Young Adult HCP</a>, and we released DMCC13benchmark with those HCP ID codes. In consultation with members of the HCP we decided that it would be most appropriate to release data with unique ID codes instead. The <a href="https://www.humanconnectome.org/study/hcp-young-adult/document/creating-and-using-subject-keys-connectomedb" target="_blank">subject key</a> (linking the subject IDs released on openneuro to the actual HCP ID codes) is available via the HCP <a href="https://db.humanconnectome.org/app/template/Login.vm" target="_blank">ConnectomeDatabase</a> for people who have accepted the <a href="https://www.humanconnectome.org/study/hcp-young-adult/document/wu-minn-hcp-consortium-open-access-data-use-terms" target="_blank">HCP data use terms</a> (ConnectomeDB00026, titled "DMCC (Dual Mechanisms of Cognitive Control) subject key").</p><p>Again, I apologize for any inconvenience. We are working to upload a corrected version of DMCC13benchmark soon. Please contact me with questions or if you need something sooner; I will add links to this post when DMCC13benchmark is again public.</p><p><br /></p><p><b>UPDATE 4 January 2021</b>: The corrected version of DMCC13benchmark is now available as <a href="https://openneuro.org/datasets/ds003452/versions/1.0.1" target="_blank">openneuro dataset ds003452</a>, doi: 10.18112/openneuro.ds003452.v1.0.1. We ask anyone using the dataset to use this version (all subject IDs
starting with "f") rather than the previous. Every file should be identical between the versions, excepting the subject IDs.<br /></p><p>I again apologize for any inconvenience this change may have caused you, and urge you to contact me with any questions or concerns.<br /></p><p> <br /></p>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-67166792972297124382020-11-11T12:28:00.004-06:002021-01-04T14:17:22.221-06:00DMCC GLMs: afni TENTzero, knots, and HDR curves, #2<p>This post is the second in a series, which begins <a href="http://mvpa.blogspot.com/2020/11/dmcc-glms-afni-tentzero-knots-and-hdr.html" target="_blank">with this post</a>. In that first post I included examples of the estimated HDR curves for a positive control analysis, one of which is copied here for reference. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-AaY5F5WdypI/X6rJ6-Fvw-I/AAAAAAAABxQ/NT7G_55ri0wI9edgHjxWLj-AQI-3MrZRwCLcBGAsYHQ/s1425/visualONs_2TRpKa.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="198" data-original-width="1425" src="https://1.bp.blogspot.com/-AaY5F5WdypI/X6rJ6-Fvw-I/AAAAAAAABxQ/NT7G_55ri0wI9edgHjxWLj-AQI-3MrZRwCLcBGAsYHQ/s320/visualONs_2TRpKa.JPG" width="320" /></a></div><p>What are the axis labels for the estimated HDR curves? How did we generate them? What made us think about changing them? </p><p>At the first level the (curve) axis label question has a short answer: y-axes are average (across subjects) beta coefficient estimates, x-axes are knots. But this answer leads to another question: how do knots translate into time? Answering this question is more involved, because we can't translate knots to time without knowing both the TR and the <a href="https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dDeconvolve.html" target="_blank">afni 3dDeconvolve</a> command. The TR is easy: 1.2 seconds. Listing the relevant afni commands is also easy: <br /></p>
<pre style="background-attachment: scroll; background-clip: border-box; background-color: #f0f0f0; background-image: URL(http://2.bp.blogspot.com/_z5ltvMQPaa8/SjJXr_U2YBI/AAAAAAAAAAM/46OqEP32CJ8/s320/codebg.gif); background-origin: padding-box; background-position: 0% 0%; background-repeat: repeat; background-size: auto; background: rgb(240, 240, 240) none repeat scroll 0% 0%; border: 1px dashed rgb(204, 204, 204); color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; overflow-wrap: normal; word-wrap: normal;">3dDeconvolve \
-local_times \
-x1D_stop \
-GOFORIT 5 \
-input 'lpi_scale_blur4_tfMRI_AxcptBas1_AP.nii.gz lpi_scale_blur4_tfMRI_AxcptBas2_PA.nii.gz' \
-polort A \
-float \
-censor movregs_FD_mask.txt \
-num_stimts 3 \
-stim_times_AM1 1 f1031ax_Axcpt_baseline_block.txt 'dmBLOCK(1)' -stim_label 1 ON_BLOCKS \
-stim_times 2 f1031ax_Axcpt_baseline_blockONandOFF.txt 'TENTzero(0,16.8,8)' -stim_label 2 ON_blockONandOFF \
-stim_times 3 f1031ax_Axcpt_baseline_allTrials.txt 'TENTzero(0,21.6,10)' -stim_label 3 ON_TRIALS \
-ortvec motion_demean_baseline.1D movregs \
-x1D X.xmat.1D \
-xjpeg X.jpg \
-nobucket
3dREMLfit \
-matrix X.xmat.1D \
-GOFORIT 5 \
-input 'lpi_scale_blur4_tfMRI_AxcptBas1_AP.nii.gz lpi_scale_blur4_tfMRI_AxcptBas2_PA.nii.gz' \
-Rvar stats_var_f1031ax_REML.nii.gz \
-Rbuck STATS_f1031ax_REML.nii.gz \
-fout \
-tout \
-nobout \
-verb
</code></pre><p></p><p>Unpacking these commands is not so easy, but I will try to do so here for the most relevant parts; please comment if you spot anything not quite correct in my explanation.</p><p>First, the above commands are for the Axcpt task, baseline session, subject f1031ax (aside: their data, including the event timing text files named above, is in <a href="https://openneuro.org/datasets/ds003452" target="_blank">DMCC13benchmark</a>). The 3dDeconvolve command has the part describing the knots and HDR estimation; I included the 3dREMLfit call for completeness, because the plotted estimates are from the Coef sub-bricks of the STATS image (produced by <span style="font-family: courier;">-Rbuck</span>).<br /></p><p>Since this is a control GLM in which all trials are given the same label there are only three stimulus time series: BLOCKS, blockONandOFF, and TRIALS (this is a mixed design; see <a href="http://mvpa.blogspot.com/2020/11/dmcc-glms-afni-tentzero-knots-and-hdr.html" target="_blank">introduction</a>). The TRIALS part is what generates the estimated HDR curves plotted above, as defined by <span style="font-family: courier;">TENTzero(0,21.6,10)</span>.<br /></p><p>We chose TENTzero instead of TENT for the HDR estimation because we do not expect anticipatory activity (the trial responses should start at zero) in this task. (see afni message board, e.g., <a href="https://afni.nimh.nih.gov/afni/community/board/read.php?1,153636,153645#msg-153645" target="_blank">here</a> and <a href="https://afni.nimh.nih.gov/afni/community/board/read.php?1,162210,162224#msg-162224" target="_blank">here</a>) To include the full response we decided to model the trial duration plus at least 14 seconds (not "exactly" 14 seconds because we want the durations to be a multiple of the TENT duration). My understanding is that it's not a problem if more time than needed is included in the duration; if too long the last few knots should just approximate zero. I doubt you'd often want to use a duration much shorter than that of the canonical HRF (there's probably some case when that would be useful, but for the DMCC we want to model the entire response).</p><p>From the <a href="https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dDeconvolve.html" target="_blank">AFNI 3dDeconvolve help</a>: </p><p></p><blockquote><span style="font-family: courier;">'TENT(b,c,n)': n parameter tent function expansion from times b..c after stimulus time [piecewise linear] [n must be at least 2; time step is (c-b)/(n-1)]</span></blockquote><p> </p><blockquote><span style="font-family: courier;">You can also use 'TENTzero' and 'CSPLINzero',which means to eliminate the first and last basis functions from each set. The effect of these omissions is to force the deconvolved HRF to be zero at t=b and t=c (to start and and end at zero response). With these 'zero' response models, there are n-2 parameters (thus for 'TENTzero', n must be at least 3). </span></blockquote><p> </p><p></p><p>The first TENTzero parameter is straightforward: <span style="font-family: courier;">b=0</span> to start at the event onset.</p><p>For the trial duration (<span style="font-family: courier;">c</span>), we need to do some calculations. Here, the example is the DMCC Axcpt task, which has a trial duration of 5.9 seconds, so the modeled duration should be at least 14 + 5.9 = 19.9 seconds. 
That's not the middle value in the TENTzero command, though.<br /></p><p>For these first analyses we decided to have the TENTs span two TRs (1.2 * 2 = 2.4 seconds), in the (mostly) mistaken hope it would improve our signal/noise, and also to make the file sizes and GLM estimation speeds more manageable (which it does). Thus, <span style="font-family: courier;">c=21.6</span>, the shortest multiple of our desired TENT duration greater than the modeled duration (19.9/2.4 = 8.29, rounded up to 9; 9*2.4 = 21.6). </p><p>Figuring out <span style="font-family: courier;">n</span> requires a bit of mind bending but less calculation; I think the slide below (#5 in <a href="https://afni.nimh.nih.gov/pub/dist/edu/latest/afni_handouts/afni06_decon.pdf">afni06_decon.pdf</a>) helps: in tent function deconvolution, the n basis functions are divided into n-1 intervals. In the above calculations I rounded 8.29 up to 9; we want to model 9 intervals (of 2.4 s each). Thus, <span style="font-family: courier;">n = 10</span> basis functions gives us the needed n-1 = 9 intervals. <br /></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-n_3KIh5kyzU/X6wlZl0zqaI/AAAAAAAABxk/XKaouI11I_o2QeAjwaaNc_b-Ujq-zh8GQCLcBGAsYHQ/s785/afniDeconSlide5.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="570" data-original-width="785" src="https://1.bp.blogspot.com/-n_3KIh5kyzU/X6wlZl0zqaI/AAAAAAAABxk/XKaouI11I_o2QeAjwaaNc_b-Ujq-zh8GQCLcBGAsYHQ/s320/afniDeconSlide5.JPG" width="320" /></a></div><p></p><p>Recall that since we're using <span style="font-family: courier;">TENTzero</span>, the first and last tent functions are eliminated. Thus, having n=10 means that we will get 10-2=8 beta coefficients ("knots") out of the GLM. These, finally, are the x-axes in the estimated GLM curves above: the knots at which the HDR estimates are made, of which there are 8 for Axcpt. </p><p>(Aside: 0-based afni can cause some confusion for those of us more accustomed to 1-based systems, especially with TENTzero. The top plots put the value labeled as #0_Coef at 1; a point is added at (0, 0) since TENTzero sets the curves to start at zero. I could also have added a zero for the last knot (9, for this Axcpt example), but did not.)<br /></p><p>These GLMs seem to be working fine, producing sensible HDR estimates. We've been using these results in many analyses (multiple lab members with papers in preparation!), including the fMRIprep and HCP pipeline comparisons <a href="https://mvpa.blogspot.com/2019/01/comparing-fmriprep-and-hcp-pipelines_38.html" target="_blank">described in previous posts</a>. We became curious, though, whether we could set the knot duration to the TR, rather than twice the TR (as done here): would the estimated HDR then follow the trial timing even more precisely?</p>
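<p>To collect the arithmetic in one place, here is the same calculation as a few lines of R (Axcpt values; just the bookkeeping described in this post, not DMCC pipeline code):</p>
<pre><code>
## TENTzero parameter arithmetic for the original (two-TR knot) Axcpt GLMs
TR <- 1.2;                 # seconds
knot.dur <- 2 * TR;        # each tent spans two TRs: 2.4 s
trial.dur <- 5.9;          # Axcpt trial duration, seconds

modeled <- trial.dur + 14;                  # trial plus at least 14 s: 19.9 s
n.intervals <- ceiling(modeled / knot.dur); # 19.9/2.4 = 8.29, rounded up to 9
c.param <- n.intervals * knot.dur;          # 21.6 s: the middle TENTzero argument
n.param <- n.intervals + 1;                 # 10 basis functions span 9 intervals
n.param - 2;                                # TENTzero drops the first and last: 8 estimated knots

paste0("TENTzero(0,", c.param, ",", n.param, ")");   # "TENTzero(0,21.6,10)", as in the command above
seq(knot.dur, c.param - knot.dur, by=knot.dur);      # times (s) of the 8 estimated knots: 2.4 ... 19.2
</code></pre>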
<p>In the next post(s) I'll describe how we changed the GLMs to have the knot duration match the TR, and a (hopefully interesting) hiccup we had along the way.</p><p>UPDATE 4 January 2021: Corrected DMCC13benchmark openneuro links.</p>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-37550689328052256462020-11-10T09:48:00.006-06:002021-10-11T13:24:17.737-05:00DMCC GLMs: afni TENTzero, knots, and HDR curves, #1<p>One of the things we've been busy with during the pandemic-induced data collection pause is reviewing the <a href="https://sites.wustl.edu/dualmechanisms/" target="_blank">DMCC (Dual Mechanisms of Cognitive Control</a>; more descriptions <a href="https://www.biorxiv.org/content/10.1101/2020.09.18.304402v1" target="_blank">here</a> and <a href="https://osf.io/xfe32/" target="_blank">here</a>) project GLMs: did we set them up and are we interpreting them optimally? </p><p>We decided that the models were not quite right, and are rerunning all of them. This post (and its successors) will describe how the original GLMs were specified, why we thought a change was needed, what the change is, and how the new results look. The posts are intended to summarize and clarify the logic and process for myself, but also for others, as I believe many people find modeling BOLD responses (here, with <a href="https://afni.nimh.nih.gov/" target="_blank">afni</a> <a href="https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dDeconvolve.html" target="_blank">3dDeconvolve TENTs</a>) confusing.</p>
<div class="separator"><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-ilYQPkFxK5c/X5H1W9_Q2FI/AAAAAAAABwU/FtD_t5nFklYzwr1sLVfhv6JDeokYgZ0MACLcBGAsYHQ/s571/Capture.JPG" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="494" data-original-width="571" src="https://1.bp.blogspot.com/-ilYQPkFxK5c/X5H1W9_Q2FI/AAAAAAAABwU/FtD_t5nFklYzwr1sLVfhv6JDeokYgZ0MACLcBGAsYHQ/s320/Capture.JPG" width="320" /></a></div><p style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"> <br /></p><p>I won't just start explaining the DMCC GLMs however, because they're complex: the DMCC has a mixed design (task trials embedded in long blocks), and we're estimating both event-related ("transient") and block-related ("sustained") responses. If you're not familiar with them, <a href="https://doi.org/10.1016/j.neuroimage.2011.09.084" target="_blank">Petersen and Dubis (2012)</a> review mixed block/event fMRI designs, and Figure 1 from <a href="https://doi.org/10.1016/S1053-8119(03)00178-2" target="_blank">Visscher, et. al (2003)</a> (left) illustrates the idea. In addition to the event and block regressors we have "blockONandOFF" regressors in the DMCC, which are intended to capture the transition between the control and task periods (see <a href="https://doi.org/10.1016/j.neuron.2006.04.031" target="_blank">Dosenbach et. al (2006)</a>).</p></div>
<p>We fit many separate GLMs for the DMCC, some as controls and others to capture effects of interest. For these posts I'll describe one of the control analyses (the "ONs") to make it a bit more tractable. The ON GLMs include all three effect types (event (transient), block (sustained), blockONandOFF (block start and stop)), but do not distinguish between the trial types; all trials (events) are given the same label ("on"). The ON GLMs are a positive control; they should show areas with changes in activation between task and rest (30 seconds of fixation cross before and after task blocks). For example, there should be positive, HRF-type responses in visual areas, because all of our task trials include visual stimuli.<br /></p><p>Below is an example of the estimated responses from our original ONs GLMs (generated by <a href="https://mvpa.blogspot.com/2020/03/introductory-knitr-tutorial.html" target="_blank">R knitr</a> code <a href="https://osf.io/54qhb/" target="_blank">similar to this</a>). The GLMs were fit for every voxel individually, then averaged within the <a href="https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/brain_parcellation/Schaefer2018_LocalGlobal" target="_blank">Schaefer, et al. (2018) parcels</a> (400x7 version); these are results for four visual parcels and volumetric preprocessing. These are (robust) mean and SEM over 13 participants (<a href="https://openneuro.org/datasets/ds003452" target="_blank">DMCC13benchmark</a>). The results have multiple columns both because there are <a href="https://www.biorxiv.org/content/10.1101/2020.09.18.304402v1" target="_blank">four DMCC tasks</a> (Axcpt (AX-CPT), Cuedts (Cued task-switching), Stern (Sternberg working memory), and Stroop (color-word variant)), and because of the mixed design, so we generate estimates for both event/transient (lines) and sustained (blue bars) effects (blockONandOFF are not included here). </p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-sVLK9pJ8g6I/X5H9OfoB7sI/AAAAAAAABwo/jroX8ZvNcgck8swQhzQdjmr1jpj7Jlm-wCLcBGAsYHQ/s1101/visualONs_2TRpK.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="643" data-original-width="1101" height="234" src="https://1.bp.blogspot.com/-sVLK9pJ8g6I/X5H9OfoB7sI/AAAAAAAABwo/jroX8ZvNcgck8swQhzQdjmr1jpj7Jlm-wCLcBGAsYHQ/w400-h234/visualONs_2TRpK.JPG" width="400" /></a></div><p>The line graphs show the modeled response at each time point ("knot"). Note that the duration (and so number of estimated knots) for each task varies; e.g., Stroop has
the shortest trials and Stern the longest. (We set the modeled response to trial length + 14 seconds.)<br /></p><p>The double-peak response shape
for Axcpt, Cuedts, and Stern is expected, as the trials have two
stimulus slides separated by a 4-second ITI; the Stroop response should
resemble a canonical HRF since each trial has a single stimulus and is short. The task timing and some sample trials are shown below (this is Figure 1 in a <a href="https://www.biorxiv.org/content/10.1101/2020.09.18.304402v1" target="_blank">preprint </a>(<a href="https://doi.org/10.1162/jocn_a_01768" target="_blank">accepted version</a>) which has much more task detail; see also the <a href="https://doi.org/10.1101/2021.05.28.446178" target="_blank">DMCC55B description</a>). </p><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-xQUDrILa_zs/X6q0OZwb7iI/AAAAAAAABw8/6pUzM9Hk5GMnq4s1Od_E00WtilSKwnsegCLcBGAsYHQ/s1294/DMCCtaskSummary.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="694" data-original-width="1294" src="https://1.bp.blogspot.com/-xQUDrILa_zs/X6q0OZwb7iI/AAAAAAAABw8/6pUzM9Hk5GMnq4s1Od_E00WtilSKwnsegCLcBGAsYHQ/s320/DMCCtaskSummary.JPG" width="320" /></a></div><p></p><p>So far, so good: the estimated responses for each task have a sensible shape, reflecting both the HRF and task timing. But what exactly is along the x-axis for the curves? And how did we fit these GLMs, and then decide to change them a bit? ... stay tuned.</p><p>later posts in this series: <a href="http://mvpa.blogspot.com/2020/11/dmcc-glms-afni-tentzero-knots-and-hdr_11.html">#2</a></p><p>UPDATE 4 January 2021: Corrected DMCC13benchmark openneuro links. <br /></p>Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com0tag:blogger.com,1999:blog-5737874959005852552.post-22924270620869584122020-07-27T16:56:00.000-05:002020-07-27T16:56:24.082-05:00overlaying in Connectome Workbench 1.4.2 This post answers a question posted on my <a href="http://mvpa.blogspot.com/2020/03/getting-started-with-connectome.html" target="_blank">Workbench introductory tutorial</a>: Rui asked how to visualize the common regions for two overlays. I've found two ways of viewing overlap.<br />
<br />
The first is to reduce the Opacity (circled in red) of the top-most Layer. In this screen capture I have a blueish ROI on top of a yellow one. (I scribbled the colors over their corresponding rows in the Layers toolbox; blue is listed above - over - the yellow. Click the Layers' On checkboxes on and off if you're not sure which is which.) Compare the appearance of the ROIs when the Opacity is set to 1.0 for both (left) versus 0.7 for the top (blue, right): the borders of both ROIs are visible on the right side, and the blue is less opaque all over (not just in the area that overlaps).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-A_nCHpHslu8/Xx9M8BKIxQI/AAAAAAAABu4/kNpXERzySUU-L22j5Nsfz79iw4KH-HBLQCLcBGAsYHQ/s1600/opaque.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="428" data-original-width="660" height="207" src="https://1.bp.blogspot.com/-A_nCHpHslu8/Xx9M8BKIxQI/AAAAAAAABu4/kNpXERzySUU-L22j5Nsfz79iw4KH-HBLQCLcBGAsYHQ/s320/opaque.JPG" width="320" /></a></div>
The second method is to set one (or both) ROIs to Outline (border) mode. Here, I set the top (blue) ROI to "Outline Only" in the Overlay and Map Settings dialog box (click the little wrench button to bring up the dialog).<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-l-lZY8kQplU/Xx9J4CUJ4NI/AAAAAAAABuY/ZbVubswT1eUTIddRCV4Ul_TKZ40NGPvYQCLcBGAsYHQ/s1600/outline.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="566" data-original-width="755" height="239" src="https://1.bp.blogspot.com/-l-lZY8kQplU/Xx9J4CUJ4NI/AAAAAAAABuY/ZbVubswT1eUTIddRCV4Ul_TKZ40NGPvYQCLcBGAsYHQ/s320/outline.JPG" width="320" /></a></div>
<br />Jo Etzelhttp://www.blogger.com/profile/04277620767760987432noreply@blogger.com1