Monday, June 13, 2022

now available: DMCC55B rejected structurals and ratings

We received several requests for the rejected DMCC55B structural images, since comparing images of different quality can be useful for training. Thanks to the assistance of Rachel Brough, we have now released the T1 and T2 images for the 13 DMCC55B people whose initial structural scans we rated as poor quality, as well as our ratings for both sets of images (the initial rejected images and the better repeated ones).

The rejected structural images (with session name “doNotUse”) and ratings are in a new sub-component (postPublication_rejectedStructurals) of the DMCC55B supplemental site, rather than with the released dataset on OpenNeuro, to avoid confusion about which images should be used for processing (use the previously-released ones available at OpenNeuro).

Wednesday, June 8, 2022

troubleshooting run-level failed preprocessing

Sometimes preprocessed images for a few runs are just ... wrong. (These failures can be hard to find without looking, which is one of the reasons I strongly suggest including visual summaries in your QC procedures; make sure you have one that works for your study population and run types.)

Here's an example of a run with "just wrong" preprocessing that I posted on neurostars: the images are all of the same person and imaging session, but one of the runs came out of the preprocessing seriously distorted.


And here's another: the images from the run at the lower right are clearly tilted and squashed compared to the others:

The above images are temporal means made after fmriprep preprocessing, including transformation to the MNI template anatomy (i.e., _space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz); see this post for more details and code.
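If you'd like to generate this sort of summary image yourself, here is a minimal sketch (not the exact code from the linked post) using nibabel and matplotlib; the filename is illustrative:

```python
# Minimal sketch: make a temporal mean image of a preprocessed run for QC.
# The filename is illustrative; adapt to your own fmriprep derivatives.
import nibabel as nib
import matplotlib.pyplot as plt

img = nib.load("sub-01_ses-1_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz")
tmean = img.get_fdata().mean(axis=3)   # average over time: one 3d image per run

mid = tmean.shape[2] // 2              # a middle axial slice for a quick look
plt.imshow(tmean[:, :, mid].T, origin="lower", cmap="gray")
plt.title("temporal mean, middle axial slice")
plt.savefig("tmean_QC.png")

# saving the full 3d mean lets you compare runs side-by-side in any viewer
nib.save(nib.Nifti1Image(tmean, img.affine), "tmean.nii.gz")
```

Making one such mean image per run and viewing them in a grid (as in the figures above) is what makes distorted runs jump out.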

How can you troubleshoot this type of partial preprocessing failure?

First, note that this is not a case of the entire preprocessing pipeline failing: we have the expected set of derivative images for the participant, and everything looks fine for most runs. This suggests that the problem is not with the pipeline itself, but rather that something is unusual about this particular run or participant.

fmriprep (and, I believe, most preprocessing pipelines) is not fully deterministic: if you run the same script twice with the same input files and settings, the output images will not be exactly the same. They should be quite similar, but not identical. We have found that sometimes simply rerunning the preprocessing corrects the type of sporadic single-run normalization/alignment failure shown above.
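One quick way to confirm that two reruns are "quite similar, but not identical" is to correlate their output voxels; a minimal sketch, with illustrative filenames:

```python
# Minimal sketch: check how similar two fmriprep reruns are by correlating
# the voxels of their temporal mean images. Filenames are illustrative.
import nibabel as nib
import numpy as np

def temporal_mean(path):
    return nib.load(path).get_fdata().mean(axis=3)

m1 = temporal_mean("rerun1/sub-01_task-Axcpt_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz")
m2 = temporal_mean("rerun2/sub-01_task-Axcpt_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz")

# both runs are in the same output space, so the voxel grids match;
# a successful rerun should correlate very nearly 1 with the first
r = np.corrcoef(m1.ravel(), m2.ravel())[0, 1]
print(f"voxelwise correlation between reruns: {r:.4f}")
```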

If repeating the preprocessing doesn't fix the failure, or you want to investigate more before rerunning, I suggest checking the "raw" (before preprocessing; as close to the images coming off the scanner as practicable) images for oddities. Simply looking at all the images, comparing those from the run with failed preprocessing against those from the other (successful) runs in the same session, can often make the problem apparent.

Look at the actual functional images of the problematic run (e.g., by loading the DICOMs into Mango and viewing them as a movie): do you see anything strange? If the problematic run has more-or-less the same orientation, movement, visible blood vessels, etc. as the non-problematic runs for the person, it is unlikely that the source of the problem is the functional run itself, and you should keep looking. (If the functional run itself is clearly unusual or full of artifacts, it is most likely simply unusable and should be marked as missing.)
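Mango works well for interactive viewing; if you prefer a scripted check, here is a minimal sketch that steps through the volumes of a run (assumed already converted to NIfTI) as a movie. The filename is illustrative:

```python
# Minimal sketch: step through the volumes of a functional run as a movie,
# a scripted alternative to a GUI viewer. Assumes the run is in NIfTI format.
import nibabel as nib
import matplotlib.pyplot as plt
import matplotlib.animation as animation

data = nib.load("sub-01_task-rest_bold.nii.gz").get_fdata()   # illustrative name
mid = data.shape[2] // 2                                      # middle axial slice

fig, ax = plt.subplots()
im = ax.imshow(data[:, :, mid, 0].T, origin="lower", cmap="gray")

def update(t):
    im.set_data(data[:, :, mid, t].T)   # show frame t of the timeseries
    ax.set_title(f"volume {t}")
    return [im]

anim = animation.FuncAnimation(fig, update, frames=data.shape[3], interval=100)
plt.show()
```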

If the functional run seems fine, look at all of the other images used in preprocessing, especially any fieldmaps and single-band reference (SBRef) images. Depending on your acquisition you may have one or more fieldmaps and SBRef images per run, or per set of runs. For the DMCC we use CMRR multiband sequences, so we have an SBRef image for every functional run, plus fieldmaps once each session. Both the fieldmaps and SBRef images are used in preprocessing, but differently, and artifacts in either will affect the preprocessing.
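To confirm which fieldmap and SBRef images your dataset has for each run, pybids can list them; a minimal sketch, with a hypothetical dataset path, subject, and session:

```python
# Minimal sketch: list the SBRef and fieldmap files for one session with
# pybids. The dataset path, subject, and session labels are hypothetical.
from bids import BIDSLayout

layout = BIDSLayout("/data/myBIDSdataset")

sbrefs = layout.get(subject="01", session="1", suffix="sbref",
                    extension=".nii.gz")
fmaps = layout.get(subject="01", session="1", datatype="fmap",
                   extension=".nii.gz")

for f in sbrefs + fmaps:
    print(f.path)   # confirm each functional run has the images you expect
```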

How an artifact in the fieldmap or SBRef affects the preprocessing can be difficult to predict; both can cause similar-looking failures. In the two examples above, the first was due to artifacts in the fieldmaps, the second to artifacts in the SBRef.

This is a screen capture of three SBRef images from the session shown in the second example. The numbers identify the scans and are in temporal order; image 41 is the SBRef for the affected run (the "tilted" brain), and image 23 is for the correctly-preprocessed run above it. There are dark bands in parts of scan 41 (red), and it looks a bit warped compared to the other two; below is how they look in coronal slices:


All of the SBRef images look rather odd (here, there is some expected dropout near the ears (yellow) and stretching along the encoding direction), but the key is to compare the images for the runs in which preprocessing was successful (23 and 32) with those for which it failed (41). The SBRef for 41 is obviously different (red): black striping in the front, and extra stretching in the lowest slices. This striping and stretching in the SBRef (probably from movement) translated to the tilting in the preprocessed output above.

What to do about it?

Ideally, you won't collect SBRef images or fieldmaps with strange artifacts; the scanner will work properly and participants will be still. If the images are checked during acquisition, it is often possible to repeat problematic scans or fix incorrect settings. This is of course the best solution!

However, sometimes an artifact or movement is not apparent during data collection, the scan can't be repeated, or you are working with an existing dataset, and so you are stuck with problematic scans. In these cases, I suggest doing something like in this post: look at all of the scans from the session (and from other sessions, if relevant) and try to determine the extent and source of the problem.

In the case of the fieldmap artifacts, every fieldmap from the scanning session was affected, but fieldmaps for the same person from two other sessions were fine. We "fixed" the failed preprocessing by building a new BIDS dataset, swapping out bad fieldmaps for good ones and changing the filenames accordingly. Before trying this I checked that the person's head position, distortion, etc. were quite similar between the runs. I do not really recommend this type of image swapping; things can go badly wrong. But it is something to consider if you have similar artifact-filled and good images from the same person and acquisition. With the SBRef image we have another option: SBRef images are not required, so we could delete poor ones (here, scan 41) and repeat the preprocessing without them. 
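To make the two workarounds concrete, here is a minimal sketch of both (swapping a fieldmap and deleting an SBRef), performed on a copy of the dataset. Every path and filename is illustrative, and after a fieldmap swap the JSON sidecar's IntendedFor must also be edited to point at the right runs:

```python
# Minimal sketch of both workarounds, run on a COPY of the BIDS dataset.
# All paths and filenames are illustrative; adapt to your own naming.
import json, shutil
from pathlib import Path

root = Path("/data/myBIDSdataset_copy")   # never modify the original dataset

# (1) swap: copy a good fieldmap from ses-1 over the bad ses-2 one, keeping
# the ses-2 filenames so the BIDS associations still resolve
good = root / "sub-01/ses-1/fmap/sub-01_ses-1_dir-AP_epi.nii.gz"
shutil.copy(good, root / "sub-01/ses-2/fmap/sub-01_ses-2_dir-AP_epi.nii.gz")

# the JSON sidecar must come along too, with IntendedFor edited to point at
# the ses-2 functional runs the swapped fieldmap should now correct
meta = json.loads((root / "sub-01/ses-1/fmap/sub-01_ses-1_dir-AP_epi.json").read_text())
meta["IntendedFor"] = ["ses-2/func/sub-01_ses-2_task-rest_run-1_bold.nii.gz"]
(root / "sub-01/ses-2/fmap/sub-01_ses-2_dir-AP_epi.json").write_text(json.dumps(meta, indent=2))

# (2) delete: remove a bad SBRef image (and its sidecar); SBRefs are optional
# in BIDS, so preprocessing simply proceeds without it
for f in (root / "sub-01/ses-2/func").glob("*run-2_sbref.*"):
    f.unlink()
```

Working on a copy keeps the original dataset intact, so you can always compare the workaround output against the original preprocessing.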

Neither workaround (swapping or deleting) should be used lightly or often, but they have produced adequate-quality images for us in a few cases. To evaluate, I look very closely at the resulting preprocessed images and BOLD timecourses, both checking the anatomy and comparing the positive control analysis (see also) results for runs/sessions with normal and workaround preprocessing. For example, confirm that the stimulus-locked changes in BOLD in particular visual parcels are essentially the same in runs with the different preprocessing. If the stimulus-evoked BOLD changes appear in different visual parcels depending on the preprocessing, the workaround was not successful.
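A minimal sketch of such a comparison, extracting the mean timecourse within a (hypothetical) visual parcel from both preprocessed versions; the parcellation and filenames are illustrative, not our actual positive control code:

```python
# Minimal sketch: compare parcel-average BOLD timecourses from the normal
# and workaround preprocessing. Parcellation and filenames are illustrative.
import nibabel as nib
import numpy as np

atlas = nib.load("parcellation_MNI.nii.gz").get_fdata()   # integer parcel labels
mask = atlas == 1                                         # a hypothetical visual parcel

def parcel_timecourse(bold_path):
    data = nib.load(bold_path).get_fdata()   # 4d: x, y, z, time
    return data[mask].mean(axis=0)           # mean over parcel voxels, one value per TR

tc_normal = parcel_timecourse("normal/sub-01_run-1_desc-preproc_bold.nii.gz")
tc_fixed = parcel_timecourse("workaround/sub-01_run-1_desc-preproc_bold.nii.gz")

# the stimulus-locked shape should be essentially the same in both versions;
# a low correlation suggests the workaround was not successful
r = np.corrcoef(tc_normal, tc_fixed)[0, 1]
print(f"timecourse correlation: {r:.3f}")
```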