fMRI studies, which measure the blood oxygenation level-dependent (BOLD) signal, do report brain activations.

It seems obvious, but it’s been subject to debate.  Why?  Well, largely because validations of the technique have tended to involve stimulating neurons using electrodes and measuring the resulting BOLD response using fMRI.  Unfortunately, whenever you stick an electrode into the brain to stimulate a region, that electrical stimulation tends to have extremely non-specific effects.  It’s like testing whether the energy-saving light bulbs (and not the tube fluorescents or the traditional filament bulbs) in the Dobbins Lab explode at high currents by sending a power surge to the whole university and checking to see whether the fire department is called to the Psychology building.  Any number of other, related events associated with the power surge could have caused the fire department to be called out.

But now we have some more solid evidence that fMRI does what we think it does.  Using a pretty cool technique called optogenetic stimulation, mouse neurons can be modified (by a locally injected virus) to fire when exposed to light, so that very specific neuronal firing can be non-invasively triggered using ‘optical stimulation’.  Resultant changes in local BOLD signal can then be assessed using high-field-strength fMRI to see whether there is a BOLD activation that corresponds directly to the neuronal firing.  Thankfully, as reported by Lee et al. in Nature, excitatory neuronal firing does lead to the sort of BOLD activation we typically see in fMRI studies.

So, excitatory neurons firing causes an elevated BOLD response.  But wait, there’s more:

“Evoked BOLD was dominated by positive signals while driving these excitatory CaMKIIα-positive cells; in contrast, optically driving inhibitory parvalbumin-positive cells, which may have unique connectivity with local neuronal circuitry or vasculature, additionally gave rise to a zone of negative BOLD, consistent with the GABAergic phenotype, surrounding the local positive BOLD signal (Supplementary Fig. 4).”

The suggestion there is that the firing of inhibitory neurons leads to a negative BOLD signal.  The justification for this statement is hidden away in the supplementary materials, but if it’s well supported (and replicated, of course) then fMRI may start being the intuitively plausible brain-interrogation tool that we’ve always shied away from allowing it to be.  It doesn’t get much simpler than: more excitation = more activation = more blood; more inhibition = less activation = less blood, does it?

It’s good to know I may not be in the snake-oil business.

Here’s a link to the article:

This is proving to be a little more interesting than I had imagined.  The particularly noteworthy finding is a null finding – it doesn’t seem to matter how you do the connectivity analysis, the recovered maps tend to look very similar.

I’m going to think about how to present this best, and what analyses will best illustrate this sort of null finding, but I think it may have publication legs so I’ll hold off preemptively discussing the data until I know exactly what I’ll do to it and how I’ll disseminate it.

Things I have (re-)learned this week:
– Conducting resting connectivity analyses on 60 minutes’ worth of 2 s TR fMRI data from 19 participants takes time;
– It is always a good idea to do a quick literature search before proclaiming any findings as Earth-shattering.

The first point is no surprise, especially when I consider that the previous resting analyses I have done were all conducted on approximately 12 minutes’ worth of 2.2 s TR fMRI data.

The second point helped to temper my initial enthusiasm for some pretty interesting initial findings, which I won’t explain just yet (that will come in pt. 3).  Suffice it to say, an article led by Damien Fair, published in NeuroImage, that we had read for a lab meeting back in 2008 had already considered quite comprehensively whether on-task fMRI scans could be used to inform fcMRI analyses.

I’m now in the process of trying to replicate Fair et al.’s on-task connectivity analysis, which examines the residuals following on-task model fitting (once again necessitating that I employ the SPM residuals tweak outlined here).  This should give me two different sorts of on-task connectivity analyses to compare to a standard resting connectivity analysis in pt. 3.

Stay tuned.

Last week, during a very interesting Brain, Behavior and Cognition colloquium given by Steve Nelson, Jeff Zacks asked a thought-provoking question.  He wanted to know what fMRI connectivity maps would look like if you performed a resting-connectivity-type analysis on data that wasn’t collected at rest, but was instead collected whilst participants were ‘on-task’, i.e. doing things other than resting.

As background to this:
– Each fMRI participant in our lab typically carries out one or two connectivity scans prior to, or following, the bulk of the experimental scanning;
– Connectivity scans require that the participant keep their eyes open, fixate on a fixation cross, and try not to fall asleep for about 5 minutes;
– Experimental scans have participants engage in much more effortful cognitive processes (such as trying to recognise words and scenes), with multiple tasks presented in relatively quick succession;
– Resting connectivity is thought to be driven by low-frequency fluctuations in signal (approx. 0.1 Hz; a peak every 10 seconds or so), whereas on-task BOLD activation is much more event-related, ramping up or down following certain events which occur every 6 seconds or so (and which we assume reflect the engagement of a particular cognitive process).
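To make the contrast concrete, here’s a quick synthetic sketch (all numbers invented purely for illustration, nothing here models real haemodynamics) of why a shared ~0.1 Hz fluctuation can keep two timecourses correlated even when independent event-related responses are layered on top:

```python
import numpy as np

tr = 2.0                               # seconds per volume
t = np.arange(0, 300, tr)              # a 5-minute 'run'
slow = np.sin(2 * np.pi * 0.1 * t)     # shared ~0.1 Hz fluctuation

def region(seed):
    # independent event-related responses (one every ~6 s) plus noise
    rng = np.random.default_rng(seed)
    events = np.zeros_like(t)
    events[rng.integers(0, 3)::3] = 1.0
    return slow + 0.5 * events + 0.3 * rng.standard_normal(t.size)

r = np.corrcoef(region(1), region(2))[0, 1]
print(r > 0.5)   # the shared slow wave dominates the correlation
```

Even with the task-related components and noise added, the slow shared component carries most of the variance, which is the intuition behind asking how much on-task activity could dilute it.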
This Raichle article in Scientific American is a very accessible primer on the current state of resting connectivity.  This Raichle article in PNAS is a more comprehensive scientific discussion of the same topic.

Two resting connectivity networks (red and blue, overlap in purple) seeded with 4mm radius spherical seeds on the PFC mid-line.

Jeff’s question was interesting to me because it asks how robust these slow-wave oscillations across distal brain loci really are.  To what extent would they be (un-)modulated by task-related fluctuations in BOLD signal?

My initial thoughts were that on-task resting connectivity maps would look pretty similar to resting resting connectivity maps – after all, it has been suggested that resting connectivity networks, such as the fronto-parietal control network, arise because of their frequency of coactivation during development, i.e. that DLPFC, MPFC and IPL are coactive when on-task so often that it makes metabolic sense for their activity to synchronise even when not on-task.  But, there’s no need to be satisfied with your initial thoughts when you can simply look at some data, so that’s what I did.

On Friday I began the process of running a resting-state-style connectivity analysis on the on-task scans of the data that went into the Journal of Neuroscience paper we had published a few weeks ago.  It was a nice dataset to use, as we had also collected resting scans and carried out a connectivity analysis that yielded some interesting results.  I entered the same two seeds (from the PFC mid-line) that were central to our connectivity analysis into a connectivity analysis using the four 10-minute on-task scans that we analysed for the event-related fMRI analysis.  In part two, I’ll have an informal look at the differences between the output from the resting scans and the on-task scans when subjected to the same resting connectivity analyses.
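For anyone curious, the core of a seed-based connectivity analysis is just a mass correlation of a seed timecourse against every voxel. A minimal numpy sketch (synthetic data; the dimensions and variable names are mine, and the real pipeline adds filtering and nuisance regression):

```python
import numpy as np

rng = np.random.default_rng(42)
n_vols, n_vox = 300, 1000                  # assumed dimensions
data = rng.standard_normal((n_vols, n_vox))
data[:, :100] += np.sin(np.linspace(0, 30, n_vols))[:, None]   # 100 'coupled' voxels

seed = data[:, :10].mean(axis=1)           # mean timecourse of a small 'sphere'

# Pearson r of the seed against every voxel, vectorised
z = (data - data.mean(axis=0)) / data.std(axis=0)
zs = (seed - seed.mean()) / seed.std()
r_map = z.T @ zs / n_vols
print(r_map[:100].mean() > r_map[100:].mean())   # coupled voxels correlate more
```

The resulting r-map is what gets thresholded and rendered as the red/blue networks shown above.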

Occasionally, it’s nice to look under the bonnet and see what’s going on during any automated process that you take for granted.  More often than not, I do this when the automaticity has broken down and I need to fix it (e.g. my computer won’t start), or when I need to modify the process in such a way as to make its product more useful to me (e.g. installing a TV card to make my computer more ‘useful’ to me).  This is especially true with tools such as SPM.

One of the greatest benefits associated with using SPM is that it’s all there, in one package, waiting to be unleashed on your data.  You could conduct all of your analyses using SPM only, and you could never need to know how SPM makes the pretty pictures that indicate significant brain activations according to your specified model.  That’s probably a bad idea.  You, at least, need to know that SPM is conducting lots and lots of statistical tests – regressions – as discussed in the previous post.  If you have a little understanding of regressions, you’re then aware that what isn’t fit into your regression model is called a ‘residual’ and there are a few interesting things you can do with residuals to establish the quality of the regression model you have fit to your data.  Unfortunately with SPM, this model fitting happens largely under the bonnet, and you could conduct all of your analyses without ever seeing the word ‘residual’ mentioned anywhere in the SPM interface.
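If the word ‘residual’ is unfamiliar, a toy regression makes it concrete. This is a generic ordinary-least-squares sketch (my own variable names, not SPM code), but it is the same decomposition SPM performs at every voxel:

```python
import numpy as np

# y = X @ beta + residuals: the bit the model explains, plus the bit it doesn't
rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # intercept + one regressor
y = X @ np.array([2.0, 1.5]) + 0.1 * rng.standard_normal(n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # the model fit ('Estimation')
resid = y - X @ beta                           # the residuals
print(np.allclose(X.T @ resid, 0))             # residuals are orthogonal to the model
```

That orthogonality is why residuals are useful for diagnostics: whatever structure remains in them is, by construction, structure your model failed to capture.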

Why is this?  I’m not entirely sure.  During the process of ‘Estimation’, SPM writes your residuals to disk (in the same directory as the to-be-estimated SPM.mat file) as a series of image files:

ResI_0001.img ResI_0001.hdr
ResI_0002.img ResI_0002.hdr
ResI_0003.img ResI_0003.hdr
…
ResI_xxxx.img ResI_xxxx.hdr
(xxxx runs up to the number of scans that contribute to the model.)

Each residual image will look something like this when displayed in SPM. You can see from the black background that these images are necessarily subject to the same masking as the beta or con images.

SPM then deletes these images once estimation is complete, leaving you to devise a workaround if you want to recover the residuals for your model.  One reason SPM deletes the residual image files is that they take up a lot of disk space – the residuals add nearly 400MB (in our 300-scan model) for each participant, which is a real pain if you’re estimating lots of participants and lots of models.
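The disk-space figure is easy to sanity-check with back-of-envelope arithmetic. Both the grid size and the storage precision below are assumptions on my part, chosen to show how the total scales:

```python
# Back-of-envelope check on the ~400MB figure (assumed numbers throughout:
# a 53x63x46 grid of 3 mm normalised voxels, double-precision storage)
vox = 53 * 63 * 46          # voxels per residual image
bytes_per_vox = 8           # assumed float64 storage
scans = 300                 # one residual image per scan
total_mb = vox * bytes_per_vox * scans / 1e6
print(round(total_mb))      # lands in the same ballpark as 'nearly 400MB'
```

Halve the voxel size and the total grows roughly eightfold, which is why leaving the tweak switched on across many participants fills a disk fast.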

If you’re particularly interested in exploring the residual images (for instance, you can extract the timecourse of residuals for the entire run from an ROI using Marsbar), you need to tweak SPM’s code.  As usual, the SPM message-board provides information on how to do this.

You can read the original post here, or see the relevant text below:

… See spm_spm.m, searching for the text “Delete the residuals images”.  Comment out the subsequent spm_unlink lines and you’ll have the residual images (ResI_xxxx.img) present in the analysis directory.
Also note that if you have more than 64 images, you’ll also need to change spm_defaults.m, in particular the line
defaults.stats.maxres   = 64;
which is the maximum number of residual images written.
There are a few steps here:
1) open spm_spm.m for editing by typing
>> edit spm_spm
2) Find the following block of code (lines 960-966 in my version of SPM5):
%-Delete the residuals images
for  i = 1:nSres,
    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.img']);
    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.hdr']);
    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.mat']);
end
and comment it out so it looks like:
%-Delete the residuals images
%for  i = 1:nSres,
%    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.img']);
%    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.hdr']);
%    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.mat']);
%end
3) open spm_defaults.m for editing by typing

>> edit spm_defaults

4) Find the following line (line 35 in my version of SPM5):

defaults.stats.maxres   = 64;

and change to:

defaults.stats.maxres   = Inf;

5) Save both files and run your analysis.
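If you find yourself applying this tweak repeatedly, the edits could be scripted. The helper below is a hypothetical convenience of my own (not part of SPM, and the function name is made up) and only a sketch – check the patched files by eye before trusting them:

```python
import re

def patch_spm(spm_spm_src: str, spm_defaults_src: str):
    """Comment out the delete-residuals block and lift the maxres cap."""
    out, commenting = [], False
    for line in spm_spm_src.splitlines():
        if '%-Delete the residuals images' in line:
            commenting = True            # start of the block to disable
        elif commenting and line.strip() and \
                not line.lstrip().startswith(('for', 'spm_unlink', 'end')):
            commenting = False           # first line past the block
        if commenting and not line.lstrip().startswith('%'):
            line = '%' + line            # MATLAB comment prefix
        out.append(line)
    defaults = re.sub(r'defaults\.stats\.maxres\s*=\s*\d+;',
                      'defaults.stats.maxres   = Inf;', spm_defaults_src)
    return '\n'.join(out), defaults
```

Reverting is then just a matter of restoring the original files from a backup, which also takes care of the warning below.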

Make sure that once you no longer need to see the residual images, you revert the code, otherwise you’ll run out of hard-disk space very quickly!

I recently came across a web-page I should have committed to memory years ago, when I was first starting to get to grips with SPM analysis:

Matthew Brett’s Introduction to SPM Statistics

It’s a fantastically straightforward guide to how SPM uses regression models – defined by your design and interrogated by your contrasts – to establish which voxels in your scanner images show significant activations.

It doesn’t take too much understanding on top of what you get from this web-page to appreciate that when you specify that you want onsets modelled as a haemodynamic response function (hrf), the software is simply building a timecourse by stacking hrfs on top of one another according to your design-defined onsets.  It then fits the regression, whose predictors are now the values of that constructed hrf timecourse rather than, say, task-difficulty values from 1 to 5.
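That ‘stacking hrfs’ idea is easy to demonstrate: put a stick at each onset and convolve with a double-gamma hrf. The parameter values below are the commonly quoted approximations (peak around 5 s, undershoot around 15 s), not lifted from SPM’s code:

```python
import math
import numpy as np

def hrf(t):
    # double-gamma approximation: positive peak minus a scaled undershoot
    g = lambda t, a, b: (t ** (a - 1) * b ** a * np.exp(-b * t)) / math.gamma(a)
    return g(t, 6, 1) - g(t, 16, 1) / 6

tr = 2.0
frame_times = np.arange(0, 120, tr)
sticks = np.zeros_like(frame_times)
sticks[np.searchsorted(frame_times, [10, 40, 70])] = 1.0   # design-defined onsets

regressor = np.convolve(sticks, hrf(frame_times))[:frame_times.size]
peak_time = frame_times[regressor.argmax()]
print(peak_time)   # a few seconds after the first onset at 10 s
```

The resulting regressor peaks roughly 5–6 s after each onset, which is exactly the delayed, smoothed shape SPM fits to your data.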

I’d say this should be required reading for all those getting to grips with SPM.

Whole-brain masks are produced by SPM when estimating a model.  They’re great to look over if you want to check the extent of participant movement (a quick heuristic is to examine whether movement has been so severe that it has noticeably chopped off bits of the brain, e.g. the cerebellum).

These masks can also be used as large, whole-brain ROIs from which to extract signal to covary out of resting connectivity analyses.  I’ll write more about conducting resting connectivity analyses using SPM, without the need for a dedicated connectivity toolbox, at a later date, but it involves extracting timecourses from the whole brain, white matter and CSF, and entering these as nuisance regressors alongside movement parameters and their first derivatives.  I use Marsbar to extract the timecourses from the ROI files saved in the *roi.mat format.
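To make the nuisance-regression step concrete, here’s a synthetic sketch (sizes and variable names are mine): build a design matrix from the motion parameters, their first derivatives and the three tissue timecourses, then keep the residuals:

```python
import numpy as np

rng = np.random.default_rng(7)
n_vols = 180
voxels = rng.standard_normal((n_vols, 500))          # voxel timecourses
motion = rng.standard_normal((n_vols, 6))            # 6 realignment parameters
motion_deriv = np.vstack([np.zeros((1, 6)), np.diff(motion, axis=0)])
tissue = rng.standard_normal((n_vols, 3))            # whole-brain, WM, CSF means

# intercept + motion + derivatives + tissue = nuisance design matrix
X = np.column_stack([np.ones(n_vols), motion, motion_deriv, tissue])
beta, *_ = np.linalg.lstsq(X, voxels, rcond=None)
cleaned = voxels - X @ beta        # residuals carry the connectivity signal
print(cleaned.shape)
```

The seed correlations are then computed on `cleaned` rather than on the raw data, so that shared motion and physiological signal can’t masquerade as connectivity.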

Recently, when combining a few different datasets into one bank of resting connectivity data, I noticed that the whole-brain mask aggregated across the large number of participants was dropping out a lot of the brain – not enough to warrant excluding individual participants, but cumulatively quite deleterious for the overall mask.  I therefore used ImCalc to generate a binary-thresholded image (thresholded at 0.2) of the SPM-bundled EPI template.  As you can see below, once you remove the eyeballs, this makes for a nice whole-brain mask.
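For reference, the ImCalc step boils down to one comparison per voxel. The array below is a random stand-in for the template (the grid size and the 0–1 intensity scaling are my assumptions, not the template’s actual properties):

```python
import numpy as np

# stand-in volume for the EPI template; ImCalc expression would be: i1 > 0.2
template = np.random.default_rng(3).random((53, 63, 46))
mask = (template > 0.2).astype(np.uint8)    # binarise at the 0.2 threshold
print(mask.shape, sorted(np.unique(mask)))
```

Every voxel brighter than the threshold becomes 1 and everything else 0, giving a single fixed mask that no individual participant’s dropout can erode.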

whole-brain mask image
Whole-brain mask constructed from SPM EPI template

I’ve zipped this mask and made it available in roi.mat and .nii format here.

Masking is an extremely useful function within SPM.  For example, you might want to see whether your parietal cue-related activation is subsumed by, or independent of, the parietal retrieval-success activation that is reliably found in the literature – in this case you would mask your cueing activation by retrieval success (inclusively to see the overlap, exclusively to see the independence).
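In boolean terms, the two masking modes are just AND and AND-NOT. A toy example with five ‘voxels’ (values invented for illustration; real masking operates on thresholded statistical maps):

```python
import numpy as np

cue = np.array([1, 1, 1, 0, 0], dtype=bool)        # cue-related voxels
success = np.array([0, 1, 1, 1, 0], dtype=bool)    # retrieval-success voxels

inclusive = cue & success      # cue activation *within* the success map
exclusive = cue & ~success     # cue activation *independent of* success
print(inclusive.sum(), exclusive.sum())
```

Here two of the three cue voxels overlap the success map and one is independent of it, which is exactly the subsumed-versus-independent question the masking answers.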

The drawback with the standard SPM5 masking procedure is that you can only mask with a contrast defined within the same SPM.mat as your original contrast of interest.  There are a few ways around this limitation which allow you to mask by a contrast defined in another SPM.mat file, such as using the ImCalc function, or using F-contrasts in which you specify multiple contrasts at the second level.  However, the best solution I have found was posted to the SPM mailing list by Jan Gläscher.

Follow the instructions and replace the existing spm_getSPM.m file with Jan’s modified version.  You’ll need to restart SPM (maybe even Matlab), but once you do, when you click through Results and select your first contrast of interest from the first SPM.mat (the one you want to mask), you will be able to select a different SPM.mat from which to choose a masking contrast, as follows:

masking options
1) You get the standard masking dialog;
2) But now you are given the choice of masking from within the same analysis or selecting 'other'. If you select 'other', you will be able to choose a new SPM.mat from which to select a masking contrast.

It certainly beats messing about with ImCalc.