Occasionally, it's nice to look under the bonnet and see what's going on during an automated process that you take for granted.  More often than not, I do this when the automaticity has broken down and I need to fix it (e.g. my computer won't start), or when I need to modify the process to make its product more useful to me (e.g. installing a TV card).  This is especially true with tools such as SPM.

One of the greatest benefits of using SPM is that it's all there, in one package, waiting to be unleashed on your data.  You could conduct all of your analyses using SPM alone, and you need never know how SPM makes the pretty pictures that indicate significant brain activations according to your specified model.  That's probably a bad idea.  You at least need to know that SPM is conducting lots and lots of statistical tests – regressions – as discussed in the previous post.  If you have a little understanding of regression, you're aware that what isn't fit by your regression model is called a 'residual', and there are a few interesting things you can do with residuals to establish the quality of the model you have fit to your data.  Unfortunately, with SPM this model fitting happens largely under the bonnet, and you could conduct all of your analyses without ever seeing the word 'residual' mentioned anywhere in the SPM interface.
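For anyone rusty on the terminology, here's a toy Matlab illustration of what a residual is – simulated numbers only, nothing to do with SPM's internals:

X = [ones(10,1), (1:10)'];          % design matrix: intercept plus one regressor
y = 3 + 0.5*(1:10)' + randn(10,1);  % simulated 'data': signal plus noise
beta = X \ y;                       % least-squares parameter estimates
res  = y - X*beta;                  % residuals: whatever the model can't explain
fprintf('Residual variance: %.3f\n', var(res));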

Why SPM hides them, I'm not entirely sure.  During the process of 'Estimation', SPM writes all of your residuals to disk (in the same directory as the to-be-estimated SPM.mat file) as a series of image files as follows:

ResI_0001.img ResI_0001.hdr
ResI_0002.img ResI_0002.hdr
ResI_0003.img ResI_0003.hdr
…
ResI_xxxx.img ResI_xxxx.hdr
(xxxx runs up to the number of scans that contribute to the model.)

Each residual image will look something like this when displayed in SPM. You can see from the black background that these images are necessarily subject to the same masking as the beta or con images.

SPM then deletes these images once estimation is complete, leaving you to devise a workaround if you want to recover the residuals for your model.  One reason SPM deletes the residual image files is that they take up a lot of disk space – in our 300-scan model the residuals add nearly 400MB per participant, which is a real pain if you're estimating lots of participants and lots of models.

If you're particularly interested in exploring the residual images (for instance, you can extract the timecourse of residuals for the entire run from an ROI using Marsbar), you need to tweak SPM's code.  As usual, the SPM message board provides information on how to do this.

You can read the original post here, or see the relevant text below:

… See spm_spm.m, searching for the text “Delete the residuals images”.  Comment out the subsequent spm_unlink lines and you’ll have the residual images (ResI_xxxx.img) present in the analysis directory.
Also note that if you have more than 64 images, you’ll also need to change spm_defaults.m, in particular the line
defaults.stats.maxres   = 64;
which is the maximum number of residual images written.
There are a few steps here:
1) open spm_spm.m for editing by typing
>> edit spm_spm
2) Find the following block of code (lines 960-966 in my version of SPM5):
%-Delete the residuals images
%==========================================================================
for i = 1:nSres,
    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.img']);
    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.hdr']);
    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.mat']);
end
and comment it out so it looks like:
%-Delete the residuals images
%==========================================================================
%for i = 1:nSres,
%    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.img']);
%    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.hdr']);
%    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.mat']);
%end
3) open spm_defaults.m for editing by typing

>> edit spm_defaults

4) Find the following line (line 35 in my version of SPM5):

defaults.stats.maxres   = 64;

and change to:

defaults.stats.maxres   = Inf;

5) Save both files and run your analysis.

Make sure that once you no longer need to see the residual images, you undo these modifications, otherwise you'll run out of hard-disk space very, very quickly!
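Once the ResI images survive estimation, here's a minimal sketch for pulling them into Matlab for inspection (assuming SPM5 or later is on your path and you're in the analysis directory; the voxel coordinates are made up for illustration):

resfiles = spm_select('FPList', pwd, '^ResI_.*\.img$'); % find all ResI_xxxx.img files
V  = spm_vol(resfiles);         % map the image headers
Y  = spm_read_vols(V);          % 4-D array: x-by-y-by-z-by-scan
ts = squeeze(Y(30,40,25,:));    % residual timecourse at one (made-up) voxel
plot(ts);                       % eyeball it for spikes or drift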

I recently came across a web-page I should have committed to memory years ago, when I was first starting to get to grips with SPM analysis:

Matthew Brett’s Introduction to SPM Statistics

It's a fantastically straightforward guide to how SPM uses a regression model, and the contrasts you specify, to establish which voxels in your scanner images show significant activations.

It doesn't take too much understanding on top of what you get from this web-page to appreciate that when you specify that you want onsets modeled as a haemodynamic response function (hrf), the software is simply building a timecourse by stacking hrfs on top of one another according to your design-defined onsets.  It then fits the regression, which is now defined by parameters resulting from the estimated hrf timecourse rather than, say, task-difficulty values from 1-5.
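In Matlab terms, the stacking amounts to convolving a delta function at each onset with SPM's canonical hrf.  A simplified sketch – real SPM builds regressors on a finer time grid and handles durations and parametric modulators, and the TR and onsets below are invented:

TR     = 2;                     % repetition time in seconds
nScans = 300;
onsets = [10 40 90 150 220];    % event onsets, in scans (made up)
hrf    = spm_hrf(TR);           % SPM's canonical hrf, sampled every TR
stick  = zeros(nScans,1);
stick(onsets) = 1;              % delta function at each onset
reg    = conv(stick, hrf);      % an hrf stacked at every onset
reg    = reg(1:nScans);         % trim back to the session length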

I’d say this should be required reading for all those getting to grips with SPM.

Whole brain masks are produced by SPM when estimating a model.  They're great to look over if you want to check the extent of participant movement (a quick heuristic is to examine whether movement has been so severe that it has noticeably chopped off bits of the brain, e.g. the cerebellum).

These masks can also be used as large, whole-brain ROIs from which to extract signal to covary out of resting connectivity analyses.  I'll write more about conducting resting connectivity analyses using SPM, without the need for a dedicated connectivity toolbox, at a later date, but it involves extracting timecourses from the whole brain, white matter and CSF and entering these as nuisance regressors alongside movement parameters and their first derivatives.  I use Marsbar to extract the timecourses from the ROI files saved in the *roi.mat format.
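For what it's worth, the Marsbar calls I use look roughly like this – a sketch, with placeholder file names, assuming Marsbar is on your Matlab path:

roi = maroi('load_cell', 'wholebrain_roi.mat');    % load the *_roi.mat ROI
P   = spm_select('FPList', pwd, '^swra.*\.img$');  % your preprocessed EPIs (placeholder filter)
mY  = get_marsy(roi{:}, P, 'mean');                % summarise the voxels within the ROI
y   = summary_data(mY);                            % one mean value per scan: the nuisance timecourse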

Recently, when combining a few different datasets into one bank of resting connectivity data, I noticed that the whole brain mask aggregated across the large number of participants was dropping out a lot of the brain – not enough to consider excluding individual participants, but cumulatively quite deleterious for the overall mask.  I therefore used ImCalc to generate a binary-thresholded image (thresholded at 0.2) of the SPM-bundled EPI template.  As you can see below, once you remove the eyeballs, this makes for a nice whole-brain mask.

[Figure: Whole-brain mask constructed from the SPM EPI template]
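The ImCalc step amounts to a one-liner – roughly this, in SPM5-era code (the output filename is a placeholder, and the eyeball removal is a separate manual edit, e.g. in MRIcron):

epi = fullfile(spm('Dir'), 'templates', 'EPI.nii');     % the SPM-bundled EPI template
spm_imcalc_ui(epi, 'wholebrain_mask.nii', 'i1 > 0.2');  % binarise at the 0.2 threshold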

I've zipped this mask and made it available in roi.mat and .nii format here.

The APA Publication Manual (6th Edition) 'recommends' that we include DOIs (Digital Object Identifiers) in our reference lists.

Now first of all, what are DOIs?  Well, they’re actually pretty nifty. According to the DOI website:

“The Digital Object Identifier (DOI®) System is for identifying content objects in the digital environment. DOI® names are assigned to any entity for use on digital networks. They are used to provide current information, including where they (or information about them) can be found on the Internet. Information about a digital object may change over time, including where to find it, but its DOI name will not change.”

As to what 'recommends' means – right now you certainly won't get a manuscript rejected for not including DOIs, but that may well be the way the tide is turning.  Reference management software distributors have made the APA 6th style available, and as long as you have the DOI for each reference you cite in your library, these styles will tend to list it at the end of each reference, e.g.

Yonelinas, A. P. (1994). Receiver-operating characteristics in recognition memory: Evidence for a dual-process model. Journal of Experimental Psychology: Learning, Memory & Cognition, 20, 1341-1354. doi: 10.1037/0278-7393.20.6.1341

The problem is finding a DOI for each reference you cite.  Although this won't be as much of an issue if you're building your reference lists from scratch now, it's still one that will surface from time to time, as it seems that older articles are having DOIs rolled out to them too – an article you cite now might not yet have a DOI, but by the time you get round to writing your next paper it will have been assigned one, and it will be 'recommended' that you duly note this in your Reference section.  Because of this post-hoc rolling out of DOIs, you can't simply rely on finding the original paper and checking its title-page DOI listing either.  So how should you go about updating your reference library?

If you don’t have a newfangled reference management tool that does this for you automatically, there’s another pretty good solution.  Crossref have made a free DOI lookup facility available.  Using it, you can find DOIs one-by-one.  However, an even better method is nestled away at the bottom of the page.  If you click on the simple text query link, you’ll be taken to a page where you can simply paste your existing Reference section into a text box, submit it for analysis, and receive back your original text, with DOIs added.

Individual DOI lookup

Reference list DOI lookup

Try it – it’ll make the transition to compulsory DOI use (which will probably come in the APA 7th) that little bit less painful.

Every now and again, Microsoft PowerPoint or Excel graphs or illustrations turn out just as you want them.  In these situations, it's handy to have a way of saving each slide as a high-quality image file.  I've used the one-off registry tweak described below to successfully generate figures for journal articles from PowerPoint slides.

I used to do this by starting the PowerPoint show in fullscreen (having pasted the Excel graph into a slide, if necessary), pressing Print-Screen (PrtScn) and pasting the screen-grab-quality image into GIMP to edit and save.  This is perfect if the image only needs to be good enough to display on screen, e.g. if you're making instruction screens for experiments and don't fancy messing about with coding each block of instruction text in E-Prime, Superlab, Matlab etc.  However, if you need to produce files that you can submit to journals as figures, then you need something of much higher quality (journals will usually stipulate a minimum resolution of 300dpi).

The standard "Save As" .bmp, .tif and .jpg options in PowerPoint will produce some decidedly jagged, 96dpi images, which aren't much good for anything other than making thumbnails of your slides.  However, there is a tweak, in the form of a Microsoft-suggested registry edit, that fixes this and allows you to save images with resolutions in excess of 300dpi.

http://support.microsoft.com/default.aspx?scid=kb;en-us;827745&Product=ppt2003

If you follow the instructions, you'll be able to set resolutions of up to 307dpi in PowerPoint 2003.
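From memory, the KB article has you add a DWORD value named ExportBitmapResolution under the PowerPoint options key – something like the .reg snippet below for Office 2003 (0x133 is 307 in decimal) – though do check the article itself rather than trusting my recollection:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Office\11.0\PowerPoint\Options]
"ExportBitmapResolution"=dword:00000133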

[Figure: 96dpi image (left) and 307dpi image (right)]

The images you see here are examples created from a 1″ x 1″ PowerPoint slide that I have enlarged (96dpi) and shrunk (307dpi) so they are comparable on the same scale.  You can see the fuzziness of the PowerPoint default output on the left compared with the product of the registry tweak on the right.

WARNING: Don't try to set the resolution any higher than 307dpi (in PowerPoint 2003).  If you do – and manage to avoid causing a crash every time you save a presentation – large images will come out with the bottom half squished up and the text left isolated, which is worse than the standard 96dpi output as far as reader comprehension goes!

Masking is an extremely useful function within SPM.  For example, you might want to see whether your parietal cue-related activation is subsumed by, or independent of, the parietal retrieval-success activation that is reliably found in the literature – in this case you would mask your cueing activation by retrieval success (inclusively to see the overlap, exclusively to see the independence).

The drawback with the standard SPM5 masking procedure is that you can only select a masking contrast defined within the same SPM.mat as your original contrast of interest.  There are a few ways around this limitation which allow you to mask by a contrast defined in another SPM.mat file, such as using the ImCalc function, or using F-contrasts in which you specify multiple contrasts at the second level.  However, the best solution I have found was posted to the SPM mailing list by Jan Gläscher.

https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind0703&L=SPM&P=R1823&X=062BAA6FF6E63DF21C&Y

Follow the instructions and replace the existing spm_getSPM.m file with Jan's modified file.  You'll need to restart SPM (maybe even Matlab), but once you do, when you click through Results and select your first contrast of interest from the first SPM.mat (the one you want to mask), you will be able to select a different SPM.mat from which to choose a contrast to mask the first one with, as follows:

[Figure: 1) You get the standard masking dialog.]
[Figure: 2) But now you are given the choice of masking from within the same analysis or selecting 'other'. If you select 'other', you can choose a new SPM.mat from which to select a masking contrast.]

It certainly beats messing about with ImCalc.
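(For the record, the ImCalc messing-about it replaces looks roughly like this – the filenames and threshold are illustrative – after which you'd feed the binary image in as a mask, e.g. via small volume correction:)

tmap = fullfile('other_analysis', 'spmT_0002.img');          % a t-map from the other SPM.mat (placeholder)
spm_imcalc_ui(tmap, 'other_contrast_mask.img', 'i1 > 3.1');  % binarise at your chosen t threshold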

This week, as well as being published in the Journal of Neuroscience (see here), our article-related artwork was also chosen to go on the cover of the journal.

[Figure: The Journal of Neuroscience cover, Feb 24, 2010 – brain mosaic]

When I was trying to think of something we could submit as a cover, I initially thought of trying to create a photo-mosaic of our key activation using the raw, mosaic images from the scanner.  That didn’t work out so well, largely because there is very little that differentiates one image from another amongst the thousands of mosaics that are gathered over the course of a single scan – they’re all grey, fuzzy and extremely boring to look at.  So, I tried it using renderings of the key activation viewed from different angles, and rendered in slightly different shades of red/orange/yellow, and it didn’t look too bad at all.

In order to do this, I used Steffen Schirmer's Photo-Mosaik-Edda software.  It's a wonderful program that's pretty easy to use and extremely customisable, and I was pleasantly surprised to find that I could produce very high quality images (e.g. suitable for printing as a magazine cover) using the built-in settings.  I simply built a library of images to act as the tiles of the mosaic, then selected the image that I wanted the mosaic to represent.  To create the tiles, I used the indispensable MRIcron by Chris Rorden, and to get the size and layout of the larger image that I wanted, I simply messed about with one of the images in GIMP (I also relied heavily on both of these pieces of software to make the figures in the manuscript itself presentable).

I'm keen on the image because I hope it captures some of the essence of what I was trying to get at – that in fMRI research we use tonnes and tonnes of data to create the pretty pictures that make it all intelligible.  Admittedly, it is pretty hard to make out the text on the cover of the Journal, but I'm glad that whoever saw fit to use the image was happy to take a hit on that front.

I am a postdoctoral cognitive neuroscientist / experimental psychologist working in the Psychology department at Washington University in St. Louis.

I have been here close to two years, and my training has involved learning an awful lot of 'stuff' in order to conduct the sort of research that I came here to do – behavioural (i.e. computerised experiments) and neuroscientific (in my case, fMRI) study of memory decision-making.  Whether I've been pointed in the right direction (as is almost always the case) or have had to work things out from scratch, the internet has been my best friend, helping me find resources and decipher much of the necessary technicality that pervades the field of psychology.

I hope that this blog helps me gather in one place many of the bits and pieces, both on- and off-line, that have helped me over the past couple of years.  I also hope that I get into the habit of updating it in a way that is useful when I need to find out how I tackled a particular problem in the past, or even when I simply need to find a website I know was previously useful to me.  If it proves useful to anyone else in the process, then that will be a marvellous outcome too.

There will be a lot added to the blog over the coming weeks and months as I get it to resemble the sort of thing I have in mind.  I hope it turns out to be useful.