Speed

Here’s a link to some pretty useful tweaks for getting the most speed possible out of a Windows Remote Desktop connection:

http://www.tech-recipes.com/rx/11235/how-to-improve-remote-desktop-protocol-performance/

The end result isn’t dramatically faster than the default settings for a slow connection, but the difference is noticeable.

Multiple Connections to Computers behind a Router

This is for when you want to Remote Desktop into more than one computer sharing an internet connection through a router.

http://www.pchell.com/support/useremotedesktoptoaccessmultiplecomputers.shtml

You’ll need to:
1) change the port that Remote Desktop listens on, locally on each computer (the default is 3389);
2) set up port-forwarding appropriately on your router;
3) make sure that whoever runs your network allows external access not only to the default Remote Desktop port on the IP address occupied by your router, but also to the additional ports you have specified on each target computer.  This is a particularly important step if you’re doing this on a work connection, where external access to most ports is blocked by default.
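As an aside, the port that Remote Desktop is currently listening on lives in the registry, so you can sanity-check a machine before you leave it.  A minimal sketch for reading it from within Matlab (Windows only; changing the port still means editing the registry by hand, as the linked article describes):

% Read the port Remote Desktop currently listens on (3389 by default).
port = winqueryreg('HKEY_LOCAL_MACHINE', ...
    'SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp', ...
    'PortNumber')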

Having just skimmed the Lifehacker article below, I started thinking about which habits I’ve incorporated into my work-day, and which habits I really need to cultivate.

Lifehacker’s “Why and How I Switched to a Standing Desk”

Drinking More Water: At the start of the year I bought a Brita filter jug with the aim of drinking more water.  Seeing the jug on my desk every morning compels me to fill it up, and I probably drink a couple of litres throughout an average working day.  This new habit has got rid of a lot of evening headaches and I’m pretty happy with it.  The more frequent toilet breaks don’t hurt either, breaking up the monotony of a day sat at the desk.

Standing Desk: There seem to be a few benefits to making the switch to a standing desk.  First, my posture is worsening by the year, and I seem to be collecting muscular pains which are exacerbated by hefting an all-action two-year-old around in my spare time.  I imagine a standing desk would get me focused much more on my posture and the body-mechanics that facilitate my working day.  Second, anything to get a bit more physical activity into my life right now would be a good thing – Scottish winters aren’t blessed with an abundance of daylight hours or days that scream “Go out for a run” at me.  The barrier to making the conversion seems mostly to be social: I don’t want to become ‘The guy in Psychology who has his desk up on reams of printer paper.’  I’m also worried that I wouldn’t make it through the initial 5-day breaking-in phase.

Running: I ran my first half- and full-marathons in St. Louis.  As part of the training for these events, I got into a nice routine of running around Forest Park (close to 7 miles) at least twice a week.  That’s fallen by the wayside recently.  I hope it’ll pick up again in the summer, but I think I’ll try and catalyse that change by going for runs during my lunch break.  I just need to find a suitable shower facility in order to maintain basic standards of hygiene.

Being less wasteful with toner/paper: I don’t like reading journal articles on computer monitors.  Therefore, I print thousands of pages a year, most of which I only read once.  Most of these articles end up catalogued in my Endnote database (if they’re lucky) and locked in a metal filing cabinet with a few notes scrawled on them.  That’s quite a waste of paper and ridiculously expensive toner, which I now have to buy myself.  Motivated by saving trees and money, I’m starting to consider other options.  Now that they’ll display pdfs, I’ve thought about a Kindle; the e-ink is easier on the eye than an LCD screen, the battery lasts for weeks and they’re (relatively) cheap.  BUT they won’t display colour, something I need if I’m to follow the neuroimaging papers I read.  Colour alternatives like the iPad and Nook Colo(u)r have some combination of a shocking battery life, back-lit screens and a horrendous price-tag and I’m not sure it’s worth taking a punt on a gadget that may end up presenting me with more problems than it solves.  For instance, I don’t know how I’d make notes effectively on an electronic pdf document using each of these devices.  I’m settling on the thought that I’ll wait for colour e-ink before committing to wasting less paper, but it does seem like a shame that there isn’t something suitable on the market right now… and I’ll probably be waiting years.

I’d be interested in reading comments from anyone who has converted to a standing desk or bought a Kindle/iPad for the purposes of reading journal articles.  Nothing’s ever going to be without its own problems, but do these innovations improve overall working conditions?

One of the most annoying and stressful things that can happen during an fMRI experiment is for system notifications, pop-ups or even the Windows taskbar to suddenly appear on the screen on which you are presenting stimuli to participants.  Here I outline a few things that I do to minimise the likelihood of this sort of disruption when running Matlab on a Windows XP machine.

1) Turn off your wireless network adapter. This reduces the processing burden on your system – crucial if you’re interested in measuring response times – and stops a lot of annoyances (Flash updates, Windows updates etc.) being pushed to your machine.  My laptop has a manual switch on the exterior that I can flick to turn it off.  Alternatively, the wireless network can be disabled within Windows by navigating to Network Connections, right-clicking on the wireless network, and selecting ‘Disable’.

2) Disable Real-Time Antivirus Protection and Windows Automatic Updates. This again reduces the burden on your system and stops annoying notifications popping up.  Whatever it is, it can wait.  (A Matlab-based way of stopping the Windows Update service is sketched after this list.)  However, disabling real-time protection will probably lead to an ugly warning in your system tray, but no-one needs to see that if you…

3) Turn off the ‘always on top’ property of the Windows Taskbar. Once you do this, Matlab will sit entirely on top of the taskbar, and the taskbar shouldn’t ever become visible at inopportune moments (something I inexplicably struggled with when designing my latest fMRI experiment).  Right-click on the taskbar, select Properties, and untick the ‘Keep the taskbar on top of other windows’ checkbox.

4) Disable balloon tips in the notification area. Whilst you could turn off the system tray altogether, that shouldn’t be necessary if you’ve already followed step 3.  (One reason I like to keep the system tray visible is that I find it a handy way to manage wireless networks, Dropbox, etc. and I don’t want to lose that functionality entirely.)  However, to reduce the chances of anything else you haven’t already thought of ‘helpfully’ reminding you of something mid-experiment, turn off bubble notifications, as detailed in this Microsoft TechNet article.
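As flagged in step 2, here’s a minimal sketch of stopping the Windows Update service from within Matlab before a session, and restarting it afterwards.  This is a hypothetical convenience only – the Services control panel does the same job – and Matlab needs administrator rights for it to work:

% Stop the Windows Update service ('wuauserv') before the experiment starts.
[status, msg] = system('net stop wuauserv');
if status ~= 0
    warning('Could not stop Windows Update: %s', msg);
end

% ... run the experiment ...

% Restart the service once you're done.
system('net start wuauserv');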

That should give you the best crack at getting through an experiment without an ugly, flickering Windows interruption.  Now that you’ve covered your bases, all you need to do is make sure that your Matlab coding doesn’t give you any grief – easier said than done.

UPDATE: These steps aren’t exclusive to Matlab stimulus presentation either.  They could give you peace of mind before hooking your laptop up to give a formal presentation or job talk in Powerpoint… I’ve seen too many talks interrupted by pesky Windows Update notifications and ‘Found new wireless network’ bubbles.

Here’s an interesting wikibooks page detailing how you can make SPM faster.

http://en.wikibooks.org/wiki/SPM/Faster_SPM

Some of the tweaks involve simply adjusting spm_defaults.m to utilise the amount of RAM you have installed at the model estimation stage.  Others involve a more costly (and potentially hugely beneficial?) purchase of the Parallel Computing Toolbox to utilise the many cores in a single machine, or many machines served by a server.  I’ll certainly be taking a look at these tweaks in the coming weeks and months.
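For what it’s worth, the parallel tweaks boil down to farming independent jobs (e.g. different subjects’ first-level models) out to different cores.  A toy sketch of the idea, assuming you have the Parallel Computing Toolbox – subject_dirs and estimate_model are placeholders for your own subject list and estimation wrapper:

matlabpool open 4                        % grab four workers (older Matlab syntax)
parfor s = 1:numel(subject_dirs)
    estimate_model(subject_dirs{s});     % each subject is estimated independently
end
matlabpool close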

EDIT: Changing the exponent in the defaults.stats.maxmem parameter from its default of 20 to 33 (in order to use a maximum of 8GB of available memory, as outlined in the screengrab from the wikibooks site below) looks to have sped model estimation up by maybe a factor of 10.

A defaults variable 'maxmem' indicates how much memory can be used at the same time when estimating a model. If you have loads of memory, you can increase that memory setting in spm_defaults.m

Assuming you have a large amount of RAM to utilise, this is a HUGE time-saving tweak.  Even if you don’t have a large amount of RAM, I’m sure you can speed things up considerably by specifying the value as something greater than the meagre 1MB (2^20) SPM allocates by default.
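For reference, the edit itself is a one-liner in spm_defaults.m – the value is specified in bytes, as a power of two (the exact line number will vary with your SPM version):

defaults.stats.maxmem   = 2^33;    % allow up to 8GB during estimation (default is 2^20, i.e. 1MB)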

SEE ALSO: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=SPM;863a049c.1105 which has the following recommendation:

[change] line 569 in spm_spm from:

nbz = max(1,min(zdim,floor(mmv/(xdim*ydim)))); nbz = 1; %-# planes

to:

nbz = max(1,min(zdim,floor(mmv/(xdim*ydim)))); %-# planes

[i]n order to load as much data as possible into the RAM at once.

By default, SPM masks the images that contribute to an analysis at the Estimation stage.  If a voxel is masked out because it fails to exceed an arbitrary analysis threshold (set to a default of 0.8 in SPM5), then its values are replaced with NaNs, and that voxel does not contribute to the final output.  Incidentally, this masking contributes to the non-analysis of orbitofrontal and frontopolar regions as a consequence of signal dropout.

If you want to include voxels that do not exceed the threshold (useful if you are interested in analysing data presented in unusual units, e.g. maps of residuals), you can edit the spm_defaults.m file.  Around line 42 should be the following text:

% Mask defaults
%=======================================================================
defaults.mask.thresh    = 0.8;

This can be edited (e.g. replaced with -Inf if you want to remove the threshold altogether), the spm_defaults.m file saved, and the analysis run with a more liberal masking threshold implemented.  This can greatly increase the number of comparisons that are made, and can include a lot of computationally expensive junk, i.e. comparisons with non-brain tissue.  To get round this issue, it is worthwhile setting an explicit mask at the model specification stage (e.g. the whole-brain mask I wrote about here) whenever you lower the implicit masking threshold.
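For example, to remove the implicit threshold altogether, the block above becomes:

% Mask defaults
%=======================================================================
defaults.mask.thresh    = -Inf;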

There is a little more on this from the SPM list here.  As with all SPM tweaks, make note of what you have tweaked, and make sure you change it back to its default setting once you have done what you set out to do.

Occasionally, it’s nice to look under the bonnet and see what’s going on during an automated process that you take for granted.  More often than not, I do this when the automaticity has broken down and I need to fix it (e.g. my computer won’t start), or when I need to modify the process to make its product more useful to me (e.g. installing a TV card to make my computer more ‘useful’).  This is especially true with tools such as SPM.

One of the greatest benefits of using SPM is that it’s all there, in one package, waiting to be unleashed on your data.  You could conduct all of your analyses using SPM alone, without ever needing to know how SPM makes the pretty pictures that indicate significant brain activations according to your specified model.  That’s probably a bad idea.  At the very least, you need to know that SPM is conducting lots and lots of statistical tests – regressions – as discussed in the previous post.  If you have a little understanding of regression, you’ll be aware that whatever isn’t captured by your regression model is called a ‘residual’, and there are a few interesting things you can do with residuals to establish the quality of the model you have fit to your data.  Unfortunately, with SPM this model fitting happens largely under the bonnet, and you could conduct all of your analyses without ever seeing the word ‘residual’ mentioned anywhere in the SPM interface.

Why is this?  I’m not entirely sure.  During the process of ‘Estimation’, SPM writes your residuals to disk (in the same directory as the to-be-estimated SPM.mat file) as a series of image files:

ResI_0001.img ResI_0001.hdr
ResI_0002.img ResI_0002.hdr
ResI_0003.img ResI_0003.hdr

ResI_xxxx.img ResI_xxxx.hdr
(xxxx corresponds to the number of scans that contribute to the model.)

When displayed in SPM, each residual image has a black background, showing that these images are necessarily subject to the same masking as the beta or con images.

SPM then deletes these images once estimation is complete, leaving you to formulate a workaround if you want to recover the residuals for your model.  One reason SPM deletes the residual images is that they take up a lot of disk space – nearly 400MB per participant in our 300-scan model – which is a real pain if you’re estimating lots of participants and lots of models.

If you’re particularly interested in exploring the residual images (for instance, you can extract the timecourse of residuals for the entire run from an ROI using Marsbar), you need to tweak SPM’s code.  As usual, the SPM message-board provides information on how to do this.

You can read the original post here, or see the relevant text below:

… See spm_spm.m, searching for the text “Delete the residuals images”.  Comment out the subsequent spm_unlink lines and you’ll have the residual images (ResI_xxxx.img) present in the analysis directory.
Also note that if you have more than 64 images, you’ll also need to change spm_defaults.m, in particular the line
defaults.stats.maxres   = 64;
which is the maximum number of residual images written.
There are a few steps here:
1) open spm_spm.m for editing by typing
>> edit spm_spm
2) Find the following block of code (lines 960-966 in my version of SPM5):
%-Delete the residuals images
%==========================================================================
for  i = 1:nSres,
    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.img']);
    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.hdr']);
    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.mat']);
end
and comment it out so it looks like:
%-Delete the residuals images
%==========================================================================
%for  i = 1:nSres,
%    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.img']);
%    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.hdr']);
%    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.mat']);
%end
3) open spm_defaults.m for editing by typing

>> edit spm_defaults

4) Find the following line (line 35 in my version of SPM5):

defaults.stats.maxres   = 64;

and change to:

defaults.stats.maxres   = Inf;

5) Save both files and run your analysis.
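Once the ResI images survive estimation, you can pull a timecourse out of them like any other series of images.  A rough sketch using Marsbar, assuming an ROI file my_roi_roi.mat and a variable ana_dir pointing at the analysis directory (both placeholders for your own names):

R = maroi('my_roi_roi.mat');                       % load the ROI definition
P = spm_select('List', ana_dir, '^ResI.*\.img$');  % list the residual images
P = [repmat([ana_dir filesep], size(P,1), 1) P];   % prepend the full path
Y = get_marsy(R, P, 'mean');                       % extract the voxel data
y = summary_data(Y);                               % one mean value per scan
plot(y), xlabel('Scan'), ylabel('Mean residual')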

Make sure that once you no longer need to see the residual images, you revert the code to its original state, otherwise you’ll run out of hard-disk space very, very quickly!

Every now and again, a Microsoft Powerpoint or Excel graph or illustration turns out just as you want it.  In these situations, it’s handy to have a way of saving each slide as a high-quality image file.  I’ve used the one-off registry tweak described below to successfully generate figures for journal articles from Powerpoint slides.

I used to do this by starting the Powerpoint show in fullscreen (having pasted the Excel graph into a slide, if necessary), pressing Print-Screen (PrtScn), and pasting the screen-grab-quality image into GIMP to edit and save.  This is perfect if the image only needs to be good enough quality to display on screen, e.g. if you’re making instruction screens for experiments and don’t fancy messing about with coding each block of instruction text in E-Prime, Superlab, Matlab etc.  However, if you need to produce files that you can submit to journals as figures, then you need something of much higher quality (journals will usually stipulate a minimum resolution of 300dpi).

The standard “Save As” .bmp, .tif and .jpg options in Powerpoint will produce some decidedly jagged, 96dpi images, which aren’t much good for anything other than making thumbnails of your slides.  However, there is a tweak in the form of a Microsoft-suggested registry edit that fixes this and allows you to save images at resolutions in excess of 300dpi.

http://support.microsoft.com/default.aspx?scid=kb;en-us;827745&Product=ppt2003

If you follow the instructions, you’ll be able to set resolutions of up to 307dpi in Powerpoint 2003.
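The edit boils down to adding an ExportBitmapResolution DWORD value under Powerpoint’s Options key.  If you’d rather script it than click through regedit, something like this should work from Matlab (a sketch based on the KB article; the 11.0 key is specific to Office 2003 – back up your registry first):

% Set Powerpoint 2003's bitmap export resolution to 307dpi.
system(['reg add "HKCU\Software\Microsoft\Office\11.0\PowerPoint\Options" ', ...
    '/v ExportBitmapResolution /t REG_DWORD /d 307 /f']);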

[Figure: 96dpi image (left) and 307dpi image (right)]

The images you see here are examples created from a 1″ x 1″ Powerpoint slide, which I have enlarged (the 96dpi one) and shrunk (the 307dpi one) so that they are comparable at the same scale.  You can see the fuzziness of Powerpoint’s default output on the left compared with the registry-tweaked output on the right.

WARNING: Don’t try to set the resolution any higher than 307dpi (in Powerpoint 2003).  If you do, and somehow avoid causing a crash every time you save a presentation, large images will come out with the bottom half squished up, leaving the text isolated – worse than the standard 96dpi images as far as reader comprehension goes!

Masking is an extremely useful function within SPM.  For example, you might want to see whether your parietal cue-related activation is subsumed by, or independent of, the parietal retrieval-success activation that is reliably found in the literature – in this case you would mask your cueing activation by retrieval success (inclusively to see the overlap, exclusively to see the independence).

The drawback with the standard SPM5 masking procedure is that you can only mask by a contrast defined within the same SPM.mat as your original contrast of interest.  There are a few ways around this limitation which allow you to mask by a contrast defined in another SPM.mat file, such as using the ImCalc function, or using F-contrasts in which you specify multiple contrasts at the second level.  However, the best solution I have found was posted to the SPM mailing list by Jan Gläscher.

https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind0703&L=SPM&P=R1823&X=062BAA6FF6E63DF21C&Y

Following the instructions, you replace the existing spm_getSPM.m file with Jan’s modified version.  You’ll need to restart SPM (maybe even Matlab), but once you do, when you click through Results and select your first contrast of interest from the first SPM.mat (the one you want to mask), you will be able to select a different SPM.mat from which to choose a masking contrast, as follows:

[Figure: 1) the standard masking dialog appears as usual; 2) you are now given the choice of masking from within the same analysis or selecting ‘other’ – choosing ‘other’ lets you select a new SPM.mat from which to pick a masking contrast.]

It certainly beats messing about with ImCalc.