To present stimuli for my experiments in the lab, I use Psychophysics Toolbox (Psychtoolbox) in conjunction with Matlab.
One limitation of Psychtoolbox is that the included DrawFormattedText function does not allow text to be horizontally centered on a point other than the horizontal center of the screen. That might sound like an odd complaint, but what I mean is that you cannot offset the centering (as you could by centering within different columns of a table) – if you try to place the text anywhere other than the horizontal center of the screen, the text must be left-aligned.
This means that, when using the original DrawFormattedText, instead of nice-looking screens like this:
you get this:
which is a little messy.
To fix this, I have modified the DrawFormattedText file to include an xoffset parameter. It’s a very basic modification that allows text to be centered on points offset from the horizontal center of the screen. For example, calling DrawFormattedText_mod with:
1) xoffset set to -100, centers text horizontally on a point 100 pixels to the left of the horizontal center of the screen.
2) xoffset set to rect(3)/4 (where rect = screen dimensions, e.g. [0 0 1024 768]), centers text horizontally 3/4 of the way from the left-hand edge.
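A minimal sketch of how the modified call might look. The argument order of DrawFormattedText_mod below is an assumption (check the modified file for the exact signature), and win and textcolour are assumed to come from an already-open Psychtoolbox window:

```matlab
% Sketch only: assumes xoffset is passed as a trailing argument to
% DrawFormattedText_mod (check the modified file for the real signature).
xoffset = -100;   % centre text 100px left of the horizontal screen centre
DrawFormattedText_mod(win, 'Option A', 'center', 'center', textcolour, xoffset);

% Or centre text 3/4 of the way across the screen:
rect = [0 0 1024 768];
DrawFormattedText_mod(win, 'Option B', 'center', 'center', textcolour, rect(3)/4);
```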
I haven’t replaced my DrawFormattedText.m with my DrawFormattedText_mod.m just yet, but it has been added to the path and seems to be doing the trick.
Which brings me to why I sought out Codecademy in the first place (thanks to @m_wall for the twitter-solicited tip-off) – I am preparing to teach Psychology undergrads how to code. From 2012/2013 onwards, my academic life is going to be a little more ‘balanced’. As well as the research, admin and small-group teaching I currently enjoy, I’m also going to be doing some large-group teaching. Although I have plenty to say to undergraduates on cognitive neuroscience and cognitive psychology, I think giving them some coding skills will actually be much more useful to most. As my experience with Codecademy has recently reinforced to me, coding basics are the fundamental building-blocks of programming in any language. They will hold you in good stead whatever dialect you end up speaking to your computer in. What’s more, they will hold you in good stead whatever you end up doing, as long as it involves a computer: coding is the most versatile of transferable skills to be imparting to psychology graduates who (rightly) believe they are leaving university with the most versatile of degrees.
I think the Raspberry Pi is going to be fantastic, for reasons summed up very nicely by David McGloin – the availability of such a cheap and versatile barebones technology will kickstart a new generation of tinkerers and coders.
It’s worth mentioning that this kickstart wouldn’t just be limited to the newest generation currently going through their primary and secondary school educations. Should my hands-on experience of the device live up to my expectations (and the expectations of those who have bought all the units that went on sale this morning), I will be ordering a couple for each PhD student I take on. After all, what’s the point in using an expensive desktop computer running expensive software on an expensive OS to run simple psychology experiments that have hardly changed in the past 15 years? What’s the point when technology like the Raspberry Pi is available for £22? Moreover, if you can get researchers to present experiments using a medium that has also helped them pick up some of the most desirable employment skills within and outwith academia (expertise with and practical experience in programming), then I think that’s a fairly compelling argument that it would be irresponsible not to.
But won’t I have missed a critical period in my students’ development from technology consumers into technology hackers?
Every psychology student can and should learn how to code (courtesy of Matt Wall), and it’s never too late. I learned to code properly in my twenties, during my postdoc. The reason it took me so long was that I had never needed to code in any serious goal-driven way before this time. Until the end of my PhD, Superlab and E-Prime had been perfectly passable vehicles by which I could present my experiments to participants. My frustration with the attempts of these experiment presentation packages to make things ‘easy’, which ended up making things sub-optimal, led me to learn how to use the much ‘harder’ Matlab and Psychophysics Toolbox to present my experiments. Most importantly, I was given license to immerse myself in the learning process by my boss. This is what I hope giving a PhD student a couple of Raspberry Pis will do, bypassing the tyranny of the GUI-driven experiment design package in the process. Short-term pain, long-term gain.
In a few years, my behavioural testing lab-space could simply be a number of rooms equipped with HDMI monitors, keyboards and mice. Just before testing participants, students and postdocs will connect these peripherals to their own code-loaded Raspberry Pis, avoiding the annoyances of changed hardware settings, missing dongles and unreliable network licenses. It could be brilliant, but whatever it is, it will be cheap.
I ran into the following error when trying to use a script to make Marsbar extract betas:
Error in pr_stat_compute at 34
Indices too large for contrast structure
This problem occurred when I was trying to extract the betas for an unusual participant who had an empty bin for one condition, and for whom I had therefore had to manually alter the set of contrasts. In doing this, it turns out that I had inadvertently duplicated one contrast vector. Although the names were different, the number of contrasts had been amended to reflect the number of unique contrast vectors in SPM.xCon but not in Marsbar’s D, meaning that pr_stat_compute’s ‘xCon = SPM.xCon’ (line 23) did not return the same value as my own script’s ‘xCon = get_contrasts(D)’. The two xCons therefore differed in length, which caused the error in pr_stat_compute.
The solution lay in removing the duplicate contrasts from the contrast specification for that participant.
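A quick way to check for this mismatch before calling pr_stat_compute is to compare the two contrast lists directly. This is a sketch based on the description above; analysis_dir is a placeholder for the directory holding the estimated SPM.mat:

```matlab
% Compare the contrasts SPM and Marsbar each know about; a length
% mismatch is what produces 'Indices too large for contrast structure'.
spm_design = load(fullfile(analysis_dir, 'SPM.mat'));   % SPM's view
D = mardo(fullfile(analysis_dir, 'SPM.mat'));           % Marsbar design object
xCon_spm  = spm_design.SPM.xCon;
xCon_mars = get_contrasts(D);
if numel(xCon_spm) ~= numel(xCon_mars)
    warning('SPM lists %d contrasts but Marsbar sees %d: check for duplicate contrast vectors.', ...
        numel(xCon_spm), numel(xCon_mars));
end
```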
Below (indented) is a straightforward MATLAB/SPM/Marsbar script for generating separate Marsbar ROI and Nifti ROI spheres (user-defined radius) from a set of user-defined MNI coordinates stored in a text file (‘spherelist.txt’ saved in the same directory as the script). ROI files are saved to mat and img directories that the script creates.
I use this script to generate *.mat files as seeds for resting connectivity analyses. Once a *.mat ROI is generated, Marsbar can be used to extract raw timecourses from this region to feed into connectivity analysis as the regressor of interest. Because MRIcron doesn’t read *.mat Marsbar ROI files, I render the equivalent *.img seed regions on a canonical brain when I need to present them graphically.
    % This script loads MNI coordinates specified in a user-created file,
    % spherelist.txt, and generates .mat and .img ROI files for use with
    % Marsbar, MRIcron etc. spherelist.txt should list the centres of
    % desired spheres, one-per-row, with coordinates in the format:
    % X1 Y1 Z1
    % X2 Y2 Z2 etc
    % .mat sphere ROIs will be saved in the script-created mat directory.
    % .img sphere ROIs will be saved in the script-created img directory.
    % SPM Toolbox Marsbar should be installed and started before running script.

    % specify radius of spheres to build in mm
    radiusmm = 4;

    % Specify output folders for the two sets of ROIs (.img format and .mat format)
    roi_dir_img = 'img';
    roi_dir_mat = 'mat';

    % Make an img and a mat directory to save the resulting ROIs
    if ~exist(roi_dir_img, 'dir'), mkdir(roi_dir_img); end
    if ~exist(roi_dir_mat, 'dir'), mkdir(roi_dir_mat); end

    % Load the sets of coordinates from the specified file
    spherelist = load('spherelist.txt');

    % Go through each set of coordinates
    spherelistrows = size(spherelist, 1);
    for spherenumbers = 1:spherelistrows
        % maximum is specified as the centre of the sphere in mm in MNI space
        maximum = spherelist(spherenumbers, 1:3);
        sphere_centre = maximum;
        sphere_radius = radiusmm;
        sphere_roi = maroi_sphere(struct('centre', sphere_centre, ...
            'radius', sphere_radius));
        % build a coordinate label for the output filenames
        coordlabel = sprintf('%d_%d_%d', maximum(1), maximum(2), maximum(3));
        % save ROI as MarsBaR ROI file
        saveroi(sphere_roi, fullfile(roi_dir_mat, ...
            sprintf('%dmmsphere_%s_roi.mat', radiusmm, coordlabel)));
        % Save as image
        save_as_image(sphere_roi, fullfile(roi_dir_img, ...
            sprintf('%dmmsphere_%s_roi.img', radiusmm, coordlabel)));
    end
Dropbox is a fantastically versatile piece of software based on seamless integration of user-defined folders with ‘the cloud’. Much has been written about how it can be used for general computing, e.g. from Lifehacker:
…Dropbox instantaneously backs up and syncs your files over the internet and to any computer. After you install the application, it will create a Dropbox folder on your hard drive. Any file you put inside that folder will automatically be synced and monitored for changes, and each time a change is saved, it backs up and syncs the file again. Even better, Dropbox does revision history, so if you accidentally saved a file and wanted to revert to an old version or deleted a file, Dropbox can recover any previous version.
It also has some nice collaborative features that allow you to share documents you’re working on, pushing updated versions out to all synchronised Dropbox directories as changes are made. Crucially, whenever Dropbox detects a connection to the internet, it synchronises all the files contained within the cloud-synced directories, but it doesn’t require an internet connection to work on those files. This feature has revolutionised the way I program, debug and transport participant data from psychological experiments.
Programming Experiments: I, like most psychologists I know, don’t program my experiments on the machines on which I test participants. I have a workhorse desktop machine on which I program, and I use a number of lower-specced machines to gather data. For instance, I have an fMRI-dedicated laptop which I take to and from the scanner, from which I present stimuli to participants, and on which I store their behavioural data in transit.
I don’t program on the fMRI laptop because I don’t like spending lots of time working on the cramped keyboard, touchpad and small screen, and because I try to keep the number of applications installed on the machine to a minimum. A problem arises when I need to test my programming to make sure that: a) it runs on the fMRI laptop; and b) what I have programmed on my 1920×1200 monitor translates well to presentation at 1024×768, the resolution I set the laptop to for compatibility with the scanner projection system.
It’s easy enough to save the experiment and all of its associated files onto a memory stick and transfer them to the laptop whenever I need to test it, but it’s a hassle; one that dropbox eliminates.
I have Dropbox installed on both my desktop machine and the fMRI laptop. When programming, I just set the laptop up next to my desktop keyboard and create an ‘experiment’ directory in my Dropbox using my desktop. I then program as normal, using my desktop to edit picture stimuli in GIMP and generate instruction slides in PowerPoint, saving everything into the ‘experiment’ directory. When it comes to testing the experiment, I simply turn to the laptop, where I find all the experiment files have been updated in real-time over wifi. Perfect! If I run the experiment on the laptop and find that some of the images I’m using are the wrong size, I can simply resize them on the desktop and try again, no memory stick required. I’m able to debug a lot more efficiently like this – it places far less working memory load on me, as I can make the required changes as I notice them, rather than once I’ve run through the entire experiment.
Running the Experiment and Transporting Participant Data: I’ve already mentioned that I turn off the laptop’s wifi connection at the scanner. I also exit the Dropbox application (which runs in the background and coordinates the cloud-syncing). The beauty of the system is that all your Dropbox files remain available locally when internet connections aren’t available. I can still run my Matlab scripts and collect data into the Dropbox directory. As soon as I’ve finished testing, I start the Dropbox application and re-enable the wifi connection, and the new participant data gets uploaded to the cloud and pushed out to my desktop machine. Just like that, the data is backed up and transported to my desktop. Again, no memory stick required.
This is just one domain of my day-to-day work that Dropbox has changed for the better. I also use the Dropbox ‘Public Link’ capability to make my CV available on the web. Instead of sending my CV to each website that wants to host it (e.g. the Dobbins lab website), I now provide a link to the CV in the ‘Public’ folder of my Dropbox. Whilst this difference might seem trivial, it enables me to update my web-based CV in real-time without having to e-mail the administrator of each website with an attachment each time I want a change pushed out.
I’m sure there are many other uses I have yet to discover and that’s the beauty of such a straightforward yet polished technology.
WARNING: When you install Dropbox, you give it control over everything you put in your Dropbox folder – you have to be aware that all changes made to files in your Dropbox on one machine will instantaneously get pushed out to all your other machines. Use antivirus software: if a virus makes its way into any file in your Dropbox, it will get pushed out to all the other computers synced to your account. Be disciplined about backing up your files, even cloud-synced files, and protect yourself against the accidental deletion of files in your Dropbox – once they are deleted on one machine, they will be deleted on all your other Dropbox-synced computers. I have a recurring nightmare where I lose my experiment data because it gets deleted on the fMRI laptop (e.g. it gets stolen and, prior to selling it on, the thief deletes everything in the ‘My Dropbox’ folder). Because I have a nightly backup running, this wouldn’t be terminal, but the prospect of it happening is still scary.
One of the most annoying and stressful things that can happen during an fMRI experiment is for system notifications, pop-ups or even the Windows taskbar to suddenly appear on the screen on which you are presenting stimuli to participants. Here I outline a few things that I do to minimise the likelihood of this sort of disruption when running Matlab on a Windows XP machine.
1) Turn off your wireless network adapter. This reduces the processing burden on your system – crucial if you’re interested in measuring response times – and stops a lot of annoyances (Flash updates, Windows updates etc.) being pushed to your system. My laptop has a manual switch on the exterior that I can flick to turn it off. Alternatively, the wireless network can be disabled within Windows by navigating to Network Connections, right-clicking on the wireless network, and selecting ‘Disable’.
2) Disable Real-Time Antivirus Protection and Windows Automatic Updates. This again reduces the burden on your system and stops annoying notifications popping up. Whatever it is, it can wait. However, disabling real-time protection will probably lead to an ugly warning in your system tray, but no-one needs to see that if you…
3) Turn off the ‘always on top’ property of the Windows taskbar. Once you do this, Matlab will sit entirely on top of the taskbar, and the taskbar shouldn’t ever become visible at inopportune moments (something I inexplicably struggled with when designing my latest fMRI experiment). Right-click on the taskbar, select Properties, and untick the ‘Keep the taskbar on top of other windows’ checkbox.
4) Disable balloon tips in the notification area. Whilst you could turn off the system tray altogether, that shouldn’t be necessary if you’ve already followed step 3. (One reason I like to keep the system tray visible is that I find it a handy way to manage wireless networks, Dropbox, etc., and I don’t want to lose that functionality entirely.) However, to reduce the chances of anything else you haven’t already thought of ‘helpfully’ reminding you of something mid-experiment, turn off balloon notifications, as detailed in this Microsoft TechNet article.
That should give you the best crack at getting through an experiment without an ugly, flickering Windows interruption. Now that you’ve covered your bases, all you need to do is make sure that your Matlab code doesn’t give you any grief – easier said than done.
UPDATE: These steps aren’t exclusive to Matlab stimulus presentation either. They could give you peace of mind before hooking your laptop up to give a formal presentation or job talk in PowerPoint… I’ve seen too many talks interrupted by pesky Windows Update notifications and ‘Found new wireless network’ bubbles.
Occasionally, it’s nice to look under the bonnet and see what’s going on during any automated process that you take for granted. More often than not, I do this when the automaticity has broken down and I need to fix it (e.g. my computer won’t start), or when I need to modify the process to make its product more useful to me (e.g. installing a TV card to make my computer more ‘useful’ to me). This is especially true with tools such as SPM.
One of the greatest benefits associated with using SPM is that it’s all there, in one package, waiting to be unleashed on your data. You could conduct all of your analyses using SPM only, and you could never need to know how SPM makes the pretty pictures that indicate significant brain activations according to your specified model. That’s probably a bad idea. You, at least, need to know that SPM is conducting lots and lots of statistical tests – regressions – as discussed in the previous post. If you have a little understanding of regressions, you’re then aware that what isn’t fit into your regression model is called a ‘residual’ and there are a few interesting things you can do with residuals to establish the quality of the regression model you have fit to your data. Unfortunately with SPM, this model fitting happens largely under the bonnet, and you could conduct all of your analyses without ever seeing the word ‘residual’ mentioned anywhere in the SPM interface.
Why is this? I’m not entirely sure. During the process of ‘Estimation’, SPM writes all of your residuals to disk (in the same directory as the to-be-estimated SPM.mat file) as a series of image files, ResI_0001.img through ResI_xxxx.img (where xxxx corresponds to the number of scans that contribute to the model).
SPM then deletes these images once estimation is complete, leaving you having to formulate a workaround if you want to recover the residuals for your model. One reason SPM deletes the residual image files is that they take up a lot of disk space – the residuals add nearly 400MB (in our 300-scan model) for each participant, which is a real pain if you’re estimating lots of participants and lots of models.
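As a rough sanity check on that figure, you can estimate the space the ResI images consume. The volume dimensions and data type below are assumptions for illustration, not values from an actual model:

```matlab
% Back-of-envelope estimate of residual-image disk usage.
voxels        = 53 * 63 * 46;  % voxels per volume (assumed 3mm grid)
bytes_per_vox = 4;             % residuals written as 32-bit floats (assumption)
nscans        = 300;           % scans in the model, one ResI image each
total_mb      = voxels * bytes_per_vox * nscans / 2^20;
% At these assumed dimensions this comes to roughly 175MB; finer voxel
% grids or more scans push the total toward the ~400MB mentioned above.
```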
If you’re particularly interested in exploring the residual images (for instance, you can extract the timecourse of residuals for the entire run from an ROI using Marsbar), you need to tweak SPM’s code. As usual, the SPM message-board provides information on how to do this.
You can read the original post here, or see the relevant text below:
… See spm_spm.m, searching for the text “Delete the residuals images”. Comment out the subsequent spm_unlink lines and you’ll have the residual images (ResI_xxxx.img) present in the analysis directory.
Also note that if you have more than 64 images, you’ll also need to change spm_defaults.m, in particular the line
defaults.stats.maxres = 64;
which is the maximum number of residual images written.
There are a few steps here:
1) open spm_spm.m for editing by typing
>> edit spm_spm
2) Find the following block of code (lines 960-966 in my version of SPM5):