This year, I decided to learn how to present cognitive psychology experiments online. Five months in, I’m happy with my progress.

Since the second year of my PhD, when I spent a couple of weeks getting nowhere with Java, I have been keen to use the web to present experiments. What enabled me to move from thinking about it to doing it was Codecademy. I’ve previously blogged about how useful I found the Codecademy website in getting familiar with the syntax of JavaScript, but at the time of writing that post, I was unsure how a knowledge of coding constructs alone (and certainly not coding aesthetics) would translate into a webpage presenting a functional cognitive psychology experiment. Thankfully, I did have a barebones knowledge of basic HTML, much of which is now obsolete or deprecated, from which I was able to salvage snippets to combine with CSS (thanks to w3schools) to get something functional and not hideously ugly.

Syllable-counting in the Study Phase.
(click to be taken to the experiment)

Before I present the experiment I have spent the past few months working on, here are a few things I have learned from the experience so far.

1) In choosing Javascript over a medium like Flash, I hoped to maximise the number of devices on which the experiments would run. I think I made the right choice. Pressing response buttons with your finger on an iPad or an Android phone feels like a Human Factors triumph!

2) JavaScript-driven user interaction operates quite differently from user interaction in languages like Matlab. JavaScript is event-driven, which means you can’t have the browser sit in a loop waiting for a response – a blocking loop will freeze the single-threaded browser. Instead, you start an event that changes the state of the elements within the browser, so that if those elements are responded to, it is as if the browser had waited for a response.
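To make that concrete, here is a minimal sketch of the pattern (the makeTrial object and all of its names are illustrative, not code from my experiment): the trial is put into a ‘waiting’ state, and the response handler checks that state whenever input arrives, so nothing ever blocks.

```javascript
// Minimal sketch of event-driven trial flow (names are illustrative).
// Rather than looping until a response arrives (which would freeze the
// single-threaded browser), the trial moves into a "waiting" state and a
// handler records whichever response comes in first.
function makeTrial(stimulus) {
  return {
    stimulus: stimulus,
    state: "idle",
    response: null,
    start: function () {
      this.state = "waiting"; // stimulus shown; handlers are now "armed"
    },
    respond: function (key) {
      if (this.state !== "waiting") return false; // ignore stray input
      this.response = key;
      this.state = "done";
      return true;
    }
  };
}

// In a real page you would wire this to the DOM, e.g.:
// document.addEventListener("keydown", function (e) { trial.respond(e.key); });
```

Wired up like that, the browser’s own event loop does the ‘waiting’ for you.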

3) It is very quick and very easy to learn how to code functionally – if it works, it is functional. It is much more difficult to learn how to code both elegantly and functionally. I do not know how to code elegantly and I don’t think I ever will. (I’m not being flippant about this either. It is something I would really like to learn how to do.)

4) Getting everything to look OK in different browsers is a pain. It wasn’t so much the JavaScript as the newer snippets of HTML5 that I have struggled to get working in every browser.
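The usual workaround for patchy HTML5 support is feature detection – test whether a capability exists before using it, rather than trying to guess from the browser’s name. A minimal sketch (the helper name is mine):

```javascript
// Illustrative feature detection: check for a capability on an element
// (or any object) before relying on it, instead of user-agent sniffing.
function supportsProperty(element, property) {
  return property in element;
}

// In a browser you might write, e.g.:
//   if (supportsProperty(document.createElement("audio"), "play")) { ... }
// and fall back to something simpler when the check fails.
```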

5) Web security is a subject on which I have very little knowledge.

6) Sending information from a browser to a server is a pain in the arse.
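For the curious, one common route (an illustrative sketch, not necessarily my setup) is to serialise the results in the browser and POST them to a server-side script; the endpoint name below is made up.

```javascript
// Turn a flat results object into a URL-encoded form body.
// Plain JavaScript, no framework required.
function encodeResults(results) {
  var pairs = [];
  for (var key in results) {
    if (results.hasOwnProperty(key)) {
      pairs.push(encodeURIComponent(key) + "=" + encodeURIComponent(results[key]));
    }
  }
  return pairs.join("&");
}

// In the browser, the encoded string would go out via XMLHttpRequest:
//   var xhr = new XMLHttpRequest();
//   xhr.open("POST", "/save.php", true); // "/save.php" is a made-up endpoint
//   xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
//   xhr.send(encodeResults({ subject: 12, rt: 654 }));
```

The server-side half (and the security around it, see point 5) is where the real pain lives.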


And finally, here is the experiment:

It is a fairly straightforward recognition experiment, takes about 15 minutes to complete and should provide data for use in a larger project, so do feel free to take it as seriously as you want. As I have already mentioned, it works on an iPad, and I thoroughly recommend you give it a go this way if you have access to one.

Below are some quick-and-dirty brain outline images I’m using in a talk I’m giving in a couple of weeks. I like the calligraphic quality that the axial and sagittal slices have. The coronal image is a little more colouring-book in its outline.

Axial, sagittal and coronal brain outlines.

They’re very easily generated from MRIcron screengrabs, processed in GIMP with the following sequence of steps:

1) Edge-detect
2) Invert Colours
3) Gaussian Blur
4) Brightness-Contrast

Repeating steps 3 and 4 a couple of times will get the consistency of line seen in the coronal image.

The past few days have seen my fMRI analysis server taken over by a Linux virtual machine. I have installed FSL, and have been using MELODIC to plough my way through ICA analyses of fcMRI data, a first for me.

One of the annoyances I have had to deal with as part of this project has been the difference in input data required by SPM, towards which my preprocessing stream is targeted, and FSL, towards which it is not. Specifically, this difference has necessitated the conversion of data runs from 3D NIFTI files to a single 4D NIFTI file. FSL has a utility for this (fslmerge), but being the Linux novice that I am, I have struggled to script the merging within the virtual machine.

Thankfully, SPM has a semi-hidden utility for this conversion.

SPM's 3D to 4D NIFTI conversion tool

The GUI is located within the Batch Editor’s SPM>Util menu, and by default saves the specified 3D NIFTI images to a single 4D NIFTI image within the same directory. It doesn’t gzip the output image as fslmerge does, but it’s scriptable using the ‘View>Show .m Code’ menu option, and it’s good enough for me.


Points and badges

For the past week or so, I have been working my way through Codecademy’s JavaScript tutorials. I can’t recommend them highly enough.

As things stand, I have a full house of 480 points and 35 badges and, as the Codecademy creators would undoubtedly hope, I am rather satisfied with the JavaScript proficiency I have attained. ‘Attained’ is probably the wrong word to use, though. Being a self-taught Matlab hacker, I have found most of my coding know-how has translated fairly well into JavaScript. A few concepts (recursion in particular) have presented me with some difficulty, but the overall experience has been more like learning a new coding dialect than a new language altogether. I haven’t attained a proficiency so much as uncovered a hidden one.

Which brings me to why I sought out Codecademy in the first place (thanks to @m_wall for the twitter-solicited tip-off) – I am preparing to teach Psychology undergrads how to code. From 2012/2013 onwards, my academic life is going to be a little more ‘balanced’. As well as the research, admin and small-group teaching I currently enjoy, I’m also going to be doing some large-group teaching. Although I have plenty to say to undergraduates on cognitive neuroscience and cognitive psychology, I think giving them some coding skills will actually be much more useful to most. As my experience with Codecademy has recently reinforced to me, coding basics are the fundamental building-blocks of programming in any language. They will hold you in good stead whatever dialect you end up speaking to your computer in. What’s more, they will hold you in good stead whatever you end up doing, as long as it involves a computer: coding is the most versatile of transferable skills to be imparting to psychology graduates who (rightly) believe they are leaving university with the most versatile of degrees.

With all this in mind, one of Codecademy’s limitations is the difficulty with which its students can translate their new-found JavaScript skills into useful ‘stuff’ implemented outside the Codecademy editor. As Audrey Watters points out, there is barely any acknowledgement within the Codecademy tutorials that the goal of all of these points and badges is to encourage you to write interactive web content in an IDE. Indeed, last night when I thought about how I would use JavaScript to administer online memory experiments, I had to do a lot more reading. This could all be about to change, though. If the latest Code Year class on HTML is anything to go by, the folks at Codecademy are mindful of this limitation and are attempting to remedy it.

It’s just a shame that the HTML integration has come so late in the Code Year (yes, I say this with full awareness that we’re only on week 13). If the HTML-JavaScript confluence had come a little further upstream, I think there would probably have been a fledgling memory experiment linked from this blogpost!

déjà vu (Photo credit: steve loya)

In what feels like a former life, I did a fair amount of research on déjà vu. In fact, it’s the domain in which I cut my psychological teeth, learned about the importance of good experiment design, and was eventually awarded a PhD.

One of the sadnesses of déjà vu research is that, although the sensation is so utterly intriguing, it is very difficult to experimentally generate (though see Anne Cleary’s work, particularly this paper). This has led people interested in déjà vu to try coming at it from a few different angles, including hypnosis, caloric stimulation* and, of course, drugs, drugs and more drugs. But, given its infrequent occurrence and its fairly memorable nature (a blessing and a curse, see below), the most consistently successful approach to studying the experience has been to use questionnaires.

Christine Wells, a collaborator and friend of mine at the University of Leeds, is currently looking for people to complete her online questionnaire on anxiety, dissociative experiences and déjà vu. One of the nice departures from the standard questionnaire format, afforded by its online administration, is that you fill in Part 1 at your leisure, and the much shorter Part 2 as soon as possible after your next déjà vu experience. This is a really neat feature of the research, as it goes some way towards minimising the clichés that may be swamping our memories of déjà vu experiences when they are assessed weeks and months after we have had them.

If you would like to take part in the research and are aged 18 or over, the following links may be of use:

Part 1: Anxiety, dissociative experiences and déjà vu questionnaire (takes approx. 20 mins):

Part 2: Follow-up questionnaire for after your next déjà vu experience (takes approx. 5 mins):

Sure, filling in the questionnaires won’t leave you feeling anything like this guy, but that’s probably a good thing (I wouldn’t wish an experience I could liken to the movie Hellraiser on anyone!). What it will do is contribute to scientific understanding by telling us a little bit more about how people evaluate their déjà vu experiences.

* that’s ‘squirting water in someone’s ear’ to the layman

35mm film projector (image via Wikipedia)

I heard Andrew Bird play his new album Break it Yourself at the Barbican on Monday. Having previewed it on NPR’s First Listen the previous week, I was familiar with the gist, but the show gave me the opportunity to scrutinise it. One rather important detail I had previously missed was the subject of the sixth album track, ‘Lazy Projector’.

The song explores the fallible impermanence of memory:

It’s all in the hands of a lazy projector,
That forgetting, embellishing, lying machine.

As artistic interpretations of memory go, this isn’t ground-breaking, but the preceding lyrics set a context that reveals a man who has thought about the purpose and mechanism of this fallibility.

If memory serves us, then who owns the master?
How do we know who’s projecting this reel?

The awareness that we can be fully conscious of ourselves, with our own psychological interests to protect, yet still be unaware of the source (and often the presence) of the unconscious reconstructions of memory is an insight into a paradox of memory. Bad memories hurt less the less we ruminate on them, but we aren’t able to actively, effortfully forget them.  If we could, the projector would have no need to hide himself from the rememberer – they would be the same part of the same person working to attain the same goal. As it is, distraction, the passage of time, and all of the multitude of things that happen after a bad event give the projector opportunity to work undercover, in the only conditions in which he can work.  These conditions give him the opportunity to get his alterations made before the rememberer has the chance to interrogate his memory and discover, to his relief, that it has softened.

It’s a lovely insight into memory, metacognition and the self, and an example of why I appreciate Andrew Bird’s music.


Raspberry Pi schematic from

I think the Raspberry Pi is going to be fantastic, for reasons summed up very nicely by David McGloin – the availability of such a cheap and versatile barebones technology will kickstart a new generation of tinkerers and coders.

It’s worth mentioning that this kickstart wouldn’t just be limited to the newest generation currently going through their primary and secondary school educations. Should my hands-on experience of the device live up to my expectations (and the expectations of those who have bought all the units that went on sale this morning), I will be ordering a couple for each PhD student I take on. After all, what’s the point in using an expensive desktop computer running expensive software on an expensive OS to run simple psychology experiments that have hardly changed in the past 15 years? What’s the point when technology like the Raspberry Pi is available for £22? Moreover, if a medium for presenting experiments also helps researchers pick up some of the most desirable employment skills within and outwith academia – expertise with and practical experience in programming – then I think there’s a fairly compelling argument that it would be irresponsible not to use it.

But won’t I have missed a critical period in my students’ development from technology consumers into technology hackers?


Every psychology student can and should learn how to code (courtesy of Matt Wall), and it’s never too late. I learned to code properly in my twenties, during my postdoc. The reason it took me so long was that I had never needed to code in any serious goal-driven way before this time. Until the end of my PhD, Superlab and E-Prime had been perfectly passable vehicles by which I could present my experiments to participants. My frustration with the attempts of these experiment presentation packages to make things ‘easy’, which ended up making things sub-optimal, led me to learn how to use the much ‘harder’ Matlab and Psychophysics Toolbox to present my experiments. Most importantly, I was given licence to immerse myself in the learning process by my boss. This is what I hope giving a PhD student a couple of Raspberry Pis will do, bypassing the tyranny of the GUI-driven experiment design package in the process. Short-term pain, long-term gain.

In a few years, my behavioural testing lab-space could simply be a number of rooms equipped with HDMI monitors, keyboards and mice. Just before testing participants, students and postdocs will connect these peripherals to their own code-loaded Raspberry Pis, avoiding the annoyances of changed hardware settings, missing dongles and unreliable network licences. It could be brilliant, but whatever it is, it will be cheap.


Here are a number of free online sources of music and background noise that are particularly good for high-concentration tasks.

And here are a number of (not free) albums I find particularly good for this purpose.

This isn’t a comprehensive list by any means (that sort of thing can be found on Lifehacker, as you might expect), but over the past few years, I have tweeted and blogged about this a bit, so I’m simply tying up some of the odds and ends in one place.