A couple of months ago, the folks at Codecademy were nice enough to respond to a complimentary e-mail I’d sent them by writing something nice back about me and publishing it on their codecademy.com/stories page.


It was great to get this sort of coverage on a site I think is fantastic (despite the picture of me with a rather supercilious looking one-year-old on my back). The only problem was that I didn’t have a working example of code available for them to link to. The JavaScript experiments I had previously coded had run their course, garnering approximately 200 participants each, and had been taken offline, leaving the lab experiment page with nothing for people to try their hand at.

That changed today. I have a very simple new experiment for people to try, which can be accessed via the experiments (online) link on the right, or directly, here: http://www.st-andrews.ac.uk/~oclab/memorywords/.

I’m afraid it doesn’t quite do justice to the “neuroscientist discovers javascript” headline on the codecademy stories page, but it’s something.

On Monday I gave a talk on how internet tools can be used to make the job of being an academic a little easier. I had given a very short version of the talk to faculty in the department over a year ago, but this time I was given an hour in a forum for early career researchers, PhD students and postdocs. The subject of Twitter, covered early in the talk, aroused a lot of interest, probably because I got very animated about its benefits for those in the early stages of their careers.

To provide a little context for my enthusiasm, it probably helps to know a few things about me, about my situation, and about my recent experiences.

  1. I am an introvert.  Despite my best (and occasionally successful) efforts to project a different image, I do not find talking to people I don’t know very enjoyable.
  2. I am an early career cognitive neuroscientist keen to build my own research programme and develop links with other researchers.
  3. Last month I attended the Society for Neuroscience conference, where I went to the best conference social I have ever experienced.

Given the received wisdom that people in my position ought to be networking, I often drag myself kicking and screaming to conference socials. The result tends to be a lot of standing around on my own drinking beer, which gives me something to do, but which I could do much more comfortably with one or two people I know well. The major problem at these events is not my nature, or my status as an early career researcher, but the fact that the people I have imagined myself talking to usually don’t know who I am. Conversation is therefore awkward, one-sided and introductory. Once the niceties have dried up, and the level of accumulated conversational silence edges into awkward territory, I invariably finish my drink and bugger off to get another one, ending the misery for all involved. This is probably a universal experience for those starting out in academia, though thankfully it is happening less and less to me as I build something of a network of real friends who attend the same conferences as me. But as a PhD student and postdoc, the experience was excruciating.

I had a totally different experience when I attended the SfN Banter tweetup*. The event, organised by @doc_becca and @neuropolarbear, was a social for neuroscientists who use Twitter, and it changed my view of conference socials. They do not have to be endured, even by those doing PhDs and postdocs. They can be enjoyed.

I was excited about going, and that excitement didn’t leave me feeling shortchanged by the time I left. I spoke (actually spoke!) to everyone I wanted to speak to. Moreover, I had good conversations with people to whom I was speaking for the first time. The reason is fairly obvious – Twitter allowed us to build on a body of shared (or at least assumed) knowledge. I follow people, they follow me, I reply to or retweet their tweets, they do the same – and this is all before we’ve introduced ourselves. When I finally meet someone with whom I have such a history of communication, introducing myself is the least awkward thing I can do. The barriers to conversation are removed**.

Sure, this pattern held for most interactions at the tweetup because we were all there to do exactly that. Would the experience be the same at the ‘fMRI social’? No. But I don’t think that matters. If I could have had one of those conference social experiences during my time as a PhD student, it would have given me an idea of what I might have to look forward to from conferences if I stuck at it. Light at the end of the tunnel, a pot of gold at the end of the rainbow, a variable-ratio-schedule-determined stimulation of the limbic system following an umpteenth lever press.

It will take a while (there’s no point joining in September 2013 and expecting great things at the SfN tweetup in San Diego), and it’s probably not the primary reason to join Twitter (see Dorothy Bishop’s blog and Tom Hartley’s blog for far more comprehensive discussions of how and why you should join), but it’s another reason, and it’s one that could make you feel good about your role in academia. It’s worth a shot.


* tw(itter) (m)eetup, see?

** What you do afterwards is up to you.  I still had some awkward interactions, but I think that’s probably down to me (see context point 1).

Tomorrow morning I fly to SfN 2012 in New Orleans. There has been some turbulence in the immediate lead-up to it – an unscheduled flight to Ireland putting back my departure by a couple of days and a trip to A&E with anaphylaxis being the major ones – but nonetheless, the trip is happening.

This is a pretty big deal for me and for the lab. It’s the first major presentation of data we have collected independently of more senior PIs (and they’re not bad-looking data). It’s also only the second time I will have presented at this particular conference, which can be a tad overwhelming. This could be the start of something good.

from http://www.sfn.org/AM2012/

Of course, there’s also some more social fun planned. I’m looking forward to the #SfN12 #tweetup (Monday night), when I’ll be putting some real-life personalities to some online ones, and the Scottish Neuroscience Group drinks (Tuesday). I’m not anticipating any more storms, other than the odd hurricane, rum and all.


I’ve been enjoying the view from my North Sydney apartment since I moved in. It’s incredible.


This morning, a couple of lorikeets landed on my balcony railing. Their colours are stunning. I grabbed the iPad and snapped a few pictures through the window.

They seemed pretty curious, so I plucked up the courage to open the door and try to snap a few close-ups.

They didn’t fly away, so I grabbed the first bird-appropriate food that came to hand.

A moment I will treasure.


The lab’s first JavaScript experiment has been online for about 3 weeks now, and has amassed close to 200 participants. It’s been a great experience discovering that the benefits of online testing (60+ participants a week, many of them run while I’m asleep!) easily outweigh the costs (the time spent learning JavaScript and coding all the fiddly bits, particularly the informed consent procedures and performance-appropriate feedback).

On top of the study completion data that’s obvious from the 7 KB csv file that each happily-debriefed participant leaves behind, the Google Analytics code embedded in each page of the experiment provides further opportunity to explore participation data.
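For anyone curious what “embedded in each page” amounts to, the classic ga.js tracker works by pushing commands onto a queue before the analytics script has even loaded. A minimal sketch (the UA-XXXXXX-Y property ID is a placeholder, not the lab’s real one):

```javascript
// Command queue used by the classic ga.js tracker. Before the ga.js
// script itself loads, _gaq is just a plain array buffering commands.
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXX-Y']); // placeholder property ID
_gaq.push(['_trackPageview']);             // record a view of this page
```

The asynchronous loader that fetches ga.js then drains this queue, which is why the snippet can sit at the top of every experiment page without delaying it.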


As the experiment structure is entirely linear, it’s possible to track the loss of participants from each page to the next.

Study Attrition

The major point of attrition is between the Participant Information Page and the Consent Form – not surprising given quite how text-heavy the first page was, and how ‘scary’ headings like “Are there any potential risks to taking part?” make the study sound. The content of that first page is driven entirely by the informed consent requirements of the University of St Andrews, but the huge attrition rate here has prompted a bit of a redesign for the follow-up study.
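The drop-off itself is simple to compute from the pageview counts Analytics reports for a linear experiment; here’s a sketch (the counts are invented for illustration, not the study’s real figures):

```javascript
// Given pageview counts for each page of a linear experiment, compute
// the percentage of visitors lost at each page-to-page transition.
function attrition(pageviews) {
  var losses = [];
  for (var i = 1; i < pageviews.length; i++) {
    var lost = pageviews[i - 1] - pageviews[i];
    losses.push(Math.round(100 * lost / pageviews[i - 1]));
  }
  return losses; // percent lost at each step
}

// Hypothetical counts: info page, consent form, study phase, test phase
var views = [500, 275, 250, 240];
attrition(views); // the big drop is info page -> consent form
```

With numbers like these, the information-page-to-consent-form step dwarfs every later transition, which is exactly the pattern described above.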


New Visits by Browser

Other information useful for the design of future studies has been the browser data. As might be expected, Firefox and its relatives are the dominant browsers, with Chrome a distant second and Internet Explorer lagging far behind. Implementing fancy HTML5 code that won’t work in Firefox is therefore a bad idea. On top of that, despite how tablet- and phone-friendly the experiment was, very few people used this sort of device to complete the study – it’s probably a waste of time optimising the site specifically for devices like iPads.

Study Completions by Browser

Curiously enough, when the data for study completions are explored by browser, the three major platforms level out. Chrome, Firefox and IE all yield similar completion statistics, suggesting that IE users are far more likely to follow through and complete the study once they visit the site. I’m speculating here, but I suspect that this has something to do with a) this being a memory study and b) IE being used by an older demographic of internet user who may be interested in how they perform. Of the three major browsers, Firefox users have the worst completion rate.


Another consideration with word-based experiments is the location of participants. This could impact the choice of words used in future studies (American or UK spellings) and could be considered important by those keen to exclude participants who don’t speak English as their first language. Finer-grained information about participants’ first languages came from self-reports in the demographic questionnaire, but the table of new visits and study completions is still rather interesting.

New Visits and Study Completions by Country

Once again, there are few surprises here, with the US dominating the new visits list, though a new visit from a UK- or India-based browser is more likely to lead to a study completion. A solid argument for using North American spellings and words could also be made from these data.

Source of Traffic

The most important thing to do to make potential participants aware of an online psychology study is to advertise it. But where?

Study Completions by Source

While getting the study listed on StumbleUpon was a real coup, it didn’t lead to very many study completions (a measly 2.5%). That’s not surprising – the study doesn’t capture attention from page 1 and doesn’t have much in the way of internet meme-factor. That is, of course, something we should rectify in future studies if we want them to go viral, but it’s tough to do within the rigid constraints of the informed consent pages that must precede the study itself.

The most fruitful source of participants was the psych.hanover.edu Psychological Research on the Net page. It was much more successful at attracting visits and study completions than Facebook, the best of the social networks, and the other online experiment listing sites on which we advertised the study (onlineresearch.co.uk and http://www.socialpsychology.org/expts.htm). What’s more, there has been a sustained stream of visitors from the psych.hanover.edu page that hasn’t tailed off as the study has been displaced from the top of the Recently Added Studies list.

These statistics surprised me more than any other. I assumed that social networking, not a dedicated experiment listing page, would be how people found the study. But in retrospect, it all makes sense. There are clearly a large number of people out there who want to do online psychology studies, and what better way to reach them than through a directory that lists hundreds of such studies. If there’s one place you should advertise your online studies, it’s psych.hanover.edu.

To present stimuli for my experiments in the lab, I use Psychophysics Toolbox (Psychtoolbox) in conjunction with Matlab.

One limitation of Psychtoolbox is that the included DrawFormattedText function does not allow text to be horizontally centered on a point other than the horizontal center of the screen. Put another way, you cannot offset the centering (as you could by centering text within different columns of a table) – if you try to place the text anywhere other than the horizontal center of the screen, the text must be left-aligned.

This means that, when using the original DrawFormattedText,  instead of nice-looking screens like this:

Note that words 1 and 3 are well centered within their boxes

you get this:

Note that word 2 is centered, but words 1 and 3 are left-aligned within their boxes

which is a little messy.

To fix this, I have modified the DrawFormattedText file to include an xoffset parameter. It’s a very basic modification that allows text to be centered on points offset from the horizontal center of the screen. For example, calling DrawFormattedText_mod with:
1) xoffset set to -100 centers text horizontally on a point 100 pixels to the left of the horizontal center of the screen.
2) xoffset set to rect(3)/4 (where rect = screen dimensions, e.g. [0 0 1024 768]) centers text horizontally 3/4 of the way from the left-hand edge.
I haven’t replaced my DrawFormattedText.m with my DrawFormattedText_mod.m just yet, but it has been added to the path and seems to be doing the trick.
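The arithmetic behind the modification is simple. Sketched in JavaScript rather than Matlab (function name mine, for illustration): the left edge of the text sits at the screen’s horizontal centre, plus the offset, minus half the text’s width.

```javascript
// Compute the left edge (x position) at which to draw a piece of text
// so that it is centred on a point xoffset pixels from the horizontal
// centre of the screen. Negative xoffset shifts the centring leftwards.
function offsetCentredX(screenWidth, textWidth, xoffset) {
  return screenWidth / 2 + xoffset - textWidth / 2;
}

// On a 1024-pixel-wide screen, 100-pixel-wide text centred 100 px to
// the left of centre starts at 512 - 100 - 50 = 362.
offsetCentredX(1024, 100, -100); // 362
```

With xoffset = 0 this reduces to ordinary screen-centred text, which is why the modification is backwards compatible with the original behaviour.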

You can download my DrawFormattedText_mod.m here: https://dl.dropbox.com/u/4127083/Scripts/DrawFormattedText_mod.m

This year, I decided to learn how to present cognitive psychology experiments online. Five months in, I’m happy with my progress.

Since the second year of my PhD, when I spent a couple of weeks getting nowhere with Java, I have been keen to use the web to present experiments. What enabled me to move from thinking about it to doing it was Codecademy. I’ve previously blogged about how useful I found the Codecademy website for getting familiar with the syntax of JavaScript, but at the time of writing that post, I was unsure how a knowledge of the coding architecture alone (and certainly not coding aesthetic) would translate into a webpage presenting a functional cognitive psychology experiment. Thankfully I did have a bare-bones knowledge of basic HTML, much of it now obsolete and deprecated, from which I was able to salvage snippets to combine with CSS (thanks to w3schools) to get something functional and not hideously ugly.

Syllable-counting in the Study Phase.
(click to be taken to the experiment)

Before I present the experiment I have spent the past few months working on, here are a few things I have learned from the experience so far.

1) In choosing JavaScript over a medium like Flash, I hoped to maximise the number of devices on which the experiments would run. I think I made the right choice. Pressing response buttons with your finger on an iPad or an Android phone feels like a human factors triumph!
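One small trick that helps here is picking the response event at run time, depending on whether the device supports touch. A minimal sketch of the feature-detection idea (function name mine, not the experiment’s actual code):

```javascript
// Choose the event to bind response handlers to, based on whether the
// environment exposes touch events. Passing the global object in
// (window, in a browser) keeps the check testable outside a browser.
function responseEventName(globalObj) {
  return 'ontouchstart' in globalObj ? 'touchstart' : 'click';
}

// In a browser:
// button.addEventListener(responseEventName(window), onResponse);
```

Binding 'touchstart' on touch devices also avoids the delay some mobile browsers add before firing 'click', which matters if you care about response times.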

2) JavaScript-driven user interaction operates quite differently to user interaction in languages like Matlab. JavaScript is event-driven, which means you can’t write code that blocks while waiting for a response – the page will simply hang. Instead, you must change the state of the elements within the browser so that, when those elements are eventually responded to, it is as if the browser had been waiting for the response all along.
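In practice that means arming handlers rather than waiting in a loop. A minimal sketch of the pattern (names are mine, for illustration, not the experiment’s code):

```javascript
// Event-driven trial: startTrial() arms the page state and returns
// immediately; handleResponse() (bound to a click or keypress handler)
// only acts while a trial is armed. No blocking, so the browser never
// freezes waiting for input.
var trial = { awaitingResponse: false, response: null };

function startTrial() {
  trial.awaitingResponse = true; // arm the trial...
  trial.response = null;         // ...and return control to the browser
}

function handleResponse(key) {
  if (!trial.awaitingResponse) return false; // ignore stray input
  trial.response = key;
  trial.awaitingResponse = false;            // disarm until next trial
  return true;
}
```

The state flag is doing the work a blocking wait would do in Matlab: responses that arrive outside a trial are simply ignored.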

3) It is very quick and very easy to learn how to code functionally – if it works, it is functional. It is much more difficult to learn how to code both elegantly and functionally. I do not know how to code elegantly and I don’t think I ever will. (I’m not flippant about this either. It is something I would really like to learn how to do.)

4) Getting everything to look OK in different browsers is a pain. It wasn’t so much the JavaScript as the newer snippets of HTML5 that I struggled to get working in every browser.

5) Web security is a subject on which I have very little knowledge.

6) Sending information from a browser to a server is a pain in the arse.
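For what it’s worth, assembling the results on the client was the easy half; getting them to the server (via a form submit or an XMLHttpRequest POST) was the painful part. A sketch of the serialisation step that produces the sort of CSV each participant leaves behind (the field names are invented for illustration):

```javascript
// Turn an array of trial records into a CSV string ready to be sent to
// the server (e.g. POSTed with an XMLHttpRequest, or stuffed into a
// hidden form field and submitted).
function toCSV(trials) {
  var header = ['trial', 'word', 'response', 'rt'];
  var rows = trials.map(function (t) {
    return [t.trial, t.word, t.response, t.rt].join(',');
  });
  return [header.join(',')].concat(rows).join('\n');
}

var data = [{ trial: 1, word: 'apple', response: 'old', rt: 734 }];
toCSV(data); // "trial,word,response,rt\n1,apple,old,734"
```

The pain lives entirely on the other side of that string: same-origin restrictions, server-side scripts, and making sure nothing is lost if the participant closes the tab mid-upload.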


And finally, here is the experiment:


It is a fairly straightforward recognition experiment, takes about 15 minutes to complete and should provide data for use in a larger project, so do feel free to take it as seriously as you want. As I have already mentioned, it works on an iPad, and I thoroughly recommend you give it a go this way if you have access to one.

Below are some quick-and-dirty brain outline images I’m using in a talk I’m giving in a couple of weeks. I like the calligraphic quality that the axial and sagittal slices have. The coronal image is a little more colouring-book in its outline.

axial outline
sagittal outline
coronal outline

They’re very easily generated from screengrabs of MRIcron slices, processed in GIMP with the following straightforward sequence of steps:

1) Edge-detect
2) Invert Colours
3) Gaussian Blur
4) Brightness-Contrast

Repeating steps 3 and 4 a couple of times will get the consistency of line seen in the coronal image.