This is a guest post from Radka Jersakova (@RadkaJersakova), who did her undergraduate degree in my lab and is now working on her PhD at Leeds University and the Université de Bourgogne in Dijon. Radka has embraced online experimentation and has run many hundreds of participants through an impressive number of experiments coded in Javascript.

Akira

Onscreen Experiments

Recently, Crump, McDonnell and Gureckis (2013) replicated the results of a number of classic behavioral tasks from cognitive psychology, such as the Stroop task, using experiments conducted online. They demonstrated that, despite what some people fear, online testing can be as reliable as lab-based testing. Additionally, online testing can be extremely fast and efficient in a way that lab-based testing cannot. I have now completed my seventh online experiment and have helped others create and advertise theirs. This post is a review of what I have learned in the process: it summarises what I wish I had known when planning my first study and answers some of the questions others have asked me along the way.

 

CREATING ONLINE EXPERIMENTS

When it comes to conducting online experiments, programming remains the best method, as it is by far the most flexible approach. Having taught myself to program using free online courses, I can confirm that this is not as difficult as some people think, and it really is quite fun (for some tips on where to get started, this TED blog post is quite useful). At the same time, many people do not know how to code and do not have the time to learn. The good news is that, for many experiments, the survey software currently available online is flexible enough to create a large number of experiments, although the potential complexity is naturally limited. My favorite is Qualtrics, as even the free version allows a fair amount of functionality and a reasonable number of trials.

 

FINDING PARTICIPANTS

A major advantage of the Internet is that one can reach many different communities. With online testing, one can reach participants who take part simply because they are interested in psychology experiments and in volunteering, which is preferable to testing psychology undergraduates coerced into participating for course credit. Once you have an experiment to advertise, the challenge is to find the easiest route by which to reach these people.

There are many websites that focus directly on advertising online experiments. The one I have found most useful is the Psychological Research on the Net website administered by John H. Krantz. Alternatively, the In-Mind magazine has a page where they post online experiments, which they also share on their Facebook and Twitter accounts. Other websites that host links to online studies are the Social Psychology Network and Online Psychology Research.

The most powerful way for a single individual to reach participants is, quite unsurprisingly, social media. Once a few people start sharing the link, interest can spread very quickly. The simplest thing to do is to post your study on your Facebook page or Twitter account. Something I haven’t tried yet, but that might be worth exploring, is finding Facebook pages or Twitter hashtags related to the topic of the experiment, or to psychology in general, and posting the link to the experiment there. One of the biggest successes for me, though, remains reddit. Reddit has a very strong community, and people spend time there because they are actively searching for new information and interesting projects. There are a number of subreddits specific to psychology, which are again visited by people interested in these particular topics. To give a few examples: psychology; cognitive science; psych science; music and cognition; mathematical psychology; and the list goes on! There is even a subreddit dedicated to finding participants for surveys and experiments, simply called Sample Size.

The last resource I have tried a number of times is more general advertising sites such as Craigslist. There is always a ‘volunteers’ section, which is visited by people looking to volunteer for a project of some sort. In that sense it can be a good place to reach participants, and the sample will be fairly diverse. For me this has never been as successful as social media, but a few times it has worked fairly well.

 

USEFUL CHECKPOINTS

The most commonly heard argument against online testing is the lack of control. What this really means is that data collected online might include more noise than data from traditional lab-based experiments, making it easier to miss real effects. As already mentioned, Crump et al. (2013) replicated a number of classic tasks online, suggesting that this might not be as big a worry as it first seems. The range of tasks they chose demonstrates nicely that the same results can be obtained on the Internet as in the lab. Nevertheless, there are a number of ways one can track participants’ behavior to determine whether sufficient attention was given to the experiment. The simplest is to measure the time participants took to complete the study. If you are using existing survey software, this information is usually provided automatically. If you are programming the study yourself, requesting a timestamp for when the study begins and another for when it ends is an easy way to track the same information. If participants are abnormally slow (or fast) in completing a task, then one might have sufficient reason to exclude their data.
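If you are writing the study yourself in JavaScript, those two timestamps take only a few lines. The sketch below is just one way of doing it; the "results" object and the submission step are hypothetical:

    // A minimal sketch of completion-time tracking: record a timestamp when the
    // page loads and another when the task ends.
    var startTime;

    window.addEventListener("load", function () {
      startTime = Date.now(); // when the study begins
    });

    function finishExperiment(results) {
      results.startedAt = new Date(startTime).toISOString();
      results.completionTimeMs = Date.now() - startTime; // total time on task
      // ...send "results" to the server, then exclude sessions that are
      // implausibly fast or slow at the analysis stage.
    }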

One of the biggest problems I have encountered is a participant completing one part of the task (e.g. a recognition test) but not completing another part of the same experiment as faithfully (e.g. free-report descriptions of particular memory experiences from her daily life). While for ethical reasons we were not allowed to force participants to respond to any question, I have found that simply asking whether they are sure they want to proceed, when they have not filled out all the questions on a page, increases response rates dramatically. It can therefore be useful to provide such prompts along the way to make sure participants answer all the questions, without forcing them to do so.
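In plain JavaScript, such a prompt needs only a handful of lines. This is a sketch rather than the code we actually used, and it assumes each free-report field on the page carries a hypothetical class name of "question":

    // Ask for confirmation if any free-report field has been left blank.
    // Participants can still proceed; they are only asked whether they are sure.
    function confirmBeforeProceeding() {
      var fields = document.querySelectorAll(".question");
      var unanswered = 0;
      for (var i = 0; i < fields.length; i++) {
        if (fields[i].value.trim() === "") { unanswered++; }
      }
      if (unanswered === 0) { return true; } // everything answered, carry on
      return window.confirm("You have left " + unanswered +
                            " question(s) blank. Are you sure you want to continue?");
    }

Wired up to the button that advances the page, participants who have answered everything never see the prompt, and those who haven’t are nudged rather than forced.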

Crump et al. (2013) also point out, from their experience of online testing, that it can be useful to include some questions about the study instructions. One could simply ask participants to describe briefly what it is they are expected to do in the experiment. This way one has data against which to check whether participants understood the instructions and completed the task as anticipated. It will probably also help ensure that participants pay close attention to the instructions. This is particularly useful if the task is fairly complex.

 

DEALING WITH DROP OUTS

A big disadvantage of online testing can be dropout rates. This isn’t something I have tested in any formal way, but there does seem to be at least some relationship between the length of a study and its dropout rate. This means that online testing is best suited to studies that take no more than 15 or 20 minutes to complete, which is something to consider. Tasks that are more engaging will also have lower dropout rates. A good incentive I have found is to give participants a breakdown of their performance at the end of the experiment. Many participants have confirmed that they really enjoyed the feedback on how they performed on the memory task. Such feedback is a simple but efficient way to increase participation and decrease dropout rates.
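For a recognition memory task, the breakdown need not be elaborate. Here is a hedged sketch, assuming responses have been collected into an array of simple objects and that the final page contains an element with the made-up id "feedback":

    // Summarise recognition performance at the end of the experiment.
    // Each entry in "responses" is assumed to look like {isOld: true, saidOld: false}.
    function showFeedback(responses) {
      var hits = 0, correctRejections = 0, oldItems = 0, newItems = 0;
      for (var i = 0; i < responses.length; i++) {
        if (responses[i].isOld) {
          oldItems++;
          if (responses[i].saidOld) { hits++; }
        } else {
          newItems++;
          if (!responses[i].saidOld) { correctRejections++; }
        }
      }
      document.getElementById("feedback").innerHTML =
        "You recognised " + hits + " of " + oldItems + " studied words and correctly " +
        "rejected " + correctRejections + " of " + newItems + " new words.";
    }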

The second worry is participants dropping out in the middle of an experiment and then restarting it. This is unlikely to be common, but it can happen. One way to deal with it is to ask participants, at the beginning of the study, to provide a code that is unique to each participant, anonymous, and yet always the same. An example is asking participants to create a code consisting of their day and month of birth followed by their mother’s maiden initials. This is hardly a novel idea; I have participated in experiments that asked for such information to create participant IDs linking responses across a number of experimental sessions. The idea is to find some combination of numbers and letters that should never (or rarely) be the same for two participants but that remains the same for any one participant, whenever they are asked. At the data-analysis stage, one can then simply exclude files that contain repetitions of the same code.
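Generating such a code in JavaScript is trivial; the three form-field ids below are hypothetical:

    // Build an anonymous, repeatable participant code from day and month of birth
    // plus mother's maiden initials, e.g. "0703JS".
    function buildParticipantCode() {
      var day = document.getElementById("birthDay").value;            // e.g. "07"
      var month = document.getElementById("birthMonth").value;        // e.g. "03"
      var initials = document.getElementById("maidenInitials").value; // e.g. "JS"
      return (day + month + initials).toUpperCase();
    }

Any code that turns up more than once in the data then flags a restarted session, and you can keep the first attempt or discard them all.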

Once the study is up and running, other than finding suitable places to advertise it, one can leave it alone and focus on other things until the data have been collected. It is possible to reach large samples quickly, and these samples are often more diverse than your classic psychology undergraduate population. There is a certain degree of luck involved, but I have in the past managed to collect data from well over 100 participants in a single day. That is not to say that all studies are suitable for online testing, but it is definitely a resource well worth exploring.

If you’re trying to decide on a journal to submit your latest manuscript to, Jane, the Journal/Author Name Estimator, can point you in the right direction. This isn’t exactly breaking news, but it’s worth a reminder.

To use Jane, copy and paste your title and/or abstract into the text box and click “Find journals”. Using a similarity index computed against all Medline-indexed publications from the past 10 years, Jane will spit out a list of journals worth considering. Alongside a confidence score, which summarises your text’s similarity to other manuscripts published in that journal, you’re also provided with a citation-based indication of that journal’s influence within the field.


The other available searches are “Find articles” and “Find authors”, the latter of which I suspect I would use if I were an editor with no idea about whom to send an article to for review. As an author, it’s worth running abstracts through these searches too, to make sure you don’t miss any references or authors you definitely ought to cite in your manuscript.

There’s more information on Jane from the Biosemantics Group here: http://biosemantics.org/jane/faq.php.

The Raspberry Pi (photo credit: Wikipedia)

A few months ago, I suggested that Raspberry Pis could be used as barebones experiment presentation machines. Since then I have got my hands on one and tinkered a little, only to be reminded yet again that my inability to do anything much in either Linux or Python is a bit of a problem.

Fortunately, others with more technological nous have been busy exploring the capabilities of the Pi, with some exciting findings. On the Cognitive Science Stack Exchange, user appositive asked “Is the Raspberry Pi capable of operating as a stimulus presentation system for experiments?” and followed up at the end of January with a great answer to their own question, including this paragraph:

The RPi does not support OpenGL. I approached this system with the idea of using a python environment to create and present experiments. There are two good options for this that I know of, opensesame and psychopy. Psychopy requires an OpenGL python backend (pyglet), so it won’t run on the Rpi. Opensesame gives you the option of using the same backend as PsychoPy uses but has other options, one of which does not rely on openGL (based on pygames). This ‘legacy’ backend works just fine. But the absence of openGL means that graphics rely solely on the 700 mHz CPU, which quickly gets overloaded with any sort of rapidly changing visual stimuli (ie. flowing gabors, video, etc.).

Because of the lack of OpenGL support on the Pi, PsychoPy is out (for now), leaving OpenSesame as the best cognitive-psychology-focused Python environment for experiment presentation. The current situation seems to be that the Pi is suboptimal for graphics-intensive experiments, though this may improve as hardware acceleration is incorporated to take advantage of the Pi’s beefy graphics hardware. As things stand, though, experiments with words and basic picture stimuli should be fine. It’s just a case of getting hold of one and brushing up on Python.

UPDATE via Comments (1/4/2013) – Sebastiaan Mathôt has published some nice Raspberry Pi graphics benchmarking data, which are well worth a look if you’re interested.
http://www.cogsci.nl/blog/miscellaneous/216-running-psychological-experiments-on-a-raspberry-pi-with-opensesame

A couple of months ago, the folks at codecademy were nice enough to respond to a complimentary e-mail I’d sent them by writing something nice back about me and publishing it on their codecademy.com/stories page.


It was great to get this sort of coverage on a site I think is fantastic (despite the picture of me with a rather supercilious-looking one-year-old on my back). The only problem was that I didn’t have a working example of code available for them to link to. The JavaScript experiments I had previously coded had run their course, garnering approximately 200 participants each, and had been taken offline, leaving the lab experiment page with nothing for people to try their hand at.

That changed today. I now have a very simple new experiment for people to try, which can be accessed via the experiments (online) link on the right, or directly, here: http://www.st-andrews.ac.uk/~oclab/memorywords/.

I’m afraid it doesn’t quite do justice to the “neuroscientist discovers javascript” headline on the codecademy stories page, but it’s something.

On Monday I gave a talk on how internet tools can be used to make the job of being an academic a little easier.  I had given a very short version of the talk to faculty in the department over a year ago, but this time I was given an hour in a forum for  early career researchers, PhD students and postdocs.  The subject of twitter, covered early on in the talk, aroused a lot of interest, probably because I got very animated about its benefits for those in the early stages of their careers.

To provide a little context for my enthusiasm, it probably helps to know a few things about me, about my situation, and about my recent experiences.

  1. I am an introvert.  Despite my best (and occasionally successful) efforts to project a different image, I do not find talking to people I don’t know very enjoyable.
  2. I am an early career cognitive neuroscientist keen to build my own research programme and develop links with other researchers.
  3. Last month I was at the Society for Neuroscience conference, where I went to the best conference social I have ever attended.

Given the received wisdom that people in my position ought to be networking, I often drag myself kicking and screaming to conference socials. The result tends to be a lot of standing around on my own drinking beer, which gives me something to do, but which I could do much more comfortably with one or two people I know well. The major problem at these events is not my nature, or my status as an early career researcher, but the fact that the people I have imagined myself talking to usually don’t know who I am. Conversation is therefore awkward, one-sided and introductory. Once the niceties have dried up and the accumulated conversational silence edges into awkward territory, I invariably finish my drink and bugger off to get another one, ending the misery for all involved. This is probably a universal experience for those starting out in academia, though thankfully it is happening less and less to me as I build something of a network of real friends who attend the same conferences as I do. But as a PhD student and postdoc, the experience was excruciating.

I had a totally different experience when I attended the SfN Banter tweetup*.  The event, organised by @doc_becca and @neuropolarbear, was a social for neuroscientists who use twitter and changed my view of conference socials.  They do not have to be endured, even by those doing PhDs and postdocs. They can be enjoyed.

I was excited about going, and by the time I left I didn’t feel shortchanged. I spoke (actually spoke!) to everyone I wanted to speak to. Moreover, I had good conversations with people to whom I was speaking for the first time. The reason is fairly obvious – twitter allowed us to build on a body of shared (or at least assumed) knowledge. I follow people, they follow me, I reply to or retweet their tweets, they do the same – and this is all before we’ve introduced ourselves. When I finally meet someone with whom I have such a history of communication, introducing myself is the least awkward thing I can do. The barriers to conversation are removed**.

Sure, this pattern held for most interactions at the tweetup because we were all there to do exactly that. Would the experience be the same at the ‘fMRI social’? No. But I don’t think that matters. If I could have had one of those conference social experiences during my time as a PhD student, it would have given me an idea of what I might have to look forward to from conferences if I stuck at it. Light at the end of the tunnel, a pot of gold at the end of the rainbow, a variable-ratio-schedule-determined stimulation of the limbic system following an umpteenth lever press.

It will take a while (there’s no point joining in September 2013 and expecting great things at the SfN tweetup in San Diego), and it’s probably not the primary reason to join twitter (see  Dorothy Bishop’s blog and Tom Hartley’s blog for far more comprehensive discussions  of how and why you should join), but it’s another reason, and it’s one that could make you feel good about your role in academia.  It’s worth a shot.

 

* tw(itter) (m)eetup, see?

** What you do afterwards is up to you.  I still had some awkward interactions, but I think that’s probably down to me (see context point 1).

This year, I decided to learn how to present cognitive psychology experiments online. Five months in, I’m happy with my progress.

Since the second year of my PhD, when I spent a couple of weeks getting nowhere with Java, I have been keen to use the web to present experiments. What enabled me to move from thinking about it to doing it was Codecademy. I’ve previously blogged about how useful I found the codecademy website in getting me familiar with the syntax of JavaScript, but at the time of writing that post, I was unsure how a knowledge of the coding architecture alone (and certainly not coding aesthetic) would translate into a webpage presenting a functional cognitive psychology experiment. Thankfully I did have a barebones knowledge of basic HTML, much of it now obsolete and deprecated, from which I was able to salvage snippets to combine with CSS (thanks to w3schools) to get something functional and not hideously ugly.

Syllable-counting in the Study Phase.
(click to be taken to the experiment)

Before I present the experiment I have spent the past few months working on, here are a few things I have learned from the experience so far.

1) In choosing Javascript over a medium like Flash, I hoped to maximise the number of devices on which the experiments would run. I think I made the right choice. Pressing response buttons with your finger on an iPad or an Android phone feels like a Human Factors triumph!

2) Javascript-driven user-interaction operates quite differently to user-interaction in languages like Matlab. Javascript is event-driven, which means you can’t have the browser start an event that sits and waits for a response – the browser will simply lock up. Instead, you must start an event that changes the state of the elements within the browser, such that should those elements be responded to, it will be as if the browser had waited for a response (there is a short sketch of this after the list).

3) It is very quick and very easy to learn how to write functional code: if it works, it is functional. It is much more difficult to learn how to write code that is both elegant and functional. I do not know how to code elegantly, and I don’t think I ever will. (I’m not being flippant about this either; it is something I would really like to learn how to do.)

4) Getting everything to look OK in different browsers is a pain. It wasn’t so much the Javascript as the newer snippets of HTML5 that I struggled to get to work in every browser.

5) Web security is a subject on which I have very little knowledge.

6) Sending information from a browser to a server is a pain in the arse (a bare-bones sketch of this, too, follows the list).
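To illustrate point 2, here is a hedged sketch of what “changing the state of the elements” looks like in practice. It is not lifted from my experiments; the element id and the trial logic are made up. Rather than looping until a key is pressed, you arm a listener, note the time, and let the handler move the experiment on:

    // Event-driven response collection: present a word, then wait passively.
    var trialOnset;

    function startTrial(word) {
      document.getElementById("stimulus").innerHTML = word;
      trialOnset = Date.now();
      document.addEventListener("keydown", handleResponse); // arm the listener
    }

    function handleResponse(event) {
      var reactionTime = Date.now() - trialOnset;
      document.removeEventListener("keydown", handleResponse); // disarm it again
      // ...store the key pressed and reactionTime, then call startTrial() with
      // the next stimulus (or end the experiment).
    }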
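And for point 6, the bit that caused me the most pain: getting the data off the participant’s browser and onto a server. A bare-bones sketch, with a hypothetical URL, no error handling, and a server-side script (PHP, CGI, whatever your web space offers) left entirely to the reader:

    // POST the accumulated results to a server-side script as JSON.
    function submitResults(results) {
      var request = new XMLHttpRequest();
      request.open("POST", "http://www.example.com/save_results.php", true);
      request.setRequestHeader("Content-Type", "application/json");
      request.onload = function () {
        // "status" is a made-up id for a thank-you message element on the page.
        document.getElementById("status").innerHTML = "Thanks - your responses were saved.";
      };
      request.send(JSON.stringify(results));
    }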

 

And finally, here is the experiment:

http://www.st-andrews.ac.uk/~oclab/memoryframingexpt/

It is a fairly straightforward recognition experiment, takes about 15 minutes to complete and should provide data for use in a larger project, so do feel free to take it as seriously as you want. As I have already mentioned, it works on an iPad, and I thoroughly recommend you give it a go this way if you have access to one.

Points and badges

For the past week or so, I have been working my way through Codecademy’s JavaScript tutorials. I can’t recommend them highly enough.

As things stand, I have a full house of 480 points and 35 badges and, as the Codecademy creators would undoubtedly hope, I am rather satisfied with the JavaScript proficiency I have attained. ‘Attained’ is probably the wrong word to use though. Being a self-taught Matlab hacker, I have found most of my coding know-how has translated fairly well into Javascript. A few concepts (recursion in particular) have presented me with some difficulty, but the overall experience has been more like learning a new coding dialect  than a new language altogether. I haven’t attained a proficiency, so much as uncovered a hidden one.

Which brings me to why I sought out Codecademy in the first place (thanks to @m_wall for the twitter-solicited tip-off) – I am preparing to teach Psychology undergrads how to code. From 2012/2013 onwards, my academic life is going to be a little more ‘balanced’. As well as the research, admin and small-group teaching I currently enjoy, I’m also going to be doing some large-group teaching. Although I have plenty to say to undergraduates on cognitive neuroscience and cognitive psychology, I think giving them some coding skills will actually be much more useful to most. As my experience with Codecademy has recently reinforced to me, coding basics are the fundamental building-blocks of programming in any language. They will hold you in good stead whatever dialect you end up speaking to your computer in. What’s more, they will hold you in good stead whatever you end up doing, as long as it involves a computer: coding is the most versatile of transferable skills to be imparting to psychology graduates who (rightly) believe they are leaving university with the most versatile of degrees.

With all this in mind, one of Codecademy’s limitations is the difficulty with which its students can translate their new-found JavaScript skills into useful ‘stuff’ implemented outside the Codecademy editor. As Audrey Watters points out, there is barely any acknowledgement within the Codecademy tutorials that the goal of all of these points and badges is to encourage you to write interactive web content in an IDE. Indeed, last night when I thought about how I would use JavaScript to administer online memory experiments, I had to do a lot more reading. This could all be about to change though. If the latest Code Year class on HTML is anything to go by, the folks at Codecademy are mindful of this limitation, and are attempting to remedy it.
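The missing step is a small one, but it is the one the tutorials gloss over: JavaScript only becomes an experiment once it is embedded in a web page. A minimal, hedged example of what that confluence looks like (the element ids and the single stimulus word are invented for illustration):

    <!-- A single HTML file presenting one study word and two response buttons. -->
    <!DOCTYPE html>
    <html>
      <head><title>Tiny memory demo</title></head>
      <body>
        <p id="stimulus"></p>
        <button id="oldButton">Seen it before</button>
        <button id="newButton">New word</button>

        <script type="text/javascript">
          var studyWord = "ELEPHANT"; // invented stimulus
          document.getElementById("stimulus").innerHTML = studyWord;

          function respond(saidOld) {
            // A real experiment would store the response and show the next trial.
            alert("You responded: " + (saidOld ? "old" : "new"));
          }
          document.getElementById("oldButton").onclick = function () { respond(true); };
          document.getElementById("newButton").onclick = function () { respond(false); };
        </script>
      </body>
    </html>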

It’s just a shame that the html integration has come so late in the Code Year (yes, I say this with full awareness that we’re only on week 13).  If the HTML-Javascript confluence had come a little further upstream, I think there probably would have been a fledgling memory experiment linked to from this blogpost!

Raspberry Pi schematic from http://www.raspberrypi.org

I think the Raspberry Pi is going to be fantastic, for reasons summed up very nicely by David McGloin – the availability of such a cheap and versatile barebones technology will kickstart a new generation of tinkerers and coders.

It’s worth mentioning that this kickstart wouldn’t just be limited to the newest generation currently going through their primary and secondary school educations. Should my hands-on experience of the device live up to my expectations (and the expectations of those who have bought all the units that went on sale this morning), I will be ordering a couple for each PhD student I take on. After all, what’s the point in using an expensive desktop computer running expensive software on an expensive OS to run simple psychology experiments that have hardly changed in the past 15 years? What’s the point when technology like the Raspberry Pi is available for £22? Moreover, if you can get researchers to present experiments using a medium that has also helped them pick up some of the most desirable employment skills within and outwith academia (expertise with, and practical experience in, programming), then I think that’s a fairly compelling argument that it would be irresponsible not to do so.

But won’t I have missed a critical period in my students’ development from technology consumers into technology hackers?

No.

Every psychology student can and should learn how to code (courtesy of Matt Wall), and it’s never too late. I learned to code properly in my twenties, during my postdoc. The reason it took me so long was that I had never needed to code in any serious, goal-driven way before then. Until the end of my PhD, Superlab and E-Prime had been perfectly passable vehicles by which I could present my experiments to participants. My frustration with these experiment presentation packages’ attempts to make things ‘easy’, which ended up making things sub-optimal, led me to learn how to use the much ‘harder’ Matlab and Psychophysics Toolbox to present my experiments. Most importantly, I was given license by my boss to immerse myself in the learning process. This is what I hope giving a PhD student a couple of Raspberry Pis will do, bypassing the tyranny of the GUI-driven experiment design package in the process. Short-term pain, long-term gain.

In a few years, my behavioural testing lab-space could simply be a number of rooms equipped with HDMI monitors, keyboards and mice. Just before testing participants, students and postdocs will connect these peripherals to their own code-loaded Raspberry Pis, avoiding the annoyances of changed hardware settings, missing dongles and unreliable network licenses. It could be brilliant, but whatever it is, it will be cheap.

Can the iPad2, with its 132ppi 1024 x 768 screen, be used to comfortably read pdfs without the need to zoom and scroll about single pages?

That was a question that troubled me when I was splashing out for one earlier this year. It was hard to get a good idea of what a pdf viewed on only 800,000 pixels might look like. Neither my attempt to resize a pdf window to the correct number of pixels (too small) nor my attempt to screengrab a pdf at a higher resolution and shrink it using GIMP (too fuzzy) was particularly informative. I just had to take the plunge and see.

There’s enough wiggle-room (as you can see in the screenshots below) to suggest that there’s no definitive answer, but I think the answer is probably yes. That’s only if you take advantage of some nifty capabilities of pdf-reading apps, though; Goodreader is the one I use, mostly thanks to its almost seamless Dropbox syncing.

Below is a screengrab of a standard US letter-size pdf, displayed unmodified on the iPad. When the image is viewed inline with this text (and not in its own separate window), it is approximately the same size as it appears on the iPad (there is some loss of resolution, which can be recovered if you click on the image and open it in its own window).

Click on the image to simulate holding the iPad close to your face whilst squinting.

The screengrab above demonstrates that virgin pdfs aren’t great to read. The main body of the text can be read at a push, but it’s certainly not comfortable.

Thankfully, the bulk of the discomfort can be relieved using Goodreader’s cropping function, which allows whitespace around pdfs to be cropped out (with different settings for odd and even pages, if required).  A cropped version of the above page looks like this:
A marked improvement, which could be cropped further if you weren’t too worried about losing the header information. Click on the image to see the screengrab with no loss of resolution.

The image above demonstrates that cropping can be used to get the most value from the rather miserly screen resolution (the same on both the iPad and iPad2, though almost certainly not on the iPad3, when that’s released).

But, cropping doesn’t solve all tiny text traumas.  There are some circumstances, such as with particularly small text like the figure legend below, that necessitate a bit of zooming.

The figure legend is a little too small to read comfortably, even when the page is cropped.

I don’t mind zooming in to see a figure properly, but that’s probably a matter of personal taste.

If you’re used to using an iPhone4, with its ridiculous 326ppi retina display, then you’ll find reading pdfs on a current model iPad a bit of a step back. But, it’s passable and I certainly don’t mind doing it. It certainly beats printing, carrying and storing reams of paper.