This is a guest post from Radka Jersakova (@RadkaJersakova), who did her undergraduate degree in my lab and is now working on her PhD at Leeds University and the Université de Bourgogne in Dijon. Radka has embraced online experimentation and has run many hundreds of participants through an impressive number of experiments coded in Javascript.


Onscreen Experiments

Recently, Crump, McDonnell and Gureckis (2013) replicated the results of a number of classic behavioral tasks, such as the Stroop task, using experiments conducted online. They demonstrated that, despite what some people fear, online testing can be as reliable as lab-based testing. Additionally, online testing can be fast and efficient in a way that lab-based testing cannot. I have now completed my seventh online experiment and have helped others create and advertise theirs. This post is a review of things I have learned in the process. It summarises what I did not know, but now wish I had, when I was planning my first study, and answers some questions others asked me along the way.



For conducting online experiments, programming remains the best method, as it is by far the most flexible approach. As someone who learned to program on my own from free online courses, I can confirm that it is not as difficult as some people think, and it really is quite fun (for some tips on where to get started, this TED blog post is quite useful). At the same time, many people do not know how to code and do not have the time to learn. The good news is that for many experiments, the survey software currently available online is flexible enough to create a large number of experiments, although the potential complexity is naturally limited. My favorite is Qualtrics, as even the free version allows a fair amount of functionality and a reasonable number of trials.



A major advantage of the Internet is that one can reach many different communities. With online testing, one can reach participants who are genuinely interested in psychology experiments and volunteer freely, which is preferable to testing psychology undergraduates coerced into participating for course credit. Once you have an experiment to advertise, the challenge is to find the easiest route by which to reach these people.

There are many websites that focus directly on advertising online experiments. The one I have found the most useful is the Psychological Research on the Net website administered by John H. Krantz. Alternatively, the In-Mind magazine has a page where they post online experiments, which they also share on their Facebook and Twitter accounts. Other websites that host links to online studies are the Social Psychology Network and Online Psychology Research.

The most powerful way for a single individual to reach participants is, quite unsurprisingly, social media. Once a few people start sharing the link, the interest can spread very quickly. The simplest thing to do is to post your study on your Facebook page or Twitter account. Something I haven't tried yet, but that might be worth exploring, is finding pages on Facebook or hashtags on Twitter that relate to the topic of the experiment (or psychology in general) and posting the link to the experiment there. One of the biggest successes for me, though, remains reddit. Reddit has a very strong community, and people spend time there because they are actively searching for new information and interesting projects. There are a number of subreddits specific to psychology, which are, again, visited by people interested in these particular topics. To give a few examples: psychology; cognitive science; psych science; music and cognition; mathematical psychology; and the list goes on! There is even a subreddit dedicated to finding participants for surveys and experiments, simply called Sample Size.

The last resource I have tried a number of times is more general advertising sites such as craigslist. There is always a 'volunteers' section, which is visited by people looking to volunteer for a project of some sort. In that sense it can be a good place to reach participants, and the sample will be fairly diverse. For me this has never been as successful as social media, but a few times it has worked fairly well.



The most commonly heard argument against online testing is the lack of control. What this really means is that data collected online might include more noise than data from traditional lab-based experiments, making it easier to miss existing effects. As already mentioned, Crump et al. (2013) replicated a number of classic tasks online, suggesting that this might not be as big a worry as it at first seems. The range of tasks they chose demonstrates nicely that the same results can be obtained in the lab as well as on the Internet. Nevertheless, there are a number of ways one can track participants' behavior to determine whether sufficient attention was given to the experiment. The simplest is to measure the time participants took to complete the study. If you are using existing survey software, this information is usually provided automatically. If you are programming the study yourself, requesting a timestamp for when the study begins and for when it ends is an easy way to track the same kind of information. If participants are abnormally slow (or fast) in completing a task, then one might have sufficient reason to exclude the data.
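If you are coding the study yourself, the timestamp check can be as simple as the sketch below. The 5- and 60-minute bounds are illustrative assumptions, not values from any particular study, and would depend on how long your task is expected to take.

```javascript
// Record a start timestamp when the study loads and compute the elapsed
// time when the final page is submitted. The clock function is injectable
// so the logic can be tested without actually waiting.
function makeTimer(now = Date.now) {
  const start = now();
  return {
    elapsedMinutes: () => (now() - start) / 60000,
  };
}

// Flag completion times outside a plausible range; the default bounds
// here are placeholders for a study expected to take around 20 minutes.
function plausibleDuration(minutes, minMinutes = 5, maxMinutes = 60) {
  return minutes >= minMinutes && minutes <= maxMinutes;
}
```

In a browser, `makeTimer()` would be called on page load and `elapsedMinutes()` in the final submit handler, with the result saved alongside the participant's responses for screening at analysis time.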

One of the biggest problems I have encountered is a participant completing one part of the task (e.g. a recognition test) but not completing another part of the same experiment as faithfully (e.g. free-report descriptions of particular memory experiences from her daily life). While for ethical reasons we were not allowed to force participants to respond to any question, I found that simply asking whether they were sure they wanted to proceed, when they had not filled out all the questions on a page, increased report rates dramatically. It can therefore be useful to provide such pointers along the way to encourage participants to answer all questions without forcing them to do so.
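A minimal sketch of such a check is below. The question IDs are hypothetical, and the confirmation prompt is pluggable; in a browser it could simply be `window.confirm`.

```javascript
// Return the IDs of required questions that have no usable answer.
function unanswered(responses, requiredIds) {
  return requiredIds.filter(id => {
    const value = responses[id];
    return value === undefined || value === null || String(value).trim() === "";
  });
}

// Before advancing to the next page, warn about blanks but let the
// participant proceed anyway: a gentle prompt, not a requirement.
function confirmProceed(responses, requiredIds, askUser) {
  const missing = unanswered(responses, requiredIds);
  if (missing.length === 0) return true; // everything answered, move on
  return askUser(
    `You left ${missing.length} question(s) blank. Are you sure you want to proceed?`
  );
}
```

Wiring `confirmProceed` into the page's "next" button keeps the ethics requirement intact (no answer is ever forced) while catching most accidental omissions.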

Crump et al. (2013) also point out, from their experiences of online testing, that it can be useful to include some questions about the study instructions. One could simply ask participants to describe briefly what they are expected to do in the experiment. This way one has data against which to check whether participants understood the instructions and completed the task as anticipated. It will probably also help ensure that participants pay close attention to the instructions. This is particularly useful if the task is fairly complex.



A big disadvantage of online testing can be dropout rates. This isn't something I have tested in any formal way, but there does seem to be at least some relationship between the length of the study and dropout rates. This means that online testing is most suitable for studies that take no more than 15 or 20 minutes to complete, which is worth considering at the design stage. It also seems clear that more engaging tasks will have lower dropout rates. A good incentive I have found is to give participants a breakdown of their performance at the end of the experiment. Many participants have confirmed that they really enjoyed the feedback on how they performed on the memory task. Such feedback is a simple but efficient way to increase participation and decrease dropout rates.

The second worry is participants dropping out in the middle of an experiment and then restarting it. This is probably not common, but it could happen. One way to deal with it is to ask participants at the beginning of the study for a code that is unique to each participant, anonymous, and yet always constant. An example is asking participants to create a code consisting of their day and month of birth, ending with their mother's maiden initials. This is hardly a novel idea; I have participated in experiments that asked for such information to create participant IDs that allowed responses to be linked across a number of experimental sessions. The idea is to find some combination of numbers and letters that should never (or rarely) be the same for two participants, but that remains the same for any one participant whenever they are asked. At the data-analysis stage, one can simply exclude files that contain repetitions of the same code.
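The scheme described above might be sketched like this; the format (day, then month, then initials) follows the example in the text, while the record field names are illustrative.

```javascript
// Build an anonymous, repeatable participant code from day and month of
// birth plus the mother's maiden initials:
// makeParticipantCode(7, 3, "js") gives "0703JS".
function makeParticipantCode(day, month, initials) {
  const pad = n => String(n).padStart(2, "0");
  return pad(day) + pad(month) + initials.toUpperCase();
}

// At the data-analysis stage, keep only the first record for each code,
// discarding what look like restarts of the same participant.
function dropRepeatedCodes(records) {
  const seen = new Set();
  return records.filter(record => {
    if (seen.has(record.code)) return false;
    seen.add(record.code);
    return true;
  });
}
```

Keeping the first record (rather than the last) assumes the earliest attempt is the least practiced one; either policy works as long as it is applied consistently.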

Once the study is up and running, other than finding suitable places to advertise it, one can leave it and focus on other things until the data have been collected. It is possible to reach large samples quickly, and these samples are often more diverse than your classic psychology undergraduate population. There is a certain degree of luck involved, but I have in the past managed to collect data from well over 100 participants in a single day. That is not to say that all studies are suitable for online testing, but it is definitely a resource well worth exploring.

The lab’s first Javascript experiment has been online for about 3 weeks now, and has amassed close to 200 participants. It’s been a great experience discovering that the benefits of online testing (60+ participants a week, many of them run while I’m asleep!) easily outweigh the costs (the time expended learning Javascript and coding all the fiddly bits, particularly the informed consent procedures and performance-appropriate feedback).

On top of the study completion data that’s obvious from the 7 KB csv file that each happily-debriefed participant leaves behind, the Google Analytics code embedded in each page of the experiment provides further opportunity to explore participation data.


As the experiment structure is entirely linear, it’s possible to track the loss of participants from each page to the next.
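Given pageview counts for each page of a linear study (here, from Google Analytics), the per-step loss is a one-liner per page transition. The counts below are made up for illustration.

```javascript
// For a linear study, compute the percentage of visitors lost between
// each consecutive pair of pages, given pageview counts in page order.
function attritionSteps(pageviews) {
  const steps = [];
  for (let i = 1; i < pageviews.length; i++) {
    const previous = pageviews[i - 1];
    steps.push({
      fromPage: i - 1,
      toPage: i,
      lostPercent: previous === 0 ? 0 : (100 * (previous - pageviews[i])) / previous,
    });
  }
  return steps;
}
```

For example, `attritionSteps([200, 120, 110, 100])` reports a 40% loss at the first transition, which is the kind of sharp drop discussed below between the information page and the consent form.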

Study Attrition

The major point of attrition is between the Participant Information Page and the Consent Form – not surprising given quite how text-heavy the first page was, and how ‘scary’ headings like “Are there any potential risks to taking part?” make the study sound. The content of that first page is entirely driven by the Informed Consent requirements of the University of St Andrews, but the huge attrition rate here has prompted a bit of a redesign in the next follow-up study.


New Visits by Browser

Other information useful for the design of future studies has been the browser data. As might be expected, Firefox and its relatives are the dominant browsers, with Chrome a distant second and Internet Explorer lagging far behind. Implementing fancy HTML5 code that won’t work in Firefox is therefore a bad idea. On top of that, despite how tablet- and phone-friendly the experiment was, very few people used this sort of device to complete the study – it’s probably a waste of time optimising the site specifically for devices like iPads.

Study Completions by Browser

Curiously enough, when study completions are broken down by browser, the three major platforms even out. Chrome, Firefox and IE all yield similar completion statistics, suggesting that IE users are far more likely to follow through and complete the study once they visit the site. I’m speculating here, but I suspect that this has something to do with a) this being a memory study and b) IE being used by an older demographic of internet user who may be interested in how they perform. Of the three major browsers, Firefox users have the worst completion rate.


Another consideration with word-based experiments is the location of participants. This could influence the choice of words used in future studies (American or UK spellings) and could matter to those keen to exclude participants who don’t speak English as a first language. Finer-grained information about participants’ first languages is something we got from self-reports in the demographic questionnaire, but the table of new visits and study completions is still rather interesting.

New Visits and Study Completions by Country

Once again, there are few surprises here, with the US dominating the new visits list, though one new visit from a UK- or India-based browser is more likely to lead to a study completion. A solid argument for using North American spellings and words could also be made from these data.

Source of Traffic

The most important thing to do to make potential participants aware of an online psychology study is to advertise it. But where?

Study Completions by Source

While getting the study listed on stumbleupon was a real coup, it didn’t lead to very many study completions (a measly 2.5%). That’s not surprising: the study doesn’t capture attention from page 1 and doesn’t have much in the way of internet meme-factor. That is, of course, something we should be rectifying in future studies if we want them to go viral, but it’s tough to do within the rigid constraints of the informed consent pages that must precede the study itself.

The most fruitful source of participants was the Psychological Research on the Net page. It was much more successful at attracting visits and study completions than Facebook, the best of the social networks, and the other online experiment listing sites on which we advertised the study. What’s more, there has been a sustained stream of visitors from the page that hasn’t tailed off as the study has been displaced from the top of the Recently Added Studies list.

These statistics surprised me more than any other. I assumed that social networking, not a dedicated experiment listing page, would be how people would find the study. But in retrospect, it all makes sense. There is clearly a large number of people out there who want to do online psychology studies, and what better way to find them than a directory that lists hundreds of them? If there’s one place you should advertise your online studies, it’s Psychological Research on the Net.