The Journal of Cognitive Neuroscience has just invoiced me $985 for a paper they agreed to publish earlier this year. This wasn’t unexpected – not only did we sign away our copyright, allowing MIT Press to make money from our work, but we did so knowing that we would pay a hefty sum to allow them to do this. It still came as a bit of a shock though.

Paying the invoice will curtail some of my research activities next year, like going to conferences to present data. I put this to the journal, asking if they’d hear my case for a reduction or a waiver. Here’s their response:


JOCN does not provide fee waivers. Page costs are stated on the submission guidelines page, as well as on the last page of the online manuscript submission so that all authors are aware of the financial obligation required if your paper is accepted for publication. These fees pay for the website, submission software, and other costs associated with running the journal. If you are unable to pay the page fees, please let us know so that we can remove your manuscript from the publication schedule.

Editorial Staff
Journal of Cognitive Neuroscience


What did I expect though? We willingly submitted to this journal knowing that they would charge us $60 per page. And the Journal of Cognitive Neuroscience certainly isn’t alone in doing this. Most cognitive neuroscience journals are pretty good at making money out of authors (see table below – I haven’t included OA megajournals). Imagers tend to have money, and junior imagers, like all junior academics, still need to publish in journals that have a reputation.

For what it’s worth, Elsevier journals keep their noses pretty clean here. Cerebral Cortex’s publishing house, Oxford Journals, though… pretty much every stage of that process is monetised. Just. Wow.


Journal                                          | Publisher                | IF (2013)          | Colour figures ($) | Open Access supplement ($) | Our paper ($)
Journal of Neuroscience                          | Society for Neuroscience | 6.74               | -                  | 2820                       | 1850
Cerebral Cortex                                  | Oxford Journals          | 8.37               | 720                | 3400                       | 3387
Neuroimage / Cortex / Neuropsychologia           | Elsevier                 | 6.13 / 6.04 / 3.45 | -                  | 2200 / 2200 / 1800         | 0
Journal of Cognitive Neuroscience                | MIT Press                | 4.69               | -                  | (unknown)                  | 985
Cognitive, Affective and Behavioral Neuroscience | Springer                 | 3.21               | 1100*              | 3000                       | 1100
Cognitive Neuroscience                           | Taylor & Francis         | 2.38               | 474                | 2950                       | 1422

Black and white figures are without cost in all the listed journals. IF is Impact Factor. The paper for which the 'Our paper' costs are calculated had 3 authors, 16 pages, 3 colour figures, and no Open Access supplement.
* There is a one-off charge covering all colour figures, regardless of number.

This is a guest post from Radka Jersakova (@RadkaJersakova), who did her undergraduate degree in my lab and is now working on her PhD at the University of Leeds. Radka has previously written on this blog about how to conduct online studies. Here she discusses the merits of travelling during your doctoral training.

View from WBurg

I am writing this as I near the end of the second lab visit abroad of my PhD. While I know many students who, like me, have managed to acquire ‘visiting scholar’ status on extended lab visits, the number is far smaller than I believe it should be, even though visiting other labs, collaborating with other researchers, getting external input into your work and knowing what others in your field are doing right now is invaluable. It doesn’t matter if it is a visit of a few weeks or a few months; either way it is worth it, and a lot easier to make happen than you would expect. Researchers are mostly very open to hosting, and there is a lot of support and funding for such visits. It is a great learning experience and it makes academia seem smaller and friendlier. There are also practical benefits, such as having a travel grant on your CV, being able to show that you are capable of forging international collaborations, and increasing the chance that you will know someone with post-doc funding.

The purpose of this post is to address the question of how one goes about organizing a visit of any length to another lab (ideally abroad, although it doesn’t have to be!). The motivation for writing on this topic at all, however, is to encourage PhD students – especially at the beginning of their studies – to consider how they can make the most of their PhD, and travelling is definitely one way to do that.



The obvious first step is deciding what you would like to get out of a visit to another lab. This can be as generic as ‘networking’ or as specific as learning a particular analysis method. Ideally a visit should involve a collaboration of some kind, although whether the visit should be used to plan a project or to actually carry out the data collection and analysis is open to discussion. This will naturally determine how long the visit is. Ideally, you’d look for funding to finance the type of visit you have in mind, but sometimes the funding sources available to you might shape – to an extent – how long a visit you undertake. Some of the funding opportunities outlined below are aimed at visits of 6 months or longer; others at 2-3 months; and a few start at a couple of weeks. As such, it is important to know from the start what options are available to you.



Most commonly, students make use of their supervisor’s network. This is by far the easiest way to organize a visit as it builds on collaborations that already exist. It is also the best way to identify a researcher with relevant experience to help you develop new ideas in the context of the topic of your thesis. As such, the first step is talking to your supervisor; they might already have someone in mind and can initiate the contact.

It is also possible that there is already a researcher you want to work with for an extended period. If you are both going to the same conference, try to talk to them there. You can contact them before the conference to suggest meeting to discuss your work. Having met them in person makes it much easier to talk to them about visiting their lab. If there isn’t an opportunity to meet in person, it is also fine to email the researcher you are interested in working with and ask whether a visit could be arranged.



There is more funding available for research visits than might seem at first. Below is a list of some useful starting places for researching funding options. Everyone’s background is unique and the opportunities will vary accordingly.

(i) Funding organizations: It is very likely that the organization or research council funding your PhD also has funding for travel visits. What is more, they are probably very keen to fund such a visit. The Economic and Social Research Council in the UK is a great example of this, as it places great emphasis on international research links through its Overseas Institutional Visits scheme.

(ii) Universities: There is a chance that the institution you want to visit has a ‘visiting scholars’ program you can apply to in order to fund the visit. Similarly, your own institution might have ‘travel abroad’ schemes with funding for going to an institution of your choice. Further, there are partnership networks between universities that also offer funding. An example is the Worldwide Universities Network, which supports mobility for students and researchers between its partner institutions. It is best to ask your university or someone in your department whether you belong to one.

(iii) National grants: Some countries have grants for their nationals to go on study visits – a great example is the German Academic Exchange Service, which also offers a lot of support for international students coming to Germany. Similarly, France funds visits of 6-10 months to any of its institutions through the Eiffel Excellence Scholarship. There are also bilateral agreements between countries to fund exchanges, such as the Fulbright Commission, which focuses on mobility between the US and (according to their website) more than 155 countries.

(iv) Societies: Lastly, there are travel grants that are subject-specific. The Experimental Psychology Society, the British Psychological Society and the European Association of Social Psychology all offer study visit grants, though these sometimes come with membership conditions.


The key thing is to give yourself enough time to plan a visit. It is important to have an idea of what funding is available to you, when the funding deadlines are, what the application process is like, what documents you need, and what the interval is between submission and final decision.


Good luck!

We had an fMRI paper accepted to the Journal of Cognitive Neuroscience earlier this week. Having got the science out the door, I was able to turn my attention to the fun stuff – a cover image. The cover image for my first fMRI publication was selected by the Journal of Neuroscience and I wanted to go with something similar.

In the past 6 months or so, @alby has tweeted some of the images he generated using @lowpolybot, a twitter bot that returns low-polygon renderings of images tweeted to it. I tweeted a figure from the accepted paper to @lowpolybot and got this back:

@lowpolybot image from the tweet:

There are a range of operations @lowpolybot can perform on your images (detailed on the @lowpolybot tumblr), but if you give no instructions you will get a random combination of operations applied to your image. This was what I had done. I was happy with the picture so, having checked with @lowpolybot’s creator @quasimondo that he was happy for me to do this, I submitted it to the journal.

Sadly though, there’s no chance this image will be used as a cover image. I received an email the next day from a journal administrator informing me that they have stopped printing cover images. Ah well.

I, like most humans, am bad at understanding randomness and good at spotting patterns that don’t necessarily exist. I also frequently have thoughts like: “That’s the third time this paper has been rejected. It must be bad.” These things are related.

When I submit my work, all of the variables at play, including the quality of the thing being judged, combine to give me a probability that a positive outcome will occur e.g. 0.4 – 2 out of 5 times, a good thing will happen. BUT, probabilities produce lumpy strings of outcomes. That is, good and bad outcomes will appear to us pattern-spotting humans to be clustered, rather than what we would describe as “random”, which we tend to think of as evenly spaced (see the first link above).

To illustrate, I did something very straightforward in Excel to very crudely simulate trying to publish 8 papers.
Column A: =RAND() << (pseudo)randomly generates a number between 0 and 1.
Column B: =IF(Ax>0.4, 0, 1) << if the number in column A (row x) exceeds 0.4, this cell equals 0; otherwise it equals 1.
Thus, column B gives me a list of successes (1s) and failures (0s) with an overall success rate of ~.4. It took me four refreshes before I got the following:

Note that the success rate, despite being set to .4, was .26 over this small number of observations. Also note that I embellished the output with a hypothetical stream of consciousness. I really wish I had the detachment of column C, but I don’t. I take rejections to heart and internalise bad outcomes like they are Greggs’ Belgian buns.
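For anyone who prefers a script to a spreadsheet, here is a minimal Python sketch of the same crude simulation (the function name, defaults and use of a seed are my own choices, not part of the original spreadsheet):

    import random

    def simulate_submissions(n_submissions=8, p_accept=0.4, seed=None):
        # Each submission succeeds (1) with probability p_accept,
        # mirroring the spreadsheet's =IF(RAND()>0.4, 0, 1).
        rng = random.Random(seed)
        return [1 if rng.random() < p_accept else 0 for _ in range(n_submissions)]

    outcomes = simulate_submissions()
    print(outcomes)                       # e.g. [0, 0, 1, 0, 0, 0, 0, 1]
    print(sum(outcomes) / len(outcomes))  # observed rate, often far from 0.4

Run it a few times and you will see the same lumpiness: strings of failures that look meaningful but are pure chance.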


Although the rejections look clustered, they are all independently determined. I have almost certainly had strings of rejections like those shown above. The only thing that has made them bearable is that I have switched papers, moving on to a new project after ~3 rejections, at the same time giving up on the thrice-rejected paper I assume to be a total failure. As a result, I am almost certainly sitting on good data that has been tainted by bad luck.

Stick with it. It evens out in the end.

Last week Ravi Mill and I had a paper accepted to Consciousness and Cognition. It was accepted after 49 weeks with the journal in four rounds of review. The editorial decisions on the paper were: Reject, Revise and Resubmit, Accept with Minor Revisions and finally, Accept. What makes this decision history somewhat remarkable is that it was initially rejected from the journal it was eventually published in.

This blog post won’t give as much information on that initial rejection as I wanted it to – I sought permission from the journal to publish all correspondence from the reviewer and the editor, which was denied. Below you’ll find only my response to the editorial decision. As context, the manuscript was rejected on the basis of one review, in which it was suggested that we had adopted some unconventional and even nefarious practices in gathering and analysing our data. These suggestions didn’t sit well with me, so I sent the following email to the Editor via the Elsevier Editorial System.


Dear Prof XXXX,


Thank you for your recent consideration of our manuscript, ‘“Old?” or “New?”: The test question provokes a goal-directed bias in memory decision-making’ (Ms. No. XXXXX-XX-XXX), for publication in Consciousness & Cognition. We were, of course, disappointed that you chose to reject the manuscript.


Having read the justification given for rejection, we respectfully wish to respond to the decision letter. Whilst we do not believe that this response will prompt reconsideration of the manuscript (your decision letter was clear) we believe it is important to respond for two reasons. First, to reassure you that we have not engaged in any form of data manipulation, and second, to state our concerns that the editorial team at Consciousness & Cognition appear to view it as acceptable that reviewers base their recommendations on poorly substantiated inferences they have made about the motivations of authors to engage in scientific misconduct.


As you highlighted in your decision letter, Reviewer 1 raised “substantial concerns about the manner in which [we] carried out the statistical analysis of [our] data”. The primary concern centred on our decision to exclude participants whose d’ was below a threshold of 0.1. In this reply we hope to demonstrate to you that use of such an exclusion criterion is not only standard practice, but it is indeed a desirable analysis step which should give readers more, not less, confidence in the analyses and their interpretation. We will then take the opportunity to demonstrate, if only to you, that our data behave exactly as one would expect them to under varying relaxations of the enforced exclusion criterion.


Recognition memory studies often require that participants exceed a performance (sensitivity; d’) threshold before they are included in the study. This is typically carried out when the studies themselves treat a non-sensitivity parameter as their primary dependent variable (as in our rejected paper), as a means of excluding participants that were unmotivated or disengaged from the task. Below are listed a small selection of studies published in the past 2 years which have used sensitivity-based exclusion criteria, along with the number of participants excluded and the thresholds used:


Craig, Berman, Jonides & Lustig (2013) – Memory & Cognition
Word recognition
Expt2 – 10/54, Expt3 – 5/54
80% accuracy


Davidenko, N. & Flusberg, S.J. (2012) – Cognition
Face recognition
Expt1a – 1/56, Expt1b – 1/26, Expt2b – 3/26
chance (50%) accuracy


Gaspelin, Ruthruff & Pashler (2013) – Memory & Cognition
Word recognition
Expt1 – 3/46, Expt2 – 1/49, Expt3 – 3/54
70% accuracy


Johnson & Halpern (2012) – Memory & Cognition
Song recognition
Expt1 – 1/20, Expt2 – 3/25
70% accuracy


Rummel, Kuhlmann, & Touron (2013) – Consciousness & Cognition
Word classification (prospective memory task)
Expt1 – 6/145
“prospective memory failure”


Sheridan & Reingold (2011) – Consciousness & Cognition
Word recognition
Expt1 – 5/64
“difficulty following instructions”


Shedden, Milliken, Watters & Monteiro (2013) – Consciousness & Cognition
Letter recognition
Expt4 – 5/28
85% accuracy


You will note that there is tremendous variation in the thresholds used, but that it is certainly not “unusual” as claimed by Reviewer 1, not even for papers published in Consciousness and Cognition. Of course, it would not be good scientific practice to accept the status quo uncritically, and we must therefore explain why the employed sensitivity-based exclusion criterion was appropriate for our study. The reasoning is that we were investigating an effect associated with higher order modulation of memory processes. If we had included participants with d’s below 0.1 (an overall accuracy rate of approximately 52% where chance responding is 50%) then it is reasonable to assume that these participants were not making memory decisions based on memory processes, but were at best contributing noise to the data (e.g. via random responding), and at worst systematically confounding the data. To demonstrate the latter point, one excluded participant from Experiment 1 had a d’ of -3.7 (overall accuracy rate of 13%), which was most likely due to responding using the opposite keys to those which they were instructed to use, substituting “new” responses for “old” responses. As we were investigating the effects of the test question on the proportion of “new” and “old” responses, it is quite conceivable that if they had displayed the same biases as our included participants did overall, they would have reduced our ability to detect a real effect. Including participants who did not meet our inclusion criteria, instead of helping us to find effects that hold across all participants, would have systematically damaged the integrity of our findings, leading to reduced estimates of effect size caused by ignoring the potential for influence of confounding variables.


If we had been given the opportunity to respond to Reviewer 1’s critique via the normal channels, we would have also corrected Reviewer 1’s inaccurate reading of our exclusion rates. As we stated very clearly in the manuscript, our sensitivity-based exclusion rates were 5%, 6% and 17%, not 25%, 6% and 17%. Reviewer 1 has conflated Experiment 1’s exclusions based on native language with exclusion based on sensitivity. As an aside, we justify exclusion based on language once again as a standard exclusion criterion in word memory experiments to ensure equivalent levels of word comprehension across participants. This is of particular importance when conducting online experiments which allow anyone across the world to participate. In coding the experiment, we thought it far more effective to allow all visitors to participate after first indicating their first language, with a view to excluding non-native speakers’ data from the study after they had taken part. We wanted all participants to have the opportunity to take part in the study (and receive feedback on their memory performance – a primary motivator for participation according to anecdotal accounts gleaned from social media) and to minimise any misreporting of first language which would add noise to the data without recourse for its removal.


We would next have responded to Reviewer 1’s claims that our conclusions are not generalisable based on the subset of analysed data by stating that Reviewer 1 is indeed partially correct. Our conclusions would not have been found had we included participants who systematically confounded the data (as discussed above) – as Reviewer 1 is at pains to point out, the effect is small. Nonetheless, as demonstrated by our replication of the findings over three experiments and the following reanalyses, our findings are robust enough to withstand inclusion of some additional noise, within reason. To illustrate, we re-analysed the data under two new inclusion thresholds: the first, a d’ <= 0 threshold, equivalent to chance responding (Inclusion 1); the second, a full inclusion in which all participants were analysed (Inclusion 2). For the sake of brevity we list here the results as they relate to our primary manipulation, the effects of question on criterion placement.


Experiment 1, original: old emphasis c > new emphasis c, t(89) = 2.141, p = .035, d = 0.23.


Inclusion 1:
old emphasis c > new emphasis c, t(90) = 2.32, p = .023, d = 0.24.
Inclusion 2:
no difference between old and new emphasis c, t(94) = 1.66, p = .099, d = 0.17


Experiment 2, original:
Main effect of LOP, F(1,28) = 23.66, p = .001, ηp2 = .458, shallow > deep.
Main effect of emphasis, F(1,28) = 6.65, p = .015, ηp2 = .192, old? > new?.
No LOP x emphasis interaction, F(1,28) = 3.13, p = .088, ηp2 = .101.
Shallow LOP sig old > new, t(28) = 3.05, p = .005, d = 0.62.
Deep LOP no difference, t(28) = .70, p = .487, d = 0.13.


Inclusion 1:
No change in excluded participants – results identical.
Inclusion 2:
Main effect of LOP, F(1,30) = 20.84, p = .001, ηp2 = .410, shallow > deep.
Main effect of emphasis, F(1,30) = 8.73, p = .006, ηp2 = .225, old? > new?.
Sig LOP x emphasis interaction, F(1,30) = 4.28, p = .047, ηp2 = .125.
Shallow LOP sig old > new, t(30) = 3.50, p = .001, d = 0.64.
Deep LOP no difference, t(30) = .76, p = .454, d = 0.14.


Experiment 3, original:
No main effect of response, F(1,28) = 3.73, p = .064, ηp2 = .117.
No main effect of question, F < 1.
Significant question x response interaction, F(1,28) = 8.50, p = .007, ηp2 = .233.
“Yes” response format, old > new, t(28) = 2.41, p = .023, d = 0.45.
“No” response format, new > old, t(28) = 2.77, p = .010, d = 0.52.


Inclusion 1:
No change in excluded participants – results identical.
Inclusion 2:
No main effect of response, F <1.
No main effect of question, F < 1.
No question x response interaction, F(1,34) = 3.07, p = .089, ηp2 = .083.
“Yes” response format, old > new, t(34) = 1.33, p = .19, d = 0.23.
“No” response format, new > old, t(34) = 1.84, p = .07, d = 0.32.


To summarise, including participants who responded anywhere above chance had no untoward effects on the results of our inferential statistics and therefore our interpretation cannot be called into question by the results of Inclusion 1. Inclusion 2 on the other hand had much more deleterious effects on the patterns of results reported in Experiments 1 and 3. This is exactly what one would expect given the example we described previously where the inclusion of a participant responding systematically below chance would elevate type II error. In this respect, our reanalysis in response to Reviewer 1’s comments does not weaken our interpretation of the findings.


As a final point, we wish to express our concerns about the nature of criticism made by Reviewer 1 and accepted by you as appropriate within peer-review for Consciousness & Cognition. Reviewer 1 states that we the authors “must accept the consequences of data that might disagree with their hypotheses”. This strongly suggests that we have not done so in the original manuscript and have therefore committed scientific misconduct or entered a grey-area verging on misconduct. We deny this allegation in the strongest possible terms and are confident we have demonstrated that this is absolutely not the approach we have taken through the evidence presented in this response. Indeed if Reviewer 1 wishes to make these allegations, they would do well to provide evidence beyond the thinly veiled remarks in their review. If they wish to do this, we volunteer full access to our data for them to conduct any tests to validate their claims, e.g. those carried out in Simonsohn (2013) in which a number of cases of academic misconduct and fraud are exposed through statistical methods. We, and colleagues we have spoken to about this decision, found it worrying that you chose to make your editorial decision on the strength of this unsubstantiated allegation and believe that at the very least we should have been given the opportunity to respond to the review, as we have done here, via official channels.


We thank you for your time.




Akira O’Connor & Ravi Mill


Craig, K. S., Berman, M. G., Jonides, J. & Lustig, C. (2013). Escaping the recent past: Which stimulus dimensions influence proactive interference? Memory & Cognition, 41, 650-670.
Davidenko, N. & Flusberg, S. J. (2012). Environmental inversion effects in face perception. Cognition, 123(2), 442-447.
Gaspelin, N., Ruthruff, E. & Pashler, H. (2013). Divided attention: An undesirable difficulty in memory retention. Memory & Cognition, 41, 978-988.
Johnson, S. K. & Halpern, A. R. (2012). Semantic priming of familiar songs. Memory & Cognition, 40, 579-593.
Rummel, J., Kuhlmann, B. G. & Touron, D. R. (2013). Performance predictions affect attentional processes of event-based prospective memory. Consciousness and Cognition, 22(3), 729-741.
Shedden, J. M., Milliken, B., Watters, S. & Monteiro, S. (2013). Event-related potentials as brain correlates of item specific proportion congruent effects. Consciousness and Cognition, 22(4), 1442-1455.
Sheridan, H. & Reingold, E. M. (2011). Recognition memory performance as a function of reported subjective awareness. Consciousness and Cognition, 20(4), 1363-1375.
Simonsohn, U. (2013). Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone. Psychological Science, 24(10), 1875-1888.


The Editor’s very reasonable response was to recommend we resubmit the manuscript, which we did. The manuscript was then sent out for review to two new reviewers, and the process began again, this time with a happier ending.

My recommendations for drafting unsolicited responses are:

  • Allow the dust to settle (this is key to Jim Grange’s tips on Dealing with Rejection too). We see injustice everywhere in the first 24 hours following rejection. Give yourself time to calm down and later, revisit the rejection with a more forensic eye. If the reviews or editorial letter warrant a response, they will still warrant it in a few days, by which time you will be better able to pick the points you should focus on.
  • Be polite. (I skate on thin ice in a couple of passages in the letter above, but overall I think I was OK.)
  • Support your counterarguments with evidence. I think our letter did this well. If you need to do some more analyses to achieve this, why not? It will at least reassure you that the reviewer’s points aren’t supported by your data.
  • Don’t expect anything to come of your letter. At the very least, it will have helped you manage some of your frustration.

Earlier in the year I was asked by the University of St Andrews Open Access Team to give an interview to a group from the University of Edinburgh Library. I’m certainly no expert, but I’m more excited about the idea than many researchers here at St Andrews (though some, like Kim McKee, are extremely enthusiastic about it). The video is embedded below, with my 40-second contribution from 8:44 onwards.



My interview actually lasted more than half an hour, though most of what I was trying to communicate wasn’t really consistent with what the interviewers wanted. If you watch the video through, you’ll notice the editorial push towards green rather than gold OA*. I do understand this push, especially from a library’s perspective – we can and should be uploading the vast majority of our work to institutional repositories and making it open access via the green route – but I don’t think it helps the long-term health of academic publishing.

I spent a long time in my interview arguing for gold open access, but not the ‘hybrid’ gold open access offered by traditional publishers like Elsevier. (I find the current implementation of hybrid open access pretty abhorrent. It seems to me to be an utterly transparent way for the traditional publishers to milk the cow at both ends, collecting subscriptions and APCs.) I’m not even too thrilled by the native OA publishers like Frontiers and PLoS, not because they’re bad for academic publishing (I think they are far better for the dissemination of research than the traditional publishers), but because they’re not revolutionary (though see Graham Steel’s comments below)**. Their model is pretty straightforward (or you could call it boring and expensive) – by shifting the collection of money from the back-end to the front-end, they negate the need for institutional subscriptions by charging APCs in the region of thousands of dollars. What I am excited about is the gold open access offered by some open access publishers who have thought about a publishing model for the modern era from the ground up, rather than simply adapting printing press-era models. Publishers like PeerJ and The Winnower have done just this, and these are the sorts of gold OA publishers I hope will change the way we disseminate research.

Sadly for me, I didn’t express myself well enough on that matter to make the final cut of this video. Next time…


* Here’s a brief primer in case you’re not familiar with these terms. Green OA is repository-based free OA – you typically deposit author versions (the documents submitted to the journal, rather than the typeset documents published by the journal) into an institutional database. Anyone who knows to look in the repository for your work will find it there. Gold OA is not free – there are almost always article processing charges (APCs) – but once these are paid, anyone can access the publisher version of your paper directly from the publisher’s website.


** Parentheses added 14/08/2014 following Graham Steel’s comments.

This is a guest post from Radka Jersakova (@RadkaJersakova), who did her undergraduate degree in my lab and is now working on her PhD at Leeds University and the Université de Bourgogne in Dijon. Radka has embraced online experimentation and has run many hundreds of participants through an impressive number of experiments coded in Javascript.


Onscreen Experiments

Recently, Crump, McDonnell and Gureckis (2013) replicated the results of a number of classic cognitive behavioral tasks, such as the Stroop task, using experiments conducted online. They demonstrated that, despite what some people fear, online testing can be as reliable as lab-based testing. Additionally, online testing can be extremely fast and efficient in a way that lab-based testing cannot. I have now completed my 7th online experiment, as well as having helped others create and advertise theirs. This post is a review of things I have learned in the process. It summarises what I did not know, but now wish I had, when planning my first study, and answers some questions others have asked me along the way.



In terms of conducting online experiments, the best method remains programming, as it is by far the most flexible approach. As someone who learned to program on my own from free online courses, I can confirm that this is not as difficult as some people think, and it really is quite fun (for tips on where to get started, this TED blog post is quite useful). At the same time, many people do not know how to code and do not have the time to learn. The good news is that for many experiments, the survey software currently available online is flexible enough to create a large number of experiments, although the potential complexity is naturally limited. My favorite is Qualtrics, as even the free version allows a fair amount of functionality and a reasonable number of trials.



A major advantage of the Internet is that one can reach many different communities. With online testing, one can reach participants who are simply interested in psychology experiments and volunteering in a way that is preferable to testing psychology undergraduates who are coerced into participating for course credit. Once you have an experiment to advertise, the challenge is to find the easiest route by which to reach these people.

There are many websites that focus directly on advertising online experiments. The one I have found the most useful is the Psychological Research on the Net website administered by John H. Krantz. Alternatively, the In-Mind magazine has a page where they post online experiments, which they also share on their Facebook and Twitter accounts. Other websites that host links to online studies are the Social Psychology Network and Online Psychology Research.

The most powerful way for a single individual to reach participants is, quite unsurprisingly, social media. Once a few people start sharing the link, interest can spread very quickly. The simplest thing to do is to post your study on your Facebook page or Twitter account. Something I haven’t tried yet, but that might be worth exploring, is finding pages on Facebook or hashtags on Twitter that relate to the topic of the experiment, or to psychology in general, and posting the link to the experiment there. One of the biggest successes for me, though, remains reddit. Reddit has a very strong community, and people spend time there because they are actively searching for new information and interesting projects. There are a number of subreddits specific to psychology, which are again visited by people interested in those particular topics. To give a few examples: psychology; cognitive science; psych science; music and cognition; mathematical psychology – and the list goes on! There is even a subreddit specifically for finding participants to complete surveys and experiments, simply called Sample Size.

The last resource I have tried a number of times is more general advertising sites such as Craigslist. There is always a ‘volunteers’ section, which is visited by people looking to volunteer for a project of some sort. In that sense it can be a good place to reach participants, and the sample will be fairly diverse. For me this has never been as successful as using social media, but a few times it has worked fairly well.



The most commonly heard argument against online testing is the lack of control. Really, what this means is that data collected online might include more noise than data from traditional lab-based experiments, making it easier to miss existing effects. As already mentioned, Crump et al. (2013) replicated a number of classic tasks online, suggesting that this might not be as big a worry as it at first seems. The range of tasks they chose demonstrates nicely that the same results can be obtained in the lab and on the Internet. Nevertheless, there are a number of ways one can track participants’ behavior to determine whether sufficient attention was given to the experiment. The simplest is to measure the time participants took to complete the study. If you are using existing survey software, this information is usually provided automatically. If you are programming the study yourself, requesting a timestamp for when the study begins and for when it ends is an easy way to track the same kind of information. If participants are abnormally slow (or fast) in completing a task, then one might have sufficient reason to exclude their data.
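As a rough illustration of that screen, here is a minimal Python sketch applied at the analysis stage (the record format and the cut-offs are my own assumptions for illustration, not taken from any particular survey package):

    from datetime import datetime

    def flag_durations(records, min_minutes=5.0, max_minutes=60.0):
        # records: (participant_id, start_iso, end_iso) tuples.
        # Returns participants whose completion time falls outside
        # a plausible window and so might warrant exclusion.
        flagged = []
        for pid, start, end in records:
            minutes = (datetime.fromisoformat(end)
                       - datetime.fromisoformat(start)).total_seconds() / 60
            if not min_minutes <= minutes <= max_minutes:
                flagged.append((pid, round(minutes, 1)))
        return flagged

    # A participant who 'finished' a 20-minute study in 2.5 minutes:
    print(flag_durations([("p01", "2014-08-01T10:00:00", "2014-08-01T10:02:30")]))
    # -> [('p01', 2.5)]

Whatever thresholds you choose, it is best to decide on them before looking at the data rather than after.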

One of the biggest problems I have encountered is a participant completing one part of the task (e.g. a recognition test) but not completing another part of the same experiment as faithfully (e.g. free-report descriptions of particular memory experiences from their daily life). While, for ethical reasons, we were not allowed to force participants to respond to any question, I have found that simply asking whether they are sure they want to proceed, if they haven’t filled out all the questions on a page, increases response rates dramatically. As such, it can be useful to provide such prompts along the way to make sure participants answer all questions without forcing them to do so.

Crump et al. (2013) also point out, from their experiences of online testing, that it can be useful to include some questions about the study instructions. One could simply ask participants to describe briefly what it is they are expected to do in the experiment. This way, one has data against which to check whether participants understood the instructions and completed the task as anticipated. It will probably also help ensure that participants pay close attention to the instructions. This is particularly useful if the task is fairly complex.



A big disadvantage of online testing can be dropout rates. This isn’t something I have tested in any formal way, but there does seem to be at least some relationship between the length of the study and dropout rates. This means that online testing is best suited to studies that take up to 15 or 20 minutes to complete, which might be something to consider. It is also certain that more engaging tasks will have lower dropout rates. A good incentive I have found is to give participants a breakdown of their performance at the end of the experiment. I have had many participants confirm that they really enjoyed the feedback on how they performed on the memory task. Such feedback is a simple but efficient way to increase participation and decrease dropout rates.

The second worry is participants dropping out in the middle of an experiment and then restarting it. It is not likely to be common, but it could happen. One way to deal with this is to ask participants to provide, at the beginning of the study, a code that should be unique to each participant, anonymous, and yet always constant. An example is asking participants to create a code consisting of their day and month of birth, ending with their mother’s maiden initials. This is hardly a novel idea; I have participated in experiments which asked for such information to create participant IDs that allowed responses to be linked across a number of experimental sessions. The idea is to find some combination of numbers and letters that should never (or rarely) be the same for two participants, but that remains the same for any one participant whenever they are asked. At the data-analysis stage, one can then simply exclude files that contain repetitions of the same code.
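As a sketch of that data-analysis step, assuming one results file per sitting with the self-generated code stored inside it (the field name 'participant_code' and the one-row-per-file layout are hypothetical):

    import csv

    def drop_repeat_sittings(filenames):
        # Keep the first file per self-generated participant code and
        # drop files whose code repeats (i.e. probable restarts).
        kept, seen = [], set()
        for name in filenames:
            with open(name, newline="") as f:
                code = next(csv.DictReader(f))["participant_code"]
            if code not in seen:
                seen.add(code)
                kept.append(name)
        return kept

Whether you keep the first sitting or discard every file carrying a repeated code is a judgment call; the point is that repeats are trivial to detect once the codes are in the data.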

Once the study is up and running, other than finding suitable places to advertise it, one can leave it and focus on other things until the data have been collected. It is possible to reach large samples quickly, and these samples are often more diverse than your classic psychology undergraduate population. There is a certain degree of luck involved, but I have in the past managed to collect data from well over 100 participants in a single day. That is not to say that all studies are suited to online testing, but it is definitely a resource well worth exploring.

Last weekend I had the honour of being Best Man at a university friend’s wedding. It was a beautiful day spent in the sunshine of St Albans and then in the low-ceilinged, close comfort of the oldest pub in England, Ye Olde Fighting Cocks.

Ye Olde Fighting Cocks, St Albans

As Best Man I had a certain number of duties to carry out, with the speech amongst the most highly anticipated by those attending the celebrations. For those unfamiliar with what is expected here, the Best Man’s speech is traditionally the last of the speeches and the point in proceedings when thanks and sentimentality give way to humour and a raucousness that sets the tone for the night ahead. For weeks beforehand, people had been asking how I was getting on with it, noticing my terse response (“getting on just fine thanks”), and reassuring me that there was plenty of material to draw on. The expectation that it be funny was pretty inescapable.

I lecture statistics to 150 students on a regular basis. Over the past few years I have managed to overcome my hatred of public speaking and relax into these one-sided conversations on t-tests, ANOVAs and regression. One of the luxuries of speaking in front of large audiences as part of my job is that I know what it feels like and I know how to deal with the mechanics of getting my words out of my mouth in quite an intimidating situation. There are even moments in lectures now when I notice that I’m in a flow state, enjoying the fluidity of speaking about something I know well. For this reason, the prospect of getting up in front of over 100 boozy partiers and speaking about a good friend was not what I found intimidating. The expectation that I make them laugh, now that was scary.

I know that to speak well, I need to prepare (I have written about the routine I go through for important talks here). This is exactly what I did for the Best Man’s speech. The result was one of the most exhilarating experiences of performing in front of an audience I have ever had – the audience enjoyed themselves and I had a tremendous time. I didn’t have to buy a drink for the rest of the evening! Here is what I did to get into that position.


Be Yourself

1. I reassured myself with the knowledge that, in standing up in front of lecture theatres full of students, I am paid to do something very similar to this. The major difference between lecturing and speech-giving is what the audience expects of the content. I knew that I was expected to offer toasts to thank various people, but whom exactly? To help with this, I googled the running order for content within a Best Man’s speech. By chance I found The Art of Manliness’ 10 Steps to the Best Best Man Speech, from which I got some suggested running-order information but, much more importantly, was reassured by the insistence that I ought to be myself. I have never wanted to stand up in front of people to make them laugh, but I do nonetheless enjoy making small groups of friends laugh when telling stories in the pub. Bearing this in mind allowed me to feel comfortable in not trying to ape my favourite comics, but simply allowing myself to find my inner story-teller and let him speak to a larger group of friends. This was the me I tried to be when writing the speech in advance of the wedding.


Write and Practise the Speech in Advance

1. Write the speech beforehand. Write it out even if you’re not going to read it. I always write out important talks so that I can practise my phrasing, and I edit them to whatever worked best after each run-through. Thus, I start with a script in rather broken spoken prose, which is edited into something that sounds natural by the time I’m done with it. Over the course of practising, I learn what I want to convey in each sentence so that I can say it in any number of ways, off-script, by the time I get to delivering it to an audience. I know that this is a matter of personal preference, but this is what works for me and I could never deliver any speech or talk without practising it a few times first.

2. When lecturing or giving talks, the transitions between slides are often tricky points. My Best Man’s speech had similarly tricky transition points where I moved from toasts to anecdotes or from one story to the next. Scripting these transitions as part of scripting the entire speech gave me an idea of how to move on as seamlessly as possible.


Work on Timing

1. Don’t outstay your welcome. I can’t over-run my lectures because students will start leaving. They have other places to be. Wedding guests probably won’t leave, but they won’t applaud you for going on and on either. Silky, a stand-up comedian attending the wedding, spoke to me as we were sitting down to dinner. He gave me the following advice: “If it’s going badly, get off quick. If it’s going well, get off quick.” In other words, keep it short. Before I started writing the speech I was aiming for about 10 minutes. Run-throughs lasted about 13 minutes (an acceptable timeframe according to the groom, whom I had asked about this beforehand). If you’re running to a tight schedule, practising the speech will give you an idea of whether or not you need to remove content.

2. Comic timing is a little harder to work on. This is something I have rarely had to worry about in lectures (I tend to play them straight) and I’m not sure how I would go about practising comic timing other than by doing this sort of speaking more. Something that threw me off a few times was that people started tittering before I had delivered the punchlines. The audience expect you to be funny and they want you to feel comfortable, so they will laugh when you give them an excuse to. This made me fluff a line or two. It is something I will be more mindful of should I ever have to do this again.


Logistics and Planning

1. Know your AV equipment. I delivered the speech into a hand-held microphone. I had seen the first speaker struggle a little with microphone distance so I was determined to be careful of making the same mistake and, in the end, delivered my speech with the mic resting on my chin just below my lower lip. It probably looked weird but I managed to get through the whole speech without any microphone dropout. (In future I will have a go on the amplification equipment beforehand so I can work out something a little more elegant.)

2. Coordinate toasts and readings with other speakers. When you are delivering a lecture course, you want to avoid both missing important material (dangerous for exams) and duplication (boring). The same is true of wedding speeches. The night before, the groom and I discussed who was giving which toasts so that, by the time I had finished, everyone who needed to be thanked would have been thanked. Had we not had this conversation, the groomsmen would have gone without a toast – an omission I’m glad we avoided. On a related note, I also opted to read a short passage to the bride and groom to close my speech. It was only at the wedding service, when one of the passages I had been considering for my own speech was read, that I realised I had got lucky with the choice I eventually made. If you are doing something unconventional like reading a passage during your speech, have a quiet word in the groom’s ear well in advance to ask what readings they have planned for the service.


Being Funny

The three points below target a specific aim, being funny, which isn’t a priority for me when I lecture. I don’t make much reference to lecturing below because these points are specific to my experience of the wedding speech situation.

1. Despite the pressure, being funny is not the be-all and end-all. Having typed “Best Man’s Speech” into Google, I was surprised to find the first auto-complete suggestion to be “Best Man’s Speech one-liners”. I don’t use the slides my course textbook publishers give me when I lecture, and I would be similarly wary about using other people’s jokes to portray my relationship with the groom. What I wanted to do, above all else, was to paint a picture of the groom as I know him. Having said this, such is the pressure to be funny that I’m not surprised people google jokes for use in wedding speeches, or end up telling embarrassing stories from the stag do. Again, the Art of Manliness article delivers reassurance:

What gets people in trouble is attempting to be funny by sharing some embarrassing story or cracking some lame joke about a ball and chain. It usually comes out horribly and no one laughs. It’s okay to share a humorous anecdote, but not one that gets laughs at the expense of your friend and his new wife and embarrasses them and their guests.

This advice set the tone for the stories I wanted to tell. I wanted those in the audience who knew the groom to see him in the jokes I was telling and for this recognition to be funny in and of itself. I also wanted to capture a range of experiences I had with the groom, from those that were funny to those that were sad. The sadder moments would act as points from which to rebound back to laughter, but would also help the audience understand what a lovely thing it was for the groom to have met the bride at the time he did.

2. Avoid in-jokes. My university friends would invariably ask if I was including their own favourite university story about the groom. Many of these stories were very funny, but only in the context of the many in-jokes we shared as a group of close-knit friends. I largely avoided these references because I wanted to appeal to as many in the audience as possible. Those I did include worked equally well as terrible puns or cultural references, which the university crowd found funnier because of their shared history of appreciating them.

3. Enjoy the format. I was in the privileged position of speaking to an audience who expected me to make them laugh. When writing the speech I experimented with jokes, tweaking wording, timing and structure. I eventually settled on a narrative that called back to humorous stories about the groom to illustrate how normal the bride is in comparison. I have seen comedians use, and even explain, this device to great comedic effect, and incorporating it into the structure of my own speech gave me a sense that I had actually written a funny speech. This is undoubtedly an aspect of the Best Man’s speech that I would not have thought to focus on had I been preoccupied by the prospect of public speaking. My experience of lecturing allowed me to build on its commonalities with giving a Best Man’s speech, and to embrace and ultimately enjoy the format tremendously.