Come to St Andrews and figure out why déjà vu experiences decrease with age, with me and Ines Jentzsch.

FindAPhD Advertisement (full text below)

Please email or tweet (@akiraoc) me if you’d like to speak more about this project. If you’d like to speak to anyone about doing a PhD with me, please get in touch with Mags Pitt (3rd yr PhD), Bjorn Persson (3rd yr PhD) or Ravi Mill (completed PhD) via the People section of the blog.

Ravi Mill presenting simultaneous EEG-fMRI data at CNS 2014

Project Description

BBSRC Theme: World class underpinning Bioscience

Adaptive cognition involves both the completion of a set of mental operations and the awareness that these operations have been completed so that the next stage of cognition can be engaged. During successful memory decision-making these two steps, memory retrieval and retrieval awareness, go hand in hand. However, they can occasionally fragment, leading to a set of experiences termed introspective memory phenomena (IMPs; e.g. déjà vu and jamais vu). During déjà vu positive retrieval awareness arises in the absence of true retrieval, yielding the overall sensation of inappropriate familiarity (O’Connor & Moulin, 2010). Jamais vu is the opposite–negative retrieval awareness in the presence of true retrieval. IMPs signal conflict within the cognitive system, and thus may play a crucial role in error correction (we do not act on IMPs in the way that we do act on false memories). However, beyond some curious demographic associations (they occur more in those who are well-travelled and well-educated), IMP occurrence is not known to be associated with any existing cognitive or psychological traits.

IMPs are not experienced uniformly across the population but peak in those in their mid-20s, before declining with age thereafter. They are also thought to be driven by dopaminergic over-activity such that some pharmacological and recreational drugs (e.g. dopaminergic flu medications) have been reported as causing persistent déjà vu (Taiminen & Jääskeläinen, 2001). Interestingly, these characteristics mirror what is known about neurophysiological markers of inhibitory control and response monitoring more generally (e.g. Strozyk & Jentzsch, 2012), which show the same lifespan trajectory with an age-related decrease in the dopaminergic functions mediated by the frontal cortex. These links suggest that IMP occurrence may be underpinned by basic neurocognitive characteristics integral to healthy cognition. Thus, the importance of IMPs may not lie in the fragmentation of the memory decision-making system, but in the capacity for our response monitoring systems to detect it and stop us making decisions based on faulty information.

We propose a systematic programme of research to establish the role of error-monitoring in the generation of IMPs. Using i) retrospective questioning to verify the recent occurrence of IMPs and ii) established procedures for their laboratory generation, we will explore individual differences in IMP experience and neurophysiological markers of response monitoring. These experiments will be a) developed in young adults and extended to b) primary school children (age 8-11; the age at which IMPs are first reported by children) and c) older adults (age 55 and older). We will also conduct opportunistic case-studies on d) patients who present themselves to Dr O’Connor over the course of the PhD (UK-based patients typically get in touch at a rate of 1-2/year). This systematic programme will allow us to establish any potential links between basic neurocognitive characteristics and the tendency to experience dissociative memory sensations which are not known to have any other psychological correlates.

This project will benefit from the joint multi-disciplinary expertise of Dr O’Connor, an internationally recognized expert in the area of metacognition and introspective memory phenomena, and Dr Jentzsch, a biophysicist and electrophysiologist by training, who specializes in studying the neural underpinnings of dopaminergic functions such as action and conflict control. Together, we will provide the prospective student with conceptual knowledge of metacognitive models of memory and of changes to these functions with healthy ageing, integrating behavioural methods and physiological measures of brain function in humans. The student will learn about experimental design, programming (Matlab), data collection and behavioural analysis techniques such as signal detection theory. In addition, the student will learn how to design, conduct and analyse electrophysiological experiments using EEG. Acquisition of generic skills such as team-working, time-management and communication skills, amongst many others, will also be an important part of the student’s training.

Funding Notes

This project is eligible for the EASTBIO Doctoral Training Partnership: View Website

This opportunity is only open to UK nationals (or EU students who have been resident in the UK for 3+ years immediately prior to the programme start date) due to restrictions imposed by the funding body.

Apply by 5.00pm on the 14th December 2015 following the instructions on how to apply at: View Website

Informal enquiries to the primary supervisor are very strongly encouraged.


O’Connor, A.R. & Moulin, C.J.A. (2010). Recognition without identification, erroneous familiarity, and déjà vu. Current Psychiatry Reports, 12(3), 165-173.

Strozyk, J.V. & Jentzsch, I. (2012). Weaker error signals do not reduce the effectiveness of post-error adjustments: Comparing error processing in young and middle-aged adults. Brain Research, 460, 41-49.

Taiminen, T. & Jääskeläinen, S.K. (2001). Intense and recurrent déjà vu experiences related to amantadine and phenylpropanolamine in a healthy male. Journal of Clinical Neuroscience, 8, 460-462.

The Journal of Cognitive Neuroscience has just invoiced me $985 for a paper they agreed to publish earlier this year. This wasn’t unexpected – not only did we sign away our copyright, allowing MIT Press to make money from our work, but we did so knowing that we would pay a hefty sum to allow them to do this. It still came as a bit of a shock though.

Paying the invoice will curtail some of my research activities next year, like going to conferences to present data. I put this to the journal, asking if they’d hear my case for a reduction or a waiver. Here’s their response:


JOCN does not provide fee waivers. Page costs are stated on the submission guidelines page, as well as on the last page of the online manuscript submission so that all authors are aware of the financial obligation required if your paper is accepted for publication. These fees pay for the website, submission software, and other costs associated with running the journal. If you are unable to pay the page fees, please let us know so that we can remove your manuscript from the publication schedule.

Editorial Staff
Journal of Cognitive Neuroscience


What did I expect though? We willingly submitted to this journal knowing that they would charge us $60 per page. And the Journal of Cognitive Neuroscience certainly isn’t alone in doing this. Most cognitive neuroscience journals are pretty good at making money out of authors (see table below – I haven’t included OA megajournals in the table). Imagers tend to have money, and junior imagers, like all junior academics, still need to publish in journals that have a reputation.

For what it’s worth, Elsevier journals keep their noses pretty clean. Cerebral Cortex’s publishing house Oxford Journals though… pretty much every stage of that process is monetised. Just. Wow.


Journal | Publisher | IF (2013) | Our paper ($) | Colour figures ($) | Open Access supplement ($)
Journal of Neuroscience | Society for Neuroscience | 6.74 | 1850 | – | 2820
Cerebral Cortex | Oxford Journals | 8.37 | 3387 | 720 | 3400
Neuroimage / Cortex / Neuropsychologia | Elsevier | 6.13 / 6.04 / 3.45 | 0 | – | 2200 / 2200 / 1800
Journal of Cognitive Neuroscience | MIT Press | 4.69 | 985 | – | (unknown)
Cognitive, Affective and Behavioral Neuroscience | Springer | 3.21 | 1100 | 1100* | 3000
Cognitive Neuroscience | Taylor & Francis | 2.38 | 1422 | 474 | 2950
Black and white figures are without cost in all the listed journals. IF is Impact Factor. The paper for which the 'Our paper' costs are calculated had 3 authors, 16 pages, 3 colour figures, and no Open Access Supplement.
* There is a one-off charge for all colour figures, regardless of number.

This is a guest post from Radka Jersakova (@RadkaJersakova), who did her undergraduate degree in my lab and is now working on her PhD at the University of Leeds. Radka has previously written on this blog about how to conduct online studies. Here she discusses the merits of travelling during your doctoral training.

View from WBurg

I am writing this as I near the end of the second lab visit abroad of my PhD. While I know many students who, like me, have managed to acquire ‘visiting scholar’ status on extended lab visits, the number is far smaller than I believe it should be. This is despite the fact that visiting other labs, collaborating with other researchers, getting external input into the work you are doing and knowing what others in your field are doing right now is invaluable. It doesn’t matter if it is a visit of a few weeks or a few months; either way it is worth it, and a lot easier to make happen than you would expect. Researchers are mostly very open to hosting, and there is a lot of support and funding for such visits. It is a great learning experience and it makes academia seem smaller and friendlier. There are also practical benefits, such as having a travel grant on your CV, being able to show that you are capable of forging international collaborations, and increasing your chances of knowing someone with post-doc funding.

The purpose of this post is to address the question of how one goes about organizing a visit of any length to another lab (ideally abroad, although it doesn’t have to be!). The motivation for writing anything on this topic at all, however, is to encourage PhD students – especially at the beginning of their studies – to consider how they can make the most of the PhD, and travelling is definitely one way to do that.



The obvious first step is deciding what you would like to get out of a visit to another lab. This can be as generic as ‘networking’ or as specific as learning a particular analysis method. Ideally a visit should involve a collaboration of some kind, although whether the visit should be used to plan a project or to actually carry out the data collection and analysis is open to discussion. This will naturally determine how long the visit is. Ideally, you’d look for funding to finance the type of visit you have in mind, but sometimes the funding sources available to you might shape – to an extent – how long a visit you undertake. Some of the funding opportunities outlined below are aimed at visits of 6 months or longer; others at 2-3 months; and a few start at a couple of weeks. As such, it is important to know from the start what options are available to you.



Most commonly, students make use of their supervisor’s network. This is by far the easiest way to organize a visit as it builds on collaborations that already exist. It is also the best way to identify a researcher with relevant experience to help you develop new ideas in the context of the topic of your thesis. As such, the first step is talking to your supervisor; they might already have someone in mind and can initiate the contact.

It is also possible there already is a researcher that you want to work with for an extended period. If you are going to a conference and they are going as well, try to talk to them there. You can contact them before the conference to suggest you meet to discuss your work with them. Having met them in person makes it much easier to talk to them about visiting their lab. If there isn’t an opportunity to meet in person, it is also fine to email the researcher you are interested in working with and ask them whether this could be arranged.



There is more funding available for research visits than there might seem to be at first. Below is a list of some useful starting places for researching funding options. Everyone’s background is unique and the opportunities will vary accordingly.

(i) Funding organizations: It is very likely that the organization or research council funding your PhD also has funding for travel visits. What is more, they are probably very keen to fund such a visit. The Economic and Social Research Council in the UK is a great example of this, as they place great emphasis on international research links through their Overseas Institutional Visits scheme.

(ii) Universities: There is a chance that the institution you want to visit has a ‘visiting scholars’ program that you can apply for to fund the visit. Similarly, your own institution might have ‘travel abroad’ schemes with funding for going abroad to an institution of your choice. Further, there are partnership networks between universities that also offer funding. An example is the Worldwide Universities Network which supports mobility for students and researchers between its partner institutions. It is best to ask your university or someone in the department whether you belong to one.

(iii) National grants: Some countries have grants for their nationals to go on study visits – a great example is the German Academic Exchange Service, which also offers a lot of support for international students to come to Germany. Similarly, France funds visits of 6-10 months to any of its institutions through the Eiffel Excellence Scholarship. There are also bilateral agreements between countries to fund exchanges such as the Fulbright Commission which focuses on mobility between the US and (according to their website) a list of more than 155 countries.

(iv) Societies: Lastly, there are travel grants that are subject-specific. The Experimental Psychology Society, the British Psychological Society and the European Association of Social Psychology all offer study visit grants. However, sometimes there are membership conditions attached to these.


The key thing is to give yourself enough time to plan a visit. It is important to have an idea of what funding is available to you, when the funding deadlines are, what the application process is like, what documents you need, and what the interval is between submission and final decision.


Good luck!

We had an fMRI paper accepted to the Journal of Cognitive Neuroscience earlier this week. Having got the science out the door, I was able to turn my attention to the fun stuff – a cover image. The cover image for my first fMRI publication was selected by the Journal of Neuroscience and I wanted to go with something similar.

In the past 6 months or so, @alby has tweeted some of the images he generated using @lowpolybot, a twitter bot that returns low-polygon renderings of images tweeted to it. I tweeted a figure from the accepted paper to @lowpolybot and got this back:

@lowpolybot image from the tweet:

There are a range of operations @lowpolybot can perform on your images (detailed on the @lowpolybot tumblr), but if you give no instructions you will get a random combination of operations applied to your image. This was what I had done. I was happy with the picture so, having checked with @lowpolybot’s creator @quasimondo that he was happy for me to do this, I submitted it to the journal.

Sadly though, there’s no chance this image will be used as a cover image. I received an email the next day from a journal administrator informing me that they have stopped printing cover images. Ah well.

I, like most humans, am bad at understanding randomness and good at spotting patterns that don’t necessarily exist. I also frequently have thoughts like: “That’s the third time this paper has been rejected. It must be bad.” These things are related.

When I submit my work, all of the variables at play, including the quality of the thing being judged, combine to give me a probability that a positive outcome will occur, e.g. 0.4 – that is, 2 times out of 5, a good thing will happen. BUT probabilities produce lumpy strings of outcomes. That is, good and bad outcomes will appear to us pattern-spotting humans to be clustered, rather than what we would describe as “random”, which we tend to think of as evenly spaced (see the first link above).

To illustrate, I did something very straightforward in Excel to very crudely simulate trying to publish 8 papers.
Column A: =RAND() << (pseudo)randomly assigns a number between 0 and 1.
Column B: =IF(A1>0.4, 0, 1), filled down << if the number in column A for that row exceeds .4, the cell equals 0; otherwise it equals 1.
Thus, column B gives me a list of successes (1s) and failures (0s) with an overall success rate of ~.4. It took me four refreshes before I got the following:

Note that the success rate, despite being set to .4, was .26 over this small number of observations. Also note that I embellished the output with a hypothetical stream of consciousness. I really wish I had the detachment of column C, but I don’t. I take rejections to heart and internalise bad outcomes like they are Greggs’ Belgian buns.
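For anyone who would rather script this than refresh a spreadsheet, here is a minimal Python sketch of the same two columns – the 0.4 acceptance probability and the 8 simulated papers match the example above; everything else is just illustration.

```python
import random

P_ACCEPT = 0.4      # probability of a positive outcome, as in the example above
N_PAPERS = 8        # eight papers, as in the crude Excel simulation

# Column A equivalent: a (pseudo)random number between 0 and 1 per paper.
# Column B equivalent: 1 (success) if that number is below .4, otherwise 0 (failure).
outcomes = [1 if random.random() < P_ACCEPT else 0 for _ in range(N_PAPERS)]

print("Outcomes (1 = accepted, 0 = rejected):", outcomes)
print("Observed success rate:", sum(outcomes) / N_PAPERS)
```

Run it a few times and you will see the same lumpiness: the observed rate bounces around 0.4, and rejections cluster even though every outcome is independent.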


Although the rejections look clustered, they are all independently determined. I have almost certainly had strings of rejections like those shown above. The only thing that has made them bearable is that I have switched papers, moving on to a new project after ~3 rejections, at the same time giving up on the thrice-rejected paper I assume to be a total failure. As a result, I am almost certainly sitting on good data that has been tainted by bad luck.

Stick with it. It evens out in the end.

Last week Ravi Mill and I had a paper accepted to Consciousness and Cognition. It was accepted after 49 weeks with the journal in four rounds of review. The editorial decisions on the paper were: Reject, Revise and Resubmit, Accept with Minor Revisions and finally, Accept. What makes this decision history somewhat remarkable is that it was initially rejected from the journal it was eventually published in.

This blog post won’t give as much information on that initial rejection as I wanted it to – I sought permission from the journal to publish all correspondence from the reviewer and the editor, which was denied. Below you will find only my response to the editorial decision. As context, the manuscript was rejected on the basis of one review, in which it was suggested that we had adopted some unconventional and even nefarious practices in gathering and analysing our data. These suggestions didn’t sit well with me, so I sent the following email to the Editor via the Elsevier Editorial System.


Dear Prof XXXX,


Thank you for your recent consideration of our manuscript, ‘”Old?” or “New?”: The test question provokes a goal-directed bias in memory decision-making’ (Ms. No. XXXXX-XX-XXX), for publication in Consciousness & Cognition. We were, of course, disappointed that you chose to reject the manuscript.


Having read the justification given for rejection, we respectfully wish to respond to the decision letter. Whilst we do not believe that this response will prompt reconsideration of the manuscript (your decision letter was clear) we believe it is important to respond for two reasons. First, to reassure you that we have not engaged in any form of data manipulation, and second, to state our concerns that the editorial team at Consciousness & Cognition appear to view it as acceptable that reviewers base their recommendations on poorly substantiated inferences they have made about the motivations of authors to engage in scientific misconduct.


As you highlighted in your decision letter, Reviewer 1 raised “substantial concerns about the manner in which [we] carried out the statistical analysis of [our] data”. The primary concern centred on our decision to exclude participants whose d’ was below a threshold of 0.1. In this reply we hope to demonstrate to you that use of such an exclusion criterion is not only standard practice, but it is indeed a desirable analysis step which should give readers more, not less, confidence in the analyses and their interpretation. We will then take the opportunity to demonstrate, if only to you, that our data behave exactly as one would expect them to under varying relaxations of the enforced exclusion criterion.


Recognition memory studies often require that participants exceed a performance (sensitivity; d’) threshold before they are included in the study. This is typically carried out when the studies themselves treat a non-sensitivity parameter as their primary dependent variable (as in our rejected paper), as a means of excluding participants that were unmotivated or disengaged from the task. Below are listed a small selection of studies published in the past 2 years which have used sensitivity-based exclusion criteria, along with the number of participants excluded and the thresholds used:


Craig, Berman, Jonides & Lustig (2013) – Memory & Cognition
Word recognition
Expt2 – 10/54, Expt3 – 5/54
80% accuracy


Davidenko, N. & Flusberg, S.J. (2012) – Cognition
Face recognition
Expt1a – 1/56, Expt1b –1/26, Expt2b –3/26
chance (50%) accuracy


Gaspelin, Ruthruff, & Pashler, (2013). Memory & Cognition
Word recognition
Expt1 – 3/46, Expt2 – 1/49, Expt3 – 3/54
70% accuracy


Johnson & Halpern (2012) – Memory & Cognition
Song recognition
Expt1 – 1/20, Expt2 – 3/25
70% accuracy


Rummel, Kuhlmann, & Touron (2013) – Consciousness & Cognition
Word classification (prospective memory task)
Expt1 – 6/145
“prospective memory failure”


Sheridan & Reingold (2011) – Consciousness & Cognition
Word recognition
Expt1 – 5/64
“difficulty following instructions”


Shedden, Milliken, Watters & Monteiro (2013) – Consciousness & Cognition
Letter recognition
Expt4 – 5/28
85% accuracy


You will note that there is tremendous variation in the thresholds used, but that it is certainly not “unusual” as claimed by Reviewer 1, not even for papers published in Consciousness and Cognition. Of course, it would not be good scientific practice to accept the status quo uncritically, and we must therefore explain why the employed sensitivity-based exclusion criterion was appropriate for our study. The reasoning is that we were investigating an effect associated with higher order modulation of memory processes. If we had included participants with d’s below 0.1 (an overall accuracy rate of approximately 52% where chance responding is 50%) then it is reasonable to assume that these participants were not making memory decisions based on memory processes, but were at best contributing noise to the data (e.g. via random responding), and at worst systematically confounding the data. To demonstrate the latter point, one excluded participant from Experiment 1 had a d’ of -3.7 (overall accuracy rate of 13%), which was most likely due to responding using the opposite keys to those which they were instructed to use, substituting “new” responses for “old” responses. As we were investigating the effects of the test question on the proportion of “new” and “old” responses, it is quite conceivable that if they had displayed the same biases as our included participants did overall, they would have reduced our ability to detect a real effect. Including participants who did not meet our inclusion criteria, instead of helping us to find effects that hold across all participants, would have systematically damaged the integrity of our findings, leading to reduced estimates of effect size caused by ignoring the potential for influence of confounding variables.
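Purely as an illustration, a few lines of Python show how a sensitivity-based exclusion of this kind can be computed, assuming the standard signal detection definition of d' (z-transformed hit rate minus z-transformed false-alarm rate) and entirely hypothetical response counts:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate); a small correction keeps
    rates away from 0 and 1 (one common convention, not the only one)."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts per participant: (hits, misses, false alarms, correct rejections)
participants = {
    "p01": (38, 12, 10, 40),  # clearly above chance
    "p02": (26, 24, 25, 25),  # hovering around chance
    "p03": (7, 43, 44, 6),    # far below chance, e.g. reversed response keys
}

THRESHOLD = 0.1  # the sensitivity criterion discussed above

for pid, counts in participants.items():
    dp = d_prime(*counts)
    decision = "include" if dp > THRESHOLD else "exclude"
    print(f"{pid}: d' = {dp:.2f} -> {decision}")
```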


If we had been given the opportunity to respond to Reviewer 1’s critique via the normal channels, we would have also corrected Reviewer 1’s inaccurate reading of our exclusion rates. As we stated very clearly in the manuscript, our sensitivity-based exclusion rates were 5%, 6% and 17%, not 25%, 6% and 17%. Reviewer 1 has conflated Experiment 1’s exclusions based on native language with exclusion based on sensitivity. As an aside, we justify exclusion based on language once again as a standard exclusion criterion in word memory experiments to ensure equivalent levels of word comprehension across participants. This is of particular importance when conducting online experiments which allow anyone across the world to participate. In coding the experiment, we thought it far more effective to allow all visitors to participate after first indicating their first language, with a view to excluding non-native speakers’ data from the study after they had taken part. We wanted all participants to have the opportunity to take part in the study (and receive feedback on their memory performance – a primary motivator for participation according to anecdotal accounts gleaned from social media) and to minimise any misreporting of first language which would add noise to the data without recourse for its removal.


We would next have responded to Reviewer 1’s claims that our conclusions are not generalisable based on the subset of analysed data by stating that Reviewer 1 is indeed partially correct. Our conclusions would not have been reached had we included participants who systematically confounded the data (as discussed above) – as Reviewer 1 is at pains to point out, the effect is small. Nonetheless, as demonstrated by our replication of the findings over three experiments and the following reanalyses, our findings are robust enough to withstand the inclusion of some additional noise, within reason. To illustrate, we re-analysed the data under two new inclusion thresholds: the first, a d’ <= 0 threshold, equivalent to chance responding (Inclusion 1), and the second, a full inclusion in which all participants were analysed (Inclusion 2). For the sake of brevity we list here the results as they relate to our primary manipulation, the effects of question on criterion placement.


Experiment 1:
Original:
old emphasis c > new emphasis c, t(89) = 2.141, p = .035, d = 0.23.


Inclusion 1:
old emphasis c > new emphasis c, t(90) = 2.32, p = .023, d = 0.24.
Inclusion 2:
no difference between old and new emphasis c, t(94) = 1.66, p = .099, d = 0.17


Experiment 2:
Original:
Main effect of LOP, F(1,28) = 23.66, p = .001, ηp2 = .458, shallow > deep.
Main effect of emphasis, F(1,28) = 6.65, p = .015, ηp2 = .192, old? > new?.
No LOP x emphasis interaction, F(1,28) = 3.13, p = .088, ηp2 = .101.
Shallow LOP sig old > new, t(28) = 3.05, p = .005, d = 0.62.
Deep LOP no difference, t(28) = .70, p = .487, d = 0.13.


Inclusion 1:
No change in excluded participants – results identical.
Inclusion 2:
Main effect of LOP, F(1,30) = 20.84, p = .001, ηp2 = .410, shallow > deep.
Main effect of emphasis, F(1,30) = 8.73, p = .006, ηp2 = .225, old? > new?.
Sig LOP x emphasis interaction, F(1,30) = 4.28, p = .047, ηp2 = .125.
Shallow LOP sig old > new, t(30) = 3.50, p = .001, d = 0.64.
Deep LOP no difference, t(30) = .76, p = .454, d = 0.14.


Experiment 3:
Original:
No main effect of response, F(1,28) = 3.73, p = .064, ηp2 = .117.
No main effect of question, F < 1.
Significant question x response interaction, F(1,28) = 8.50, p = .007, ηp2 = .233.
“Yes” response format, old > new, t(28) = 2.41, p = .023, d = 0.45.
“No” response format, new > old, t(28) = 2.77, p = .010, d = 0.52.


Inclusion 1:
No change in excluded participants – results identical.
Inclusion 2:
No main effect of response, F <1.
No main effect of question, F < 1.
No question x response interaction, F(1,34) = 3.07, p = .089, ηp2 = .083.
“Yes” response format, old > new, t(34) = 1.33, p = .19, d = 0.23.
“No” response format, new > old, t(34) = 1.84, p = .07, d = 0.32.


To summarise, including participants who responded anywhere above chance had no untoward effects on the results of our inferential statistics and therefore our interpretation cannot be called into question by the results of Inclusion 1. Inclusion 2 on the other hand had much more deleterious effects on the patterns of results reported in Experiments 1 and 3. This is exactly what one would expect given the example we described previously where the inclusion of a participant responding systematically below chance would elevate type II error. In this respect, our reanalysis in response to Reviewer 1’s comments does not weaken our interpretation of the findings.


As a final point, we wish to express our concerns about the nature of criticism made by Reviewer 1 and accepted by you as appropriate within peer-review for Consciousness & Cognition. Reviewer 1 states that we the authors “must accept the consequences of data that might disagree with their hypotheses”. This strongly suggests that we have not done so in the original manuscript and have therefore committed scientific misconduct or entered a grey-area verging on misconduct. We deny this allegation in the strongest possible terms and are confident we have demonstrated that this is absolutely not the approach we have taken through the evidence presented in this response. Indeed if Reviewer 1 wishes to make these allegations, they would do well to provide evidence beyond the thinly veiled remarks in their review. If they wish to do this, we volunteer full access to our data for them to conduct any tests to validate their claims, e.g. those carried out in Simonsohn (2013) in which a number of cases of academic misconduct and fraud are exposed through statistical methods. We, and colleagues we have spoken to about this decision, found it worrying that you chose to make your editorial decision on the strength of this unsubstantiated allegation and believe that at the very least we should have been given the opportunity to respond to the review, as we have done here, via official channels.


We thank you for your time.




Akira O’Connor & Ravi Mill


Craig, K. S., Berman, M. G., Jonides, J. & Lustig, C. (2013). Escaping the recent past: Which stimulus dimensions influence proactive interference? Memory & Cognition, 41, 650-670.
Davidenko, N. & Flusberg, S. J. (2012). Environmental inversion effects in face perception. Cognition, 123(2), 442-447.
Gaspelin, N., Ruthruff, E. & Pashler, H. (2013). Divided attention: An undesirable difficulty in memory retention. Memory & Cognition, 41, 978-988.
Johnson, S. K. & Halpern, A. R. (2012). Semantic priming of familiar songs. Memory & Cognition, 40, 579-593.
Rummel, J., Kuhlmann, B. G. & Touron, D. R. (2013). Performance predictions affect attentional processes of event-based prospective memory. Consciousness and Cognition, 22(3), 729-741.
Shedden, J. M., Milliken, B., Watters, S. & Monteiro, S. (2013). Event-related potentials as brain correlates of item specific proportion congruent effects. Consciousness and Cognition, 22(4), 1442-1455.
Sheridan, H. & Reingold, E. M. (2011). Recognition memory performance as a function of reported subjective awareness. Consciousness and Cognition, 20(4), 1363-1375.
Simonsohn, U. (2013). Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone. Psychological Science, 24(10), 1875-1888.


The Editor’s very reasonable response was to recommend we resubmit the manuscript, which we did. The manuscript was then sent out for review to two new reviewers, and the process began again, this time with a happier ending.

My recommendations for drafting unsolicited responses are:

  • Allow the dust to settle (this is key to Jim Grange’s tips on Dealing with Rejection too). We see injustice everywhere in the first 24 hours following rejection. Give yourself time to calm down and later, revisit the rejection with a more forensic eye. If the reviews or editorial letter warrant a response, they will still warrant it in a few days, by which time you will be better able to pick the points you should focus on.
  • Be polite. (I skate on thin ice in a couple of passages in the letter above, but overall I think I was OK).
  • Support your counterarguments with evidence. I think our letter did this well. If you need to do some more analyses to achieve this, why not? It will at least reassure you that the reviewer’s points aren’t supported by your data.
  • Don’t expect anything to come of your letter. At the very least, it will have helped you manage some of your frustration.

Earlier in the year I was asked by the University of St Andrews Open Access Team to give an interview to a group from the University of Edinburgh Library. I’m certainly no expert, but I’m more excited about the idea than some researchers here at St Andrews (though there are some other researchers here, like Kim McKee, who are extremely enthusiastic about it). The video is embedded below, with my 40 second contribution from 8:44 onwards.



My interview actually lasted more than half an hour, though most of what I was trying to communicate wasn’t really consistent with what the interviewers wanted. If you watch the video through, you’ll notice the editorial push towards green rather than gold OA*. I do understand this push, especially from a library’s perspective – we can and should be uploading the vast majority of our work to institutional repositories and making it open access via the green route – but I don’t think that it helps the long-term health of academic publishing.

I spent a long time in my interview arguing for gold open access, but not the ‘hybrid’ gold open access offered by traditional publishers like Elsevier. (I find the current implementation of hybrid open access pretty abhorrent. It seems to me to be an utterly transparent way for the traditional publishers to milk the cow at both ends, collecting subscriptions and APCs.)  I’m not even too thrilled by the native OA publishers like Frontiers and PLoS, not because they’re bad for academic publishing (I think they are far better for the dissemination of research than the traditional publishers), but because they’re not revolutionary (though see Graham Steel’s comments below)**. Their model is pretty straightforward (or you could call it boring and expensive) – by shifting the collection of money from the back- to the front- end, they negate the need for institutional subscriptions by charging APCs in the region of $1000s. What I am excited about is the gold open access offered by some open access publishers who have thought about a publishing model for the modern era from the ground up, not by simple adaptation of printing press-era models. Publishers like PeerJ and The Winnower have done just this, and these are the sorts of gold OA publishers I hope will change the way we disseminate research.

Sadly for me, I didn’t express myself well enough on that matter to make the final cut of this video. Next time…


* Here’s a brief primer in case you’re not familiar with these terms. Green OA is repository-based free OA – you typically deposit author versions (the documents submitted to the journal rather than the typeset documents published by the journal) into an institutional database. Anyone who knows to look in the repository for your work will find it there. Gold OA is not free – there are almost always article processing charges (APCs) – but once paid for, anyone can access the publisher version of your paper directly from the  publisher’s website.


** Parentheses added 14/08/2014 following Graham Steel’s comments.

This is a guest post from Radka Jersakova (@RadkaJersakova), who did her undergraduate degree in my lab and is now working on her PhD at Leeds University and the Université de Bourgogne in Dijon. Radka has embraced online experimentation and has run many hundreds of participants through an impressive number of experiments coded in Javascript.


Onscreen Experiments

Recently, Crump, McDonnell and Gureckis (2013) replicated the results of a number of classic cognitive behavioral tasks, such as the Stroop task, using experiments conducted online. They demonstrated that, despite what some people fear, online testing can be as reliable as lab-based testing. Additionally, online testing can be extremely fast and efficient in a way that lab-based testing cannot. I have now completed my 7th online experiment, as well as having helped others create and advertise theirs. This post is a review of things I have learned in the process. It summarises what I did not know, but now wish I had known, when I was planning my first study, and answers some questions I was asked by others along the way.



In terms of conducting online experiments, the best method remains programming, as it is by far the most flexible approach. As someone who learned to program on my own from free online courses, I can confirm that this is not as difficult as some people think it to be, and it really is quite fun (for some tips on where to get started, this TED blog post is quite useful). At the same time, many people do not know how to code and do not have the time to learn. The good news is that for many experiments, the current survey software available online remains flexible enough to create a large number of experiments, although the potential complexity is naturally limited. My favorite is Qualtrics, as even the free version allows a fair amount of functionality and a reasonable number of trials.



A major advantage of the Internet is that one can reach many different communities. With online testing, one can reach participants who are simply interested in psychology experiments and volunteering in a way that is preferable to testing psychology undergraduates who are coerced into participating for course credit. Once you have an experiment to advertise, the challenge is to find the easiest route by which to reach these people.

There are many websites that focus directly on advertising online experiments. The one I have found the most useful is the Psychological Research on the Net website administered by John H. Krantz. Alternatively, the In-Mind magazine has a page where they post online experiments, which they also share on their Facebook and Twitter accounts. Other websites that host links to online studies are the Social Psychology Network and Online Psychology Research.

The most powerful way for a single individual to reach participants is, quite unsurprisingly, social media. Once a few people start sharing the link, the interest can spread very quickly. The simplest thing to do is to post your study on your Facebook page or Twitter account. Something I haven’t tried yet, but that might be worth exploring, is finding pages on Facebook or hashtags on Twitter that relate to the topic of the experiment or psychology in general and posting the link to the experiment there. One of the biggest successes for me, though, remains reddit. Reddit has a very strong community and people spend time there because they are actively searching for new information and interesting projects. There are a number of subreddits that are specific to psychology, so these, yet again, are visited by people interested in those particular topics. To give a few examples: psychology; cognitive science; psych science; music and cognition; mathematical psychology, and the list goes on! There is even a subreddit specific to finding participants to complete surveys and experiments, simply called Sample Size.

The last resource I have tried a number of times is using more general advertising sites such as craigslist. There is always a ‘volunteers’ section, which is visited by people looking to volunteer for a project of some sort. In that sense it can be a good place to reach participants and the sample will be fairly diverse. This for me has never been as successful as using social media but a few times it has worked fairly well.



The most commonly heard argument against online testing is the lack of control. Really, what this means is that data collected online might include more noise than data from traditional lab-based experiments, making it easier to miss existing effects. As already mentioned, Crump et al. (2013) replicated a number of classic tasks online, suggesting that this might not be as big a worry as it at first seems. The range of tasks they chose demonstrates nicely that the same results can be obtained in the lab as well as on the Internet. Nevertheless, there are a number of ways one can track participants’ behavior to determine whether sufficient attention was given to the experiment. The simplest is to measure the time participants took to complete the study. If you are using existing survey software, this information is usually provided automatically. If you are programming the study yourself, requesting a timestamp for when the study begins and for when it ends is an easy way to track the same kind of information. If participants are abnormally slow (or fast) in completing a task, then one might have sufficient reason to exclude their data.
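To make that completion-time check concrete, here is a minimal Python sketch; the timestamp format and the cut-offs are assumptions for illustration rather than recommendations from any particular survey package.

```python
from datetime import datetime

# Hypothetical log of when each participant started and finished the study.
records = [
    ("p01", "2014-03-01T10:00:05", "2014-03-01T10:18:40"),
    ("p02", "2014-03-01T11:02:10", "2014-03-01T11:05:01"),  # suspiciously fast
    ("p03", "2014-03-01T12:15:00", "2014-03-01T13:59:30"),  # suspiciously slow
]

MIN_MINUTES, MAX_MINUTES = 8, 60  # plausible completion window (an assumption)

for pid, start, end in records:
    duration = datetime.fromisoformat(end) - datetime.fromisoformat(start)
    minutes = duration.total_seconds() / 60
    flag = "" if MIN_MINUTES <= minutes <= MAX_MINUTES else "  <- consider excluding"
    print(f"{pid}: {minutes:.1f} min{flag}")
```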

One of the biggest problems I have encountered is a participant completing one part of the task (e.g. a recognition test) but not completing another part of the same experiment as faithfully (e.g. free-report descriptions of particular memory experiences from their daily life). While for ethical reasons we were not allowed to force participants to respond to any question, I have found that simply asking whether they are sure they want to proceed, when they haven’t filled out all the questions on a page, increased report rates dramatically. As such, it can be useful to provide such prompts along the way to make sure participants answer all questions without forcing them to do so.

Crump et al. (2013) also point out from their experiences of online testing that it can be useful to include some questions about the study instructions.  One could simply ask participants to describe briefly what it is that they are expected to do in the experiment. This way one has data against which to check whether participants understood the instructions and completed the task as anticipated. It will probably also help to ensure that participants pay close attention to the instructions. This is particularly useful if the task is fairly complex.



A big disadvantage of online testing can be dropout rates. This isn’t something I have tested in any formal way, but there does seem to be at least some relationship between the length of the study and dropout rates. This means that online testing is definitely most suitable for studies that take up to 15 or 20 minutes to complete, and this might be something to consider. It is also certain that more engaging tasks will have lower dropout rates. A good incentive I have found is to give participants a breakdown of their performance at the end of an experiment. I have had many participants confirm that they really enjoyed the feedback on how they performed on the memory task. Such feedback is a simple but efficient way to increase participation and decrease dropout rates.

The second worry is participants dropping out in the middle of an experiment and then restarting it. It is not something that would be common, but it could happen. One way to deal with this is to ask participants to provide, at the beginning of the study, a code that should be unique to each participant, anonymous, and yet always constant. An example is asking participants to create a code consisting of their day and month of birth, ending with their mother’s maiden initials. This is hardly a novel idea; I have participated in experiments which asked for such information to create participant IDs that allowed responses to be linked across a number of experimental sessions. The idea is to find some combination of numbers and letters that should never (or rarely) be the same for two participants but that remains the same for any one participant, whenever they are asked. At the data-analysis stage, one can then simply exclude files that contain repetitions of the same code, as sketched below.
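Here is a minimal Python sketch of that final exclusion step, using made-up codes; whether you drop every file carrying a repeated code (as below) or keep only the first attempt is a judgement call.

```python
from collections import Counter

# Hypothetical participant codes (day + month of birth + mother's maiden initials).
codes = ["1203JS", "2711KB", "1203JS", "0509MT"]

counts = Counter(codes)
repeated = {code for code, n in counts.items() if n > 1}

# Drop every file whose code appears more than once (possible restarts).
kept = [code for code in codes if code not in repeated]

print("Repeated codes, excluded:", sorted(repeated))
print("Codes retained for analysis:", kept)
```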

Once the study is up and running, other than finding suitable places to advertise it, one can leave it and focus on other things until the data have been collected. It is possible to reach large samples quickly, and these samples are often more diverse than your classic psychology undergraduate population. There is a certain degree of luck involved, but I have in the past managed to collect data from well over 100 participants in a single day. That is not to say that all studies are suitable for online testing, but it is definitely a resource well worth exploring.