Call for papers:
Déjà vu and other dissociative states in memory 

A special issue of Memory
Submission deadline: 31st July 2017
Guest Editors: Chris Moulin, Akira O’Connor and Christine Wells

In recent years, déjà vu has become of great interest in cognition, where it is mostly seen as a memory illusion.  It can be described as having two critical components: an intense feeling of familiarity and a certainty that the current moment is novel.  As such, déjà vu could be described as a dissociative experience, resulting from a metacognitive evaluation (the certainty) of a lower-level memory process (familiarity).  There are currently a number of proposals for how déjà vu arises, which receive empirical support from paradigms that attempt to reproduce déjà vu in laboratory settings.  Further information about déjà vu comes from neuropsychological populations and the use of neuroscientific methods, where again the focus is on memory, and in particular the involvement of temporal lobe structures.  In this Special Issue, we will draw together the state of the art in déjà vu research, and develop and evaluate the idea that déjà vu can be seen as a momentary memory dysfunction.  We are seeking empirical papers and brief theoretical statements which consider the nature of déjà vu and how it may be induced experimentally, as well as studies of déjà vu in pathological groups, and studies investigating the neural basis of déjà vu.  We are also interested in associated dissociative phenomena, such as jamais vu, presque vu, prescience and other metacognitive illusions, where their relation to contemporary memory theory (and déjà vu) is clear.

We will consider all types of empirical article, including short reports and neuropsychological cases.  Theoretical statements and reviews should make a genuinely novel contribution to the literature.  First drafts should be submitted by the end of July 2017 through the Memory portal; please select the special issue ‘Deja vu’. All submissions will undergo normal full peer review, maintaining the same high editorial standards as for regular submissions to Memory.

If you are considering submitting an article, please contact one of the editorial team stating the title of your intended submission.

Is it possible to reliably generate déjà vu in participants? Is it possible to get participants to reliably report déjà vu? These very similar questions are not necessarily as closely linked as we might think.

A paper I wrote with Radka Jersakova (@RadkaJersakova) and Chris Moulin (@chrsmln), recently published in PLOS ONE, reports a series of experiments in which we tried to stop people reporting déjà vu. Why? Because even in simple memory experiments that shouldn’t generate the sensation, upwards of 50% of participants will agree to having experienced déjà vu when asked about it. On the one hand, it’s a pretty strange set of experiments in which we are chasing non-significant results. On the other, it’s really important for the field of subjective experience research. If we can’t reliably assess the absence of an experience, how can we trust reports of its presence? (Put another way: if your null hypothesis isn’t a true null, there’s little point testing an alternative hypothesis.)

Chris Moulin has published a much more detailed blog post about the paper that’s well worth a read. And of course, there’s the PLOS ONE paper itself.


The Journal of Cognitive Neuroscience have just invoiced me $985 for a paper they agreed to publish earlier this year. This wasn’t unexpected – not only did we sign away our copyright, allowing MIT Press to make money from our work, but we did so knowing that we would pay a hefty sum to allow them to do this. It still came as a bit of a shock though.

Paying the invoice will curtail some of my research activities next year, like going to conferences to present data. I put this to the journal, asking if they’d hear my case for a reduction or a waiver. Here’s their response:


JOCN does not provide fee waivers. Page costs are stated on the submission guidelines page, as well as on the last page of the online manuscript submission so that all authors are aware of the financial obligation required if your paper is accepted for publication. These fees pay for the website, submission software, and other costs associated with running the journal. If you are unable to pay the page fees, please let us know so that we can remove your manuscript from the publication schedule.

Editorial Staff
Journal of Cognitive Neuroscience


What did I expect though? We willingly submitted to this journal knowing that they would charge us $60 per page. And the Journal of Cognitive Neuroscience certainly isn’t alone in doing this. Most cognitive neuroscience journals are pretty good at making money out of authors (see the table below; I haven’t included OA megajournals). Imagers tend to have money, and junior imagers, like all junior academics, still need to publish in journals that have a reputation.

For what it’s worth, Elsevier journals keep their noses pretty clean. Cerebral Cortex’s publishing house Oxford Journals though… pretty much every stage of that process is monetised. Just. Wow.


Costs ($) by journal:

  • Journal of Neuroscience (Society for Neuroscience). IF (2013): 6.74. Colour figures: -. Open Access Supplement: 2820. Our paper: 1850.
  • Cerebral Cortex (Oxford Journals). IF (2013): 8.37. Colour figures: 720. Open Access Supplement: 3400. Our paper: 3387.
  • Neuroimage / Cortex / Neuropsychologia (Elsevier). IF (2013): 6.13 / 6.04 / 3.45. Colour figures: -. Open Access Supplement: 2200 / 2200 / 1800. Our paper: 0.
  • Journal of Cognitive Neuroscience (MIT Press). IF (2013): 4.69. Colour figures: -. Open Access Supplement: (unknown). Our paper: 985.
  • Cognitive, Affective and Behavioral Neuroscience (Springer). IF (2013): 3.21. Colour figures: 1100*. Open Access Supplement: 3000. Our paper: 1100.
  • Cognitive Neuroscience (Taylor & Francis). IF (2013): 2.38. Colour figures: 474. Open Access Supplement: 2950. Our paper: 1422.
Black and white figures are without cost in all the listed journals. IF is Impact Factor. The paper for which the 'Our paper' costs are calculated had 3 authors, 16 pages, 3 colour figures, and no Open Access Supplement.
* There is a one-off charge for all colour figures, regardless of number.

I, like most humans, am bad at understanding randomness and good at spotting patterns that don’t necessarily exist. I also frequently have thoughts like: “That’s the third time this paper has been rejected. It must be bad.” These things are related.

When I submit my work, all of the variables at play, including the quality of the thing being judged, combine to give me a probability that a positive outcome will occur, e.g. 0.4 (2 out of 5 times, a good thing will happen). BUT, probabilities produce lumpy strings of outcomes. That is, good and bad outcomes will appear to us pattern-spotting humans to be clustered, rather than what we would describe as “random”, which we tend to think of as evenly spaced (see the first link above).

To illustrate, I did something very straightforward in Excel to very crudely simulate trying to publish 8 papers:
Column A: =RAND() << (pseudo)randomly generates a number between 0 and 1.
Column B: =IF(A1>0.4, 0, 1) << if the number in column A (same row) exceeds .4, this cell equals 0; otherwise it equals 1. Fill both formulas down over 8 rows.
Thus, column B gives me a list of successes (1s) and failures (0s) with an overall success rate of ~.4. It took me four refreshes before I got the following:

Note that the success rate, despite being set to .4, was .26 over this small number of observations. Also note that I embellished the output with a hypothetical stream of consciousness. I really wish I had the detachment of column C, but I don’t. I take rejections to heart and internalise bad outcomes like they are Greggs’ Belgian buns.
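For readers without Excel, the same crude simulation can be sketched in a few lines of Python (parameters assumed from the post: 8 papers, success probability .4; the function name is mine):

```python
import random

def simulate_submissions(n_papers=8, p_success=0.4, seed=None):
    """Simulate independent accept (1) / reject (0) outcomes, one per paper.

    Mirrors the spreadsheet: rng.random() plays the role of =RAND(),
    and the comparison against p_success plays the role of the IF().
    """
    rng = random.Random(seed)
    return [1 if rng.random() < p_success else 0 for _ in range(n_papers)]

outcomes = simulate_submissions()
print(outcomes)
print("observed success rate:", sum(outcomes) / len(outcomes))
```

Run it a few times and you will see exactly the point made above: with only 8 observations the observed success rate swings well away from .4, and rejections arrive in clusters even though every outcome is independent.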


Although the rejections look clustered, they are all independently determined. I have almost certainly had strings of rejections like those shown above. The only thing that has made them bearable is that I have switched papers, moving on to a new project after ~3 rejections, at the same time giving up on the thrice-rejected paper I assume to be a total failure. As a result, I am almost certainly sitting on good data that has been tainted by bad luck.

Stick with it. It evens out in the end.

Last week Ravi Mill and I had a paper accepted to Consciousness and Cognition. It was accepted after 49 weeks with the journal in four rounds of review. The editorial decisions on the paper were: Reject, Revise and Resubmit, Accept with Minor Revisions and finally, Accept. What makes this decision history somewhat remarkable is that it was initially rejected from the journal it was eventually published in.

This blog post won’t give as much information on that initial rejection as I wanted it to – I sought permission from the journal to publish all correspondence from the reviewer and the editor, which was denied. Below you’ll find only my response to the editorial decision. As context, the manuscript was rejected on the basis of one review, in which it was suggested that we had adopted some unconventional and even nefarious practices in gathering and analysing our data. These suggestions didn’t sit well with me, so I sent the following email to the Editor via the Elsevier Editorial System.


Dear Prof XXXX,


Thank you for your recent consideration of our manuscript, ‘”Old?” or “New?”: The test question provokes a goal-directed bias in memory decision-making’ (Ms. No. XXXXX-XX-XXX), for publication in Consciousness & Cognition. We were, of course, disappointed that you chose to reject the manuscript.


Having read the justification given for rejection, we respectfully wish to respond to the decision letter. Whilst we do not believe that this response will prompt reconsideration of the manuscript (your decision letter was clear) we believe it is important to respond for two reasons. First, to reassure you that we have not engaged in any form of data manipulation, and second, to state our concerns that the editorial team at Consciousness & Cognition appear to view it as acceptable that reviewers base their recommendations on poorly substantiated inferences they have made about the motivations of authors to engage in scientific misconduct.


As you highlighted in your decision letter, Reviewer 1 raised “substantial concerns about the manner in which [we] carried out the statistical analysis of [our] data”. The primary concern centred on our decision to exclude participants whose d’ was below a threshold of 0.1. In this reply we hope to demonstrate to you that use of such an exclusion criterion is not only standard practice, but it is indeed a desirable analysis step which should give readers more, not less, confidence in the analyses and their interpretation. We will then take the opportunity to demonstrate, if only to you, that our data behave exactly as one would expect them to under varying relaxations of the enforced exclusion criterion.


Recognition memory studies often require that participants exceed a performance (sensitivity; d’) threshold before they are included in the study. This is typically carried out when the studies themselves treat a non-sensitivity parameter as their primary dependent variable (as in our rejected paper), as a means of excluding participants that were unmotivated or disengaged from the task. Below are listed a small selection of studies published in the past 2 years which have used sensitivity-based exclusion criteria, along with the number of participants excluded and the thresholds used:


Craig, Berman, Jonides & Lustig (2013) – Memory & Cognition
Word recognition
Expt2 – 10/54, Expt3 – 5/54
80% accuracy


Davidenko, N. & Flusberg, S.J. (2012) – Cognition
Face recognition
Expt1a – 1/56, Expt1b –1/26, Expt2b –3/26
chance (50%) accuracy


Gaspelin, Ruthruff, & Pashler, (2013). Memory & Cognition
Word recognition
Expt1 – 3/46, Expt2 – 1/49, Expt3 – 3/54
70% accuracy


Johnson & Halpern (2012) – Memory & Cognition
Song recognition
Expt1 – 1/20, Expt2 – 3/25
70% accuracy


Rummel, Kuhlmann, & Touron (2013) – Consciousness & Cognition
Word classification (prospective memory task)
Expt1 – 6/145
“prospective memory failure”


Sheridan & Reingold (2011) – Consciousness & Cognition
Word recognition
Expt1 – 5/64
“difficulty following instructions”


Shedden, Milliken, Watters & Monteiro (2013) – Consciousness & Cognition
Letter recognition
Expt4 – 5/28
85% accuracy


You will note that there is tremendous variation in the thresholds used, but that it is certainly not “unusual” as claimed by Reviewer 1, not even for papers published in Consciousness and Cognition. Of course, it would not be good scientific practice to accept the status quo uncritically, and we must therefore explain why the employed sensitivity-based exclusion criterion was appropriate for our study. The reasoning is that we were investigating an effect associated with higher order modulation of memory processes. If we had included participants with d’s below 0.1 (an overall accuracy rate of approximately 52% where chance responding is 50%) then it is reasonable to assume that these participants were not making memory decisions based on memory processes, but were at best contributing noise to the data (e.g. via random responding), and at worst systematically confounding the data. To demonstrate the latter point, one excluded participant from Experiment 1 had a d’ of -3.7 (overall accuracy rate of 13%), which was most likely due to responding using the opposite keys to those which they were instructed to use, substituting “new” responses for “old” responses. As we were investigating the effects of the test question on the proportion of “new” and “old” responses, it is quite conceivable that if they had displayed the same biases as our included participants did overall, they would have reduced our ability to detect a real effect. Including participants who did not meet our inclusion criteria, instead of helping us to find effects that hold across all participants, would have systematically damaged the integrity of our findings, leading to reduced estimates of effect size caused by ignoring the potential for influence of confounding variables.
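For readers unfamiliar with the signal detection measures at issue here, a minimal Python sketch of the standard d' (sensitivity) and c (criterion) calculations, and of the exclusion step being defended, may help. This is illustrative only, with hypothetical participants; it is not code from the paper:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response bias c: -0.5 * (z(hit rate) + z(false-alarm rate))."""
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(fa_rate))

# Hypothetical participants as (hit rate, false-alarm rate) pairs:
# an engaged responder, a near-chance responder, and a reversed-keys responder
participants = [(0.80, 0.20), (0.52, 0.50), (0.13, 0.90)]

# Exclude anyone whose d' falls below the 0.1 threshold discussed above
included = [p for p in participants if d_prime(*p) >= 0.1]
print(included)  # only the (0.80, 0.20) participant survives
```

Note how the reversed-keys responder produces a strongly negative d', exactly the kind of systematic confound described above, while the near-chance responder contributes mostly noise.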


If we had been given the opportunity to respond to Reviewer 1’s critique via the normal channels, we would have also corrected Reviewer 1’s inaccurate reading of our exclusion rates. As we stated very clearly in the manuscript, our sensitivity-based exclusion rates were 5%, 6% and 17%, not 25%, 6% and 17%. Reviewer 1 has conflated Experiment 1’s exclusions based on native language with exclusion based on sensitivity. As an aside, we justify exclusion based on language once again as a standard exclusion criterion in word memory experiments to ensure equivalent levels of word comprehension across participants. This is of particular importance when conducting online experiments which allow anyone across the world to participate. In coding the experiment, we thought it far more effective to allow all visitors to participate after first indicating their first language, with a view to excluding non-native speakers’ data from the study after they had taken part. We wanted all participants to have the opportunity to take part in the study (and receive feedback on their memory performance – a primary motivator for participation according to anecdotal accounts gleaned from social media) and to minimise any misreporting of first language which would add noise to the data without recourse for its removal.


We would next have responded to Reviewer 1’s claims that our conclusions are not generalisable based on the subset of analysed data by stating that Reviewer 1 is indeed partially correct. Our conclusions would not have been found had we included participants who systematically confounded the data (as discussed above) – as Reviewer 1 is at pains to point out, the effect is small. Nonetheless, as demonstrated by our replication of the findings over three experiments and the following reanalyses, our findings are robust enough to withstand the inclusion of some additional noise, within reason. To illustrate, we re-analysed the data under two new inclusion thresholds: the first excluded only participants with d’ <= 0, i.e. those at or below chance (Inclusion 1); the second was a full inclusion in which all participants were analysed (Inclusion 2). For the sake of brevity we list here the results as they relate to our primary manipulation, the effects of question on criterion placement.


Experiment 1
Original: old emphasis c > new emphasis c, t(89) = 2.141, p = .035, d = 0.23.


Inclusion 1:
old emphasis c > new emphasis c, t(90) = 2.32, p = .023, d = 0.24.
Inclusion 2:
no difference between old and new emphasis c, t(94) = 1.66, p = .099, d = 0.17


Experiment 2
Original:
Main effect of LOP, F(1,28) = 23.66, p = .001, ηp2 = .458, shallow > deep.
Main effect of emphasis, F(1,28) = 6.65, p = .015, ηp2 = .192, old? > new?.
No LOP x emphasis interaction, F(1,28) = 3.13, p = .088, ηp2 = .101.
Shallow LOP sig old > new, t(28) = 3.05, p = .005, d = 0.62.
Deep LOP no difference, t(28) = .70, p = .487, d = 0.13.


Inclusion 1:
No change in excluded participants – results identical.
Inclusion 2:
Main effect of LOP, F(1,30) = 20.84, p = .001, ηp2 = .410, shallow > deep.
Main effect of emphasis, F(1,30) = 8.73, p = .006, ηp2 = .225, old? > new?.
Sig LOP x emphasis interaction, F(1,30) = 4.28, p = .047, ηp2 = .125.
Shallow LOP sig old > new, t(30) = 3.50, p = .001, d = 0.64.
Deep LOP no difference, t(30) = .76, p = .454, d = 0.14.


Experiment 3
Original:
No main effect of response, F(1,28) = 3.73, p = .064, ηp2 = .117.
No main effect of question, F < 1.
Significant question x response interaction, F(1,28) = 8.50, p = .007, ηp2 = .233.
“Yes” response format, old > new, t(28) = 2.41, p = .023, d = 0.45.
“No” response format, new > old, t(28) = 2.77, p = .010, d = 0.52.


Inclusion 1:
No change in excluded participants – results identical.
Inclusion 2:
No main effect of response, F <1.
No main effect of question, F < 1.
No question x response interaction, F(1,34) = 3.07, p = .089, ηp2 = .083.
“Yes” response format, old > new, t(34) = 1.33, p = .19, d = 0.23.
“No” response format, new > old, t(34) = 1.84, p = .07, d = 0.32.


To summarise, including participants who responded anywhere above chance had no untoward effects on the results of our inferential statistics, and therefore our interpretation cannot be called into question by the results of Inclusion 1. Inclusion 2, on the other hand, had much more deleterious effects on the patterns of results reported in Experiments 1 and 3. This is exactly what one would expect given the example we described previously, in which the inclusion of a participant responding systematically below chance would elevate Type II error. In this respect, our reanalysis in response to Reviewer 1’s comments does not weaken our interpretation of the findings.


As a final point, we wish to express our concerns about the nature of criticism made by Reviewer 1 and accepted by you as appropriate within peer-review for Consciousness & Cognition. Reviewer 1 states that we the authors “must accept the consequences of data that might disagree with their hypotheses”. This strongly suggests that we have not done so in the original manuscript and have therefore committed scientific misconduct or entered a grey-area verging on misconduct. We deny this allegation in the strongest possible terms and are confident we have demonstrated that this is absolutely not the approach we have taken through the evidence presented in this response. Indeed if Reviewer 1 wishes to make these allegations, they would do well to provide evidence beyond the thinly veiled remarks in their review. If they wish to do this, we volunteer full access to our data for them to conduct any tests to validate their claims, e.g. those carried out in Simonsohn (2013) in which a number of cases of academic misconduct and fraud are exposed through statistical methods. We, and colleagues we have spoken to about this decision, found it worrying that you chose to make your editorial decision on the strength of this unsubstantiated allegation and believe that at the very least we should have been given the opportunity to respond to the review, as we have done here, via official channels.


We thank you for your time.




Akira O’Connor & Ravi Mill


Craig, K. S., Berman, M. G., Jonides, J. & Lustig, C. (2013) Escaping the recent past: Which stimulus dimensions influence proactive interference? Memory & Cognition, 41, 650-670.
Davidenko, N. & Flusberg, S.J. (2012) Environmental inversion effects in face perception. Cognition 123(2), 442-447.
Gaspelin, N., Ruthruff, E. & Pashler, H. (2013) Divided attention: An undesirable difficulty in memory retention. Memory & Cognition 41, 978-988.
Johnson, S. K. & Halpern, A. R. (2012) Semantic priming of familiar songs. Memory & Cognition 40, 579-593.
Rummel, J., Kuhlmann, B.G. & Touron, D. R. (2013) Performance predictions affect attentional processes of event-based prospective memory. Consciousness and Cognition, 22 (3), 729-741.
Shedden, J. M., Milliken, B., Watters, S. & Monteiro, S (2013) Event-related potentials as brain correlates of item specific proportion congruent effects. Consciousness & Cognition, 22 (4), 1442-1455.
Sheridan, H. & Reingold, E. M. (2011) Recognition memory performance as a function of reported subjective awareness. Consciousness & Cognition 20 (4), 1363-1375.
Simonsohn, U. (2013) Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone. Psychological Science, 24(10), 1875-1888.


The Editor’s very reasonable response was to recommend we resubmit the manuscript, which we did. The manuscript was then sent out for review to two new reviewers, and the process began again, this time with a happier ending.

My recommendations for drafting unsolicited responses are:

  • Allow the dust to settle (this is key to Jim Grange’s tips on Dealing with Rejection too). We see injustice everywhere in the first 24 hours following rejection. Give yourself time to calm down and later, revisit the rejection with a more forensic eye. If the reviews or editorial letter warrant a response, they will still warrant it in a few days, by which time you will be better able to pick the points you should focus on.
  • Be polite. (I skate on thin ice in a couple of passages in the letter above, but overall I think I was OK).
  • Support your counterarguments with evidence. I think our letter did this well. If you need to do some more analyses to achieve this, why not? It will at least reassure you that the reviewer’s points aren’t supported by your data.
  • Don’t expect anything to come of your letter. At the very least, it will have helped you manage some of your frustration.

I’m going to disregard the usual speculation about what typesetter and editorial assistant salaries are, and how much distribution infrastructure costs, because these are all tied in to the true costs of publishing from a publisher’s perspective, and not what I’m interested in. Instead, I’m going to use figures from my employer, the University of St Andrews, to crudely examine how much this very small market could bear open access articles to cost.

The first assumption I make here is that journal subscriptions and gold open access journal publication costs should be drawn from the same pool of money. That is, they are university outgoings that support publishers, thereby funding the publication of university-based researchers’ work.

The second assumption, which almost immediately serves to highlight how useless this back-of-the-envelope calculation is, is that we no longer need to subscribe to paywalled journals and can therefore channel all funds that we would have spent on this into open access publishing. For argument’s sake, let’s suppose that the UK government has negotiated a nationwide subscription to all journals with all closed-access publishers for the 2014/2015 academic year. This leaves the University of St Andrews Library with journal subscription money that it needs to spend in order to continue its current funding allocation. Naturally, it ploughs all of this into open access publishing costs.

Once comfortable with these assumptions, we can fairly easily estimate how much a university like mine could afford to pay for each article published, if every single output were a gold open access article.

Total St Andrews University spending on journal subscriptions per year:
According to the library’s 2011/2012 annual report: £2.11m
According to a tweet from the @StAndrewsUniLib twitter account: ~£1.7m
Given that the higher value also included spending on databases and e-resources, I’ll go with the £1.7m/year estimate.

Total number of publications by St Andrews University researchers per year:
We have a PURE research information system on which all researchers are meant to report all of our publications.  According to a tweet from @JackieProven at the University of St Andrews Library:

over 2000 publications/yr, about 1200 are articles and around half of those will have StA corresponding author

We can therefore assume ~600 articles/year with a St Andrews corresponding author.

Open access publication costs which could be absorbed in this hypothetical situation:
£1,700,000/600 = £2,833
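The whole back-of-the-envelope calculation fits in a few lines of Python (figures as assumed above, not audited accounts):

```python
# What APC could the library absorb per article in this hypothetical?
subscription_spend_gbp = 1_700_000   # annual journal subscription spend (estimate)
articles_per_year = 600              # outputs with a St Andrews corresponding author

affordable_apc = subscription_spend_gbp / articles_per_year
print(f"£{affordable_apc:,.0f} per article")  # → £2,833 per article
```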

This value is higher than I was expecting, and suggests that even for a small institution like the University of St Andrews, article processing charges (APCs) at gold open access journals aren’t too far off the mark. According to PeerJ’s roundup, even PLOS Biology’s steep APC of $2900 is considerably less than what St Andrews could bear in this highly unrealistic situation.

Of course, there are quite a few caveats that sit on top of this hypothetical estimate and its assumptions:
1) I may well be underestimating the number of publication outputs from the University’s researchers. This would push the per-article cost the library could afford to pay down.
2) Larger universities would have a greater number of researchers and therefore publications. The increase in the denominator would be offset by an increase in the numerator (larger universities have medical schools and law schools, which St Andrews does not), but I have no idea what effect this would have on the per-article cost these better endowed libraries could afford to pay.
3) The ecosystem would change. Gold open access journals have higher publication rates than paywalled journals. If more articles were published, this would also push the per-article cost the library could absorb down.
4) This estimate makes no allowance for the open access publication option in closed access journals. This publication option, as well as being more expensive than the gold open access offered in open-access-only journals, allows traditional publishers to milk the cow at both ends (subscription costs AND APCs), and I imagine library administrators would struggle to justify supporting it from the same fund as that used to pay journal subscriptions.

I’ve been meaning to do this calculation for a few months and am grateful to the staff at the University of St Andrews Library for providing me with these figures. I’m interested in what others make of this, and would be keen to hear your thoughts in the comments below.

A recent submission to (and rejection from) Psychological Science has provided me with enough information on the editorial process, via Manuscript Central, to blog a follow-up to my Elsevier Editorial System blog of 2011. (I’m not the only person who is making public their manuscript statuses either, see also Guanyang Zhang’s original and most recent posts.)

Psychological Science Decision

Below is the chronology for the status updates a submission from my lab received from Psychological Science. As stated in the confirmation-of-submission letter received from the Editor-in-Chief, the process of obtaining a first decision should take up to 8 weeks from initial submission.


  • “Awaiting Initial Review Evaluation” – 09/01/2013: The manuscript is submitted and awaits triage, where it is read by two members of the editorial team. An email is sent to the corresponding author from the Editor-in-Chief. The triage process takes up to two weeks and determines whether or not the manuscript will go out for full review.

Full Review

  • “Awaiting Reviewer Selection” – 22/01/2013: An email is sent to the corresponding author from the Editor-in-Chief informing them that the manuscript has passed triage (the initial review process). The extended review process is stated as lasting 6-8 weeks from receipt of this email.
  • “Awaiting Reviewer Assignment” – 28/01/2013
  • “Awaiting Reviewer Invitation” – 28/01/2013
  • “Awaiting Reviewer Assignment” – 29/01/2013
  • “Awaiting Reviewer Selection” – 29/01/2013: I may have missed some status updates here. Essentially, I think these status updates reflect the Associate Editor inviting reviewers to review the manuscript and the reviewers choosing whether or not to accept the invitation.
  • “Awaiting Reviewer Scores” – 05/02/2013: The reviewers have agreed to review the manuscript and the Manuscript Central review system awaits their reviews.
  • “Awaiting AE Decision” – 15/03/2013: The reviewers have submitted their reviews, which the Associate Editor uses to make a decision about the manuscript
  • “Decline” – 16/03/2013: An email is sent to the corresponding author from the Associate Editor informing them of the decision and providing feedback from the reviewers.

The whole process took just under ten weeks, so not quite within the 8-week estimate that the initial confirmation-of-submission email suggested.
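For the record, the elapsed time is easy to check from the dates in the list above (which are DD/MM/YYYY):

```python
from datetime import date

submitted = date(2013, 1, 9)    # "Awaiting Initial Review Evaluation"
declined = date(2013, 3, 16)    # "Decline"

elapsed = declined - submitted
print(elapsed.days, "days ≈", round(elapsed.days / 7, 1), "weeks")  # 66 days ≈ 9.4 weeks
```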

It’s a shame that I can’t blog the status updates post-acceptance, but the final status update is supposedly what 89% of submissions to Psychological Science will end with. Onwards.


A former colleague of mine at an institution I no longer work at has admitted to being a science fraudster.*

I participated in their experiments, I read their papers, I respected their work. I felt a very personal outrage when I heard what they had done with their data. But the revelation went some way to answering questions I ask myself when reading about those who engage in scientific misconduct. What are they like? How would I spot a science fraudster?

Here are the qualities of the fraudster that stick with me.

  • relatively well-dressed.
  • OK (not great, not awful) at presenting their data.
  • doing well (but not spectacularly so) at an early stage of their career.
  • socially awkward but with a somewhat overwhelming projection of self-confidence.

And that’s the problem. I satisfy three of the four criteria above. So do most of my colleagues. If you were to start suspecting every socially awkward academic of fabricating or manipulating their data, that wouldn’t leave you with many people to trust. Conversations with those who worked much more closely with the fraudster reveal more telling signs that something wasn’t right with their approach, but again, the vast majority of the people with similar character flaws don’t fudge their data. It’s only once you formally track every single operation that has been carried out on their original data that you can know for sure whether or not someone has perpetrated scientific misconduct. And that’s exactly how this individual’s misconduct was discovered – an eagle-eyed researcher working with the fraudster noticed some discrepancies in the data after one stage of the workflow. Is it all in the data?

Let’s move beyond the few bad apples argument. A more open scientific process (e.g. the inclusion of original data with the journal submission) would have flagged some of the misconduct being perpetrated here, but only after someone had gone to the (considerable) trouble of replicating the analyses in question.  Most worryingly, it would also have missed the misconduct that took place at an earlier stage of the workflow. It’s easy to modify original data files, especially if you have coded the script that writes them in the first place. It’s also easy to change ‘Date modified’ and ‘Date created’ timestamps within the data files.

Failed replication would have helped, but the file drawer problem, combined with the pressure on scientists to publish or perish, typically stops this sort of endeavour (though there are notable exceptions, such as the “Replications of Important Results in Cognition” special issue of Frontiers in Cognition). I also worry that the publication process, in its current form, does nothing more constructive than start an unhelpful rumour-mill that never moves beyond gossip and hearsay. The pressure to publish or perish is also cited as motivation for scientists to cook their data. In this fraudster’s case, they weren’t at a stage of their career typically thought of as being under this sort of pressure (though that’s probably a weak argument when applied to anyone without a permanent position). All of which sends us back to trying to spot the fraudster and not the dodgy data. It’s a circular path that’s no more helpful than uncharitable whispers in conference centre corridors.

So how do we identify scientific misconduct? Certainly not with a personality assessment, and only partially with an open science revolution. If someone wants to diddle their data, they will. Like any form of misconduct, if they do it enough, they will probably get caught. Sadly, that’s probably the most reliable way of spotting it. Wait until they become comfortable enough that they get sloppy. It’s just a crying shame it wastes so much of everyone’s time, energy and trust in the meantime.


*I won’t mention their name in this post for two reasons: 1) to minimise collateral damage that this is having on the fraudster’s former collaborators,  former institution and their former (I hope) field; and 2) because this must be a horrible time for them, and whatever their reason for the fraud, it’s not going to help them rehabilitate themselves in ANY career if a Google search on their name returns a tonne of condemnation.