Call for papers:
Déjà vu and other dissociative states in memory 

A special issue of Memory
Submission deadline: 31st July 2017
Guest Editors: Chris Moulin, Akira O’Connor and Christine Wells

In recent years, déjà vu has become of great interest in cognition, where it is mostly seen as a memory illusion. It can be described as having two critical components: an intense feeling of familiarity and a certainty that the current moment is novel. As such, déjà vu could be described as a dissociative experience, resulting from a metacognitive evaluation (the certainty) of a lower-level memory process (familiarity). There are currently a number of proposals for how déjà vu arises, which receive empirical support from paradigms that attempt to reproduce déjà vu in laboratory settings. Further information about déjà vu comes from neuropsychological populations and the use of neuroscientific methods, where again the focus is on memory, and in particular the involvement of temporal lobe structures. In this Special Issue, we will draw together the state of the art in déjà vu research, and develop and evaluate the idea that déjà vu can be seen as a momentary memory dysfunction. We are seeking empirical papers and brief theoretical statements which consider the nature of déjà vu and how it may be induced experimentally, as well as studies of déjà vu in pathological groups and studies investigating the neural basis of déjà vu. We are also interested in associated dissociative phenomena, such as jamais vu, presque vu, prescience and other metacognitive illusions, where their relation to contemporary memory theory (and déjà vu) is clear.

We will consider all types of empirical article, including short reports and neuropsychological cases. Theoretical statements and reviews should make a genuine novel contribution to the literature. First drafts should be submitted by the end of July 2017 through the Memory portal (https://mc.manuscriptcentral.com/pmem); please select the special issue ‘Deja vu’. All submissions will undergo normal full peer review, maintaining the same high editorial standards as for regular submissions to Memory.

If you are considering submitting an article, please contact one of the editorial team stating the title of your intended submission.

Is it possible to reliably generate déjà vu in participants? Is it possible to get participants to reliably report déjà vu? These very similar questions are not necessarily as closely linked as we might think.

A paper I wrote with Radka Jersakova (@RadkaJersakova) and Chris Moulin (@chrsmln), recently published in PLOS ONE, reports a series of experiments in which we tried to stop people reporting déjà vu. Why? Because even in simple memory experiments that shouldn’t generate the sensation, upwards of 50% of participants will agree to having experienced déjà vu when asked about it. On the one hand, it’s a pretty strange set of experiments in which we are chasing non-significant results. On the other, it’s really important for the field of subjective experience research. If we can’t reliably assess the absence of an experience, how can we trust reports of its presence (OR if your null hypothesis isn’t a true null, don’t bother with an alternative hypothesis)?

Chris Moulin has published a much more detailed blog post about the paper that’s well worth a read. And of course, there’s the PLOS ONE paper itself.


Come to St Andrews and figure out why déjà vu experiences decrease with age, with me and Ines Jentzsch.

FindAPhD Advertisement (full text below)

Please email (aro2@st-andrews.ac.uk) or tweet (@akiraoc) me if you’d like to speak more about this project.  If you’d like to speak to anyone about doing a PhD with me, please get in touch with Mags Pitt (3rd yr PhD), Bjorn Persson (3rd yr PhD) or Ravi Mill (completed PhD) via the People section of the blog.

[Image: CNS poster. Ravi Mill presenting simultaneous EEG-fMRI data at CNS 2014]

Project Description

BBSRC Theme: World class underpinning Bioscience

Adaptive cognition involves both the completion of a set of mental operations and the awareness that these operations have been completed so that the next stage of cognition can be engaged. During successful memory decision-making these two steps, memory retrieval and retrieval awareness, go hand in hand. However, they can occasionally fragment, leading to a set of experiences termed introspective memory phenomena (IMPs; e.g. déjà vu and jamais vu). During déjà vu positive retrieval awareness arises in the absence of true retrieval, yielding the overall sensation of inappropriate familiarity (O’Connor & Moulin, 2010). Jamais vu is the opposite: negative retrieval awareness in the presence of true retrieval. IMPs signal conflict within the cognitive system, and thus may play a crucial role in error correction (we do not act on IMPs in the way that we do act on false memories). However, beyond some curious demographic associations (they occur more in those who are well-travelled and well-educated), IMP occurrence is not known to be associated with any existing cognitive or psychological traits.

IMPs are not experienced uniformly across the population but peak in those in their mid-20s, before declining with age thereafter. They are also thought to be driven by dopaminergic over-activity such that some pharmacological and recreational drugs (e.g. dopaminergic flu medications) have been reported as causing persistent déjà vu (Taiminen & Jääskeläinen, 2001). Interestingly, these characteristics mirror what is known about neurophysiological markers of inhibitory control and response monitoring more generally (e.g. Strozyk & Jentzsch, 2012), which show the same lifespan trajectory with an age-related decrease in the dopaminergic functions mediated by the frontal cortex. These links suggest that IMP occurrence may be underpinned by basic neurocognitive characteristics integral to healthy cognition. Thus, the importance of IMPs may not lie in the fragmentation of the memory decision-making system, but in the capacity for our response monitoring systems to detect it and stop us making decisions based on faulty information.

We propose a systematic programme of research to establish the role of error-monitoring in the generation of IMPs. Using i) retrospective questioning to verify the recent occurrence of IMPs and ii) established procedures for their laboratory generation, we will explore individual differences in IMP experience and neurophysiological markers of response monitoring. These experiments will be a) developed in young adults and extended to b) primary school children (age 8-11; the age at which IMPs are first reported by children) and c) older adults (age 55 and older). We will also conduct opportunistic case-studies on d) patients who present themselves to Dr O’Connor over the course of the PhD (UK-based patients typically get in touch at a rate of 1-2/year). This systematic programme will allow us to establish any potential links between basic neurocognitive characteristics and the tendency to experience dissociative memory sensations which are not known to have any other psychological correlates.

This project will benefit from the joint multi-disciplinary expertise of Dr O’Connor, an internationally recognized expert in the area of metacognition and introspective memory phenomena, and Dr Jentzsch, a biophysicist and electrophysiologist by training, who specializes in studying the neural underpinnings of dopaminergic functions such as action and conflict control. Together, we will provide the prospective student with conceptual knowledge of metacognitive models of memory and of changes to these functions with healthy ageing, integrating behavioural methods and physiological measures of brain function in humans. The student will learn about experimental design, programming (Matlab), data collection and behavioural analysis techniques such as signal detection theory. In addition, the student will learn how to design, conduct and analyse electrophysiological experiments using EEG. Acquisition of generic skills such as team-working, time-management and communication skills, amongst many others, will also be an important part of the student’s training.

Funding Notes

This project is eligible for the EASTBIO Doctoral Training Partnership: View Website

This opportunity is only open to UK nationals (or EU students who have been resident in the UK for 3+ years immediately prior to the programme start date) due to restrictions imposed by the funding body.

Apply by 5.00pm on the 14th December 2015 following the instructions on how to apply at: View Website

Informal enquiries to the primary supervisor are very strongly encouraged.

References

O’Connor, A.R. & Moulin, C.J.A. (2010). Recognition without identification, erroneous familiarity, and déjà vu. Current Psychiatry Reports, 12(3), 165-173.

Strozyk, J.V. & Jentzsch, I. (2012). Weaker error signals do not reduce the effectiveness of post-error adjustments: Comparing error processing in young and middle-aged adults. Brain Research, 1460, 41-49.

Taiminen, T. & Jääskeläinen, S.K. (2001). Intense and recurrent déjà vu experiences related to amantadine and phenylpropanolamine in a healthy male. Journal of Clinical Neuroscience, 8, 460-462.

The Journal of Cognitive Neuroscience has just invoiced me $985 for a paper they agreed to publish earlier this year. This wasn’t unexpected – not only did we sign away our copyright, allowing MIT Press to make money from our work, but we did so knowing that we would pay a hefty sum to allow them to do this. It still came as a bit of a shock though.

Paying the invoice will curtail some of my research activities next year, like going to conferences to present data. I put this to the journal, asking if they’d hear my case for a reduction or a waiver. Here’s their response:

 

JOCN does not provide fee waivers. Page costs are stated on the submission guidelines page, as well as on the last page of the online manuscript submission so that all authors are aware of the financial obligation required if your paper is accepted for publication. These fees pay for the website, submission software, and other costs associated with running the journal. If you are unable to pay the page fees, please let us know so that we can remove your manuscript from the publication schedule.

Regards,
Editorial Staff
Journal of Cognitive Neuroscience

 

What did I expect though? We willingly submitted to this journal knowing that they would charge us $60 per page. And the Journal of Cognitive Neuroscience certainly isn’t alone in doing this. Most cognitive neuroscience journals are pretty good at making money out of authors (see table below – I haven’t included OA megajournals in the table). Imagers tend to have money, and junior imagers, like all junior academics, still need to publish in journals that have a reputation.

For what it’s worth, Elsevier journals keep their noses pretty clean. Cerebral Cortex’s publishing house Oxford Journals though… pretty much every stage of that process is monetised. Just. Wow.

 

All costs are in US dollars.

| Journal | Publisher | IF (2013) | Submission | Figures (colour) | Pages | Admin | Open Access Supplement | Our paper |
|---|---|---|---|---|---|---|---|---|
| Journal of Neuroscience | Society for Neuroscience | 6.74 | 130 | – | – | 1720 | 2820 | 1850 |
| Cerebral Cortex | Oxford Journals | 8.37 | 75 | 720 | 72 | – | 3400 | 3387 |
| Neuroimage / Cortex / Neuropsychologia | Elsevier | 6.13 / 6.04 / 3.45 | – | – | – | – | 2200 / 2200 / 1800 | 0 |
| Journal of Cognitive Neuroscience | MIT Press | 4.69 | – | – | 60 | 25 | (unknown) | 985 |
| Cognitive, Affective and Behavioral Neuroscience | Springer | 3.21 | – | 1100* | – | – | 3000 | 1100 |
| Cognitive Neuroscience | Taylor & Francis | 2.38 | – | 474 | – | – | 2950 | 1422 |

Black and white figures are without cost in all the listed journals. IF is Impact Factor. The paper for which the ‘Our paper’ costs are calculated had 3 authors, 16 pages, 3 colour figures, and no Open Access Supplement.
* There is a one-off charge for all colour figures, regardless of number.
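As a sanity check on the table, here is a minimal sketch reconstructing the ‘Our paper’ column from the per-unit charges, assuming the colour figure and page fees are charged per figure and per page respectively (that per-unit reading, and the journal shorthand in the code, are mine, not the publishers’):

```python
# Rebuild the "Our paper" totals from the per-unit charges in the table above,
# for a paper with 3 colour figures and 16 pages, as stated in the table note.

N_FIGURES, N_PAGES = 3, 16

# (submission, per colour figure, per page, admin, one-off colour fee)
charges = {
    "J Neurosci":             (130,   0,  0, 1720,    0),
    "Cerebral Cortex":        ( 75, 720, 72,    0,    0),
    "Neuroimage et al.":      (  0,   0,  0,    0,    0),
    "JOCN":                   (  0,   0, 60,   25,    0),
    "CABN":                   (  0,   0,  0,    0, 1100),  # one-off colour charge
    "Cognitive Neuroscience": (  0, 474,  0,    0,    0),
}

for journal, (sub, per_fig, per_page, admin, flat) in charges.items():
    total = sub + per_fig * N_FIGURES + per_page * N_PAGES + admin + flat
    print(f"{journal}: ${total}")
# -> 1850, 3387, 0, 985, 1100, 1422, matching the "Our paper" column
```

The totals match, so the per-unit reading is at least internally consistent.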

This is a guest post from Radka Jersakova (@RadkaJersakova), who did her undergraduate degree in my lab and is now working on her PhD at the University of Leeds. Radka has previously written on this blog about how to conduct online studies. Here she discusses the merits of travelling during your doctoral training.

[Image: View from WBurg]

I am writing this as I near the end of the second lab visit abroad of my PhD. While I know many students who, like me, have managed to acquire ‘visiting scholar’ status on extended lab visits, the number is far smaller than I believe it should be. This is even though visiting other labs, collaborating with other researchers, having external input into the work you are doing and having an idea of what others in your field are doing right now is invaluable. It doesn’t matter if it is a visit of a few weeks or a few months; either way it is worth it, and a lot easier to make happen than you would expect. Researchers are mostly very open to hosting, and there is a lot of support and funding for such visits. It is a great learning experience and it makes academia seem smaller and friendlier. There are also practical benefits, such as having a travel grant on your CV, being able to show that you are capable of forging international collaborations, and increasing your chances of knowing someone with post-doc funding.

The purpose of this post is to address the question of how one goes about organizing a visit of any length to another lab (ideally abroad, although it doesn’t have to be!). The motivation for writing on this topic at all, however, is to encourage PhD students – especially at the beginning of their studies – to consider how they can make the most of their PhD, and travelling is definitely one way to do that.

 

WHAT TO DO AND HOW LONG TO GO FOR

The obvious first step is deciding what you would like to get out of a visit to another lab. This can be as generic as ‘networking’ or as specific as learning a particular analysis method. Ideally a visit should involve a collaboration of some kind, although whether the visit should be used to plan a project or to actually carry out the data collection and analysis is open to discussion. This will naturally determine how long the visit is. Ideally, you’d look for funding to finance the type of visit you have in mind, but sometimes the funding sources available to you might shape – to an extent – how long a visit you undertake. Some of the funding opportunities outlined below are aimed at visits of 6 months or longer; others at 2-3 months; and a few start at a couple of weeks. As such, it is important to know from the start what options are available to you.

 

WHERE TO GO

Most commonly, students make use of their supervisor’s network. This is by far the easiest way to organize a visit as it builds on collaborations that already exist. It is also the best way to identify a researcher with relevant experience to help you develop new ideas in the context of the topic of your thesis. As such, the first step is talking to your supervisor; they might already have someone in mind and can initiate the contact.

It is also possible there already is a researcher that you want to work with for an extended period. If you are going to a conference and they are going as well, try to talk to them there. You can contact them before the conference to suggest you meet to discuss your work with them. Having met them in person makes it much easier to talk to them about visiting their lab. If there isn’t an opportunity to meet in person, it is also fine to email the researcher you are interested in working with and ask them whether this could be arranged.

 

FINDING FUNDING

There is more funding available for research visits than it might seem at first. Below is a list of useful starting places for researching funding options. Everyone’s background is unique and the opportunities will vary accordingly.

(i) Funding organizations: It is very likely that the organization or research council funding your PhD also has funding for travel visits. What is more, they are probably very keen to fund such a visit. The Economic and Social Research Council in the UK is a great example of this, as they place great emphasis on international research links through their Overseas Institutional Visits scheme.

(ii) Universities: There is a chance that the institution you want to visit has a ‘visiting scholars’ program that you can apply for to fund the visit. Similarly, your own institution might have ‘travel abroad’ schemes with funding for going abroad to an institution of your choice. Further, there are partnership networks between universities that also offer funding. An example is the Worldwide Universities Network which supports mobility for students and researchers between its partner institutions. It is best to ask your university or someone in the department whether you belong to one.

(iii) National grants: Some countries have grants for their nationals to go on study visits – a great example is the German Academic Exchange Service, which also offers a lot of support for international students to come to Germany. Similarly, France funds visits of 6-10 months to any of its institutions through the Eiffel Excellence Scholarship. There are also bilateral agreements between countries to fund exchanges such as the Fulbright Commission which focuses on mobility between the US and (according to their website) a list of more than 155 countries.

(iv) Societies: Lastly, there are travel grants that are subject-specific. The Experimental Psychology Society, the British Psychological Society and the European Association of Social Psychology all offer study visit grants. However, sometimes there are membership conditions attached to these.

 

The key thing is to give yourself enough time to plan a visit. It is important to have an idea of what funding is available to you, when the funding deadlines are, what the application process is like, what documents you need, and what the interval is between submission and final decision.

 

Good luck!

I, like most humans, am bad at understanding randomness and good at spotting patterns that don’t necessarily exist. I also frequently have thoughts like: “That’s the third time this paper has been rejected. It must be bad.” These things are related.

When I submit my work, all of the variables at play, including the quality of the thing being judged, combine to give me a probability that a positive outcome will occur, e.g. 0.4 – 2 out of 5 times, a good thing will happen. BUT, probabilities produce lumpy strings of outcomes. That is, good and bad outcomes will appear to us pattern-spotting humans to be clustered, rather than what we would describe as “random”, which we tend to think of as evenly spaced (see the first link above).

To illustrate, I did something very straightforward in Excel to very crudely simulate trying to publish 8 papers.
Column A: =RAND() << (pseudo)randomly assigns a number between 0 and 1.
Column B: =IF(Ax>0.4, 0, 1) << if the number in column A (row x) exceeds .4, this cell will equal 0; otherwise it will equal 1.
Thus, column B gives me a list of successes (1s) and failures (0s) with an overall success rate of ~.4. It took me four refreshes before I got the following:

[Image: Excel output from the publishing simulation]
Note that the success rate, despite being set to .4, was .26 over this small number of observations. Also note that I embellished the output with a hypothetical stream of consciousness. I really wish I had the detachment of column C, but I don’t. I take rejections to heart and internalise bad outcomes like they are Greggs’ Belgian buns.
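If you would rather not refresh a spreadsheet, here is a minimal Python sketch of the same simulation; the 0.4 success probability and the 8 papers come from the example above, everything else is illustration:

```python
import random

P_SUCCESS = 0.4  # probability of a positive outcome on any one submission
N_OUTCOMES = 8   # crude simulation of trying to publish 8 papers, as above

# Equivalent of the two Excel columns: draw a random number, score a success
# (1) with probability 0.4 and a failure (0) otherwise
outcomes = [1 if random.random() <= P_SUCCESS else 0 for _ in range(N_OUTCOMES)]
print(outcomes, "success rate:", sum(outcomes) / N_OUTCOMES)

# The "lumpiness": length of the longest run of consecutive rejections (0s)
longest = run = 0
for outcome in outcomes:
    run = run + 1 if outcome == 0 else 0
    longest = max(longest, run)
print("longest rejection streak:", longest)
```

Run it a few times and you will see rejection streaks far longer than a 0.4 success rate intuitively suggests.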

 

Although the rejections look clustered, they are all independently determined. I have almost certainly had strings of rejections like those shown above. The only thing that has made them bearable is that I have switched papers, moving on to a new project after ~3 rejections, at the same time giving up on the thrice-rejected paper I assume to be a total failure. As a result, I am almost certainly sitting on good data that has been tainted by bad luck.

Stick with it. It evens out in the end.

Last week Ravi Mill and I had a paper accepted to Consciousness and Cognition. It was accepted after 49 weeks with the journal in four rounds of review. The editorial decisions on the paper were: Reject, Revise and Resubmit, Accept with Minor Revisions and finally, Accept. What makes this decision history somewhat remarkable is that it was initially rejected from the journal it was eventually published in.

This blog post won’t give as much information on that initial rejection as I wanted it to – I sought permission from the journal to publish all correspondence from the reviewer and the editor, which was denied. Below you’ll find only my response to the editorial decision. As context, the manuscript was rejected on the basis of one review, in which it was suggested that we had adopted some unconventional and even nefarious practices in gathering and analysing our data. These suggestions didn’t sit well with me, so I sent the following email to the Editor via the Elsevier Editorial System.

 

Dear Prof XXXX,

 

Thank you for your recent consideration of our manuscript, ‘”Old?” or “New?”: The test question provokes a goal-directed bias in memory decision-making’ (Ms. No. XXXXX-XX-XXX), for publication in Consciousness & Cognition. We were, of course, disappointed that you chose to reject the manuscript.

 

Having read the justification given for rejection, we respectfully wish to respond to the decision letter. Whilst we do not believe that this response will prompt reconsideration of the manuscript (your decision letter was clear) we believe it is important to respond for two reasons. First, to reassure you that we have not engaged in any form of data manipulation, and second, to state our concerns that the editorial team at Consciousness & Cognition appear to view it as acceptable that reviewers base their recommendations on poorly substantiated inferences they have made about the motivations of authors to engage in scientific misconduct.

 

As you highlighted in your decision letter, Reviewer 1 raised “substantial concerns about the manner in which [we] carried out the statistical analysis of [our] data”. The primary concern centred on our decision to exclude participants whose d’ was below a threshold of 0.1. In this reply we hope to demonstrate to you that use of such an exclusion criterion is not only standard practice, but it is indeed a desirable analysis step which should give readers more, not less, confidence in the analyses and their interpretation. We will then take the opportunity to demonstrate, if only to you, that our data behave exactly as one would expect them to under varying relaxations of the enforced exclusion criterion.

 

Recognition memory studies often require that participants exceed a performance (sensitivity; d’) threshold before they are included in the study. This is typically carried out when the studies themselves treat a non-sensitivity parameter as their primary dependent variable (as in our rejected paper), as a means of excluding participants that were unmotivated or disengaged from the task. Below are listed a small selection of studies published in the past 2 years which have used sensitivity-based exclusion criteria, along with the number of participants excluded and the thresholds used:

 

Craig, Berman, Jonides & Lustig (2013) – Memory & Cognition
Word recognition
Expt2 – 10/54, Expt3 – 5/54
80% accuracy

 

Davidenko, N. & Flusberg, S.J. (2012) – Cognition
Face recognition
Expt1a – 1/56, Expt1b –1/26, Expt2b –3/26
chance (50%) accuracy

 

Gaspelin, Ruthruff, & Pashler, (2013). Memory & Cognition
Word recognition
Expt1 – 3/46, Expt2 – 1/49, Expt3 – 3/54
70% accuracy

 

Johnson & Halpern (2012) – Memory & Cognition
Song recognition
Expt1 – 1/20, Expt2 – 3/25
70% accuracy

 

Rummel, Kuhlmann, & Touron (2013) – Consciousness & Cognition
Word classification (prospective memory task)
Expt1 – 6/145
“prospective memory failure”

 

Sheridan & Reingold (2011) – Consciousness & Cognition
Word recognition
Expt1 – 5/64
“difficulty following instructions”

 

Shedden, Milliken, Watters & Monteiro (2013) – Consciousness & Cognition
Letter recognition
Expt4 – 5/28
85% accuracy

 

You will note that there is tremendous variation in the thresholds used, but that it is certainly not “unusual” as claimed by Reviewer 1, not even for papers published in Consciousness and Cognition. Of course, it would not be good scientific practice to accept the status quo uncritically, and we must therefore explain why the employed sensitivity-based exclusion criterion was appropriate for our study. The reasoning is that we were investigating an effect associated with higher order modulation of memory processes. If we had included participants with d’s below 0.1 (an overall accuracy rate of approximately 52% where chance responding is 50%) then it is reasonable to assume that these participants were not making memory decisions based on memory processes, but were at best contributing noise to the data (e.g. via random responding), and at worst systematically confounding the data. To demonstrate the latter point, one excluded participant from Experiment 1 had a d’ of -3.7 (overall accuracy rate of 13%), which was most likely due to responding using the opposite keys to those which they were instructed to use, substituting “new” responses for “old” responses. As we were investigating the effects of the test question on the proportion of “new” and “old” responses, it is quite conceivable that if they had displayed the same biases as our included participants did overall, they would have reduced our ability to detect a real effect. Including participants who did not meet our inclusion criteria, instead of helping us to find effects that hold across all participants, would have systematically damaged the integrity of our findings, leading to reduced estimates of effect size caused by ignoring the potential for influence of confounding variables.
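For readers outside the recognition memory literature, d′ is sensitivity computed from hit and false alarm rates (d′ = z(H) − z(FA)), and c is the response criterion that features in the results below. A minimal sketch of the exclusion rule, with made-up participant rates and scipy assumed for the inverse normal:

```python
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Standard signal detection measures: sensitivity (d') and criterion (c)."""
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_fa, -(z_h + z_fa) / 2

D_PRIME_THRESHOLD = 0.1  # the cut-off used in the rejected manuscript

# Hypothetical (hit rate, false alarm rate) pairs, for illustration only:
# an engaged participant, a near-chance responder, and a key-reverser
for h, fa in [(0.80, 0.20), (0.52, 0.50), (0.13, 0.85)]:
    d, c = sdt_measures(h, fa)
    verdict = "include" if d >= D_PRIME_THRESHOLD else "exclude"
    print(f"H={h:.2f} FA={fa:.2f} -> d'={d:+.2f} c={c:+.2f} ({verdict})")
```

The near-chance responder (52% accuracy) falls just below the 0.1 threshold, and the key-reverser comes out with a strongly negative d′, which is the systematic confound described above.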

 

If we had been given the opportunity to respond to Reviewer 1’s critique via the normal channels, we would have also corrected Reviewer 1’s inaccurate reading of our exclusion rates. As we stated very clearly in the manuscript, our sensitivity-based exclusion rates were 5%, 6% and 17%, not 25%, 6% and 17%. Reviewer 1 has conflated Experiment 1’s exclusions based on native language with exclusion based on sensitivity. As an aside, we justify exclusion based on language once again as a standard exclusion criterion in word memory experiments to ensure equivalent levels of word comprehension across participants. This is of particular importance when conducting online experiments which allow anyone across the world to participate. In coding the experiment, we thought it far more effective to allow all visitors to participate after first indicating their first language, with a view to excluding non-native speakers’ data from the study after they had taken part. We wanted all participants to have the opportunity to take part in the study (and receive feedback on their memory performance – a primary motivator for participation according to anecdotal accounts gleaned from social media) and to minimise any misreporting of first language which would add noise to the data without recourse for its removal.

 

We would next have responded to Reviewer 1’s claims that our conclusions are not generalisable based on the subset of analysed data by stating that Reviewer 1 is indeed partially correct. Our conclusions would not have been found had we included participants who systematically confounded the data (as discussed above) – as Reviewer 1 is at pains to point out, the effect is small. Nonetheless, as demonstrated by our replication of the findings over three experiments and the following reanalyses, our findings are robust enough to withstand inclusion of some additional noise, within reason. To illustrate, we re-analysed the data under two new inclusion thresholds: the first used a d’ <= 0 threshold, equivalent to chance responding (Inclusion 1); the second was a full inclusion in which all participants were analysed (Inclusion 2). For the sake of brevity, we list here the results as they relate to our primary manipulation, the effects of question on criterion placement.

 

EXPERIMENT 1
Original – old emphasis c > new emphasis c, t(89) = 2.141, p = .035, d = 0.23.

 

Inclusion 1:
old emphasis c > new emphasis c, t(90) = 2.32, p = .023, d = 0.24.
Inclusion 2:
no difference between old and new emphasis c, t(94) = 1.66, p = .099, d = 0.17.

 

EXPERIMENT 2
Original:
Main effect of LOP, F(1,28) = 23.66, p = .001, ηp2 = .458, shallow > deep.
Main effect of emphasis, F(1,28) = 6.65, p = .015, ηp2 = .192, old? > new?.
No LOP x emphasis interaction, F(1,28) = 3.13, p = .088, ηp2 = .101.
Shallow LOP sig old > new, t(28) = 3.05, p = .005, d = 0.62.
Deep LOP no difference, t(28) = .70, p = .487, d = 0.13.

 

Inclusion 1:
No change in excluded participants – results identical.
Inclusion 2:
Main effect of LOP, F(1,30) = 20.84, p = .001, ηp2 = .410, shallow > deep.
Main effect of emphasis, F(1,30) = 8.73, p = .006, ηp2 = .225, old? > new?.
Sig LOP x emphasis interaction, F(1,30) = 4.28, p = .047, ηp2 = .125.
Shallow LOP sig old > new, t(30) = 3.50, p = .001, d = 0.64.
Deep LOP no difference, t(30) = .76, p = .454, d = 0.14.

 

EXPERIMENT 3
Original:
No main effect of response, F(1,28) = 3.73, p = .064, ηp2 = .117
No main effect of question, F < 1.
Significant question x response interaction, F(1,28) = 8.50, p = .007, ηp2 = .233.
“Yes” response format, old > new, t(28) = 2.41, p = .023, d = 0.45.
“No” response format, new > old, t(28) = 2.77, p = .010, d = 0.52.

 

Inclusion 1:
No change in excluded participants – results identical.
Inclusion 2:
No main effect of response, F <1.
No main effect of question, F < 1.
No question x response interaction, F(1,34) = 3.07, p = .089, ηp2 = .083.
“Yes” response format, old > new, t(34) = 1.33, p = .19, d = 0.23.
“No” response format, new > old, t(34) = 1.84, p = .07, d = 0.32.

 

To summarise, including participants who responded anywhere above chance had no untoward effects on the results of our inferential statistics and therefore our interpretation cannot be called into question by the results of Inclusion 1. Inclusion 2 on the other hand had much more deleterious effects on the patterns of results reported in Experiments 1 and 3. This is exactly what one would expect given the example we described previously where the inclusion of a participant responding systematically below chance would elevate type II error. In this respect, our reanalysis in response to Reviewer 1’s comments does not weaken our interpretation of the findings.

 

As a final point, we wish to express our concerns about the nature of criticism made by Reviewer 1 and accepted by you as appropriate within peer-review for Consciousness & Cognition. Reviewer 1 states that we the authors “must accept the consequences of data that might disagree with their hypotheses”. This strongly suggests that we have not done so in the original manuscript and have therefore committed scientific misconduct or entered a grey-area verging on misconduct. We deny this allegation in the strongest possible terms and are confident we have demonstrated that this is absolutely not the approach we have taken through the evidence presented in this response. Indeed if Reviewer 1 wishes to make these allegations, they would do well to provide evidence beyond the thinly veiled remarks in their review. If they wish to do this, we volunteer full access to our data for them to conduct any tests to validate their claims, e.g. those carried out in Simonsohn (2013) in which a number of cases of academic misconduct and fraud are exposed through statistical methods. We, and colleagues we have spoken to about this decision, found it worrying that you chose to make your editorial decision on the strength of this unsubstantiated allegation and believe that at the very least we should have been given the opportunity to respond to the review, as we have done here, via official channels.

 

We thank you for your time.

 

Sincerely,

 

Akira O’Connor & Ravi Mill

 

References
Craig, K. S., Berman, M. G., Jonides, J. & Lustig, C. (2013). Escaping the recent past: Which stimulus dimensions influence proactive interference? Memory & Cognition, 41, 650-670.
Davidenko, N. & Flusberg, S. J. (2012). Environmental inversion effects in face perception. Cognition, 123(2), 442-447.
Gaspelin, N., Ruthruff, E. & Pashler, H. (2013). Divided attention: An undesirable difficulty in memory retention. Memory & Cognition, 41, 978-988.
Johnson, S. K. & Halpern, A. R. (2012). Semantic priming of familiar songs. Memory & Cognition, 40, 579-593.
Rummel, J., Kuhlmann, B. G. & Touron, D. R. (2013). Performance predictions affect attentional processes of event-based prospective memory. Consciousness and Cognition, 22(3), 729-741.
Shedden, J. M., Milliken, B., Watters, S. & Monteiro, S. (2013). Event-related potentials as brain correlates of item specific proportion congruent effects. Consciousness and Cognition, 22(4), 1442-1455.
Sheridan, H. & Reingold, E. M. (2011). Recognition memory performance as a function of reported subjective awareness. Consciousness and Cognition, 20(4), 1363-1375.
Simonsohn, U. (2013). Just post it: The lesson from two cases of fabricated data detected by statistics alone. Psychological Science, 24(10), 1875-1888.

 

The Editor’s very reasonable response was to recommend we resubmit the manuscript, which we did. The manuscript was then sent out for review to two new reviewers, and the process began again, this time with a happier ending.

My recommendations for drafting unsolicited responses are:

  • Allow the dust to settle (this is key to Jim Grange’s tips on Dealing with Rejection too). We see injustice everywhere in the first 24 hours following rejection. Give yourself time to calm down and later, revisit the rejection with a more forensic eye. If the reviews or editorial letter warrant a response, they will still warrant it in a few days, by which time you will be better able to pick the points you should focus on.
  • Be polite. (I skate on thin ice in a couple of passages in the letter above, but overall I think I was OK).
  • Support your counterarguments with evidence. I think our letter did this well. If you need to do some more analyses to achieve this, why not? It will at least reassure you that the reviewer’s points aren’t supported by your data.
  • Don’t expect anything to come of your letter. At the very least, it will have helped you manage some of your frustration.

I’m going to disregard the usual speculation about what type-setter and editorial assistant salaries are, and how much distribution infrastructure costs, because these are all tied in to the true costs of publishing from a publisher’s perspective and not what I’m interested in. Instead, I’m going to use figures from my employer, the University of St Andrews, to crudely estimate the per-article open access cost this very small market could bear.

The first assumption I make here is that journal subscriptions and gold open access journal publication costs should be drawn from the same pool of money. That is, they are university outgoings that support publishers, thereby funding the publication of university-based researchers’ work.

The second assumption, which almost immediately serves to highlight how useless this back-of-the-envelope calculation is, is that we no longer need to subscribe to paywalled journals and can therefore channel all funds that we would have spent on this into open access publishing. For argument’s sake, let’s suppose that the UK government has negotiated a nationwide subscription to all journals with all closed-access publishers for the 2014/2015 academic year. This leaves the University of St Andrews Library with journal subscription money that it needs to spend in order to continue its current funding allocation. Naturally, it ploughs all of this into open access publishing costs.

Once comfortable with these assumptions, we can fairly easily estimate how much a university like mine could afford to pay for each article published, if every single output were a gold open access article.

Total St Andrews University spending on journal subscriptions per year:
According to the library’s 2011/2012 annual report: £2.11m
According to a tweet from the @StAndrewsUniLib twitter account: ~£1.7m
Given that the higher value also included spending on databases and e-resources, I’ll go with the £1.7m/year estimate.

Total number of publications by St Andrews University researchers per year:
We have a PURE research information system on which all researchers are meant to report all of our publications. According to a tweet from @JackieProven at the University of St Andrews Library:

over 2000 publications/yr, about 1200 are articles and around half of those will have StA corresponding author

We can therefore assume 600 publications/year.

Open access publication costs which could be absorbed in this hypothetical situation:
£1,700,000/600 = £2,833

This value is higher than I was expecting it to be, and suggests that even for a small institution like the University of St Andrews, article processing charges (APCs) in gold open access journals aren’t too far off the mark. According to PeerJ’s roundup, even PLOS Biology’s steep APC of $2900 is considerably less than what St Andrews could bear in this highly unrealistic situation.

Of course, there are quite a few caveats that sit on top of this hypothetical estimate and its assumptions (a rough sensitivity sketch follows the list):
1) I may well be underestimating the number of publication outputs from the University’s researchers. This would push down the per-article cost the library could afford to pay.
2) Larger universities would have a greater number of researchers and therefore publications. The increase in the denominator would be offset by an increase in the numerator (larger universities have medical schools and law schools, which St Andrews does not), but I have no idea what net effect this would have on the per-article cost these better endowed libraries could afford to pay.
3) The ecosystem would change. Gold open access journals have higher publication rates than paywalled journals. If more articles were published, this would also push down the per-article cost the library could absorb.
4) This estimate makes no consideration of the open access publication option in closed access journals. This publication option, as well as being more expensive than the gold open access offered in open-access-only journals, allows traditional publishers to milk the cow at both ends (subscription costs AND APCs), and I imagine library administrators would struggle to justify supporting it from the same fund as that used to pay journal subscriptions.
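To put rough numbers on caveats 1 and 3, here is a minimal sketch varying the output count; the £1.7m spend is the estimate above, and the alternative output counts are hypothetical:

```python
SPEND = 1_700_000  # GBP/year, the journal subscription estimate from above

# 600 outputs/yr is the estimate derived above; the larger counts are
# hypothetical, standing in for undercounted outputs (caveat 1) or higher
# open access publication rates (caveat 3).
for outputs in (600, 800, 1000, 1200):
    print(f"{outputs} outputs/yr -> affordable APC: £{SPEND / outputs:,.0f}")
# 600 -> £2,833 ... 1200 -> £1,417: doubling output halves the affordable APC
```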

I’ve been meaning to do this calculation for a few months and am grateful to the staff at the University of St Andrews Library for providing me with these figures. I’m interested in what others make of this, and would be keen to hear your thoughts in the comments below.