Is it possible to reliably generate déjà vu in participants? Is it possible to get participants to reliably report déjà vu? These very similar questions are not necessarily as closely linked as we might think.

A paper I wrote with Radka Jersakova (@RadkaJersakova) and Chris Moulin (@chrsmln), recently published in PLOS ONE, reports a series of experiments in which we tried to stop people reporting déjà vu. Why? Because even in simple memory experiments that shouldn’t generate the sensation, upwards of 50% of participants will agree that they have experienced déjà vu when asked about it. On the one hand, it’s a pretty strange set of experiments in which we are chasing non-significant results. On the other, it’s really important for the field of subjective experience research. If we can’t reliably assess the absence of an experience, how can we trust reports of its presence (or, put another way: if your null hypothesis isn’t a true null, there’s little point testing an alternative hypothesis)?

Chris Moulin has published a much more detailed blog post about the paper that’s well worth a read. And of course, there’s the PLOS ONE paper itself.

PLOS ONE déjà vu paper

Over the past couple of days, I have been archiving published fMRI projects and copying data from SD cards to start new ones. I have written previously about ways of copying files and verifying the copies, and this is a quick update to that post to document another tool for verifying copies.

As far as the copying itself is concerned, I still swear by Teracopy. As far as verifying that copies have been successfully made though, I have recently started using Exactfile. The tagline “Making sure that what you hash is what you get” sums up the procedure for using Exactfile, once you have installed it on a Windows machine.

Exactfile in action
  1. Create a single-file checksum or, if you are comparing all the files and subfolders within a folder (even massive folders containing gigabytes of fMRI data), a checksum digest (illustrated above). This is saved as a file, which you can then use to…
  2. Test your checksum digest. Point Exactfile at your digest file and the copied data you wish to compare against the checksums, and it runs through, making sure each file is identical.

That’s it – pretty straightforward. Step 1 takes a little longer than Step 2, and if you’re comparing hundreds of thousands of files, you should prepare to have this running in the background as you get on with other stuff.
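If you’re not on a Windows machine, the same create-then-verify idea is easy to script yourself. Here’s a minimal Python sketch of it (illustrative only; the folder paths are made up, and this is not Exactfile itself):

```python
import hashlib
from pathlib import Path

def file_md5(path, chunk_size=1 << 20):
    """Hash one file in 1 MB chunks so large fMRI files never sit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def make_digest(folder):
    """Step 1: checksum every file under `folder`, keyed by its relative path."""
    folder = Path(folder)
    return {str(p.relative_to(folder)): file_md5(p)
            for p in folder.rglob("*") if p.is_file()}

def verify_copy(original_digest, copy_folder):
    """Step 2: re-hash the copy and return any files that are missing or differ."""
    copy_digest = make_digest(copy_folder)
    return [name for name, checksum in original_digest.items()
            if copy_digest.get(name) != checksum]

# Illustrative paths only:
# digest = make_digest("D:/fmri_project")
# bad_files = verify_copy(digest, "E:/archive/fmri_project")
```

MD5 is plenty for spotting copy errors like these; it isn’t meant to protect against deliberate tampering.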

Come to St Andrews and figure out why déjà vu experiences decrease with age, with me and Ines Jentzsch.

FindAPhD Advertisement (full text below)

Please email (aro2@st-andrews.ac.uk) or tweet (@akiraoc) me if you’d like to speak more about this project.  If you’d like to speak to anyone about doing a PhD with me, please get in touch with Mags Pitt (3rd yr PhD), Bjorn Persson (3rd yr PhD) or Ravi Mill (completed PhD) via the People section of the blog.

CNS Poster
Ravi Mill presenting simultaneous EEG fMRI data at CNS 2014

Project Description

BBSRC Theme: World class underpinning Bioscience

Adaptive cognition involves both the completion of a set of mental operations and the awareness that these operations have been completed so that the next stage of cognition can be engaged. During successful memory decision-making these two steps, memory retrieval and retrieval awareness, go hand in hand. However, they can occasionally fragment, leading to a set of experiences termed introspective memory phenomena (IMPs; e.g. déjà vu and jamais vu). During déjà vu, positive retrieval awareness arises in the absence of true retrieval, yielding the overall sensation of inappropriate familiarity (O’Connor & Moulin, 2010). Jamais vu is the opposite: negative retrieval awareness in the presence of true retrieval. IMPs signal conflict within the cognitive system, and thus may play a crucial role in error correction (we do not act on IMPs in the way that we do act on false memories). However, beyond some curious demographic associations (they occur more in those who are well-travelled and well-educated), IMP occurrence is not known to be associated with any existing cognitive or psychological traits.

IMPs are not experienced uniformly across the population but peak in those in their mid-20s, before declining with age thereafter. They are also thought to be driven by dopaminergic over-activity such that some pharmacological and recreational drugs (e.g. dopaminergic flu medications) have been reported as causing persistent déjà vu (Taiminen & Jääskeläinen, 2001). Interestingly, these characteristics mirror what is known about neurophysiological markers of inhibitory control and response monitoring more generally (e.g. Strozyk & Jentzsch, 2012), which show the same lifespan trajectory with an age-related decrease in the dopaminergic functions mediated by the frontal cortex. These links suggest that IMP occurrence may be underpinned by basic neurocognitive characteristics integral to healthy cognition. Thus, the importance of IMPs may not lie in the fragmentation of the memory decision-making system, but in the capacity for our response monitoring systems to detect it and stop us making decisions based on faulty information.

We propose a systematic programme of research to establish the role of error-monitoring in the generation of IMPs. Using i) retrospective questioning to verify the recent occurrence of IMPs and ii) established procedures for their laboratory generation, we will explore individual differences in IMP experience and neurophysiological markers of response monitoring. These experiments will be a) developed in young adults and extended to b) primary school children (age 8-11; the age at which IMPs are first reported by children) and c) older adults (age 55 and older). We will also conduct opportunistic case-studies on d) patients who present themselves to Dr O’Connor over the course of the PhD (UK-based patients typically get in touch at a rate of 1-2/year). This systematic programme will allow us to establish any potential links between basic neurocognitive characteristics and the tendency to experience dissociative memory sensations which are not known to have any other psychological correlates.

This project will benefit from the joint multi-disciplinary expertise of Dr O’Connor, an internationally recognized expert in the area of metacognition and introspective memory phenomena, and Dr Jentzsch, a biophysicist and electrophysiologist by training who specializes in studying the neural underpinnings of dopaminergic functions such as action and conflict control. Together, we will provide the prospective student with conceptual knowledge of metacognitive models of memory and of changes to these functions with healthy ageing, integrating behavioural methods and physiological measures of brain function in humans. The student will learn about experimental design, programming (Matlab), data collection and behavioural analysis techniques such as signal detection theory. In addition, the student will learn how to design, conduct and analyse electrophysiological experiments using EEG. Acquisition of generic skills such as team-working, time-management and communication, amongst many others, will also be an important part of the student’s training.

Funding Notes

This project is eligible for the EASTBIO Doctoral Training Partnership: View Website

This opportunity is only open to UK nationals (or EU students who have been resident in the UK for 3+ years immediately prior to the programme start date) due to restrictions imposed by the funding body.

Apply by 5.00pm on the 14th December 2015 following the instructions on how to apply at: View Website

Informal enquiries to the primary supervisor are very strongly encouraged.

References

O’Connor, A.R. & Moulin, C.J.A. (2010). Recognition without identification, erroneous familiarity, and déjà vu. Current Psychiatry Reports, 12(3), 165-173.

Strozyk, J.V. & Jentzsch, I. (2012). Weaker error signals do not reduce the effectiveness of post-error adjustments: Comparing error processing in young and middle-aged adults. Brain Research, 1460, 41-49.

Taiminen, T. & Jääskeläinen, S.K. (2001). Intense and recurrent déjà vu experiences related to amantadine and phenylpropanolamine in a healthy male. Journal of Clinical Neuroscience, 8, 460-462.

The Journal of Cognitive Neuroscience have just invoiced me $985 for a paper they agreed to publish earlier this year. This wasn’t unexpected – not only did we sign away our copyright, allowing MIT Press to make money from our work, but we did so knowing that we would pay a hefty sum to allow them to do this. It still came as a bit of a shock though.

Paying the invoice will curtail some of my research activities next year, like going to conferences to present data. I put this to the journal, asking if they’d hear my case for a reduction or a waiver. Here’s their response:

 

JOCN does not provide fee waivers. Page costs are stated on the submission guidelines page, as well as on the last page of the online manuscript submission so that all authors are aware of the financial obligation required if your paper is accepted for publication. These fees pay for the website, submission software, and other costs associated with running the journal. If you are unable to pay the page fees, please let us know so that we can remove your manuscript from the publication schedule.

Regards,
Editorial Staff
Journal of Cognitive Neuroscience

 

What did I expect though? We willingly submitted to this journal knowing that they would charge us $60 per page. And the Journal of Cognitive Neuroscience certainly isn’t alone in doing this. Most cognitive neuroscience journals are pretty good at making money out of authors (see table below – I haven’t included OA megajournals in the table). Imagers tend to have money, and junior imagers, like all junior academics, still need to publish in journals that have a reputation.

For what it’s worth, Elsevier journals keep their noses pretty clean. Cerebral Cortex’s publishing house Oxford Journals though… pretty much every stage of that process is monetised. Just. Wow.

 

| Journal Name | Journal of Neuroscience | Cerebral Cortex | Neuroimage / Cortex / Neuropsychologia | Journal of Cognitive Neuroscience | Cognitive, Affective and Behavioral Neuroscience | Cognitive Neuroscience |
| --- | --- | --- | --- | --- | --- | --- |
| Our Paper | 1850 | 3387 | 0 | 985 | 1100 | 1422 |
| Publisher | Society for Neuroscience | Oxford Journals | Elsevier | MIT Press | Springer | Taylor & Francis |
| IF (2013) | 6.74 | 8.37 | 6.13 / 6.04 / 3.45 | 4.69 | 3.21 | 2.38 |
| Costs ($) | | | | | | |
| Submission | 130 | 75 | - | - | - | - |
| Figures (Colour) | - | 720 | - | - | 1100* | 474 |
| Pages | - | 72 | - | 60 | - | - |
| Admin | 1720 | - | - | 25 | - | - |
| Open Access Supplement | 2820 | 3400 | 2200 / 2200 / 1800 | (unknown) | 3000 | 2950 |
Black and white figures are without cost in all the listed journals. IF is Impact Factor. The paper for which the 'Our paper' costs are calculated had 3 authors, 16 pages, 3 colour figures, and no Open Access Supplement.
* There is a one-off charge for all colour figures, regardless of number.
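For what it’s worth, the ‘Our Paper’ row can be recovered from the per-item fees and the note above (16 pages, 3 colour figures). A quick sketch of that arithmetic, as I read the table:

```python
pages, colour_figures = 16, 3  # from the table note above

our_paper = {
    "Journal of Neuroscience": 130 + 1720,                      # submission + admin
    "Cerebral Cortex": 75 + colour_figures * 720 + pages * 72,  # submission + figures + pages
    "Neuroimage / Cortex / Neuropsychologia": 0,                # nothing unless you opt for OA
    "Journal of Cognitive Neuroscience": pages * 60 + 25,       # pages + admin
    "Cognitive, Affective and Behavioral Neuroscience": 1100,   # one-off colour figure charge
    "Cognitive Neuroscience": colour_figures * 474,             # per colour figure
}

print(our_paper)  # 1850, 3387, 0, 985, 1100, 1422 - matching the 'Our Paper' row
```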

This is a guest post from Radka Jersakova (@RadkaJersakova), who did her undergraduate degree in my lab and is now working on her PhD at the University of Leeds. Radka has previously written on this blog about how to conduct online studies. Here she discusses the merits of travelling during your doctoral training.

View from WBurg

I am writing this as I near the end of the second lab visit abroad of my PhD. While I know many students who, like me, have managed to acquire ‘visiting scholar’ status on extended lab visits, the number is far smaller than I believe it should be. This is despite the fact that visiting other labs, collaborating with other researchers, getting external input into your work and knowing what others in your field are doing right now is invaluable. It doesn’t matter if it is a visit of a few weeks or a few months; either way it is worth it, and a lot easier to make happen than you would expect. Researchers are mostly very open to hosting, and there is a lot of support and funding for such visits. It is a great learning experience and it makes academia seem smaller and friendlier. There are also practical benefits such as having a travel grant on your CV, being able to show that you are capable of forging international collaborations, and increasing your chances of knowing someone with post-doc funding.

The purpose of this post is to address the question of how one goes about organizing a visit of any length to another lab (ideally abroad, although it doesn’t have to be!). However, the motivation for writing anything on this topic at all is to encourage PhD students – especially at the beginning of their studies – to consider in what ways they can make the most of their PhD, and travelling is definitely one way to do that.

 

WHAT TO DO AND HOW LONG TO GO FOR

The obvious first step is deciding what you would like to get out of a visit to another lab. This can be as generic as ‘networking’ or as specific as learning a particular analysis method. Ideally a visit should involve a collaboration of some kind, although whether the visit should be used to plan a project or to actually carry out the data collection and analysis is open to discussion. This will naturally determine how long the visit is. Ideally, you’d look for funding to finance the type of visit you have in mind, but sometimes the funding sources available to you might shape – to an extent – how long a visit you undertake. Some of the funding opportunities outlined below are aimed at visits of 6 months or longer; others at 2-3 months; and a few start at a couple of weeks. As such, it is important to know from the start what options are available to you.

 

WHERE TO GO

Most commonly, students make use of their supervisor’s network. This is by far the easiest way to organize a visit as it builds on collaborations that already exist. It is also the best way to identify a researcher with relevant experience to help you develop new ideas in the context of the topic of your thesis. As such, the first step is talking to your supervisor; they might already have someone in mind and can initiate the contact.

It is also possible there already is a researcher that you want to work with for an extended period. If you are going to a conference and they are going as well, try to talk to them there. You can contact them before the conference to suggest you meet to discuss your work with them. Having met them in person makes it much easier to talk to them about visiting their lab. If there isn’t an opportunity to meet in person, it is also fine to email the researcher you are interested in working with and ask them whether this could be arranged.

 

FINDING FUNDING

There is more funding available for research visits than it might seem at first. Below is a list of some useful starting places for researching funding options. Everyone’s background is unique and the opportunities will vary accordingly.

(i) Funding organizations: It is very likely that the organization or research council funding your PhD also has funding for travel visits. What is more, they are probably very keen to fund such a visit. The Economic and Social Research Council in the UK is a great example of this, as they place great emphasis on international research links through their Overseas Institutional Visits scheme.

(ii) Universities: There is a chance that the institution you want to visit has a ‘visiting scholars’ program that you can apply for to fund the visit. Similarly, your own institution might have ‘travel abroad’ schemes with funding for going abroad to an institution of your choice. Further, there are partnership networks between universities that also offer funding. An example is the Worldwide Universities Network which supports mobility for students and researchers between its partner institutions. It is best to ask your university or someone in the department whether you belong to one.

(iii) National grants: Some countries have grants for their nationals to go on study visits – a great example is the German Academic Exchange Service, which also offers a lot of support for international students to come to Germany. Similarly, France funds visits of 6-10 months to any of its institutions through the Eiffel Excellence Scholarship. There are also bilateral agreements between countries to fund exchanges such as the Fulbright Commission which focuses on mobility between the US and (according to their website) a list of more than 155 countries.

(iv) Societies: Lastly, there are travel grants that are subject-specific. The Experimental Psychology Society, the British Psychological Society and the European Association of Social Psychology all offer study visit grants. However, sometimes there are membership conditions attached to these.

 

The key thing is to give yourself enough time to plan a visit. It is important to have an idea of what funding is available to you, when the funding deadlines are, what the application process is like, what documents you need, and what the interval is between submission and final decision.

 

Good luck!

We had an fMRI paper accepted to the Journal of Cognitive Neuroscience earlier this week. Having got the science out the door, I was able to turn my attention to the fun stuff – a cover image. The cover image for my first fMRI publication was selected by the Journal of Neuroscience and I wanted to go with something similar.

In the past 6 months or so, @alby has tweeted some of the images he generated using @lowpolybot, a twitter bot that returns low-polygon renderings of images tweeted to it. I tweeted a figure from the accepted paper to @lowpolybot and got this back:

@lowpolybot image from the tweet: https://twitter.com/Lowpolybot/status/572392951634108416

There are a range of operations @lowpolybot can perform on your images (detailed on the @lowpolybot tumblr), but if you give no instructions you will get a random combination of operations applied to your image. This was what I had done. I was happy with the picture so, having checked with @lowpolybot’s creator @quasimondo that he was happy for me to do this, I submitted it to the journal.

Sadly though, there’s no chance this image will be used as a cover image. I received an email the next day from a journal administrator informing me that they had stopped printing cover images. Ah well.

I, like most humans, am bad at understanding randomness and good at spotting patterns that don’t necessarily exist. I also frequently have thoughts like: “That’s the third time this paper has been rejected. It must be bad.” These things are related.

When I submit my work, all of the variables at play, including the quality of the thing being judged, combine to give a probability that a positive outcome will occur, e.g. 0.4: two times out of five, a good thing will happen. BUT probabilities produce lumpy strings of outcomes. That is, good and bad outcomes will appear to us pattern-spotting humans to be clustered, rather than what we would describe as “random”, which we tend to think of as evenly spaced (see the first link above).

To illustrate, I did something very straightforward in Excel to crudely simulate trying to publish 8 papers.
Column A: =RAND() << (pseudo)randomly assigns a number between 0 and 1.
Column B: =IF(Ax>0.4, 0, 1) << if the number in column A (row x) exceeds .4, this cell equals 0; otherwise it equals 1.
Thus, column B gives me a list of successes (1s) and failures (0s) with an overall success rate of ~.4. It took me four refreshes before I got the following:

publishing
Note that the success rate, despite being set to .4, was .26 over this small number of observations. Also note that I embellished the output with a hypothetical stream of consciousness. I really wish I had the detachment of column C, but I don’t. I take rejections to heart and internalise bad outcomes like they are Greggs’ Belgian buns.
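If you’d rather script this than refresh Excel, here is a minimal Python equivalent of the two formulas above (a sketch, not the original spreadsheet):

```python
import random

def simulate_submissions(n_papers=8, p_accept=0.4):
    """One verdict per paper: 1 = accepted, 0 = rejected, drawn independently."""
    return [1 if random.random() <= p_accept else 0 for _ in range(n_papers)]

for _ in range(4):  # "refresh" a few times and watch the rejections clump together
    outcomes = simulate_submissions()
    print(outcomes, "observed success rate:", sum(outcomes) / len(outcomes))
```

The observed rate bounces around 0.4 from run to run, and strings of consecutive rejections are entirely normal.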

 

Although the rejections look clustered, they are all independently determined. I have almost certainly had strings of rejections like those shown above. The only thing that has made them bearable is that I have switched papers, moving on to a new project after ~3 rejections, at the same time giving up on the thrice-rejected paper I assume to be a total failure. As a result, I am almost certainly sitting on good data that has been tainted by bad luck.

Stick with it. It evens out in the end.

Last week Ravi Mill and I had a paper accepted to Consciousness and Cognition. It was accepted after 49 weeks with the journal, over four rounds of review. The editorial decisions on the paper were: Reject; Revise and Resubmit; Accept with Minor Revisions; and finally, Accept. What makes this decision history somewhat remarkable is that the paper was initially rejected by the journal it was eventually published in.

This blog post won’t give as much information on that initial rejection as I wanted it to – I sought permission from the journal to publish all correspondence from the reviewer and the editor, which was denied. Below you’ll find only my response to the editorial decision. As context, the manuscript was rejected on the basis of one review, in which it was suggested that we had adopted some unconventional and even nefarious practices in gathering and analysing our data. These suggestions didn’t sit well with me, so I sent the following email to the Editor via the Elsevier Editorial System.

 

Dear Prof XXXX,

 

Thank you for your recent consideration of our manuscript, ‘”Old?” or “New?”: The test question provokes a goal-directed bias in memory decision-making’ (Ms. No. XXXXX-XX-XXX), for publication in Consciousness & Cognition. We were, of course, disappointed that you chose to reject the manuscript.

 

Having read the justification given for rejection, we respectfully wish to respond to the decision letter. Whilst we do not believe that this response will prompt reconsideration of the manuscript (your decision letter was clear) we believe it is important to respond for two reasons. First, to reassure you that we have not engaged in any form of data manipulation, and second, to state our concerns that the editorial team at Consciousness & Cognition appear to view it as acceptable that reviewers base their recommendations on poorly substantiated inferences they have made about the motivations of authors to engage in scientific misconduct.

 

As you highlighted in your decision letter, Reviewer 1 raised “substantial concerns about the manner in which [we] carried out the statistical analysis of [our] data”. The primary concern centred on our decision to exclude participants whose d’ was below a threshold of 0.1. In this reply we hope to demonstrate to you that use of such an exclusion criterion is not only standard practice, but it is indeed a desirable analysis step which should give readers more, not less, confidence in the analyses and their interpretation. We will then take the opportunity to demonstrate, if only to you, that our data behave exactly as one would expect them to under varying relaxations of the enforced exclusion criterion.

 

Recognition memory studies often require that participants exceed a performance (sensitivity; d’) threshold before they are included in the study. This is typically carried out when the studies themselves treat a non-sensitivity parameter as their primary dependent variable (as in our rejected paper), as a means of excluding participants that were unmotivated or disengaged from the task. Below are listed a small selection of studies published in the past 2 years which have used sensitivity-based exclusion criteria, along with the number of participants excluded and the thresholds used:

 

Craig, Berman, Jonides & Lustig (2013) – Memory & Cognition
Word recognition
Expt2 – 10/54, Expt3 – 5/54
80% accuracy

 

Davidenko, N. & Flusberg, S.J. (2012) – Cognition
Face recognition
Expt1a – 1/56, Expt1b – 1/26, Expt2b – 3/26
chance (50%) accuracy

 

Gaspelin, Ruthruff, & Pashler, (2013). Memory & Cognition
Word recognition
Expt1 – 3/46, Expt2 – 1/49, Expt3 – 3/54
70% accuracy

 

Johnson & Halpern (2012) – Memory & Cognition
Song recognition
Expt1 – 1/20, Expt2 – 3/25
70% accuracy

 

Rummel, Kuhlmann, & Touron (2013) – Consciousness & Cognition
Word classification (prospective memory task)
Expt1 – 6/145
“prospective memory failure”

 

Sheridan & Reingold (2011) – Consciousness & Cognition
Word recognition
Expt1 – 5/64
“difficulty following instructions”

 

Shedden, Milliken, Watters & Monteiro (2013) – Consciousness & Cognition
Letter recognition
Expt4 – 5/28
85% accuracy

 

You will note that there is tremendous variation in the thresholds used, but that it is certainly not “unusual” as claimed by Reviewer 1, not even for papers published in Consciousness and Cognition. Of course, it would not be good scientific practice to accept the status quo uncritically, and we must therefore explain why the employed sensitivity-based exclusion criterion was appropriate for our study. The reasoning is that we were investigating an effect associated with higher order modulation of memory processes. If we had included participants with d’s below 0.1 (an overall accuracy rate of approximately 52% where chance responding is 50%) then it is reasonable to assume that these participants were not making memory decisions based on memory processes, but were at best contributing noise to the data (e.g. via random responding), and at worst systematically confounding the data. To demonstrate the latter point, one excluded participant from Experiment 1 had a d’ of -3.7 (overall accuracy rate of 13%), which was most likely due to responding using the opposite keys to those which they were instructed to use, substituting “new” responses for “old” responses. As we were investigating the effects of the test question on the proportion of “new” and “old” responses, it is quite conceivable that if they had displayed the same biases as our included participants did overall, they would have reduced our ability to detect a real effect. Including participants who did not meet our inclusion criteria, instead of helping us to find effects that hold across all participants, would have systematically damaged the integrity of our findings, leading to reduced estimates of effect size caused by ignoring the potential for influence of confounding variables.
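To make the criterion entirely concrete, it amounts to nothing more elaborate than the following (a minimal sketch in Python; the participant counts shown are purely illustrative, not our data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity: z(hit rate) - z(false alarm rate), with a small correction
    so that perfect or empty cells do not produce infinite z values."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Purely illustrative counts: (hits, misses, false alarms, correct rejections)
participants = {"p01": (40, 10, 12, 38), "p02": (26, 24, 25, 25)}
included = [p for p, counts in participants.items() if d_prime(*counts) >= 0.1]
print(included)  # p02 responds at roughly chance level and so is excluded
```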

 

If we had been given the opportunity to respond to Reviewer 1’s critique via the normal channels, we would have also corrected Reviewer 1’s inaccurate reading of our exclusion rates. As we stated very clearly in the manuscript, our sensitivity-based exclusion rates were 5%, 6% and 17%, not 25%, 6% and 17%. Reviewer 1 has conflated Experiment 1’s exclusions based on native language with exclusion based on sensitivity. As an aside, we justify exclusion based on language once again as a standard exclusion criterion in word memory experiments to ensure equivalent levels of word comprehension across participants. This is of particular importance when conducting online experiments which allow anyone across the world to participate. In coding the experiment, we thought it far more effective to allow all visitors to participate after first indicating their first language, with a view to excluding non-native speakers’ data from the study after they had taken part. We wanted all participants to have the opportunity to take part in the study (and receive feedback on their memory performance – a primary motivator for participation according to anecdotal accounts gleaned from social media) and to minimise any misreporting of first language which would add noise to the data without recourse for its removal.

 

We would next have responded to Reviewer 1’s claims that our conclusions are not generalisable based on the subset of analysed data by stating that Reviewer 1 is indeed partially correct. Our conclusions would not have been found had we included participants who systematically confounded the data (as discussed above) – as Reviewer 1 is at pains to point out, the effect is small. Nonetheless, as demonstrated by our replication of the findings over three experiments and the following reanalyses, our findings are robust enough to withstand the inclusion of some additional noise, within reason. To illustrate, we re-analysed the data under two new inclusion thresholds: the first, an exclusion threshold of d’ <= 0, equivalent to excluding only those responding at or below chance (Inclusion 1); the second, a full inclusion in which all participants were analysed (Inclusion 2). For the sake of brevity we list here the results as they relate to our primary manipulation, the effects of question on criterion placement.

 

EXPERIMENT 1
Original – old emphasis c > new emphasis c, t(89) = 2.141, p = .035, d = 0.23.

 

Inclusion 1:
old emphasis c > new emphasis c, t(90) = 2.32, p = .023, d = 0.24.
Inclusion 2:
no difference between old and new emphasis c, t(94) = 1.66, p = .099, d = 0.17

 

EXPERIMENT 2
Original:
Main effect of LOP, F(1,28) = 23.66, p = .001, ηp² = .458, shallow > deep.
Main effect of emphasis, F(1,28) = 6.65, p = .015, ηp² = .192, old? > new?.
No LOP x emphasis interaction, F(1,28) = 3.13, p = .088, ηp² = .101.
Shallow LOP sig old > new, t(28) = 3.05, p = .005, d = 0.62.
Deep LOP no difference, t(28) = .70, p = .487, d = 0.13.

 

Inclusion 1:
No change in excluded participants – results identical.
Inclusion 2:
Main effect of LOP, F(1,30) = 20.84, p = .001, ηp² = .410, shallow > deep.
Main effect of emphasis, F(1,30) = 8.73, p = .006, ηp² = .225, old? > new?.
Sig LOP x emphasis interaction, F(1,30) = 4.28, p = .047, ηp² = .125.
Shallow LOP sig old > new, t(30) = 3.50, p = .001, d = 0.64.
Deep LOP no difference, t(30) = .76, p = .454, d = 0.14.

 

EXPERIMENT 3
Original:
No main effect of response, F(1,28) = 3.73, p = .064, ηp² = .117.
No main effect of question, F < 1.
Significant question x response interaction, F(1,28) = 8.50, p = .007, ηp² = .233.
“Yes” response format, old > new, t(28) = 2.41, p = .023, d = 0.45.
“No” response format, new > old, t(28) = 2.77, p = .010, d = 0.52.

 

Inclusion 1:
No change in excluded participants – results identical.
Inclusion 2:
No main effect of response, F < 1.
No main effect of question, F < 1.
No question x response interaction, F(1,34) = 3.07, p = .089, ηp² = .083.
“Yes” response format, old > new, t(34) = 1.33, p = .19, d = 0.23.
“No” response format, new > old, t(34) = 1.84, p = .07, d = 0.32.

 

To summarise, including participants who responded anywhere above chance had no untoward effects on the results of our inferential statistics and therefore our interpretation cannot be called into question by the results of Inclusion 1. Inclusion 2 on the other hand had much more deleterious effects on the patterns of results reported in Experiments 1 and 3. This is exactly what one would expect given the example we described previously where the inclusion of a participant responding systematically below chance would elevate type II error. In this respect, our reanalysis in response to Reviewer 1’s comments does not weaken our interpretation of the findings.

 

As a final point, we wish to express our concerns about the nature of criticism made by Reviewer 1 and accepted by you as appropriate within peer-review for Consciousness & Cognition. Reviewer 1 states that we the authors “must accept the consequences of data that might disagree with their hypotheses”. This strongly suggests that we have not done so in the original manuscript and have therefore committed scientific misconduct or entered a grey-area verging on misconduct. We deny this allegation in the strongest possible terms and are confident we have demonstrated that this is absolutely not the approach we have taken through the evidence presented in this response. Indeed if Reviewer 1 wishes to make these allegations, they would do well to provide evidence beyond the thinly veiled remarks in their review. If they wish to do this, we volunteer full access to our data for them to conduct any tests to validate their claims, e.g. those carried out in Simonsohn (2013) in which a number of cases of academic misconduct and fraud are exposed through statistical methods. We, and colleagues we have spoken to about this decision, found it worrying that you chose to make your editorial decision on the strength of this unsubstantiated allegation and believe that at the very least we should have been given the opportunity to respond to the review, as we have done here, via official channels.

 

We thank you for your time.

 

Sincerely,

 

Akira O’Connor & Ravi Mill

 

References
Craig, K. S., Berman, M. G., Jonides, J. & Lustig, C. (2013). Escaping the recent past: Which stimulus dimensions influence proactive interference? Memory & Cognition, 41, 650-670.
Davidenko, N. & Flusberg, S. J. (2012). Environmental inversion effects in face perception. Cognition, 123(2), 442-447.
Gaspelin, N., Ruthruff, E. & Pashler, H. (2013). Divided attention: An undesirable difficulty in memory retention. Memory & Cognition, 41, 978-988.
Johnson, S. K. & Halpern, A. R. (2012). Semantic priming of familiar songs. Memory & Cognition, 40, 579-593.
Rummel, J., Kuhlmann, B. G. & Touron, D. R. (2013). Performance predictions affect attentional processes of event-based prospective memory. Consciousness and Cognition, 22(3), 729-741.
Shedden, J. M., Milliken, B., Watters, S. & Monteiro, S. (2013). Event-related potentials as brain correlates of item specific proportion congruent effects. Consciousness and Cognition, 22(4), 1442-1455.
Sheridan, H. & Reingold, E. M. (2011). Recognition memory performance as a function of reported subjective awareness. Consciousness and Cognition, 20(4), 1363-1375.
Simonsohn, U. (2013). Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone. Psychological Science, 24(10), 1875-1888.

 

The Editor’s very reasonable response was to recommend we resubmit the manuscript, which we did. The manuscript was then sent out for review to two new reviewers, and the process began again, this time with a happier ending.

My recommendations for drafting unsolicited responses are:

  • Allow the dust to settle (this is key to Jim Grange’s tips on Dealing with Rejection too). We see injustice everywhere in the first 24 hours following rejection. Give yourself time to calm down and later, revisit the rejection with a more forensic eye. If the reviews or editorial letter warrant a response, they will still warrant it in a few days, by which time you will be better able to pick the points you should focus on.
  • Be polite. (I skate on thin ice in a couple of passages in the letter above, but overall I think I was OK).
  • Support your counterarguments with evidence. I think our letter did this well. If you need to do some more analyses to achieve this, why not? It will at least reassure you that the reviewer’s points aren’t supported by your data.
  • Don’t expect anything to come of your letter. At the very least, it will have helped you manage some of your frustration.

Earlier in the year I was asked by the University of St Andrews Open Access Team to give an interview to a group from the University of Edinburgh Library. I’m certainly no expert, but I’m more excited about the idea than some researchers here at St Andrews (though there are some other researchers here, like Kim McKee, who are extremely enthusiastic about it). The video is embedded below, with my 40 second contribution from 8:44 onwards.

 

 

My interview actually lasted more than half an hour, though most of what I was trying to communicate wasn’t really consistent with what the interviewers wanted. If you watch the video through, you’ll notice the editorial push towards green rather than gold OA*. I do understand this push, especially from a library’s perspective – we can and should be uploading the vast majority of our work to institutional repositories and making it open access via the green route – but I don’t think it helps the long-term health of academic publishing.

I spent a long time in my interview arguing for gold open access, but not the ‘hybrid’ gold open access offered by traditional publishers like Elsevier. (I find the current implementation of hybrid open access pretty abhorrent. It seems to me to be an utterly transparent way for the traditional publishers to milk the cow at both ends, collecting subscriptions and APCs.) I’m not even too thrilled by the native OA publishers like Frontiers and PLoS, not because they’re bad for academic publishing (I think they are far better for the dissemination of research than the traditional publishers), but because they’re not revolutionary (though see Graham Steel’s comments below)**. Their model is pretty straightforward (or you could call it boring and expensive): by shifting the collection of money from the back end to the front end, they negate the need for institutional subscriptions, charging APCs in the region of thousands of dollars instead. What I am excited about is the gold open access offered by some open access publishers who have thought about a publishing model for the modern era from the ground up, rather than by simple adaptation of printing press-era models. Publishers like PeerJ and The Winnower have done just this, and these are the sorts of gold OA publishers I hope will change the way we disseminate research.

Sadly for me, I didn’t express myself well enough on that matter to make the final cut of this video. Next time…

 

* Here’s a brief primer in case you’re not familiar with these terms. Green OA is repository-based free OA – you typically deposit author versions (the documents submitted to the journal rather than the typeset documents published by the journal) into an institutional database. Anyone who knows to look in the repository for your work will find it there. Gold OA is not free – there are almost always article processing charges (APCs) – but once paid for, anyone can access the publisher version of your paper directly from the publisher’s website.

 

** Parentheses added 14/08/2014 following Graham Steel’s comments.