Last week Ravi Mill and I had a paper accepted to Consciousness and Cognition. It was accepted after 49 weeks with the journal in four rounds of review. The editorial decisions on the paper were: Reject, Revise and Resubmit, Accept with Minor Revisions and finally, Accept. What makes this decision history somewhat remarkable is that it was initially rejected from the journal it was eventually published in.

This blog post won’t give as much information on that initial rejection as I wanted it to – I sought permission from the journal to publish all correspondence from the reviewer and the editor, which was denied. Below you will find only my response to the editorial decision. As context, the manuscript was rejected on the basis of one review, which suggested that we had adopted some unconventional and even nefarious practices in gathering and analysing our data. These suggestions didn’t sit well with me, so I sent the following email to the Editor via the Elsevier Editorial System.

 

Dear Prof XXXX,

 

Thank you for your recent consideration of our manuscript, ‘”Old?” or “New?”: The test question provokes a goal-directed bias in memory decision-making’ (Ms. No. XXXXX-XX-XXX), for publication in Consciousness & Cognition. We were, of course, disappointed that you chose to reject the manuscript.

 

Having read the justification given for rejection, we respectfully wish to respond to the decision letter. Whilst we do not believe that this response will prompt reconsideration of the manuscript (your decision letter was clear), we believe it is important to respond for two reasons: first, to reassure you that we have not engaged in any form of data manipulation, and second, to state our concern that the editorial team at Consciousness & Cognition appear to view it as acceptable for reviewers to base their recommendations on poorly substantiated inferences about the authors’ motivations to engage in scientific misconduct.

 

As you highlighted in your decision letter, Reviewer 1 raised “substantial concerns about the manner in which [we] carried out the statistical analysis of [our] data”. The primary concern centred on our decision to exclude participants whose d’ was below a threshold of 0.1. In this reply we hope to demonstrate to you that use of such an exclusion criterion is not only standard practice, but it is indeed a desirable analysis step which should give readers more, not less, confidence in the analyses and their interpretation. We will then take the opportunity to demonstrate, if only to you, that our data behave exactly as one would expect them to under varying relaxations of the enforced exclusion criterion.

 

Recognition memory studies often require that participants exceed a performance (sensitivity; d’) threshold before they are included in the analysis. This is typically done when the study treats a non-sensitivity parameter as its primary dependent variable (as in our rejected paper), as a means of excluding participants who were unmotivated or disengaged from the task. Below is a small selection of studies published in the past two years that have used sensitivity-based exclusion criteria, along with the number of participants excluded and the thresholds used:

 

Craig, Berman, Jonides & Lustig (2013) – Memory & Cognition
Word recognition
Expt2 – 10/54, Expt3 – 5/54
80% accuracy

 

Davidenko, N. & Flusberg, S.J. (2012) – Cognition
Face recognition
Expt1a – 1/56, Expt1b – 1/26, Expt2b – 3/26
chance (50%) accuracy

 

Gaspelin, Ruthruff & Pashler (2013) – Memory & Cognition
Word recognition
Expt1 – 3/46, Expt2 – 1/49, Expt3 – 3/54
70% accuracy

 

Johnson & Halpern (2012) – Memory & Cognition
Song recognition
Expt1 – 1/20, Expt2 – 3/25
70% accuracy

 

Rummel, Kuhlmann, & Touron (2013) – Consciousness & Cognition
Word classification (prospective memory task)
Expt1 – 6/145
“prospective memory failure”

 

Sheridan & Reingold (2011) – Consciousness & Cognition
Word recognition
Expt1 – 5/64
“difficulty following instructions”

 

Shedden, Milliken, Watters & Monteiro (2013) – Consciousness & Cognition
Letter recognition
Expt4 – 5/28
85% accuracy

 

You will note that there is tremendous variation in the thresholds used, but the practice itself is certainly not “unusual”, as Reviewer 1 claims – not even in papers published in Consciousness and Cognition. Of course, it would not be good scientific practice to accept the status quo uncritically, so we must explain why a sensitivity-based exclusion criterion was appropriate for our study. The reasoning is that we were investigating an effect associated with higher-order modulation of memory processes. Had we included participants with d’ values below 0.1 (an overall accuracy rate of approximately 52% where chance responding is 50%), it is reasonable to assume that these participants were not making decisions based on memory processes, but were at best contributing noise to the data (e.g. via random responding), and at worst systematically confounding it. To demonstrate the latter point, one excluded participant from Experiment 1 had a d’ of -3.7 (an overall accuracy rate of 13%), most likely because they responded using the opposite keys to those they were instructed to use, substituting “new” responses for “old” responses. As we were investigating the effects of the test question on the proportions of “new” and “old” responses, it is quite conceivable that, had this participant displayed the same biases as our included participants did overall, they would have reduced our ability to detect a real effect. Including participants who did not meet our inclusion criteria would not have helped us find effects that hold across all participants; it would have systematically damaged the integrity of our findings by allowing confounding influences to reduce our effect size estimates.
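As an illustration of how mechanical this exclusion step is, here is a short sketch (the participant labels and hit/false-alarm rates are invented for illustration, not taken from our data) that computes d’ as z(hit rate) − z(false-alarm rate) and applies the 0.1 threshold:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Invented example participants: (hit rate, false-alarm rate)
participants = {
    "p01": (0.70, 0.30),  # clearly above chance
    "p02": (0.52, 0.50),  # near chance: d' just under 0.1
    "p03": (0.13, 0.87),  # reversed response keys: strongly negative d'
}

THRESHOLD = 0.1
included = {pid: rates for pid, rates in participants.items()
            if d_prime(*rates) >= THRESHOLD}
```

Only p01 survives the cut: the near-chance responder and the key-reverser are both removed before any analysis of response bias takes place.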

 

If we had been given the opportunity to respond to Reviewer 1’s critique via the normal channels, we would also have corrected Reviewer 1’s inaccurate reading of our exclusion rates. As we stated very clearly in the manuscript, our sensitivity-based exclusion rates were 5%, 6% and 17%, not 25%, 6% and 17%. Reviewer 1 has conflated Experiment 1’s exclusions based on native language with its exclusions based on sensitivity. As an aside, exclusion based on language is, once again, a standard criterion in word memory experiments, used to ensure equivalent levels of word comprehension across participants. It is of particular importance in online experiments, which allow anyone in the world to participate. In coding the experiment, we thought it far more effective to allow all visitors to participate after first indicating their first language, and to exclude non-native speakers’ data after they had taken part. This gave everyone the opportunity to take part in the study (and to receive feedback on their memory performance – a primary motivator for participation, according to anecdotal accounts gleaned from social media) and minimised any misreporting of first language, which would have added noise to the data with no recourse for its removal.

 

We would next have responded to Reviewer 1’s claim that our conclusions do not generalise beyond the analysed subset of the data by conceding that Reviewer 1 is partially correct. Our conclusions would not have been found had we included participants who systematically confounded the data (as discussed above) – as Reviewer 1 is at pains to point out, the effect is small. Nonetheless, as demonstrated by our replication of the findings across three experiments and by the following reanalyses, our findings are robust enough to withstand the inclusion of some additional noise, within reason. To illustrate, we re-analysed the data under two relaxed inclusion thresholds. The first (Inclusion 1) excluded only participants with d’ ≤ 0, i.e. chance-level responding or worse; the second (Inclusion 2) was a full inclusion in which all participants were analysed. For the sake of brevity we list here only the results relating to our primary manipulation, the effect of the test question on criterion placement.

 

EXPERIMENT 1
Original:
old emphasis c > new emphasis c, t(89) = 2.141, p = .035, d = 0.23.

 

Inclusion 1:
old emphasis c > new emphasis c, t(90) = 2.32, p = .023, d = 0.24.
Inclusion 2:
no difference between old and new emphasis c, t(94) = 1.66, p = .099, d = 0.17.

 

EXPERIMENT 2
Original:
Main effect of LOP, F(1,28) = 23.66, p = .001, ηp2 = .458, shallow > deep.
Main effect of emphasis, F(1,28) = 6.65, p = .015, ηp2 = .192, old? > new?.
No LOP x emphasis interaction, F(1,28) = 3.13, p = .088, ηp2 = .101.
Shallow LOP sig old > new, t(28) = 3.05, p = .005, d = 0.62.
Deep LOP no difference, t(28) = .70, p = .487, d = 0.13.

 

Inclusion 1:
No change in excluded participants – results identical.
Inclusion 2:
Main effect of LOP, F(1,30) = 20.84, p = .001, ηp2 = .410, shallow > deep.
Main effect of emphasis, F(1,30) = 8.73, p = .006, ηp2 = .225, old? > new?.
Sig LOP x emphasis interaction, F(1,30) = 4.28, p = .047, ηp2 = .125.
Shallow LOP sig old > new, t(30) = 3.50, p = .001, d = 0.64.
Deep LOP no difference, t(30) = .76, p = .454, d = 0.14.

 

EXPERIMENT 3
Original:
No main effect of response, F(1,28) = 3.73, p = .064, ηp2 = .117
No main effect of question, F < 1.
Significant question x response interaction, F(1,28) = 8.50, p = .007, ηp2 = .233.
“Yes” response format, old > new, t(28) = 2.41, p = .023, d = 0.45.
“No” response format, new > old, t(28) = 2.77, p = .010, d = 0.52.

 

Inclusion 1:
No change in excluded participants – results identical.
Inclusion 2:
No main effect of response, F < 1.
No main effect of question, F < 1.
No question x response interaction, F(1,34) = 3.07, p = .089, ηp2 = .083.
“Yes” response format, old > new, t(34) = 1.33, p = .19, d = 0.23.
“No” response format, new > old, t(34) = 1.84, p = .07, d = 0.32.

 

To summarise, including participants who responded anywhere above chance had no untoward effects on our inferential statistics, so our interpretation cannot be called into question by the results of Inclusion 1. Inclusion 2, on the other hand, had much more deleterious effects on the patterns of results reported in Experiments 1 and 3. This is exactly what one would expect given the example described previously, in which the inclusion of a participant responding systematically below chance would elevate the Type II error rate. In this respect, our reanalysis in response to Reviewer 1’s comments does not weaken our interpretation of the findings.
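For readers who want to poke at the mechanics, the two quantities doing the work here – the criterion c and the paired t statistic – are simple to compute. This is a minimal sketch rather than our actual analysis code, and the per-participant values fed in at the bottom are invented:

```python
import math
from statistics import NormalDist, mean, stdev

def criterion(hit_rate, fa_rate):
    """Decision criterion: c = -(z(H) + z(FA)) / 2.
    Positive c is conservative (bias towards "new"), negative is liberal."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2

def paired_t(xs, ys):
    """Paired-samples t statistic and degrees of freedom (n - 1)."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n)), n - 1

# Invented per-participant criterion values under the two question emphases:
c_old_emphasis = [0.25, 0.10, 0.30, 0.05]
c_new_emphasis = [0.10, 0.00, 0.20, 0.02]
t, df = paired_t(c_old_emphasis, c_new_emphasis)
```

Re-running `paired_t` on the samples retained under each inclusion rule is all the reanalysis above amounts to; the only thing that changes between Original, Inclusion 1 and Inclusion 2 is which participants' c values enter the lists.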

 

As a final point, we wish to express our concerns about the nature of the criticism made by Reviewer 1 and accepted by you as appropriate within peer review for Consciousness & Cognition. Reviewer 1 states that we the authors “must accept the consequences of data that might disagree with their hypotheses”. This strongly suggests that we have not done so in the original manuscript, and have therefore committed scientific misconduct or entered a grey area verging on misconduct. We deny this allegation in the strongest possible terms, and are confident that the evidence presented in this response demonstrates that this is absolutely not the approach we have taken. Indeed, if Reviewer 1 wishes to make these allegations, they would do well to provide evidence beyond the thinly veiled remarks in their review. If they wish to do so, we volunteer full access to our data so that they may conduct any tests to validate their claims, e.g. those carried out in Simonsohn (2013), in which a number of cases of academic misconduct and fraud are exposed through statistical methods. We, and the colleagues we have spoken to about this decision, find it worrying that you chose to make your editorial decision on the strength of this unsubstantiated allegation, and believe that at the very least we should have been given the opportunity to respond to the review, as we have done here, via official channels.

 

We thank you for your time.

 

Sincerely,

 

Akira O’Connor & Ravi Mill

 

References
Craig, K. S., Berman, M. G., Jonides, J. & Lustig, C. (2013). Escaping the recent past: Which stimulus dimensions influence proactive interference? Memory & Cognition, 41, 650-670.
Davidenko, N. & Flusberg, S. J. (2012). Environmental inversion effects in face perception. Cognition, 123(2), 442-447.
Gaspelin, N., Ruthruff, E. & Pashler, H. (2013). Divided attention: An undesirable difficulty in memory retention. Memory & Cognition, 41, 978-988.
Johnson, S. K. & Halpern, A. R. (2012). Semantic priming of familiar songs. Memory & Cognition, 40, 579-593.
Rummel, J., Kuhlmann, B. G. & Touron, D. R. (2013). Performance predictions affect attentional processes of event-based prospective memory. Consciousness and Cognition, 22(3), 729-741.
Shedden, J. M., Milliken, B., Watters, S. & Monteiro, S. (2013). Event-related potentials as brain correlates of item specific proportion congruent effects. Consciousness and Cognition, 22(4), 1442-1455.
Sheridan, H. & Reingold, E. M. (2011). Recognition memory performance as a function of reported subjective awareness. Consciousness and Cognition, 20(4), 1363-1375.
Simonsohn, U. (2013). Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone. Psychological Science, 24(10), 1875-1888.

 

The Editor’s very reasonable response was to recommend we resubmit the manuscript, which we did. The manuscript was then sent out for review to two new reviewers, and the process began again, this time with a happier ending.

My recommendations for drafting unsolicited responses are:

  • Allow the dust to settle (this is key to Jim Grange’s tips on Dealing with Rejection too). We see injustice everywhere in the first 24 hours following rejection. Give yourself time to calm down and later, revisit the rejection with a more forensic eye. If the reviews or editorial letter warrant a response, they will still warrant it in a few days, by which time you will be better able to pick the points you should focus on.
  • Be polite. (I skate on thin ice in a couple of passages in the letter above, but overall I think I was OK).
  • Support your counterarguments with evidence. I think our letter did this well. If you need to do some more analyses to achieve this, why not? It will at least reassure you that the reviewer’s points aren’t supported by your data.
  • Don’t expect anything to come of your letter. At the very least, it will have helped you manage some of your frustration.

Earlier in the year I was asked by the University of St Andrews Open Access Team to give an interview to a group from the University of Edinburgh Library. I’m certainly no expert, but I’m more excited about open access than some researchers here at St Andrews (though there are others here, like Kim McKee, who are extremely enthusiastic about it). The video is embedded below, with my 40-second contribution from 8:44 onwards.

 

 

My interview actually lasted more than half an hour, though most of what I was trying to communicate wasn’t really consistent with what the interviewers wanted. If you watch the video through, you’ll notice the editorial push towards green rather than gold OA*. I do understand this push, especially from a library’s perspective – we can and should be uploading the vast majority of our work to institutional repositories and making it open access via the green route – but I don’t think it helps the long-term health of academic publishing.

I spent a long time in my interview arguing for gold open access, but not the ‘hybrid’ gold open access offered by traditional publishers like Elsevier. (I find the current implementation of hybrid open access pretty abhorrent. It seems to me an utterly transparent way for the traditional publishers to milk the cow at both ends, collecting subscriptions and APCs.) I’m not even too thrilled by the native OA publishers like Frontiers and PLoS, not because they’re bad for academic publishing (I think they are far better for the dissemination of research than the traditional publishers), but because they’re not revolutionary (though see Graham Steel’s comments below)**. Their model is pretty straightforward (or you could call it boring and expensive) – by shifting the collection of money from the back-end to the front-end, they negate the need for institutional subscriptions by charging APCs in the region of $1,000s. What I am excited about is the gold open access offered by publishers who have thought about a publishing model for the modern era from the ground up, rather than by simple adaptation of printing-press-era models. Publishers like PeerJ and The Winnower have done just this, and these are the sorts of gold OA publishers I hope will change the way we disseminate research.

Sadly for me, I didn’t express myself well enough on that matter to make the final cut of this video. Next time…

 

* Here’s a brief primer in case you’re not familiar with these terms. Green OA is repository-based free OA – you typically deposit author versions (the documents submitted to the journal rather than the typeset documents published by the journal) into an institutional database. Anyone who knows to look in the repository for your work will find it there. Gold OA is not free – there are almost always article processing charges (APCs) – but once paid for, anyone can access the publisher version of your paper directly from the publisher’s website.

 

** Parentheses added 14/08/2014 following Graham Steel’s comments.

Last weekend I had the honour of being Best Man at a university friend’s wedding. It was a beautiful day spent in the sunshine of St Albans and then in the low-ceilinged, close comfort of the oldest pub in England, Ye Olde Fighting Cocks.

Ye Olde Fighting Cocks, St Albans (image from hertsmemories.org.uk)

As Best Man I had a certain number of duties to carry out, with the speech amongst the most highly anticipated by those attending the celebrations. For those unfamiliar with what is expected here, the Best Man’s speech is traditionally the last of the speeches and the point in proceedings when thanks and sentimentality give way to humour and a raucousness that sets the tone for the night ahead. For weeks beforehand, people had been asking how I was getting on with it, noticing my terse response (“getting on just fine thanks”), and reassuring me that there was plenty of material to draw on. The expectation that it be funny was pretty inescapable.

I lecture statistics to 150 students on a regular basis. Over the past few years I have managed to overcome my hatred of public speaking and relax into these one-sided conversations on t-tests, ANOVAs and regression. One of the luxuries of speaking in front of large audiences as part of my job is that I know what it feels like and I know how to deal with the mechanics of getting my words out of my mouth in quite an intimidating situation. There are even moments in lectures now when I notice that I’m in a flow state, enjoying the fluidity of speaking about something I know well. For this reason, the prospect of getting up in front of over 100 boozy partiers and speaking about a good friend was not what I found intimidating. The expectation that I make them laugh, now that was scary.

I know that to speak well, I need to prepare (I have written about the routine I go through for important talks here). This is exactly what I did for the Best Man’s speech. The result was one of the most exhilarating experiences of performing in front of an audience I have ever had – the audience enjoyed themselves and I had a tremendous time. I didn’t have to buy a drink for the rest of the evening! Here is what I did to get into that position.

 

Be Yourself

1. I reassured myself with the knowledge that, in standing up in front of lecture theatres full of students, I am paid to do something very similar to this. The major difference between lecturing and speech-giving was what the audience expects of the content. I knew that I was expected to offer toasts to thank various people, but whom exactly? To help with this, I googled the running order for content within a Best Man’s speech. By chance I found The Art of Manliness’ 10 Steps to the Best Best Man Speech, from which I got some suggested running order information, but much more importantly, I was reassured by the insistence that I ought to be myself. I have never wanted to stand up in front of people to make them laugh, but I do nonetheless enjoy making small groups of friends laugh when telling stories in the pub. Bearing this in mind allowed me to feel comfortable in not trying to ape my favourite comics, but simply allowing myself to find my inner story-teller and let him speak to a larger group of friends. This was the me I tried to be when writing the speech in advance of the wedding.

 

Write and Practise the Speech in Advance

1. Write the speech beforehand. Write it out even if you’re not going to read it. I always write out important talks so that I can practise my phrasing and I edit them to whatever worked best after each run-through. Thus, I start with a script in rather broken spoken prose, which is edited into something that sounds natural by the time I’m done with it. Over the course of practising I learn what I want to convey in each sentence so that I can say it in any number of ways, off-script, by the time I get to delivering it to an audience. I know that this is a matter of personal preference, but this is what works for me and I could never deliver any speech or talk without practising it a few times first.

2. When lecturing or giving talks, the transitions between slides are often tricky points. My Best Man’s speech had similarly tricky transition points where I moved from toasts to anecdotes or from one story to the next. Scripting these transitions as part of scripting the entire speech gave me an idea of how to move on as seamlessly as possible.

 

Work on Timing

1. Don’t out-stay your welcome. I can’t over-run my lectures because students will start leaving. They have other places to be. Wedding guests probably won’t leave, but they won’t applaud you for going on and on either. Silky, a stand-up comedian attending the wedding, spoke to me as we were sitting down to dinner. He gave me the following advice: “If it’s going badly, get off quick. If it’s going well, get off quick.” In other words, keep it short. Before I started writing the speech I was aiming for about 10 minutes. Run-throughs lasted about 13 minutes (an acceptable timeframe according to the groom, whom I had asked about this beforehand). If you’re running to a tight schedule, practising the speech will give you an idea of whether or not you need to remove content.

2. Comic timing is a little harder to work on. This is something I have rarely had to worry about in lectures (I tend to play them straight) and I’m not sure how I would go about practising comic timing other than by doing this sort of speaking more. Something that threw me off a few times was that people started tittering before I had delivered the punchlines. The audience expect you to be funny and they want you to feel comfortable, so they will laugh when you give them an excuse to. This made me fluff a line or two. It is something I will be more mindful of should I ever have to do this again.

 

Logistics and Planning

1. Know your AV equipment. I delivered the speech into a hand-held microphone. I had seen the first speaker struggle a little with microphone distance, so I was determined not to make the same mistake and, in the end, delivered my speech with the mic resting on my chin just below my lower lip. It probably looked weird, but I managed to get through the whole speech without any microphone dropout. (In future I will have a go on the amplification equipment beforehand so I can work out something a little more elegant.)

2. Coordinate toasts and readings with other speakers. When you are delivering a lecture course, you want to avoid not covering important material (dangerous for exams) and duplication (boring). The same is true of wedding speeches. The night before, the groom and I had discussed who was giving which toasts so that, by the time I had finished, everyone who needed to be thanked would have been thanked. Had we not had this conversation, the groomsmen would have gone without a toast – an omission I’m glad we avoided. On a related note, I also opted to read a short passage to the bride and groom to close my speech. It was only at the wedding service, when one of the passages I had been considering for my own speech was read, that I realised I had got lucky with the choice I eventually made. If you are doing something unconventional like reading a passage during your speech, have a quiet word in the groom’s ear well in advance to ask what readings they have planned for the service.

 

Being Funny

The three points below target a specific aim, being funny, which isn’t a priority for me when I lecture. I don’t make much reference to lecturing below because these points are specific to my experience of the wedding speech situation.

1. Despite the pressure, being funny is not the be-all and end-all. Having typed “Best Man’s Speech” into Google, I was surprised to find the first auto-complete suggestion to be “Best Man’s Speech one-liners”. I don’t use the slides my course textbook publishers give me when I lecture, and I would be similarly wary about using other people’s jokes to portray my relationship with the groom. What I wanted to do, above all else, was to paint a picture of the groom as I know him. Having said this, such is the pressure to be funny that I’m not surprised people google jokes for use in wedding speeches, or end up telling embarrassing stories from the stag do. Again, the Art of Manliness article delivers reassurance:

What gets people in trouble is attempting to be funny by sharing some embarrassing story or cracking some lame joke about a ball and chain. It usually comes out horribly and no one laughs. It’s okay to share a humorous anecdote, but not one that gets laughs at the expense of your friend and his new wife and embarrasses them and their guests.

This advice set the tone for the stories I wanted to tell. I wanted those in the audience who knew the groom to see him in the jokes I was telling and for this recognition to be funny in and of itself. I also wanted to capture a range of experiences I had with the groom, from those that were funny to those that were sad. The sadder moments would act as points from which to rebound back to laughter, but would also help the audience understand what a lovely thing it was for the groom to have met the bride at the time he did.

2. Avoid in-jokes. My university friends would invariably ask if I was including their own favourite university story of the groom. Many of these stories were very funny, but only in the context of the many in-jokes we shared as a group of close-knit friends. I largely avoided these references because I wanted to appeal to as many in the audience as possible. Those that I did include were equally viable as terrible puns or cultural references, which the university crowd found funnier because of their shared history of appreciating them.

3. Enjoy the format. I was in the privileged position of speaking to an audience who expected me to make them laugh. When writing the speech I experimented with jokes, tweaking wording, timing and structure. I eventually settled on a narrative that in the end called back to humorous stories about the groom to illustrate how normal the bride is in comparison. I have seen comedians using and even explaining this device to great comedic effect, and incorporating it into the structure of my own speech gave me a sense that I had actually written a funny speech. This is undoubtedly an aspect of the Best Man’s speech that I would not have thought to focus on had I been preoccupied by the prospect of public speaking. My experience of lecturing allowed me to build on its commonalities with giving a Best Man’s speech to embrace and ultimately enjoy the format tremendously.

I give all of my lectures and presentations using the cloud-based Prezi. Because of this, I have a subscription, which gives me access to the offline Prezi creator, Prezi Desktop. The major advantage of Prezi Desktop is that you can work on presentations without an internet connection and upload your presentations to the cloud later… In theory.

Upload to Prezi.com…

Last week I ran into an issue with Prezi Desktop where the ‘Upload to Prezi.com…’ menu function didn’t work. This is not what you want to happen the night before your lecture. Here was the problem as I described it to Prezi Support:

I cannot upload a prezi I have created in the desktop editor (2.84 MB in size). Sometimes it stops (hangs) at 10%, other times as high as 35%. Occasionally (~20% of the time) I get an error message about media, prompting me to strip out all embedded videos (less than ideal), but this still does not resolve the issue. The exact error I receive is:

There was some trouble uploading your content (Error: uploading_media_files)

A search of Prezi’s support database yields this, which suggests that there’s something wrong with my firewall settings. I therefore tried it with the firewall off and on, on different computers, and on different ISPs. No luck.

Eventually Prezi support uploaded the file for me and also came through with the following nugget of advice.

…my suggestion for next time would be to name the pez file something very simple, with only letters and numbers.

The files I had been trying to upload were named PS2001_20122013_Lect1.pez, PS2001_20122013_Lect2.pez etc. As soon as I stripped them down to PS2001Lect1, PS2001Lect2 etc., upload worked just fine. This is a very annoying bug in the Prezi Desktop functionality that needs to be fixed, especially as this filename suggestion isn’t available on Prezi’s support forum.

To ensure Prezi Desktop’s ‘Upload to Prezi.com…’ menu function works, make your prezi filenames short and avoid non-alphanumeric characters (e.g. underscores and ampersands).
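If you have a folder of .pez files to rename, the cleanup can be scripted. This is a sketch of the workaround, not an official Prezi fix: it strips everything except letters and digits from each file’s stem (you may also want to shorten the result, as I did):

```python
import re
from pathlib import Path

def sanitize(filename: str) -> str:
    """Strip non-alphanumeric characters from the stem, keeping the extension."""
    p = Path(filename)
    return re.sub(r"[^A-Za-z0-9]", "", p.stem) + p.suffix

# e.g. rename every .pez file in the current directory:
# for f in Path(".").glob("*.pez"):
#     f.rename(f.with_name(sanitize(f.name)))
```

`sanitize("PS2001_20122013_Lect1.pez")` gives `"PS200120122013Lect1.pez"` – letters and numbers only, as Prezi support suggested.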

Frontiers use a pre-publication peer review system that differs from the standard peer-review system used by most traditional journals. After undergoing Independent Review, an initial peer review in which reviewers give star ratings and make comments on your manuscript (the standard reviewing format), your manuscript moves to Interactive Review, an online forum in which points from the Independent Review are dealt with by the authors, with back-and-forth discussion until all parties are satisfied. Frontiers make much of this system, claiming that it increases reviewing efficiency and leads to an average submission-to-publication lag of only 3 months.

A paper on which I am an author is currently undergoing Interactive Review. Up until now I can’t say that the review process has been particularly speedy, with some reviewers reluctant to engage in the Interactive Review (indeed, I suspect many of the time savings from submission to publication come after the manuscript has been accepted for publication). Frustrated, I emailed the Frontiers in Psychology Editorial Office to ask what their official guidance is on the time it should take authors and reviewers to complete each stage of the review process, and how they deal with tardiness. Here are their answers:

  • Independent Review – 10 days
  • Interactive Review, authors’ responses to Independent Reviews – 45 days
  • Interactive Review, reviewers’ responses to authors’ responses – 10 days
  • Interactive Review, authors’ responses to reviewers’ responses – 10 days (and so on)
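Adding those targets up gives a useful benchmark: one complete round, with every deadline met, comes to 75 days – roughly consistent with the claimed 3-month average lag:

```python
# Frontiers' stated per-stage targets, in days (from their editorial office):
stage_days = {
    "independent_review": 10,
    "author_response_to_reviews": 45,
    "reviewer_response": 10,
    "author_follow_up": 10,
}
one_full_round = sum(stage_days.values())  # 75 days when every deadline is met
```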

Frontiers say that they “have a dedicated team…  who work on ensuring that the review process of manuscripts runs smoothly. Should participants become delayed, we monitor the situation and also remind them about taking action.”

This sounds like a good system in principle, though I remain to be convinced about how effectively delays are dealt with. It certainly seems that if a reviewer wants to slow down publication of a paper, they can do so at little cost to themselves. Just like traditional peer review, Frontiers’ review system relies on goodwill from all participants and a strong editor, maybe even more so, as there are any number of points at which a reviewer can bring the Interactive Review to a halt. Apart from the increased transparency (it’s blindingly obvious that Reviewer 2 not only hates your paper, but can’t be bothered to say so quickly), there doesn’t seem to be much that is revolutionary here. I suppose I’ll just have to wait and see.

I’m going to disregard the usual speculation about type-setter and editorial assistant salaries, and how much distribution infrastructure costs, because these are all tied to the true costs of publishing from a publisher’s perspective, which is not what I’m interested in. Instead, I’m going to use figures from my employer, the University of St Andrews, to make a crude estimate of the per-article price this very small market could bear for open access publishing.

The first assumption I make here is that journal subscriptions and gold open access journal publication costs should be drawn from the same pool of money. That is, they are university outgoings that support publishers, thereby funding the publication of university-based researchers’ work.

The second assumption, which almost immediately serves to highlight how useless this back-of-the-envelope calculation is, is that we no longer need to subscribe to paywalled journals and can therefore channel all funds that we would have spent on this into open access publishing. For argument’s sake, let’s suppose that the UK government has negotiated a nationwide subscription to all journals with all closed-access publishers for the 2014/2015 academic year. This leaves the University of St Andrews Library with journal subscription money that it needs to spend in order to continue its current funding allocation. Naturally, it ploughs all of this into open access publishing costs.

Once comfortable with these assumptions, we can fairly easily estimate how much a university like mine could afford to pay for each article published, if every single output were a gold open access article.

Total St Andrews University spending on journal subscriptions per year:
According to the library’s 2011/2012 annual report: £2.11m
According to a tweet from the @StAndrewsUniLib twitter account: ~£1.7m
Given that the higher value also included spending on databases and e-resources, I’ll go with the £1.7m/year estimate.

Total number of publications by St Andrews University researchers per year:
We have a PURE research information system on which all researchers are meant to report their publications. According to a tweet from @JackieProven at the University of St Andrews Library:

over 2000 publications/yr, about 1200 are articles and around half of those will have StA corresponding author

We can therefore assume 600 publications/year.

Open access publication costs which could be absorbed in this hypothetical situation:
£1,700,000/600 = £2,833
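For anyone who wants to rerun this back-of-the-envelope calculation with their own institution’s figures, here is the arithmetic as a short Python sketch. The numbers are the St Andrews estimates quoted above; nothing else is assumed.

```python
# Figures from the post: St Andrews journal subscription spend and the
# estimated number of corresponding-author articles per year.
subscription_budget = 1_700_000  # £ per year
articles_per_year = 600

per_article_budget = subscription_budget / articles_per_year
print(f"£{per_article_budget:,.0f} per article")  # £2,833 per article

# Sensitivity check: if the output count is underestimated, the
# affordable per-article charge falls accordingly.
for n in (600, 900, 1200):
    print(f"{n} articles/year -> £{subscription_budget / n:,.0f} per article")
```

Varying `articles_per_year` shows just how sensitive the affordable APC is to the publication count, which is why the caveats below matter.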

This value is higher than I was expecting it to be, and suggests that even for a small institution like the University of St Andrews, article processing charges (APCs) in gold open access journals aren’t too far off the mark. According to PeerJ’s roundup, even PLOS Biology’s steep APC of $2,900 is considerably less than what St Andrews could bear in this highly unrealistic situation.

Of course, there are quite a few caveats that sit on top of this hypothetical estimate and its assumptions:
1) I may well be underestimating the number of publication outputs from the University’s researchers. This would push the per-article cost the library could afford to pay down.
2) Larger universities would have a greater number of researchers and therefore publications. The increase in the denominator would be offset by an increase in the numerator (larger universities have medical schools and law schools, which St Andrews does not), but I have no idea what net effect this would have on the per-article cost these better-endowed libraries could afford to pay.
3) The ecosystem would change. Gold open access journals have higher publication rates than paywalled journals. If more articles were published, this would also push the per-article cost the library could absorb down.
4) This estimate makes no allowance for the open access publication option in closed access journals. This hybrid option, as well as being more expensive than the gold open access offered by open-access-only journals, allows traditional publishers to milk the cow at both ends (subscription costs AND APCs), and I imagine library administrators would struggle to justify supporting it from the same fund as that used to pay journal subscriptions.

I’ve been meaning to do this calculation for a few months and am grateful to the staff at the University of St Andrews Library for providing me with these figures. I’m interested in what others make of this, and would be keen to hear your thoughts in the comments below.

Earlier this week, I attended the BBSRC Eastbio Annual Symposium, a meeting for PhD students funded by the BBSRC’s doctoral training programme. The theme of this year’s meeting was ‘Making an Impact’. Alongside one or two talks on the REF impact of ‘spinning out’ scientific businesses that I found utterly, soul-crushingly devoid of anything honourable, there were a number of great talks on the value of public engagement.

Of these, the talk I enjoyed most was given by Dr Jan Barfoot of EuroStemCell, who spoke about the huge number of ways in which researchers can engage the public in their work. Amidst the extraverted commotion about bright clubs and elevator pitches that had permeated the rest of the symposium, I took comfort from Jan’s acknowledgement of the other ways in which people like me might want to communicate with those who might find their work interesting. As has been explored in great depth in Susan Cain’s book Quiet, there are quite a few people, 33-50% of the population, for whom the idea of networking, public speaking and generally ‘putting yourself out there’ lies somewhere along the continuum from unpleasant to terrifying. This minority of introverts is well represented in academia, though sadly for me, not well represented enough to have done away with oral presentations, conference socials and the idea that there’s something wrong with you if you don’t enjoy talking about your work to people who might not give a shit.

Jan’s talk got me thinking about the public engagement I do. In doing so I realised that the majority of the activities I’ve been involved with since my arrival at St Andrews have been written (have I mentioned that I don’t enjoy giving talks?). Writing these public engagement pieces has almost always been made much easier by my experience of writing posts for this blog (maybe some of the more popular blog posts I have written even count as public engagement). I generally write these posts with a scientifically clued-up, but non-specialist audience in mind – PhD students, researchers in other fields, interested members of the public – most of whom I expect will have stumbled across this site via Google. As I’ve practiced writing for this audience a fair amount, I find it relatively easy to switch into this mode when asked to write bits for the St Andrews Campaign magazine (see below) or the Development blog. As lame as it sounds to those who crave the rush of applause and laughter, blogging is my bright club.

St Andrews Campaign Magazine: University Sport

Of course it takes time and commitment to keep it up (no-one thinks much of a one-post “Hello World” blog), but I didn’t say it was easier than other forms of public engagement. It’s just a better format for me. Considering the investment of time it requires alongside the real benefits it can have, it’s a shame when other researchers dismiss blogging as less meaningful than the engagement work they do, something which happens all too often. Consequently, I always feel guilty when writing posts for it during work hours. Why should this be the case? I wouldn’t feel bad about practicing a public engagement talk or meeting a community of patients for whom my research is relevant, so why the self-flagellation over writing? No doubt, this perception will be further reinforced when REF Impact Statements are circulated around departments across the UK, with blogging being written up as just something that all academics in all research groups do, probably under the misapprehension that a PURE research profile like this counts as a blog. This does a real disservice to those whose blogs often act as a first source of information for people googling something they’ve just heard about on the news, or those whose blogs help raise and maintain the profiles of the universities at which they work.

If you want to blog, do it. You’ll write better and one way or another you’ll probably get asked to write in a more formal capacity for the organisation you work for. Just don’t expect to be promoted, or even appreciated, because of it.

I spent most of this summer in St Andrews writing research papers. This prolonged period of writing gave me time to consider the publication of my own work along lines I haven’t fully considered before. I was able to think not only of the quality of science typically reported in the journal to which I was considering submission, but also of that journal’s publication model. For the first time in my career I felt it not only desirable, but also sensible, to submit to open access journals. It’s not that I haven’t wanted to publish in open access journals before, it’s just that there have been too many things stopping me from breaking with the traditional journals.

Open access image via salfordpgrs on flickr: http://www.flickr.com/photos/salfordpgrs/

So what changed? Of course, the traditional publishing houses have had a lot of bad press. Their support of the ultimately unsuccessful Research Works Act, set against soaring profits and the unsavoury business practices of academic journal bundling, all demonstrate how committed traditional publishing houses are to making money, not increasing access to research. More personally, PubMed’s RSS feeds for custom search terms (informing me as soon as something related to ‘recognition memory’ is published), Twitter’s water-cooler paper retweets and Google Scholar’s pdf indexing mean that I usually learn about and can get access to articles I am interested in, without needing to know where they are published. Over the past months and years, subscription-model journals have started to feel old-fashioned, maybe even willfully so. It’s now the case that if university scientists are interested in my research, they will find it regardless of where it is published. If the public are interested in my research, whether or not they will be able to read it depends entirely on how it is published.

That said, Google Scholar would be useless as a source of papers if researchers and universities didn’t make pdfs available for it to index. It’s here that the work university libraries do to promote open access is crucial. At St Andrews, we use the PURE system which makes green open access – uploading author versions or publisher versions, usually after an embargo period – straightforward. Beyond this though, the Open Access Team frequently encourage us to provide this sort of green open access. For example, earlier this week one of the open access librarians tweeted me to tell me that I was entitled to upload the final version of a paper I had been holding off on uploading. In doing this, they cultivate an environment in which providing open access is seen as a responsibility we have to those who might want to read our research.

While green open access can work, it requires efficient management. The St Andrews Open Access Team seem to have a million publisher-specific checks they have to make before they will allow a pdf to go into the Research@StAndrews:FullText repository. Surely gold open access – publishing in journals whose business model doesn’t involve protecting access to their outputs – would make things much easier. The one problem with gold open access, even from the point of view of a researcher who wants more than anything to publish in this way, is that it is expensive, really expensive. A paper in PLOS ONE costs $1,350; Frontiers, €1,600; and Springer Plus, £725 (though this all may change with PeerJ’s author subscription model). Of course it makes sense. A journal that doesn’t charge subscription fees needs to recoup its costs by charging to publish. And here’s where we run into major barriers to the uptake of gold open access. First, gold open access publishers are asking universities to spend money to publish their own researchers’ work when they’re already spending an eye-watering amount on accessing work that the same researchers have previously published. Second, but maybe more importantly in terms of journal submission choices, gold open access publishers are now forcing researchers, not their university libraries, to face up to the costs of publication.

That I, not the head of my library, must think about how to fund the journals that publish my research goes against the traditional subscription model of academic publishing. Moreover this financial division, and the problem it poses to open access journals, almost certainly exists at every single university in the UK. In an ideal world, I should be able to dip into the library’s subscription budget every time I publish in a gold open access journal. If all researchers knew that their submission to Frontiers in Psychology wasn’t jeopardising their travel to next year’s conference in San Diego, gold open access would be set. It’s only when universities recognise this, that gold open access publishing payments should come from the same pot as journal subscription payments, that open access publishing will take off.

And so to why I was able to consider submitting to a gold open access journal. The St Andrews Library Open Access Team have a fund specifically for gold open access publishing. A cheeky twitter request as to whether they would support my submission to an open access journal was all it took for me to get the thumbs up. Together with the green open access resource at Research@StAndrews:FullText and maintenance of the existing closed access journal subscriptions (for now), the gold open access fund helps to provide the full range of publication options for St Andrews researchers. It’s a comprehensive approach to open access that makes me proud to work here.