Finding needle in haystack (Photo credit: Bindaas Madhavi)

A former colleague of mine at an institution I no longer work at has admitted to being a science fraudster.*

I participated in their experiments, I read their papers, I respected their work. I felt a very personal outrage when I heard what they had done with their data. But the revelation went some way to answering questions I ask myself when reading about those who engage in scientific misconduct. What are they like? How would I spot a science fraudster?

Here are the qualities of the fraudster that stick with me.

  • relatively well-dressed.
  • OK (not great, not awful) at presenting their data.
  • doing well (but not spectacularly so) at an early stage of their career.
  • socially awkward but with a somewhat overwhelming projection of self-confidence.

And that’s the problem. I satisfy three of the four criteria above. So do most of my colleagues. If you were to start suspecting every socially awkward academic of fabricating or manipulating their data, that wouldn’t leave you with many people to trust. Conversations with those who worked much more closely with the fraudster reveal more telling signs that something wasn’t right with their approach, but again, the vast majority of the people with similar character flaws don’t fudge their data. It’s only once you formally track every single operation that has been carried out on their original data that you can know for sure whether or not someone has perpetrated scientific misconduct. And that’s exactly how this individual’s misconduct was discovered – an eagle-eyed researcher working with the fraudster noticed some discrepancies in the data after one stage of the workflow. Is it all in the data?

Let’s move beyond the ‘few bad apples’ argument. A more open scientific process (e.g. the inclusion of original data with the journal submission) would have flagged some of the misconduct being perpetrated here, but only after someone had gone to the (considerable) trouble of replicating the analyses in question. Most worryingly, it would also have missed the misconduct that took place at an earlier stage of the workflow. It’s easy to modify original data files, especially if you have coded the script that writes them in the first place. It’s also easy to change ‘Date modified’ and ‘Date created’ timestamps within the data files.

Failed replication would have helped, but the file drawer problem, combined with the pressure on scientists to publish or perish, typically stops this sort of endeavour (though there are notable exceptions such as the “Replications of Important Results in Cognition” special issue of Frontiers in Cognition). I also worry that the publication process, in its current form, does nothing more constructive than start an unhelpful rumour-mill that never moves beyond gossip and hearsay. The pressure to publish or perish is also cited as motivation for scientists to cook their data. In this fraudster’s case, they weren’t at a stage of their career typically thought of as being under this sort of pressure (though that’s probably a weak argument when applied to anyone without a permanent position). All of which sends us back to trying to spot the fraudster and not the dodgy data. It’s a circular path that’s no more helpful than uncharitable whispers in conference centre corridors.

So how do we identify scientific misconduct? Certainly not with a personality assessment, and only partially with an open science revolution. If someone wants to diddle their data, they will. Like any form of misconduct, if they do it enough, they will probably get caught. Sadly, that’s probably the most reliable way of spotting it. Wait until they become comfortable enough that they get sloppy. It’s just a crying shame it wastes so much of everyone’s time, energy and trust in the meantime.

 

*I won’t mention their name in this post for two reasons: 1) to minimise collateral damage that this is having on the fraudster’s former collaborators,  former institution and their former (I hope) field; and 2) because this must be a horrible time for them, and whatever their reason for the fraud, it’s not going to help them rehabilitate themselves in ANY career if a Google search on their name returns a tonne of condemnation.

On Monday I gave a talk on how internet tools can be used to make the job of being an academic a little easier.  I had given a very short version of the talk to faculty in the department over a year ago, but this time I was given an hour in a forum for  early career researchers, PhD students and postdocs.  The subject of twitter, covered early on in the talk, aroused a lot of interest, probably because I got very animated about its benefits for those in the early stages of their careers.

To provide a little context for my enthusiasm, it probably helps to know a few things about me, about my situation, and about my recent experiences.

  1. I am an introvert.  Despite my best (and occasionally successful) efforts to project a different image, I do not find talking to people I don’t know very enjoyable.
  2. I am an early career cognitive neuroscientist keen to build my own research programme and develop links with other researchers.
  3. Last month I was at the Society for Neuroscience conference, where I went to the best conference social I have ever attended.

Given the received wisdom that people in my position ought to be networking, I often drag myself kicking and screaming to conference socials. The result tends to be a lot of standing around on my own drinking beer, which gives me something to do, but which I could do much more comfortably with one or two people I know well.  The major problem at these events is not my nature, or my status as an early career researcher, but the fact that the people I have imagined myself talking to usually don’t know who I am.  Conversation is therefore awkward, one-sided and introductory.  Once the niceties have dried up, and the level of accumulated conversational silence edges into awkward territory, I invariably finish my drink and bugger off to get another one, ending the misery for all involved.  This is probably a universal experience for those starting out in academia, though thankfully it is happening less and less to me as I build something of a network of real friends who attend the same conferences as me.  But as a PhD student and postdoc, the experience was excruciating.

I had a totally different experience when I attended the SfN Banter tweetup*.  The event, organised by @doc_becca and @neuropolarbear, was a social for neuroscientists who use twitter, and it changed my view of conference socials.  They do not have to be endured, even by those doing PhDs and postdocs. They can be enjoyed.

I was excited about going, and I didn’t leave feeling shortchanged.  I spoke (actually spoke!) to everyone I wanted to speak to.  Moreover, I had good conversations with people to whom I was speaking for the first time. The reason is fairly obvious – twitter allowed us to build on a body of shared (or at least assumed) knowledge. I follow people, they follow me, I reply to or retweet their tweets, they do the same – and this is all before we’ve introduced ourselves. When I finally meet someone with whom I have such a history of communication, introducing myself is the least awkward thing I can do. The barriers to conversation are removed**.

Sure, this pattern held for most interactions at the tweetup because we were all there to do exactly that.  Would the experience be the same at the ‘fMRI social’? No.  But, I don’t think that matters.  If I could have had one of those conference social experiences during my time as a PhD student, it would have given me an idea of what I might have to look forward to from conferences if I stuck at it.  Light at the end of the tunnel, a pot of gold at the end of the rainbow, a variable-ratio schedule-determined stimulation of the limbic system following an umpteenth lever press.

It will take a while (there’s no point joining in September 2013 and expecting great things at the SfN tweetup in San Diego), and it’s probably not the primary reason to join twitter (see  Dorothy Bishop’s blog and Tom Hartley’s blog for far more comprehensive discussions  of how and why you should join), but it’s another reason, and it’s one that could make you feel good about your role in academia.  It’s worth a shot.

 

* tw(itter) (m)eetup, see?

** What you do afterwards is up to you.  I still had some awkward interactions, but I think that’s probably down to me (see context point 1).

I’m taking part in a Guardian live chat this Friday (1-4pm BST) titled ‘Surviving your first academic post.’ With this topic in mind, I’m noting some preliminary thoughts under a few themes.

The points below relate to my first 10 months as a lecturer at St Andrews and aren’t at all relevant to my postdoc experience, which was, by and large, extremely easy to navigate and the most enjoyable period of my career so far. It’s also important to make clear the context from which I am making these observations.  I am privileged in that I am on a permanent contract, the first five years of which comprise a SINAPSE research fellowship, which means I have a minimal teaching load.  That said, I do have an admin load and I have the additional responsibility to promote neuroimaging within the department and across the SINAPSE network.

Before you accept the job – You’ll start evaluating whether a position is right for you from the moment you see the advert.  Beyond whether you are the ‘type’ of academic the institution is after, you’ll also consider whether the department is right for you (is it the right size? could you collaborate with anyone? are there local research facilities? is it the right calibre of institution for you?), whether you could live there (is it too big/small a city? too far to move? too isolated?) and whether you could actually do what you enjoy about academia there (is the teaching load too heavy? if you did your PhD/postdoc there, could you get taken seriously as a PI?). All of these thoughts feed into the rather nebulous concept of ‘fit’ which, it turns out, is rather important to you enjoying your potential new job.

When I interviewed at St Andrews, everyone I spoke to mentioned how small the town is.  I didn’t think it would be a problem, but on moving here, the realisation that I had never previously lived outside a city certainly hit home. Within my first few weeks here I understood that this common point of conversation had been an important warning. Starting your first academic post can be lonely (even if you go with family), and being in a place that doesn’t feel right for you can make you feel even lonelier. I would never have turned down the offer to work here, but I suspect that another candidate for the job I went on to accept did, and it was probably something to do with ‘fit’.

Start-up negotiations are also worth devoting some thought to once you’ve established that the ‘fit’ is going to be satisfactory. You’ll have to walk a fine line between making sure you don’t do yourself out of money you will need to set up a lab that is capable of doing the research you are being employed to do, and asking for too much and appearing (or being) greedy. My experience of start-up negotiation was that the equipment I wanted was a lot easier to obtain than the scanner time I wanted. Colleagues have mentioned an informal loan arrangement where the School provided expensive equipment on condition that costs be recouped further down the line, so that could be a useful negotiation strategy, particularly when expensive equipment is required from the outset. One thing I wish I had done was to speak to an academic who had recently started, to ascertain where they thought they went wrong in their start-up request. I, for example, realised too late that I would have to buy my own printer toner, which ended up having to come out of my research budget for the 2010/2011 academic year.

Your first weeks – These are lonely and stressful. Simple things like making external phone calls can be challenging. Of course, people offer their help and advice, but you want to appear capable and self-sufficient so you end up spending far too much time working things out on your own.  If there are other new hires in your department, pooling your newly acquired knowledge will help. Induction events are also a good way to get to know people throughout the University.

Department coffee mornings are supposed to be an excellent way of establishing yourself amongst your new colleagues.  But, I found these to be something of a double-edged sword.  Despite the social benefits, there will be times you wish you hadn’t gone. Within a week of starting, going to grab some coffee led to my being roped into giving a cover lecture on probability theory. I felt like hiding in my office after that (and I did for a while), but the best strategy is to…

Learn how to say “no” – You won’t want to appear uncollegial, but people will ask you to do things until you learn how to say “no”. You’ll probably receive a lot of requests to cover lectures and complete one-off tasks in your first few weeks.  Some of this is down to people wrongly assuming that you won’t have anything else to do, and some of it, I think, is down to people testing the water to see whether you are a ‘yes-(wo)man’ who will agree to anything.

Crafting that first refusal will probably take a lot of time, but it is an important step to take.  Just make sure that you:
a) can demonstrate that you have shown willing (it helps to have said “yes” at least once before your first “no”);
b) say why you are refusing (not the right person for the job, have already said “yes” to too many other requests, too little time at this stage, though happy to muck in next semester when things have settled, etc.);
c) don’t let the task you initially agreed to morph into something that you would never have agreed to in the first place (e.g. it’s OK for a “yes” to become a “no” if a one-off lecture turns into longer-term cover for a lecturer on maternity leave).

Saying “no” gets easier, it just takes a bit of practice.  With some strategic refusals and a bit of luck, you’ll calibrate the system so that you’re not having to say “no” to very much because people making requests of you will make sure that you really are the right person for the job before asking.

If you run into a persistent problem of people making too many unreasonable demands of you, a mentor who is looking out for your interests will help. I haven’t yet had to call on my mentor for this, but I’m fairly certain that she has been looking out for mine anyway, if only by not suggesting me for admin duties whose allocation she controls.

Time – When I was a postdoc, nothing felt too difficult.  All anything took was time, sometimes plenty of it, but it didn’t matter because time was something I was given plenty of.  I spent months learning Matlab, weeks scripting analyses and days making a couple of lines of code to do just what I wanted them to do. Now, some tasks are too difficult because I don’t feel I have the time to devote to them. Of course, I have much more time than I would if I had a full teaching load, but I have much less time than I had as a postdoc.

To remedy this perceived lack of time, I’m considering devoting a few weeks here and there to an ‘at-work retreat’.  That is, I will go to work, and just work on what I need to work on to get analyses done and papers written without the distraction of e-mail, admin jobs (which will be put on hold) and teaching. I think it might even be appropriate to use an e-mail auto-response, the exact wording of which I will have to be very careful about, to let people know of my unavailability. This fellowship period of my job should be a perfect opportunity for me to do this sort of thing and it may be something worth writing about on the blog at a later date.

Money – I need funding to carry out neuroimaging. I therefore need grant funding. I don’t mind that the School strongly encourage me to apply for grant funding because I need to apply for it anyway. That said, it feels like I have only just learned how to write journal articles and now I’m being asked to write in a totally different style with a totally different emphasis.  Applying for grant funding has probably taken more time than any other activity in my first 10 months here. It’s a shame, because I could have devoted this time to writing journal articles that would have added to my CV and made me more ‘fundable’.  Still, I need to do it at some stage, and now is as good a time as any.

On my recent submission of a manuscript to the Journal of Memory and Language (an Elsevier journal), I was faced with the unexpected task of having to provide  “Research highlights” of the submitted manuscript.  Elsevier describe these highlights here, including the following instructions:

  • Include 3 to 5 highlights.
  • Max. 85 characters per highlight including spaces…
  • Only the core results of the paper should be covered.
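Since those constraints are concrete, they’re easy to check before hitting submit. Here’s a minimal sketch (plain Python, with made-up highlight text purely for illustration; it isn’t tied to Elsevier’s submission system in any way) that flags a highlight list falling outside the 3–5 item range or exceeding the 85-character limit:

```python
# Minimal pre-submission sanity check for research highlights.
# The highlight strings below are invented placeholders, not from a real paper.
MAX_CHARS = 85           # per highlight, including spaces
MIN_ITEMS, MAX_ITEMS = 3, 5

highlights = [
    "Example highlight one summarising the core result in plain language.",
    "Example highlight two noting the key manipulation and its effect.",
    "Example highlight three stating the main theoretical implication.",
]

def check_highlights(items):
    problems = []
    if not MIN_ITEMS <= len(items) <= MAX_ITEMS:
        problems.append(f"Need {MIN_ITEMS}-{MAX_ITEMS} highlights, got {len(items)}.")
    for i, text in enumerate(items, start=1):
        if len(text) > MAX_CHARS:
            problems.append(f"Highlight {i} is {len(text)} characters (max {MAX_CHARS}).")
    return problems

for problem in check_highlights(highlights):
    print(problem)
```

(With the placeholder highlights above, nothing is printed; trimming the list to two items or padding one past 85 characters produces a complaint.)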

They mention that these highlights will “be displayed in online search result lists, the contents list and in the online article, but will not (yet) appear in the article PDF file or print”, but having never previously encountered them, I was (and am still) a little unsure about how exactly they would be used (Would they be indexed on Google Scholar? Would they be used instead of the abstract in RSS feeds of the journal table of contents?).  The thought that kept coming to me as I rephrased and reworked my highlights was “they already have an abstract, why do they need an abstract of my abstract?”

Having pruned my five highlights to fit the criteria, I submitted them and thought nothing more of them… until tonight.  I checked the JML website to see if my article had made it to the ‘Articles In Press’ section and, rather than seeing my own article, saw this:

This was my first encounter with Research Highlights in action.  I was impressed.  I’m not too interested in language processing, so would never normally have clicked on the article title to read the abstract, but I didn’t need to. The highlights were quick to read and gave me a flavour of the research without giving me too much to sift through.  I guess that’s the point, and it’ll be interesting to see whether that benefit is maintained when every article on the page is accompanied by highlights.

It’s hard to tell if the implementation of research highlights in all journals would improve the academic user-experience.  No doubt, other journal publishers are waiting to see how Elsevier’s brain-child is received by researchers.  But there is another potential consequence that could be extremely important.  In the example above, I was able to read something comprehensible to me about a field I know next to nothing about.  In the same vein, maybe these highlights will be the first port of call for popular science writers looking to make academic research accessible to laymen.  If the end result of the research highlight experiment is that a system is implemented that helps reduce the misrepresentation of science in the popular media, then I would consider that a huge success.

Typical fMRI brain scans take a 3D image of the head every few seconds.  These images are composed of lots of 2D ‘slices’ (usually axially oriented) stacked on top of one another.  This is where the problem of slice acquisition time rears its head – the problem being that these slices are not all taken at the same time; in fact, their collection tends to be distributed uniformly over the time it takes to gather a whole 3D image.  Therefore, if you are collecting a 3D image comprising 36 slices every 2 seconds, a different slice will be collected every 1/18th of a second.

2D slices (left; presented as a mosaic), acquired at slightly different times within a 2s TR, that make up a typical 3D image in fMRI (right; shown with a cutout)

If you’re worried about the effect of this fuzziness in temporal resolution on your data (and there are those who aren’t), then it can be corrected for in the preprocessing stages of analysis.  Of course, you do need to know the order in which your slices were collected to correct for the ordering differences.
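To make that arithmetic concrete, here’s a minimal sketch in plain Python (a back-of-the-envelope illustration under the simplifying assumption that slices are spread evenly across the TR with no gap, not any particular package’s slice-timing routine):

```python
# Per-slice acquisition offsets for a single fMRI volume.
# Assumes slices are spread evenly across the TR with no gap (a simplification;
# real sequences may differ). With TR = 2 s and 36 slices, consecutive slice
# acquisitions are 1/18 s apart.
TR = 2.0          # seconds per 3D volume
n_slices = 36

slice_duration = TR / n_slices                       # 2/36 = 1/18 s ≈ 0.056 s
offsets = [i * slice_duration for i in range(n_slices)]

print(f"Each slice is acquired {slice_duration:.3f} s after the previous one.")
print(f"First slice at {offsets[0]:.3f} s, last at {offsets[-1]:.3f} s into the volume.")
```

These offsets tell you when the 1st, 2nd, 3rd (and so on) acquisitions happen within the volume; mapping them onto anatomical slice numbers is a separate question, and that’s exactly where the acquisition order comes in.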

Finding out the order of slice collection is not as easy as it should be.  On the Siemens Trio scanner that I use, it’s straightforward if you have an ‘ascending’ (bottom to top, in order: 1, 2, 3, etc.) or a ‘descending’ (top to bottom, in order: 36, 35, 34, etc.) order of slice collection.  However, if you’re using the ‘interleaved’ order (odd slices collected first, followed by even slices), it’s not immediately clear whether you’re doing that in an ascending (1, 3, 5… 2, 4, 6… etc.) or descending (35, 33, 31… 36, 34, 32… etc.) interleaved order.
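For illustration, here’s a hedged sketch (again plain Python, not vendor code) of what those four orderings look like for a 36-slice acquisition, and of how much the assumed order matters for the timing of any given slice. One caveat worth flagging: interleaved acquisition on Siemens scanners is widely reported to start on slice 2 rather than slice 1 when the number of slices is even, so the odd-slices-first pattern below is an assumption you’d want to confirm for your own sequence:

```python
# Candidate slice-acquisition orders for an n-slice volume (slices numbered 1..n,
# slice 1 at the bottom). These follow the descriptions in the text; the
# even-slice-count Siemens quirk mentioned above is not applied here.
n = 36

ascending  = list(range(1, n + 1))                       # 1, 2, 3, ..., 36
descending = list(range(n, 0, -1))                       # 36, 35, 34, ..., 1
interleaved_ascending  = list(range(1, n + 1, 2)) + list(range(2, n + 1, 2))
#   -> 1, 3, 5, ..., 35, 2, 4, 6, ..., 36
interleaved_descending = list(range(n - 1, 0, -2)) + list(range(n, 0, -2))
#   -> 35, 33, ..., 1, 36, 34, ..., 2

def slice_times(order, TR=2.0):
    """Time (s) at which each anatomical slice is acquired, given an order and TR."""
    dt = TR / len(order)
    return {slice_num: pos * dt for pos, slice_num in enumerate(order)}

times = slice_times(interleaved_ascending)
print(f"Interleaved ascending: slice 1 at {times[1]:.2f} s, slice 2 at {times[2]:.2f} s")
```

The last line makes the point of the whole exercise: under interleaved acquisition, physically adjacent slices 1 and 2 are collected a full second apart, so assuming the wrong order in slice-timing correction can do more harm than good.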

I found out that I was collecting my slices in an interleaved, ascending order by asking the MR technician at the facility.  But, if there was no technician to hand, or if I wanted to verify this order myself, I would be very tempted to try out a method I found out about on the SPM list today:

Head-turning research (links to poster at a readable resolution on the University of Ghent web-site)

The procedure, devised by Descamps and colleagues, simply involves getting an fMRI participant to turn their head from looking straight up to looking to one side during a very short scan.  The turn should be caught at various stages of completion by the various slices that comprise one 3D image, allowing the curious researcher to figure out the slice acquisition order crudely, but effectively.

I enjoyed how connected to the physical reality of our own bodies this procedure is.  It reminded me that these tools we are using to make inferences about cognition are tied to our bodies in a very tangible way.  That is something I often forget when pushing vast arrays of brain-signal values around in matrices, so it’s nice to be reminded of it now and again – I’d certainly rather be reminded like this than by having to discard a participant’s data because they have moved so much during a scan as to make their data useless!

Brookings Hall, an icon of Washington University (image via Wikipedia)

In just under one month, I will be leaving Washington University in St. Louis and moving to Scotland to take up a lectureship at the University of St. Andrews.  It’s another big move for me, both geographically and professionally, and it’s what I was hoping would result from my time as a postdoc here in the Dobbins lab.  It hasn’t all been fun and games though.

Back in 2007, my experience of applying for post-PhD jobs in the UK was desperate.  I loved my area of research, deja vu, but struggled to get short-listed for anything other than jobs directly in that field, e.g. researching temporal lobe epilepsy and memory.  Even when I was shortlisted for jobs, I didn’t do well at the interview stage, I suspect because my exposure to anything other than deja vu research had been rather limited.  I was also keen to start using fMRI in my research, but hadn’t the foggiest idea of how I might do that, or who would give me the opportunity.  I still look back on most of 2007 as a bleak time of struggling to keep up with the demands of writing my thesis alongside firing off job applications, the quality of which declined the more desperate I became.

I had been keen on a postdoc, but during my search in 2007 they seemed few and far between.  I did get interviewed for one in Exeter, but I was so out of my depth it was ridiculous.  It was around November that one of my PhD supervisors forwarded me a call for postdoc applications that he had received (on the MDRS mailing list, I think).  My supervisor’s comment with the e-mail read something along the lines of “I know this is probably too far away for you but…”

I e-mailed my CV and from then onwards things started to move very quickly.   Within a few days I had a phone conversation with Ian Dobbins and we organised that I would visit Washington University – it would take me a few more days to realise that the university was neither located in D.C., nor on the West Coast, but slap-bang in the middle of the country, in a city I didn’t realise still existed.  Within weeks I was giving a talk in St. Louis to memory researchers whose work I had read about in undergraduate textbooks.  An arduous J1 visa application later, I started my postdoc.

What I found staggering at the time was that my boss was willing to take a chance on someone with no fMRI experience, in what was going to be an fMRI-heavy position.  This worked out well for us both (I hope), but I know I was very lucky.  A combination of a PI willing to take a punt on an enthusiastic postdoc candidate, and a wealth of resources afforded by working on a well-funded grant at a prestigious private university, allowed me an opportunity that has undoubtedly paved the way for my next step to St. Andrews.  I can’t overstate how grateful I am.

Beyond the professional fortune, I was also extremely lucky that my circumstances allowed me to make the move from the North of England to the American Midwest in pursuit of a job.  That my wife was willing to uproot, that our family and friends were so supportive, and that we were able to gather the money to make the move were all huge factors, the absence of any one of which would have scuppered the whole thing.

St Andrews University (image via Wikipedia)

The confluence of professional and personal serendipity has once again presented us with a fantastic opportunity to move back east across the Atlantic, this time necessitating three tickets rather than the two that sufficed for the westward trip we made in 2008.  I hope that in a few years’ time I can look back on this move, too, as another lucky break that I was able to take full advantage of.  I also hope that at some later stage of my career, I can present similar opportunities to a new generation of budding postdocs.