Finding needle in haystack (Photo credit: Bindaas Madhavi)

A former colleague of mine at an institution I no longer work at has admitted to being a science fraudster.*

I participated in their experiments, I read their papers, I respected their work. I felt a very personal outrage when I heard what they had done with their data. But the revelation went some way to answering questions I ask myself when reading about those who engage in scientific misconduct. What are they like? How would I spot a science fraudster?

Here are the qualities of the fraudster that stick with me.

  • relatively well-dressed.
  • OK (not great, not awful) at presenting their data.
  • doing well (but not spectacularly so) at an early stage of their career.
  • socially awkward but with a somewhat overwhelming projection of self-confidence.

And that’s the problem. I satisfy three of the four criteria above. So do most of my colleagues. If you were to start suspecting every socially awkward academic of fabricating or manipulating their data, that wouldn’t leave you with many people to trust. Conversations with those who worked much more closely with the fraudster reveal more telling signs that something wasn’t right with their approach, but again, the vast majority of people with similar character flaws don’t fudge their data. It’s only once you formally track every single operation carried out on the original data that you can know for sure whether someone has perpetrated scientific misconduct. And that’s exactly how this individual’s misconduct was discovered – an eagle-eyed researcher working with the fraudster noticed some discrepancies in the data after one stage of the workflow. So is it all in the data?

Let’s move beyond the “few bad apples” argument. A more open scientific process (e.g. the inclusion of original data with the journal submission) would have flagged some of the misconduct perpetrated here, but only after someone had gone to the (considerable) trouble of replicating the analyses in question. Most worryingly, it would also have missed the misconduct that took place at an earlier stage of the workflow. It’s easy to modify original data files, especially if you have written the script that generates them in the first place. It’s also easy to change the ‘Date modified’ and ‘Date created’ timestamps on the data files.
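The point about timestamps is easy to verify. On most systems, a file’s ‘Date modified’ field can be rewritten with a single standard-library call – the sketch below uses Python’s `os.utime` as one illustration (‘Date created’ is platform-dependent and typically needs other tools, which only strengthens the point that file metadata is weak evidence of when data were actually produced):

```python
import os
import tempfile
import time

# A throwaway file standing in for an 'original' results file.
fd, path = tempfile.mkstemp(suffix=".csv")
with os.fdopen(fd, "w") as f:
    f.write("subject,score\n1,42\n")

# Back-date 'Date modified' by a year: os.utime takes (atime, mtime)
# in seconds since the epoch.
one_year_ago = time.time() - 365 * 24 * 3600
os.utime(path, (one_year_ago, one_year_ago))

# The file now claims it was last touched a year ago.
appears_modified_at = os.path.getmtime(path)
os.remove(path)
```

In other words, anyone who can run a script on their own data files can make the metadata tell whatever story they like.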

Failed replication would have helped, but the file drawer problem, combined with the pressure on scientists to publish or perish, typically stops this sort of endeavour (though there are notable exceptions, such as the “Replications of Important Results in Cognition” special issue of Frontiers in Cognition). I also worry that the publication process, in its current form, does nothing more constructive than start an unhelpful rumour-mill that never moves beyond gossip and hearsay. The pressure to publish or perish is also cited as a motivation for scientists to cook their data. In this fraudster’s case, they weren’t at a stage of their career typically thought of as being under this sort of pressure (though that’s probably a weak argument when applied to anyone without a permanent position). All of which sends us back to trying to spot the fraudster rather than the dodgy data. It’s a circular path that’s no more helpful than uncharitable whispers in conference centre corridors.

So how do we identify scientific misconduct? Certainly not with a personality assessment, and only partially with an open science revolution. If someone wants to diddle their data, they will. Like any form of misconduct, if they do it enough, they will probably get caught. Sadly, that’s probably the most reliable way of spotting it. Wait until they become comfortable enough that they get sloppy. It’s just a crying shame it wastes so much of everyone’s time, energy and trust in the meantime.

 

*I won’t mention their name in this post for two reasons: 1) to minimise the collateral damage this is having on the fraudster’s former collaborators, former institution and their former (I hope) field; and 2) because this must be a horrible time for them, and whatever their reason for the fraud, it’s not going to help them rehabilitate themselves in ANY career if a Google search on their name returns a tonne of condemnation.

2 thoughts on “Identifying Scientific Misconduct”

  1. It’s awful. And I don’t think you can do a “spot the fraudster”, because there will always be some (critters always find things to exploit to get ahead – so I’m reading too much evolution and game theory). Changing incentives may eliminate some, and limit the damage done by others.

    I took Stapel personally. (And he is now a former everything, so no need to worry about future repair, I think.) I met him when I was in grad school and he was early on in his career, and he presented some interesting research on the Ebbinghaus illusion and social categories, which, as far as I know, has not been retracted.

    We took the paradigm and adapted it to our emotion congruence work. Tried it twice. Went nowhere. That, in itself, is not suspicious. It is not unreasonable to think that the paradigm wouldn’t transfer. (Though I recently saw that someone has done just that – into emotion. I’m floored, but, well, effect sizes are always small.)

    When the fraud unraveled I was shocked, because, well, he was one of us. Which I think is why I have taken such an interest in reforming practices, not just among scientists, but in the entire system.

    • I totally agree. A connection, whether personal or scientific, makes a huge difference to how affected one gets by the exposure of others as fraudsters. In my case, it was both personal and professional and I couldn’t stop thinking about it for weeks when I first found out.

      A terrible thing seems to be that fraud is not rare. We hear a lot about the high profile cases – those who are prolific, or young or particularly nefarious in their fraud. But, anonymous survey data appears to suggest that in all likelihood, we are all working alongside someone who will admit to this: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0005738 Thoroughly depressing!

