I often bang on about how useful twitter is for crowd-sourcing a research community. Today I was reminded of just how brilliant the people on twitter can be at helping to overcome an ‘I don’t know where to start’-type information problem.

I’m currently helping to design an fMRI study which could benefit considerably from the application of multivoxel pattern analysis (MVPA). Having no practical experience with MVPA means I’m trying to figure out what I need to do to make the MVPA bit of the study a success. After a few hours of searching, I have come across and read a number of broad theoretical methods papers, but nothing that gives me the confidence that anything I come up with will be viable. Of course, there’s no right way of designing a study, but there are a tonne of wrong ways, and I definitely want to avoid those.

So, I turned to twitter:

Relays and Retweets from @hugospiers, @zarinahagnew and @neuroconscience led to the following tweets coming my way (stripped of @s for ease of reading… kind of).

Sure, I could have come up with as many articles to read by typing “MVPA” into Google Scholar (as I have done in the past), but the best thing about my twitter-sourced reading list is that I’m confident it’s pitched at the right level.

I’m humbled by how generous people are with their time, and glad so many friendly academics are on twitter. I hope collegiality and friendliness like this encourages many more to join our ranks.

On my recent submission of a manuscript to the Journal of Memory and Language (an Elsevier journal), I was faced with the unexpected task of having to provide “Research highlights” of the submitted manuscript. Elsevier describe these highlights here, including the following instructions:

  • Include 3 to 5 highlights.
  • Max. 85 characters per highlight including spaces…
  • Only the core results of the paper should be covered.

They mention that these highlights will “be displayed in online search result lists, the contents list and in the online article, but will not (yet) appear in the article PDF file or print”, but having never previously encountered them, I was (and am still) a little unsure about how exactly they would be used. (Would they be indexed on Google Scholar? Would they be used instead of the abstract in RSS feeds of the journal table of contents?) The thought that kept coming to me as I rephrased and reworked my highlights was “they already have an abstract, why do they need an abstract of my abstract?”

Having pruned my five highlights to fit the criteria, I submitted them and thought nothing more of them… until tonight. I checked the JML website to see if my article had made it to the ‘Articles In Press’ section and, rather than seeing my own article, saw this:

This was my first encounter with Research Highlights in action. I was impressed. I’m not too interested in language processing, so would never normally have clicked on the article title to read the abstract, but I didn’t need to. The highlights were quick to read and gave me a flavour of the research without giving me too much to sift through. I guess that’s the point, and it’ll be interesting to see whether that is maintained when every article on the page is accompanied by highlights.

It’s hard to tell if the implementation of research highlights in all journals would improve the academic user-experience. No doubt, other journal publishers are waiting to see how Elsevier’s brain-child is received by researchers. But there is another potential consequence that could be extremely important. In the example above, I was able to read something comprehensible to me about a field I know next-to-nothing about. In the same vein, maybe these highlights will be the first port of call for popular science writers looking to make academic research accessible to laymen. If the end result of the research highlight experiment is a system that helps reduce the misrepresentation of science in the popular media, then I would consider that a huge success.