If you’re trying to decide on a journal to submit your latest manuscript to, Jane – the Journal/Author Name Estimator – can point you in the right direction. This isn’t exactly breaking news, but it’s worth a reminder.
To use Jane, copy and paste your title and/or abstract into the text box and click “Find journals”. Using a similarity index computed against all Medline-indexed publications from the past 10 years, Jane will spit out a list of journals worth considering. Alongside a confidence score, which summarises your text’s similarity to other manuscripts published in that journal, you’re also provided with a citation-based indication of that journal’s influence within the field.
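Jane’s exact algorithm isn’t described in detail on the site, but the general idea of matching a query text against a corpus by a similarity index can be sketched with a toy TF-IDF/cosine-similarity example. The journal names and abstract snippets below are made up for illustration:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build simple TF-IDF vectors (dicts) for a list of token lists."""
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)               # term frequency within this document
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy "journal corpus": abstracts tagged with the journal that published them.
corpus = [
    ("J Mem Lang", "lexical access during sentence comprehension"),
    ("J Neurosci", "dopamine neurons encode reward prediction error"),
    ("Cognition",  "working memory load affects sentence comprehension"),
]
query = "sentence comprehension and lexical processing"

docs = [text.split() for _, text in corpus] + [query.split()]
vecs = tfidf_vectors(docs)
qvec = vecs[-1]                         # the query's own vector

# Rank journals by similarity of their abstract to the query.
scores = sorted(
    ((cosine(qvec, v), journal) for v, (journal, _) in zip(vecs, corpus)),
    reverse=True,
)
for score, journal in scores:
    print(f"{journal}: {score:.2f}")
```

A real system would aggregate scores over many articles per journal rather than one abstract each, but the ranking step works the same way.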
The other available searches are “Find articles” and “Find authors”, the latter of which I suspect I would use if I were an editor with no idea about whom to send an article to for review. As an author, it’s worth running abstracts through these searches too, to make sure you don’t miss any references or authors you definitely ought to cite in your manuscript.
There’s more information on Jane from the Biosemantics Group here: http://biosemantics.org/jane/faq.php.
When I recently submitted a manuscript to the Journal of Memory and Language (an Elsevier journal), I was faced with the unexpected task of providing “Research highlights” for it. Elsevier describe these highlights here, including the following instructions:
- Include 3 to 5 highlights.
- Max. 85 characters per highlight including spaces…
- Only the core results of the paper should be covered.
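The count and length constraints above are easy to check automatically before submission. A minimal sketch, with a hypothetical helper name and made-up draft highlights for illustration:

```python
def check_highlights(highlights, max_len=85, min_count=3, max_count=5):
    """Return a list of problems with draft highlights, per the stated
    Elsevier criteria: 3-5 highlights, max 85 characters each (incl. spaces)."""
    problems = []
    if not (min_count <= len(highlights) <= max_count):
        problems.append(
            f"need {min_count}-{max_count} highlights, got {len(highlights)}"
        )
    for i, h in enumerate(highlights, 1):
        if len(h) > max_len:
            problems.append(f"highlight {i} is {len(h)} chars (max {max_len})")
    return problems

# Hypothetical drafts, not from the actual manuscript.
drafts = [
    "Semantic similarity predicts reading times in eye-tracking data",
    "Effects hold across two independent corpora",
    "Results challenge strictly serial models of lexical access",
]
print(check_highlights(drafts) or "All highlights pass")
```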
They mention that these highlights will “be displayed in online search result lists, the contents list and in the online article, but will not (yet) appear in the article PDF file or print”, but having never previously encountered them, I was (and still am) a little unsure about how exactly they would be used. (Would they be indexed on Google Scholar? Would they be used instead of the abstract in RSS feeds of the journal table of contents?) The thought that kept coming to me as I rephrased and reworked my highlights was: “they already have an abstract, why do they need an abstract of my abstract?”
Having pruned my five highlights to fit the criteria, I submitted them and thought nothing more of them… until tonight. I checked the JML website to see if my article had made it to the ‘Articles In Press’ section and, rather than seeing my own article, saw this:
This was my first encounter with Research Highlights in action. I was impressed. I’m not too interested in language processing, so would never normally have clicked on the article title to read the abstract, but I didn’t need to. The highlights were quick to read and gave me a flavour of the research without giving me too much to sift through. I guess that’s the point, and it’ll be interesting to see whether that is maintained when every article on the page is accompanied by highlights.
It’s hard to tell whether implementing research highlights across all journals would improve the academic user experience. No doubt, other journal publishers are waiting to see how Elsevier’s brainchild is received by researchers. But there is another potential consequence that could be extremely important. In the example above, I was able to read something comprehensible to me about a field I know next to nothing about. In the same vein, maybe these highlights will be the first port of call for popular science writers looking to make academic research accessible to laymen. If the end result of the research highlight experiment is a system that helps reduce the misrepresentation of science in the popular media, then I would consider that a huge success.