I’m taking part in a Guardian live chat this Friday (1-4pm BST) titled ‘Surviving your first academic post.’ With this topic in mind, I’m noting some preliminary thoughts under a few themes.

The points below relate to my first 10 months as a lecturer at St Andrews and aren’t at all relevant to my postdoc experience, which was, by and large, extremely easy to navigate and the most enjoyable period of my career so far. It’s also important to make clear the context from which I am making these observations. I am privileged in that I am on a permanent contract, the first five years of which comprise a SINAPSE research fellowship, which means I have a minimal teaching load. That said, I do have an admin load and the additional responsibility of promoting neuroimaging within the department and across the SINAPSE network.

Before you accept the job – You’ll start evaluating whether a position is right for you from the moment you see the advert. Beyond whether you are the ‘type’ of academic the institution is after, you’ll also consider whether the department is right for you (is it the right size? could you collaborate with anyone? are there local research facilities? is it the right calibre of institution for you?), whether you could live there (is it too big/small a city? too far to move? too isolated?) and whether you could actually do what you enjoy about academia there (is the teaching load too heavy? if you did your PhD/postdoc there, could you be taken seriously as a PI?). All of these thoughts feed into the rather nebulous concept of ‘fit’ which, it turns out, is rather important to whether you enjoy your potential new job.

When I interviewed at St Andrews, everyone I spoke to mentioned how small the town is.  I didn’t think it would be a problem, but on moving here, the realisation that I had never previously lived outside a city certainly hit home. Within my first few weeks here I understood that this common point of conversation had been an important warning. Starting your first academic post can be lonely (even if you go with family), and being in a place that doesn’t feel right for you can make you feel even lonelier. I would never have turned down the offer to work here, but I suspect that another candidate for the job I went on to accept did, and it was probably something to do with ‘fit’.

Start-up negotiations are also worth devoting some thought to once you’ve established that the ‘fit’ is going to be satisfactory. You’ll have to walk a fine line between making sure you don’t do yourself out of money you will need to set up a lab that is capable of doing the research you are being employed to do, and asking for too much and appearing (or being) greedy. My experience of start-up negotiation was that the equipment I wanted was a lot easier to obtain than the scanner time I wanted. Colleagues have mentioned an informal loan arrangement where the School provided expensive equipment on condition that costs be recouped further down the line, so that could be a useful negotiation strategy, particularly when expensive equipment is required from the outset. One thing I wish I had done was to speak to an academic who had recently started, to ascertain where they thought they went wrong in their start-up request. I, for example, realised too late that I would have to buy my own printer toner, which ended up having to come out of my research budget for the 2010/2011 academic year.

Your first weeks – These are lonely and stressful. Simple things like making external phone calls can be challenging. Of course, people offer their help and advice, but you want to appear capable and self-sufficient so you end up spending far too much time working things out on your own.  If there are other new hires in your department, pooling your newly acquired knowledge will help. Induction events are also a good way to get to know people throughout the University.

Department coffee mornings are supposed to be an excellent way of establishing yourself amongst your new colleagues. But I found these to be something of a double-edged sword. Despite the social benefits, there will be times you wish you hadn’t gone. Within a week of starting, going to grab some coffee led to me being roped into giving a cover lecture on probability theory. I felt like hiding in my office after that (and I did for a while), but the best strategy is to…

Learn how to say “no” – You won’t want to appear uncollegiate, but people will ask you to do things until you learn how to say “no”. You’ll probably receive a lot of requests to cover lectures and complete one-off tasks in your first few weeks.  Some of this is down to people wrongly assuming that you won’t have anything else to do, and some of it, I think, is down to people testing the water and seeing whether you are a ‘yes-(wo)man’ who will agree to anything.

Crafting that first refusal will probably take a lot of time, but it is an important step to take.  Just make sure that you:
a) can demonstrate that you have shown willing (it helps to have said “yes” at least once before your first “no”);
b) say why you are refusing (not the right person for the job, have already said “yes” to too many other requests, too little time at this stage, though happy to muck in next semester when things have settled, etc.);
c) don’t let the task you initially agreed to morph into something that you would never have agreed to in the first place (e.g. it’s OK for a “yes” to become a “no” if a one-off lecture turns into longer-term cover for a lecturer on maternity leave).

Saying “no” gets easier, it just takes a bit of practice.  With some strategic refusals and a bit of luck, you’ll calibrate the system so that you’re not having to say “no” to very much because people making requests of you will make sure that you really are the right person for the job before asking.

If you run into a persistent problem of people making too many unreasonable demands of you, a mentor who is looking out for your interests will help. I haven’t yet had to call on my mentor for this, but I’m fairly certain that she has been doing so anyway, if only by not suggesting me for admin duties whose allocation she controls.

Time – When I was a postdoc, nothing felt too difficult. All anything took was time, sometimes plenty of it, but that didn’t matter because time was something I was given plenty of. I spent months learning Matlab, weeks scripting analyses and days getting a couple of lines of code to do just what I wanted them to do. Now, some tasks are too difficult because I don’t feel I have the time to devote to them. Of course, I have much more time than I would if I had a full teaching load, but I have much less time than I had as a postdoc.

To remedy this perceived lack of time, I’m considering devoting a few weeks here and there to an ‘at-work retreat’. That is, I will go to work and just work on what I need to work on to get analyses done and papers written, without the distraction of e-mail, admin jobs (which will be put on hold) and teaching. I think it might even be appropriate to use an e-mail auto-response, the exact wording of which I will have to be very careful about, to let people know of my unavailability. This fellowship period of my job should be a perfect opportunity to do this sort of thing, and it may be something worth writing about on the blog at a later date.

Money – I need funding to carry out neuroimaging; I therefore need grant funding. I don’t mind that the School strongly encourages me to apply for grant funding because I need to apply for it anyway. That said, it feels like I have only just learned how to write journal articles and now I’m being asked to write in a totally different style with a totally different emphasis. Applying for grant funding has probably taken more time than any other activity in my first 10 months here. It’s a shame, because I could have devoted this time to writing journal articles that would have added to my CV and made me more ‘fundable’. Still, I need to do it at some stage, and now is as good a time as any.

The non-breaking space looks like a normal space, but prevents an automatic line-break from occurring between the two text items it connects.

Wikipedia link: Non-breaking space

I use it when I want words not to get separated from their inline bullet-type markers [(i), a), – etc.]. This is useful in grant application documents where you might want to use lists but space is at a premium.

To type a non-breaking space:
Windows:   Alt+255
Mac:          Option+Space
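To make the mechanics concrete, here is a minimal Python sketch (my own illustration, not part of the original tip): the non-breaking space is simply Unicode character U+00A0, so gluing a marker to the word that follows it is just a matter of joining them with that character.

```python
# The non-breaking space is Unicode U+00A0. It renders like an ordinary
# space, but text engines will not insert a line-break at it.
NBSP = "\u00a0"

def glue_marker(marker: str, word: str) -> str:
    """Join an inline list marker to the following word with a
    non-breaking space, so the two are never split across lines."""
    return f"{marker}{NBSP}{word}"

item = glue_marker("(i)", "sample")
print(item)            # looks like "(i) sample" but is one unbreakable unit
print(NBSP == " ")     # False – it only looks like an ordinary space
```

The same character is written `&nbsp;` in HTML and `~` in LaTeX, both of which are handy in space-constrained grant documents.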

Having deleted my facebook account nearly two years ago, I was wary of repeating previous mistakes as I activated a Google+ account this week. Back in 2009 I had decided that I wasn’t getting as much out of facebook as I was putting into it. Specifically, I was ashamed of the amount of my time it consumed; I was worried, not so much about my privacy, as about the disregard of my right to it (even if I chose not to take it up); and I was anxious about expressing myself too freely lest I cause offence to my friends. Google+ has lessened my anxiety with its subdivision of friends into circles, though, of course, its potential to cause me shame and worry over my time and privacy is just as real as it ever was with facebook.

With all this in mind, my early experiences of Google+ have been very positive. As I was hoping to, I have re-connected with some lovely friends who had remained on facebook and never ventured onto twitter. Perhaps most encouragingly though, it looks like Google+ might be able to reach beyond the social and enrich my professional life too. The following exchange, which I started in order to learn more about the use of Amazon’s Mechanical Turk in psychological research, is the sort of thing that’s making me very excited about this possibility.

Full Google Plus conversation

The discussion went into far more detail than I could have hoped, and for those who are interested, a text-searchable pdf of the exchange complete with clickable links is available here. (Incidentally, the reason I have had to go to so much trouble to provide a link to the thread with jpgs and pdfs, as opposed to the sort of easy html permalinking offered by twitter, is down to Google’s as-yet imperfect post-hoc sharing system. Once I decided that the thread deserved a wider audience, the options available to me were a) to re-share my original post, without the comments, to anyone on the web, or b) to provide a permalink to the whole thread that was only accessible to those with whom I had originally shared my first post. An option to change the sharing permissions for the entire thread, with the permission of all contributors of course, would be highly appreciated!)

As to why the question about Mechanical Turk generated so much useful information, there are three reasons I can think of. The first is a simple affordance of the length of posts and comments. Unlike twitter, detail can be provided when detail is required. Whilst I have read the thoughts of writers praising the cognitive workout required to condense their tweets to be both eloquent and informative, it is a limited medium that doesn’t lend itself to information-rich content or detailed evaluation. Google+ provides a clean, long-format forum in which ideas can be effectively transferred.

The second reason lies in the flexibility of the medium to provide relevant information to those who care. Circles can be used to selectively share updates with certain groups. This means that scientific updates can be restricted to my ‘Science’ circle, posts on running can be restricted to my ‘Runners’ circle, and users receive a more targeted dose of updates and information. Comments aren’t driven by a desire to appear funny to a large number of people who probably share your boredom at the fact that, as it’s Sunday, Akira has once again completed a 6.3 mile run in a smidgen over 50 minutes – you’d instead reach the 5 people who would actually be rather preoccupied with trying to work out why Akira hasn’t managed to improve on his 6-mile time despite having done the same run every week for about 6 months. Depending on your willingness to invest time in the categorisation of contacts, you can be taken as seriously as you want.

Finally, and maybe most importantly, Google+ is currently rife with early-adopters. These are technologically ‘switched-on’ folk, who are willing to take a punt on a new medium, testing its capabilities and its uses as they go. To illustrate, Tom Hartley, Tal Yarkoni and Yana Weinstein all maintain current blogs/websites of their own, and all contributors to the above thread are active twitter users (and well worth following). Asking a question about how to conduct science using a nascent technology via a nascent communication technology stood every chance of being successful given the overlap in the Venn diagram of technology users. Add to that the diminished risk of being called out as a ‘geek’ (we’re all geeks here, even before the uber-geeks are further isolated within the ‘Geek’ circle), and we have the optimum conditions in which to find out about Amazon’s Mechanical Turk.

This isn’t to say that Google+ won’t be successful for non-technological academic discussion, or for technological discussion even after the laggards arrive. But I think that success depends on the parameters for its use in academia being established now. If academics recognise that Google+ can be used to exchange work-related ideas early on in its life-cycle, then it has a much better chance of taking off and even being further developed with this use in mind. It already seems to me a far more attractive site for academics than academia.edu, which has comprehensively failed to do anything other than act as a repository for electronic papers and CVs.

So, I’m quietly optimistic… until the next big thing comes along and I jump ship, desperately trying to keep up with all the other early-adopters.

[youtube http://www.youtube.com/watch?v=suJgV9HhJp8]

Tomorrow I travel to Saarbrücken to give an invited seminar at Saarland University. It will be my seventh and penultimate talk of the academic year (with my last talk scheduled for ICOM in York at the beginning of August).

Chris Moulin: My PhD supervisor and one of the best academic public speakers I have encountered.

I don’t much enjoy giving talks. In fact, if you had told me a year ago that within my first year as a full-time lecturer I would be obliged to give eight academic talks, I may well have reconsidered whether I wanted a career in academia at all! This may sound like quite a strange position to be in, given that my job-title, ‘Lecturer’, doesn’t exactly suit someone who doesn’t enjoy speaking in front of an audience. But my enjoyment of conducting research and disseminating it through written media generally overrides the revulsion I feel for public speaking enough to make me think of my job as one I love doing.

I haven’t always disliked giving talks as much as I do now. In fact, I can pin-point the moment my healthy disdain for them matured into a bravado-less fear: the moment I realised that my talks were sometimes good, sometimes bad, but generally unpredictable. This realisation came as a direct consequence of feedback I received from my postdoctoral mentor following a rather bad talk given for the Brain and Behavior Colloquia series at Washington University in St Louis. I hated receiving this feedback, but it has done me the world of good, and I’m thankful I went through the few seconds of intense personal embarrassment and the couple of days of painful rumination to get to where I am now. The feedback I was given was a much more tactful paraphrasing of:

“You’re no Richard Burton. You can’t talk endlessly and engagingly about something that people don’t inherently find interesting. You therefore need to plan and practice your talks accordingly.”

Receiving that feedback made me realise that I wasn’t going to get better at giving talks by simply giving more talks as I had been giving them.  I’m not a natural public speaker, so I shouldn’t expect to adopt the style, in preparation and delivery, of a natural speaker and hope everything falls into place. I was going to get better at giving talks by honing how I prepare to give talks and practising the hell out of their delivery.

I now hate giving talks because I’ve experienced the clear benefit that a revised, personally-appropriate talk preparation has on the quality of my delivery. I know that the quality of my talk reflects directly how much time and effort I have put into its preparation. The audience’s judgement of my talk feels like much more of a valid judgement of me than I used to think it was, and of course, now, it is. I now hate talks because I know that if I spend weeks preparing them, they will go well. To do anything less for a talk I care about would simply be irresponsible.

I now have a quite rigorous regime for talk preparation and delivery as follows:

  • I plan and construct the slides for the talk weeks in advance.
  • Once the slides have been written, I script the talk. I don’t think there’s any shame in this as long as you have weaned yourself off the script by the time you deliver your talk, and it seems that others agree – see Lifehacker’s post on nailing your talk à la Malcolm Gladwell.
  • I iteratively refine my slides and script through daily practice. Easy for a 15 minute talk, more challenging for an hour-long seminar.
  • I learn when I can ad lib, and when I must stay on script. Some points are so complicated or nuanced that they require minimal deviation from your script.
  • I wean myself off my script through dress rehearsals. I don’t mean that I wear my conference garb every time I  run through my talk, but I try to deliver the talk as I intend to deliver it on the day. I practice my pace, my volume, and my phrasing. For example, when I am delivering a bad talk, I speak too slowly, too quietly and I tail off at the end of sentences. Reversing all of these bad habits during practice runs has stopped me defaulting to this state on final delivery.
  • I use my own laptop (PC) to present the talk.  Conferences tend to provide additional VGA cables to which Mac users can connect their laptops. Using my own PC laptop with this cable instead of the pre-supplied PC minimises the chances of media files going astray and Powerpoint versions causing problems.
  • I remove myself from the lectern. This takes a bit of guts at first, but it removes the temptation to read slides. It also introduces more naturalistic and conversational hand movement. I’m not a big gesturer, so I don’t think too many people find my hand movement off-putting. If you look like a semaphore signaller when you talk, this might be something to work on reducing!
  • I make sure there is water available to sip on during the talk. Again, taking my first sip is sometimes awkward, but not nearly as awkward as a gummed up mouth adding unnecessary consonants to my otherwise well-delivered talk.
This regime helps me to deliver better talks, but it has also had knock-on benefits to other aspects of my presentation:
  • I now write better slides because I have more confidence in my delivery.
  • I can upload my slides with script to the blog. Of course, the final version of the talk I give will deviate significantly from my script, but the script is still useful in helping people make sense of my slides.
  • I don’t overrun my scheduled time because I have had so many opportunities to practice this. This helps to keep stress levels down.
Of course, this approach is totally unsustainable when delivering a programme of lectures, but it’s great for one-off events. In fact, I would go as far as saying that it got me a job and helped me to deliver some of the best academic talks of my career along the way.

I have almost emerged from one of the most challenging times in my first year at St Andrews, a period of sustained grant-writing.

Below are some tips and resources that have helped me during the past few months.  Some of the resources are specific to St Andrews (where I am) and the BBSRC (who I am applying to) but most are non-specific.

Resources
– A condensed guide to grant-writing from Edinburgh’s Prof. Alan Bundy

– The Chronicle’s guide to how to fail at grant-writing

– Chapters 8 and 9 of The Compleat Academic.

Guidance
– Read the grant-, scheme- and electronic application-specific guidance notes and organise your application, particularly your Case for Support, according to their requirements.
e.g. BBSRC (funding body) http://www.bbsrc.ac.uk/funding/apply/grants-guide.aspx
Je-S (electronic submission system) https://je-s.rcuk.ac.uk/jesHandBook/jesHelp.aspx

– Get hold of applications that others have submitted to the same or similar funding bodies. This is probably the most reassuring thing you can do in the early stages of an application.

– Make sure you keep up with changes in funding policy. Twitter, RSS feeds and e-mails doing the rounds at work will help you to make sure you’re not tailoring your grant proposal to a priority that has recently been de-prioritised.


Costings
– Go through official channels with your costings
e.g. http://www.st-andrews.ac.uk/rfo/Costingadvice/ResearchProjectCosts/
but beware of accepting place-holder values that your finance representative puts in for you, for example, in pooled staff costs.

– Should you be successful, you will have authority to spend funds that you have applied for as directly incurred but not those applied for as directly allocated. (Just so you know.)

– Read the guidance notes. Don’t apply for stuff that seems reasonable, but that the guidance notes state should be provided by your home institution (e.g. a desktop computer for day-to-day work)

Time
– Give yourself plenty of it. Not only will it make the experience less stressful, it will also allow you to take time out of the all-consuming process every now and again. A fresh eye spots mistakes in text that a tired one doesn’t even bother reading.

– Be aware of the deadlines

Speak to people (Head of School, Director of Research, your friends in academia) about when you aim to have the grant submitted. They will give you an indication of when you need to submit it at your end in order that it can be submitted to the research council or charity at their end and still make it in before the deadline.

Finishing touches

Daniel Higginbotham’s guide to visual design. Great for polishing those figures and making pages of dense text comprehensible.

Last night I capitulated and ordered an iPad2.

Since blogging a while ago about whether it might be a good idea to get one, I have noticed mention of iPads cropping up more and more in my RSS feeds. Of course, this is down to the release of the latest version of the iPad, but I generally find it easy to ignore the engadget hype posts about stuff I’m not all that excited about… for example, I really don’t give a toss about the Nintendo 3DS, and a fair amount has been written about those lately too. More difficult to ignore have been the mentions in the academic blogs I subscribe to (such as the consistently interesting Profhacker) and recommendations from friends (such as @pam_psych).

Other contributing factors have included the construction of a ‘reading nook’ in my office, having to override my HP printer’s helpful out-of-ink notification (if you’re out of ink, why can I override you and still get perfectly readable printouts?) and noticing my ‘to read’ GMail label pile up to over 20 items once again. The straw that broke the camel’s back was seeing Alex Easton give a very smoothly presented departmental seminar on Friday, all administered using, rather inevitably, an iPad.

Look at his smug little face. Image via Wikipedia

All of which has left me in quite a conflicted state. I dislike Steve Jobs immensely. I despise the arrogance with which he suggests that the iPad is “magical” (I invariably had to stop myself from spitting on the sign that proclaimed this ridiculousness outside the Washington University Bookshop). I even bought an Android phone so I wouldn’t line his pockets. But the allure of a well-designed, perfectly useful product has made me eat my words and give him some of my hard-earned cash.

So, I’m now looking forward to the day in May when I receive my magical and revolutionary product.  I’m eagerly awaiting the sense of frustration I’ll feel when I realise that its 1024×768 resolution isn’t quite good enough to read a single page of a pdf article in fullscreen.  I can’t wait until the perfectly standard use of Flash on a website I’m viewing fails to load.  I’m on tenterhooks to experience the pointed glances that scream “pretentious wanker” at my smug little face.  Because from within my £399 walled garden, I will have even more reasons to dislike Mr Jobs.


SPM will, by default, show you 3 local maxima within each cluster displayed when you click ‘whole brain’ within Results. To change the default number of local maxima displayed in the output table, edit spm_list.m and change the value assigned to the variable ‘Num’ (line 201 in the spm_list.m supplied with SPM8). I currently have it set to 64.

You can also edit the variable ‘Dis’ in the same .m file (line 202 in SPM8) to change the minimum distance between peak voxels displayed.
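If you would rather not re-edit the file by hand after every SPM update, the same change can be scripted. The sketch below is a hypothetical Python helper of my own: the variable names ‘Num’ and ‘Dis’ are as described above, but the exact formatting of the assignments may differ between SPM versions, so treat the patterns as an assumption to check against your copy of spm_list.m.

```python
import re

def patch_spm_defaults(src: str, num: int = 64, dis: int = 8) -> str:
    """Rewrite the first 'Num = ...' and 'Dis = ...' assignments in the
    text of spm_list.m. 'Num' is the number of local maxima listed per
    cluster; 'Dis' is the minimum distance (mm) between reported peaks."""
    src = re.sub(r"Num\s*=\s*\d+", f"Num     = {num}", src, count=1)
    src = re.sub(r"Dis\s*=\s*\d+", f"Dis     = {dis}", src, count=1)
    return src

# Demonstrated on a fragment shaped like the relevant lines of spm_list.m:
fragment = "Num     = 3;\nDis     = 8;\n"
print(patch_spm_defaults(fragment, num=64, dis=16))
```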


Here’s a link to some pretty useful tweaks for getting the most speed possible out of a Windows Remote Desktop connection:


The end result isn’t too much faster than when using the default settings for a slow connection, but the difference is noticeable.

Multiple Connections to Computers behind a Router

This is for when you want to Remote Desktop into more than one computer sharing an internet connection through a router.


You’ll need to:
1) change the port designated for the Remote Desktop connection on each computer locally (the default is 3389);
2) set up port-forwarding appropriately on your router;
3) make sure that whoever runs your network allows external access not only to the default port for Remote Desktop on the IP address occupied by your router, but also to the additional ports you have specified on each additional target computer. This is a particularly important step if you’re doing this on a work connection, where external access to most ports is blocked by default.
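As a sketch of what the resulting plan looks like, each machine gets its own distinct port, which the router forwards to that machine. The hostnames, internal IP addresses and extra ports below are invented for illustration; only 3389, the Remote Desktop default, comes from the steps above.

```python
# Hypothetical machines behind one router; only port 3389 (the Remote
# Desktop default) comes from the text above – everything else is made up.
machines = {
    "office-pc":   "192.168.1.10",
    "lab-pc":      "192.168.1.11",
    "analysis-pc": "192.168.1.12",
}

def forwarding_table(machines: dict, base_port: int = 3389) -> list:
    """Give each machine a distinct port, starting from the RDP default.
    Each machine is reconfigured to listen on its assigned port locally
    (step 1), and the router forwards that same port to it (step 2)."""
    table = []
    for offset, (name, ip) in enumerate(sorted(machines.items())):
        port = base_port + offset
        table.append((name, ip, port))
    return table

for name, ip, port in forwarding_table(machines):
    print(f"{name}: router port {port} -> {ip}:{port}")
```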

On my recent submission of a manuscript to the Journal of Memory and Language (an Elsevier journal), I was faced with the unexpected task of having to provide “Research highlights” for the submitted manuscript. Elsevier describe these highlights here, including the following instructions:

  • Include 3 to 5 highlights.
  • Max. 85 characters per highlight including spaces…
  • Only the core results of the paper should be covered.
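Since the constraints are purely mechanical, they are easy to check before submission. Here is a small Python validator of my own devising (the rules encoded are just the bullets above; it is not an Elsevier tool, and the example highlights are invented):

```python
def validate_highlights(highlights, min_n=3, max_n=5, max_len=85):
    """Check a set of research highlights against the stated rules:
    3 to 5 highlights, each at most 85 characters including spaces."""
    problems = []
    if not min_n <= len(highlights) <= max_n:
        problems.append(f"need {min_n}-{max_n} highlights, got {len(highlights)}")
    for i, h in enumerate(highlights, start=1):
        if len(h) > max_len:
            problems.append(f"highlight {i} is {len(h)} chars (max {max_len})")
    return problems

# Invented example highlights, kept under the 85-character ceiling:
example = [
    "Recognition memory was tested across three experiments.",
    "Familiarity and recollection made distinct contributions.",
    "Results constrain dual-process models of recognition.",
]
print(validate_highlights(example))  # [] – the set passes every check
```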

They mention that these highlights will “be displayed in online search result lists, the contents list and in the online article, but will not (yet) appear in the article PDF file or print”, but having never previously encountered them, I was (and still am) a little unsure about how exactly they would be used (Would they be indexed on Google Scholar? Would they be used instead of the abstract in RSS feeds of the journal table of contents?). The thought that kept coming to me as I rephrased and reworked my highlights was “they already have an abstract, why do they need an abstract of my abstract?”

Having pruned my five highlights to fit the criteria, I submitted them and thought nothing more of them… until tonight. I checked the JML website to see if my article had made it to the ‘Articles In Press’ section and, rather than seeing my own article, saw this:

This was my first encounter with Research Highlights in action. I was impressed. I’m not too interested in language processing, so would never normally have clicked on the article title to read the abstract, but I didn’t need to. The highlights were quick to read and gave me a flavour of the research without giving me too much to sift through. I guess that’s the point, and it’ll be interesting to see whether that is maintained when every article on the page is accompanied by highlights.

It’s hard to tell whether the implementation of research highlights in all journals would improve the academic user-experience. No doubt, other journal publishers are waiting to see how Elsevier’s brain-child is received by researchers. But there is another potential consequence that could be extremely important. In the example above, I was able to read something comprehensible to me about a field I know next-to-nothing about. In the same vein, maybe these highlights will be the first port of call for popular science writers looking to make academic research accessible to laymen. If the end-result of the research highlight experiment is that a system is implemented that helps reduce the misrepresentation of science in the popular media, then I would consider that a huge success.