Over the past couple of days, I have been archiving published fMRI projects and copying data from SD cards to start new ones. I have written previously about how I copy files and verify the copies, and this is a quick update to that post to document another tool for verifying copies.
As far as the copying itself is concerned, I still swear by Teracopy. As far as verifying that copies have been successfully made though, I have recently started using Exactfile. The tagline “Making sure that what you hash is what you get” sums up the procedure for using Exactfile, once you have installed it on a Windows machine.
1) Create a single-file checksum or, if you are comparing all the files and subfolders within folders (even massive folders containing gigabytes of fMRI data), a checksum digest (illustrated above). This is saved as a file, which you then use to…
2) Test your checksum digest. You locate your digest file and the copied data you wish to compare against the checksums, and Exactfile runs through, making sure each file is identical.
That’s it – pretty straightforward. Step 1 takes a little longer than Step 2, and if you’re comparing hundreds of thousands of files, you should prepare to have this running in the background as you get on with other stuff.
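For anyone who prefers a script to a GUI, the digest idea itself is simple enough to sketch in a few lines of Matlab. The snippet below is illustrative only – the function names are made up and Exactfile’s own digest format will differ – but it shows what the whole procedure boils down to: hash the original, hash the copy, compare.

```matlab
% Illustrative sketch only: hash a file and compare original against copy.
% Uses the JVM's MD5 implementation, which ships with desktop Matlab.
function ok = verify_copy(original_file, copied_file)
    ok = strcmp(md5_of_file(original_file), md5_of_file(copied_file));
end

function h = md5_of_file(fname)
    fid   = fopen(fname, 'r');
    bytes = fread(fid, inf, '*uint8');                      % whole file as raw bytes
    fclose(fid);
    md     = java.security.MessageDigest.getInstance('MD5');
    digest = typecast(md.digest(typecast(bytes, 'int8')), 'uint8');
    h      = lower(reshape(dec2hex(digest, 2)', 1, []));    % 32-character hex string
end
```

A real digest, of course, loops this over every file in the folder tree and saves the filename–hash pairs to disk so the test can be run later, which is exactly the legwork Exactfile does for you.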
I am heavily reliant on Google Reader to keep up with the scientific literature.
I have customised RSS feeds for PubMed search terms. I have RSS feeds for journal tables of contents. I access my Reader account on my work computer via the website, on my iPad with the paid version of Feeddler, and on my Android with the official Google app. I use IFTTT and a Gmail filter to send everything I star for reading back to my email account so it can all get dealt with at work. It’s not perfect, but it’s efficient, and it has taken me well over five years to arrive at this system.
The RPi does not support OpenGL. I approached this system with the idea of using a Python environment to create and present experiments. There are two good options for this that I know of: OpenSesame and PsychoPy. PsychoPy requires an OpenGL Python backend (pyglet), so it won’t run on the RPi. OpenSesame gives you the option of using the same backend as PsychoPy, but it has other options too, one of which (based on pygame) does not rely on OpenGL. This ‘legacy’ backend works just fine. But the absence of OpenGL means that graphics rely solely on the 700 MHz CPU, which quickly gets overloaded by any sort of rapidly changing visual stimuli (e.g. flowing Gabors, video, etc.).
Because of the lack of OpenGL support on the Pi, PsychoPy is out (for now), leaving OpenSesame as the best cog-psych-focused Python environment for experiment presentation. The current situation seems to be that the Pi is suboptimal for graphics-intensive experiments, though this may improve as hardware acceleration is incorporated to take advantage of the Pi’s beefy graphics hardware. As things stand, though, experiments with words and basic picture stimuli should be fine. It’s just a case of getting hold of one and brushing up on Python.
To present stimuli for my experiments in the lab, I use Psychophysics Toolbox (Psychtoolbox) in conjunction with Matlab.
One limitation of Psychtoolbox is that the included DrawFormattedText function does not allow text to be horizontally centered on a point other than the horizontal center of the screen. Put like that, the frustration may not seem to make much sense, but what I mean is that you cannot offset the centering (as you could by choosing to center text within different columns of a table): if you try to place the text anywhere other than the horizontal center of the screen, it must be left-aligned.
This means that, when using the original DrawFormattedText, instead of nice-looking screens like this:
you get this:
which is a little messy.
To fix this, I have modified the DrawFormattedText file to include an xoffset parameter. It’s a very basic modification that allows text to be centered on points offset from the horizontal center of the screen. For example, calling DrawFormattedText_mod with:
1) xoffset set to -100 centers text horizontally on a point 100 pixels to the left of the horizontal center of the screen.
2) xoffset set to rect(3)/4 (where rect = screen dimensions, e.g. [0 0 1024 768]) centers text horizontally 3/4 of the way across from the left-hand edge (for a 1024-pixel-wide screen, that is a centering point of 512 + 256 = 768 pixels).
I haven’t replaced my DrawFormattedText.m with my DrawFormattedText_mod.m just yet, but it has been added to the path and seems to be doing the trick.
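For anyone who just wants the gist without downloading the modified file: the change amounts to shifting the horizontal centering point before working out where the text should start. The sketch below is not DrawFormattedText_mod.m itself – it is the same idea expressed with stock Psychtoolbox calls for a single line of text, and the function name is made up.

```matlab
% Illustrative sketch: center one line of text on a point offset from the
% horizontal center of the window, using standard Psychtoolbox calls.
% e.g. DrawTextCenteredAt(win, 'Old', -rect(3)/4, 600, WhiteIndex(win));
function DrawTextCenteredAt(win, str, xoffset, y, color)
    rect      = Screen('Rect', win);              % window rectangle [0 0 width height]
    bounds    = Screen('TextBounds', win, str);   % tight bounding box of str
    textWidth = bounds(3) - bounds(1);
    x = rect(3)/2 + xoffset - textWidth/2;        % shift the centering point by xoffset
    Screen('DrawText', win, str, x, y, color);    % draw with the left edge at x
end
```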
This year, I decided to learn how to present cognitive psychology experiments online. Five months in, I’m happy with my progress.
Before I present the experiment I have spent the past few months working on, here are a few things I have learned from the experience so far.
3) It is very quick and very easy to learn how to code functionally – if it works, it is functional. It is much more difficult to learn how to code both elegantly and functionally. I do not know how to code elegantly and I don’t think I ever will. (I’m not being flippant about this either; it is something I would really like to learn how to do.)
5) Web security is a subject on which I have very little knowledge.
6) Sending information from a browser to a server is a pain in the arse.
It is a fairly straightforward recognition experiment, takes about 15 minutes to complete and should provide data for use in a larger project, so do feel free to take it as seriously as you want. As I have already mentioned, it works on an iPad, and I thoroughly recommend you give it a go this way if you have access to one.
Which brings me to why I sought out Codecademy in the first place (thanks to @m_wall for the Twitter-solicited tip-off) – I am preparing to teach Psychology undergrads how to code. From 2012/2013 onwards, my academic life is going to be a little more ‘balanced’. As well as the research, admin and small-group teaching I currently enjoy, I’m also going to be doing some large-group teaching. Although I have plenty to say to undergraduates on cognitive neuroscience and cognitive psychology, I think giving them some coding skills will actually be much more useful to most of them. As my experience with Codecademy has recently reinforced, coding basics are the fundamental building blocks of programming in any language. They will stand you in good stead whatever dialect you end up speaking to your computer in. What’s more, they will stand you in good stead whatever you end up doing, as long as it involves a computer: coding is the most versatile of transferable skills to impart to psychology graduates who (rightly) believe they are leaving university with the most versatile of degrees.
I think the Raspberry Pi is going to be fantastic, for reasons summed up very nicely by David McGloin – the availability of such a cheap and versatile barebones technology will kickstart a new generation of tinkerers and coders.
It’s worth mentioning that this kickstart wouldn’t just be limited to the newest generation currently going through their primary and secondary school educations. Should my hands-on experience of the device live up to my expectations (and the expectations of those who have bought all the units that went on sale this morning), I will be ordering a couple for each PhD student I take on. After all, what’s the point in using an expensive desktop computer running expensive software on an expensive OS to run simple psychology experiments that have hardly changed in the past 15 years? What’s the point when technology like the Raspberry Pi is available for £22? Moreover, if presenting experiments this way also helps researchers pick up some of the most desirable employment skills within and outwith academia – expertise with, and practical experience of, programming – then the argument becomes compelling enough that it would arguably be irresponsible not to.
But won’t I have missed a critical period in my students’ development from technology consumers into technology hackers?
Every psychology student can and should learn how to code (courtesy of Matt Wall), and it’s never too late. I learned to code properly in my twenties, during my postdoc. The reason it took me so long was that I had never needed to code in any serious goal-driven way before this time. Until the end of my PhD, Superlab and E-Prime had been perfectly passable vehicles by which I could present my experiments to participants. My frustration with the attempts of these experiment presentation packages to make things ‘easy’, which ended up making things sub-optimal, led me to learn how to use the much ‘harder’ Matlab and Psychophysics Toolbox to present my experiments. Most importantly, I was given license to immerse myself in the learning process by my boss. This is what I hope giving a PhD student a couple of Raspberry Pis will do, bypassing the tyranny of the GUI-driven experiment design package in the process. Short-term pain, long-term gain.
In a few years, my behavioural testing lab-space could simply be a number of rooms equipped with HDMI monitors, keyboards and mice. Just before testing participants, students and postdocs will connect these peripherals to their own code-loaded Raspberry Pis, avoiding the annoyances of changed hardware settings, missing dongles and unreliable network licenses. It could be brilliant, but whatever it is, it will be cheap.
I ran into the following error when trying to use a script to make Marsbar extract betas:
Error in pr_stat_compute at 34
Indices too large for contrast structure
This problem occurred when I was trying to extract the betas for an unusual participant who had an empty bin for one condition, and for whom I had therefore manually altered the set of contrasts. In doing this, it turns out, I had inadvertently duplicated one contrast vector. Although the two contrasts had different names, the number of contrasts had been amended to reflect the number of unique contrast vectors in SPM.xCon, but not in Marsbar’s D. This meant that pr_stat_compute’s ‘xCon = SPM.xCon’ (line 23) did not return the same thing as my own script’s ‘xCon = get_contrasts(D)’; the two xCons differed in length, which produced the error in pr_stat_compute.
The solution lay in removing the duplicate contrasts from the contrast specification for that participant.
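For future reference, this is easy to check for before running the extraction. The sketch below is just that – a sketch; it assumes the contrasts are sitting in an xCon-style struct array (SPM.xCon, or whatever get_contrasts(D) hands back), with each element’s .c field holding a t-contrast weight vector.

```matlab
% Rough sketch: flag any contrast whose weight vector duplicates an earlier one.
% Assumes t-contrasts, i.e. each xCon(i).c is a column vector of the same length.
function report_duplicate_contrasts(xCon)
    C = cat(2, xCon.c)';                           % one contrast vector per row
    [~, firstIdx] = unique(C, 'rows', 'stable');   % rows seen for the first time
    dupes = setdiff(1:size(C, 1), firstIdx');      % rows that repeat an earlier one
    if isempty(dupes)
        fprintf('No duplicate contrast vectors found (%d contrasts).\n', size(C, 1));
    else
        for d = dupes
            fprintf('Contrast %d (''%s'') duplicates an earlier contrast vector.\n', ...
                d, xCon(d).name);
        end
    end
end
```

Running this on both SPM.xCon and get_contrasts(D) would also have shown immediately that the two had drifted out of step.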
The image below is to be used to showcase my research in the department foyer.
It is an adaptation of a panel from a figure in my Journal of Neuroscience paper. The mosaic effect is created using text from the paper. The image was generated using the somewhat buggy, but very usable Textaliser Pro.