Over the past couple of days, I have been archiving published fMRI projects and copying data from SD cards to start new ones. I have written previously about how I copy and verify copied files, and this is a quick update to that post to document another tool for verifying copies.

As far as the copying itself is concerned, I still swear by TeraCopy. For verifying that copies have been made successfully, though, I have recently started using ExactFile. The tagline “Making sure that what you hash is what you get” sums up the procedure for using ExactFile, once you have installed it on a Windows machine.

ExactFile in action
  1. Create a checksum for a single file or, if you are comparing all the files and subfolders within a folder (even a massive folder containing gigabytes of fMRI data), a checksum digest (illustrated above). This is saved as a file, which you can then use to…
  2. Test your checksum digest. You point ExactFile at the digest file and at the copied data you wish to compare against the checksums, and it runs through every file, making sure each copy is identical.

That’s it – pretty straightforward. Step 1 takes a little longer than Step 2, and if you’re comparing hundreds of thousands of files, you should prepare to have this running in the background as you get on with other stuff.
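ExactFile itself is point-and-click, but the two steps are easy to sketch in code. Here is a conceptual Python equivalent (hypothetical folder names, and nothing like ExactFile’s actual digest format): build a dictionary of checksums for the originals, build another for the copies, and compare.

```python
import hashlib
from pathlib import Path

def digest(folder):
    """Map each file's path (relative to folder) to its MD5 checksum.
    Real tools hash in chunks; reading whole files keeps the idea clear."""
    return {
        str(p.relative_to(folder)): hashlib.md5(p.read_bytes()).hexdigest()
        for p in Path(folder).rglob('*') if p.is_file()
    }

# Step 1 on the originals, Step 2 on the copies (folder names are made up)
if digest('sd_card/study01') == digest('archive/study01'):
    print('All copies verified.')
else:
    print('Mismatch - recopy and check again.')
```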

I am heavily reliant on Google Reader to keep up with the scientific literature.

I have customised RSS feeds for PubMed search terms. I have RSS feeds for journal tables of contents. I access my Reader account on my work computer via the website, on my iPad with the paid version of Feeddler, and on my Android with the official Google app. I use IFTTT and a Gmail filter to send everything I star for reading back to my email account, so it can all be dealt with at work. It’s not perfect, but it’s efficient, and it has taken me well over five years to arrive at this system.

And now, thanks to Google’s decision to kill Reader, I’m going to have to figure it all out again. That is, if Google Reader’s demise doesn’t kill RSS.

Right now, this video resonates with me.

The Raspberry Pi (photo credit: Wikipedia)

A few months ago, I suggested that Raspberry Pis could be used as barebones experiment-presentation machines. Since then I have got my hands on one and tinkered a little, only to be reminded yet again that my inability to do much in either Linux or Python is a bit of a problem.

Fortunately, others with more technological nous have been busy exploring the capabilities of the Pi, with some exciting findings. On the Cognitive Science Stack Exchange, user appositive asked “Is the Raspberry Pi capable of operating as a stimulus presentation system for experiments?” and followed up at the end of January with a great answer to their own question, including this paragraph:

The RPi does not support OpenGL. I approached this system with the idea of using a Python environment to create and present experiments. There are two good options for this that I know of, OpenSesame and PsychoPy. PsychoPy requires an OpenGL Python backend (pyglet), so it won’t run on the RPi. OpenSesame gives you the option of using the same backend as PsychoPy but has other options, one of which does not rely on OpenGL (based on pygame). This ‘legacy’ backend works just fine. But the absence of OpenGL means that graphics rely solely on the 700 MHz CPU, which quickly gets overloaded with any sort of rapidly changing visual stimuli (i.e. flowing Gabors, video, etc.).

Because of the lack of OpenGL support on the Pi, PsychoPy is out (for now), leaving OpenSesame as the best cognitive-psychology-focused Python environment for experiment presentation. The current situation seems to be that the Pi is suboptimal for graphics-intensive experiments, though this may improve as hardware acceleration is incorporated to take advantage of the Pi’s beefy graphics hardware. As things stand, though, experiments with words and basic picture stimuli should be fine. It’s just a case of getting hold of one and brushing up on Python.
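To give a flavour of what the Pi should handle comfortably, here is a minimal sketch of the pygame-style presentation loop that the ‘legacy’ backend rests on (illustrative Python, not actual OpenSesame code): static words, one per second, no OpenGL required.

```python
import pygame

# Minimal word-presentation loop: static text, so the 700 MHz CPU only
# has to draw each stimulus once.
pygame.init()
screen = pygame.display.set_mode((1024, 768))
font = pygame.font.Font(None, 72)  # pygame's default font

for word in ['APPLE', 'TABLE', 'RIVER']:
    screen.fill((0, 0, 0))  # black background
    stim = font.render(word, True, (255, 255, 255))
    screen.blit(stim, stim.get_rect(center=screen.get_rect().center))
    pygame.display.flip()
    pygame.time.wait(1000)  # present each word for 1 second

pygame.quit()
```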

UPDATE via Comments (1/4/2013) – Sebastiaan Mathôt has published some nice Raspberry Pi graphics benchmarking data, which are well worth a look if you’re interested:
http://www.cogsci.nl/blog/miscellaneous/216-running-psychological-experiments-on-a-raspberry-pi-with-opensesame

To present stimuli for my experiments in the lab, I use Psychophysics Toolbox (Psychtoolbox) in conjunction with Matlab.

One limitation of Psychtoolbox is that the included DrawFormattedText function does not allow text to be horizontally centered on any point other than the horizontal center of the screen. That complaint may not seem to make much sense, but what I mean is that you cannot offset the centering (as you could by centering text within different columns of a table): if you try to place text anywhere other than the horizontal center of the screen, it must be left-aligned.

This means that, when using the original DrawFormattedText, instead of nice-looking screens like this:

Note that words 1 and 3 are well centered within their boxes

you get this:

Note that word 2 is centered, but words 1 and 3 are left-aligned within their boxes

which is a little messy.

To fix this, I have modified the DrawFormattedText file to include an xoffset parameter. It’s a very basic modification that allows text to be centered on points offset from the horizontal center of the screen. For example, calling DrawFormattedText_mod with:
1) xoffset set to -100 centers text horizontally on a point 100 pixels to the left of the horizontal center of the screen.
2) xoffset set to rect(3)/4 (where rect holds the screen dimensions, e.g. [0 0 1024 768]) centers text horizontally three-quarters of the way from the left-hand edge (i.e. a quarter of the screen width to the right of center).
I haven’t replaced my DrawFormattedText.m with my DrawFormattedText_mod.m just yet, but it has been added to the path and seems to be doing the trick.

You can download my DrawFormattedText_mod.m here: https://dl.dropbox.com/u/4127083/Scripts/DrawFormattedText_mod.m
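To make this concrete, here is how a pair of calls might look. This is purely illustrative: I am assuming the modified function keeps DrawFormattedText’s usual argument order, with xoffset appended as a final extra argument (check the file itself for the actual signature).

```matlab
% Illustrative only: assumes xoffset is appended after the standard
% DrawFormattedText arguments.
[win, rect] = Screen('OpenWindow', 0, 0);
Screen('TextSize', win, 32);

% Centered on the screen's horizontal center (xoffset = 0)
DrawFormattedText_mod(win, 'word2', 'center', 'center', 255, ...
    [], [], [], [], [], [], 0);

% Centered a quarter of the screen width to the left of center
DrawFormattedText_mod(win, 'word1', 'center', 'center', 255, ...
    [], [], [], [], [], [], -rect(3)/4);

Screen('Flip', win);
```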

This year, I decided to learn how to present cognitive psychology experiments online. Five months in, I’m happy with my progress.

Since the second year of my PhD, when I spent a couple of weeks getting nowhere with Java, I have been keen to use the web to present experiments. What enabled me to move from thinking about it to doing it was Codecademy. I’ve previously blogged about how useful I found the Codecademy website in getting me familiar with the syntax of JavaScript, but at the time of writing that post, I was unsure how a knowledge of the coding architecture alone (and certainly not of coding aesthetics) would translate into a webpage presenting a functional cognitive psychology experiment. Thankfully, I did have a barebones knowledge of basic HTML (much of it now obsolete and deprecated), from which I was able to salvage snippets to combine with CSS (thanks to w3schools) to get something functional and not hideously ugly.

Syllable-counting in the Study Phase.
(click to be taken to the experiment)

Before I present the experiment I have spent the past few months working on, here are a few things I have learned from the experience so far.

1) In choosing JavaScript over a medium like Flash, I hoped to maximise the number of devices on which the experiments would run. I think I made the right choice. Pressing response buttons with your finger on an iPad or an Android phone feels like a Human Factors triumph!

2) JavaScript-driven user interaction operates quite differently from user interaction in languages like Matlab. JavaScript is event-driven, which means you can’t have the browser sit and wait for a response – a blocking loop will simply hang the page. Instead, you must start an event that changes the state of the elements within the browser, such that should those elements be responded to, it will be as if the browser had waited for a response (see the sketch after this list).

3) It is very quick and easy to learn how to code functionally (if it works, it is functional). It is much more difficult to learn how to code both elegantly and functionally. I do not know how to code elegantly, and I don’t think I ever will. (I’m not flippant about this either; it is something I would really like to learn how to do.)

4) Getting everything to look OK in different browsers is a pain. It wasn’t so much the JavaScript as the newer snippets of HTML5 that I struggled to get working in every browser.

5) Web security is a subject on which I have very little knowledge.

6) Sending information from a browser to a server is a pain in the arse.
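On point 2, here is a toy sketch of the event-driven pattern (hypothetical code, not lifted from my experiment, and assuming a page element with id 'stimulus'): present a word, arm a keydown handler, and let the browser call it whenever the response arrives.

```javascript
// Toy example: a Matlab-style busy-wait would lock up the page, so we
// register a handler and hand control straight back to the browser.
var trialOnset;

function startTrial() {
    document.getElementById('stimulus').textContent = 'APPLE';
    trialOnset = Date.now();
    document.addEventListener('keydown', handleResponse);
}

function handleResponse(event) {
    var rt = Date.now() - trialOnset;  // response time in ms
    document.removeEventListener('keydown', handleResponse);
    console.log('Key: ' + event.key + ', RT: ' + rt + ' ms');
}
```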


And finally, here is the experiment:

http://www.st-andrews.ac.uk/~oclab/memoryframingexpt/

It is a fairly straightforward recognition experiment, takes about 15 minutes to complete and should provide data for use in a larger project, so do feel free to take it as seriously as you want. As I have already mentioned, it works on an iPad, and I thoroughly recommend you give it a go this way if you have access to one.

Points and badges

For the past week or so, I have been working my way through Codecademy’s JavaScript tutorials. I can’t recommend them highly enough.

As things stand, I have a full house of 480 points and 35 badges and, as the Codecademy creators would undoubtedly hope, I am rather satisfied with the JavaScript proficiency I have attained. ‘Attained’ is probably the wrong word to use, though. Being a self-taught Matlab hacker, I have found that most of my coding know-how has translated fairly well into JavaScript. A few concepts (recursion in particular) have presented me with some difficulty, but the overall experience has been more like learning a new coding dialect than a new language altogether. I haven’t attained a proficiency so much as uncovered a hidden one.

Which brings me to why I sought out Codecademy in the first place (thanks to @m_wall for the Twitter-solicited tip-off) – I am preparing to teach Psychology undergrads how to code. From 2012/2013 onwards, my academic life is going to be a little more ‘balanced’. As well as the research, admin and small-group teaching I currently enjoy, I’m also going to be doing some large-group teaching. Although I have plenty to say to undergraduates about cognitive neuroscience and cognitive psychology, I think giving them some coding skills will actually be much more useful to most of them. As my experience with Codecademy has recently reinforced, coding basics are the fundamental building-blocks of programming in any language. They will stand you in good stead whatever dialect you end up speaking to your computer in. What’s more, they will stand you in good stead whatever you end up doing, as long as it involves a computer: coding is the most versatile of transferable skills to impart to psychology graduates, who (rightly) believe they are leaving university with the most versatile of degrees.

With all this in mind, one of Codecademy’s limitations is the difficulty with which its students can translate their new-found JavaScript skills into useful ‘stuff’ implemented outside the Codecademy editor. As Audrey Watters points out, there is barely any acknowledgement within the Codecademy tutorials that the goal of all of these points and badges is to encourage you to write interactive web content in an IDE. Indeed, last night when I thought about how I would use JavaScript to administer online memory experiments, I had to do a lot more reading. This could all be about to change, though. If the latest Code Year class on HTML is anything to go by, the folks at Codecademy are mindful of this limitation and are attempting to remedy it.

It’s just a shame that the HTML integration has come so late in the Code Year (yes, I say this with full awareness that we’re only on week 13). If the HTML–JavaScript confluence had come a little further upstream, there would probably have been a fledgling memory experiment linked to from this blogpost!

Raspberry Pi schematic from http://www.raspberrypi.org

I think the Raspberry Pi is going to be fantastic, for reasons summed up very nicely by David McGloin – the availability of such a cheap and versatile barebones technology will kickstart a new generation of tinkerers and coders.

It’s worth mentioning that this kickstart won’t be limited to the newest generation, currently going through their primary and secondary school educations. Should my hands-on experience of the device live up to my expectations (and the expectations of those who have bought all the units that went on sale this morning), I will be ordering a couple for each PhD student I take on. After all, what’s the point in using an expensive desktop computer, running expensive software on an expensive OS, to run simple psychology experiments that have hardly changed in the past 15 years, when technology like the Raspberry Pi is available for £22? Moreover, if presenting experiments this way also helps researchers pick up some of the most desirable employment skills within and outwith academia – expertise with and practical experience in programming – then I think it would be irresponsible not to.

But won’t I have missed a critical period in my students’ development from technology consumers into technology hackers?

No.

Every psychology student can and should learn how to code (courtesy of Matt Wall), and it’s never too late. I learned to code properly in my twenties, during my postdoc. The reason it took me so long was that I had never needed to code in any serious, goal-driven way before then. Until the end of my PhD, SuperLab and E-Prime had been perfectly passable vehicles for presenting my experiments to participants. My frustration with these packages’ attempts to make things ‘easy’, which ended up making things suboptimal, led me to learn to use the much ‘harder’ Matlab and Psychophysics Toolbox to present my experiments. Most importantly, my boss gave me license to immerse myself in the learning process. This is what I hope giving a PhD student a couple of Raspberry Pis will do, bypassing the tyranny of the GUI-driven experiment-design package in the process. Short-term pain, long-term gain.

In a few years, my behavioural testing lab-space could simply be a number of rooms equipped with HDMI monitors, keyboards and mice. Just before testing participants, students and postdocs will connect these peripherals to their own code-loaded Raspberry Pis, avoiding the annoyances of changed hardware settings, missing dongles and unreliable network licenses. It could be brilliant, but whatever it is, it will be cheap.

Do not duplicate

I ran into the following error when trying to use a script to make Marsbar extract betas:

Error in pr_stat_compute at 34

Indices too large for contrast structure

This problem occurred when I was trying to extract the betas for an unusual participant who had an empty bin for one condition, and for whom I had therefore had to manually alter the set of contrasts. In doing this, it turns out, I had inadvertently duplicated one contrast vector. Although the names were different, the number of contrasts had been amended to reflect the number of unique contrast vectors in SPM.xCon but not in Marsbar’s design object D. This meant that pr_stat_compute’s xCon = SPM.xCon (line 23) did not return the same value as my own script’s xCon = get_contrasts(D); the two xCons differed in length, producing the error in pr_stat_compute.

The solution lay in removing the duplicate contrasts from the contrast specification for that participant.
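For anyone hitting the same error, here is a quick diagnostic sketch (hypothetical code written for this post, not taken from my script) that lists contrasts whose vectors duplicate an earlier one. It assumes T-contrasts, so that each SPM.xCon(i).c is a single column, and that the participant’s SPM.mat is in the current directory.

```matlab
% Hypothetical check for duplicated contrast vectors (same c, different name)
load('SPM.mat');                            % loads the SPM struct
vecs = cell2mat({SPM.xCon.c})';             % one contrast vector per row
[~, keep] = unique(vecs, 'rows', 'stable'); % first occurrence of each vector
dupes = setdiff(1:numel(SPM.xCon), keep);   % everything else is a repeat
for i = dupes(:)'
    fprintf('Duplicate contrast %d: %s\n', i, SPM.xCon(i).name);
end
```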