I give all of my lectures and presentations using the cloud-based Prezi. Because of this, I have a subscription, which gives me access to the offline Prezi creator, Prezi Desktop. The major advantage of Prezi Desktop is that you can work on presentations without an internet connection and upload your presentations to the cloud later… In theory.

Upload to Prezi.com…

Last week I ran into an issue with Prezi Desktop where the ‘Upload to Prezi.com…’ menu function didn’t work. This is not what you want to happen the night before your lecture. Here was the problem as I described it to Prezi Support:

I cannot upload a prezi I have created in the desktop editor (2.84 MB in size). Sometimes it stops (hangs) at 10%, other times as high as 35%. Occasionally (~20% of the time) I get an error message about media, prompting me to strip out all embedded videos (less than ideal), but this still does not resolve the issue. The exact error I receive is:

There was some trouble uploading your content (Error: uploading_media_files)

A search of Prezi’s support database yields this, which suggests that there’s something wrong with my firewall settings. I therefore tried uploading with the firewall off and on, on different computers, and on different ISPs. No luck.

Eventually Prezi support uploaded the file for me and also came through with the following nugget of advice:

…my suggestion for next time would be to name the pez file something very simple, with only letters and numbers.

The files I had been trying to upload were named PS2001_20122013_Lect1.pez, PS2001_20122013_Lect2.pez etc. As soon as I stripped them down to PS2001Lect1, PS2001Lect2 etc., upload worked just fine. This is a very annoying bug in Prezi Desktop that needs to be fixed, especially as this filename advice isn’t available on Prezi’s support forum.

To keep Prezi Desktop’s ‘Upload to Prezi.com…’ menu function working, make your prezi filenames short and avoid non-alphanumeric characters (underscores, ampersands etc.).

IFTTT, if this then that, is an online, multi-service task automation tool I first read about on Lifehacker last year. I finally started using it today, and am seriously impressed.

IFTTT

Once you’ve signed up for an account, you can create IFTTT ‘recipes’ that check for actions and events on one online service (e.g. Google Reader, Dropbox, WordPress, Facebook etc.) and use them as automatic triggers for a predetermined action in another (e.g. Gmail, Google Calendar, Tumblr etc.).

Example: To keep track of journal articles I should read, I monitor journal table-of-contents RSS feeds and e-mail interesting posts to myself for later download and consumption. I use my iPad, my phone, and occasionally my PC browser to access Google Reader, but struggle with how fiddly it is to e-mail myself on my mobile devices (with my filter-trigger keywords in the message body) whenever I find an article I want to read. I’m sure I’ve missed articles I ought to have read by setting my action criterion a bit too high, a direct result of how annoying it is to e-mail myself articles using the various Google Reader interfaces on my mobile devices. Today I set up IFTTT to check for starred Google Reader feed items and automatically do everything else that I find annoying. Perfect!

IFTTT will check for custom recipe triggers every 15 minutes, so it isn’t something you’d want to use for actions you require to be instantaneous, but it’s perfect for situations like the above. The services with which it is integrated are many and varied, and the possibilities therefore nearly limitless.

UPDATE 16/04/2014: I just came across this page and found that I had referenced the now defunct Google Reader. When Reader died I moved all of my RSS feeds across to feedly, which IFTTT supports with identical functionality. I also apply the same rule to twitter posts I favourite, meaning that I have a Gmail folder in which IFTTT aggregates all of the stuff I want to read from both feedly and twitter.

The lab’s first Javascript experiment has been online for about 3 weeks now, and has amassed close to 200 participants. It’s been a great experience discovering that the benefits of online testing (60+ participants a week, many of them run while I’m asleep!) easily outweigh the costs (the time expended learning Javascript and coding all the fiddly bits, particularly the informed consent procedures and performance-appropriate feedback).

On top of the study completion data that’s obvious from the 7 KB csv file that each happily-debriefed participant leaves behind, the Google Analytics code embedded in each page of the experiment provides further opportunity to explore participation data.

Attrition

As the experiment structure is entirely linear, it’s possible to track the loss of participants from each page to the next.

Study Attrition

The major point of attrition is between the Participant Information Page and the Consent Form – not surprising given quite how text-heavy the first page was, and how ‘scary’ headings like “Are there any potential risks to taking part?” make the study sound. The content of that first page is entirely driven by the Informed Consent requirements of the University of St Andrews, but the huge attrition rate here has prompted a bit of a redesign in the next follow-up study.

Browser

New Visits by Browser

The browser data have been another useful source of information for the design of future studies. As might be expected, Firefox and its relatives are the dominant browsers, with Chrome a distant second and Internet Explorer lagging far behind. Implementing fancy HTML5 code that won’t work in Firefox is therefore a bad idea. On top of that, despite how tablet- and phone-friendly the experiment was, very few people used this sort of device to complete the study – it’s probably a waste of time optimising the site specifically for devices like iPads.

Study Completions by Browser

Curiously enough, when the data for study completions are explored by browser, the figures for the three major browsers start to even out. Chrome, Firefox and IE all yield similar numbers of completions, suggesting that IE users are far more likely to follow through and complete the study once they visit the site. I’m speculating here, but I suspect that this has something to do with a) this being a memory study and b) IE being used by an older demographic of internet user who may be interested in how they perform. Of the three major browsers, Firefox users have the worst completion rate.

Location

Another consideration with word-based experiments is the location of participants. This could influence the choice of words used in future studies (American or UK spellings) and could be considered important by those keen to exclude participants who don’t speak English as their first language. Finer-grained information about participants’ first languages is something we got from self-reports in the demographic questionnaire, but the table of new visits and study completions is still rather interesting.

New Visits and Study Completions by Country

Once again, there are few surprises here, with the US dominating the new visits list, though any given new visit from a UK- or India-based browser is more likely to lead to a study completion. A solid argument for using North American spellings and words could also be made from these data.

Source of Traffic

The most important thing to do to make potential participants aware of an online psychology study is to advertise it. But where?

Study Completions by Source

While getting the study listed on stumbleupon was a real coup, it didn’t lead to very many study completions (a measly 2.5%). That’s not surprising – the study doesn’t capture attention from page 1 and doesn’t have much in the way of internet meme-factor. That is, of course, something we should be rectifying in future studies if we want them to go viral, but it’s tough to do within the rigid constraints of the informed consent pages that must precede the study itself.

The most fruitful source of participants was the psych.hanover.edu Psychological Research on the Net page. It was much more successful at attracting visits and study completions than facebook, the best of the social networks, and the other online experiment listing sites on which we advertised the study (onlineresearch.co.uk and http://www.socialpsychology.org/expts.htm). What’s more, there has been a sustained stream of visitors from the psych.hanover.edu page that hasn’t tailed off as the study has been displaced from the top of the Recently Added Studies list.

These statistics surprised me more than any others. I assumed that social networking, not a dedicated experiment listing page, would be how people would find the study. But in retrospect, it all makes sense. There is clearly a large number of people out there who want to do online psychology studies, and what better way to find them than a directory that lists hundreds of studies? If there’s one place you should advertise your online studies, it’s psych.hanover.edu.

To present stimuli for my experiments in the lab, I use Psychophysics Toolbox (Psychtoolbox) in conjunction with Matlab.

One limitation of Psychtoolbox is that the included DrawFormattedText function does not allow text to be horizontally centered on a point other than the horizontal center of the screen. That sounds like an odd complaint, but what I mean is that you cannot offset the centering (as you could by centering text within different columns of a table): if you try to place text anywhere other than the horizontal center of the screen, it must be left-aligned.

This means that, when using the original DrawFormattedText, instead of nice-looking screens like this:

Note that words 1 and 3 are well centered within their boxes

you get this:

Note that word 2 is centered, but words 1 and 3 are left-aligned within their boxes

which is a little messy.

To fix this, I have modified the DrawFormattedText file to include an xoffset parameter. It’s a very basic modification that allows text to be centered on points offset from the horizontal center of the screen. For example, calling DrawFormattedText_mod with:
1) xoffset set to -100 centers text horizontally on a point 100 pixels to the left of the horizontal center of the screen.
2) xoffset set to rect(3)/4 (where rect = screen dimensions, e.g. [0 0 1024 768]) centers text horizontally 3/4 of the way from the left-hand edge.
I haven’t replaced my DrawFormattedText.m with my DrawFormattedText_mod.m just yet, but it has been added to the path and seems to be doing the trick.
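To make the calling pattern concrete, here’s a minimal usage sketch. I’m assuming here that xoffset sits at the end of the standard DrawFormattedText argument list; check the downloaded DrawFormattedText_mod.m for the actual parameter order.

% Minimal usage sketch -- xoffset assumed to be the final argument;
% check DrawFormattedText_mod.m itself for the actual parameter order.
[win, rect] = Screen('OpenWindow', 0, 255);

% Word 1: centered 100 pixels to the left of the screen's horizontal center.
DrawFormattedText_mod(win, 'Word 1', 'center', 'center', 0, [], [], [], [], [], [], -100);

% Word 3: centered rect(3)/4 pixels to the right of the horizontal center.
DrawFormattedText_mod(win, 'Word 3', 'center', 'center', 0, [], [], [], [], [], [], rect(3)/4);

Screen('Flip', win);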

You can download my DrawFormattedText_mod.m here: https://dl.dropbox.com/u/4127083/Scripts/DrawFormattedText_mod.m

Do not duplicate

I ran into the following error when trying to use a script to make Marsbar extract betas:

Error in pr_stat_compute at 34

Indices too large for contrast structure

This problem occurred when I was trying to extract the betas for an unusual participant who had an empty bin for one condition, and for whom I had therefore had to manually alter the set of contrasts. In doing this, I had inadvertently duplicated one contrast vector. Although the contrast names were different, the number of contrasts had been amended to reflect the number of unique contrast vectors in SPM.xCon but not in Marsbar’s design object D. As a result, pr_stat_compute’s ‘xCon = SPM.xCon’ (line 23) did not return the same value as my own script’s ‘xCon = get_contrasts(D)’: the two xCons differed in length, producing the error in pr_stat_compute.

The solution lay in removing the duplicate contrasts from the contrast specification for that participant.
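A quick way to catch this before Marsbar complains is to check the participant’s SPM.mat for duplicated contrast vectors. A sketch (assuming t-contrasts only, so that each SPM.xCon(i).c is a single column vector):

% Flag duplicated contrast vectors in a participant's SPM.mat.
% Assumes t-contrasts, so each SPM.xCon(i).c is one column vector.
load('SPM.mat');
C = cat(2, SPM.xCon.c);             % one column per contrast
[~, iKeep] = unique(C', 'rows');    % one instance of each unique vector
if numel(iKeep) < size(C, 2)
    dupes = setdiff(1:size(C, 2), iKeep);
    fprintf('Duplicate contrast vector(s) at index: %s\n', num2str(dupes));
end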

Having experimented with prezi a fair amount recently, I’ve been looking at ways to showcase the output on this blog, hosted on wordpress.com.  Pasting the standard embed code into the ‘HTML editor’ just results in a garbled mess of html code being displayed on the blog post, so I searched for and found a way of doing this successfully.

The prezi community provides the answer below:
http://community.prezi.com/prezi/topics/how_to_embed_prezi_in_wordpress_com_blog

Here are the instructions from bookbagdesigner, posted in January 2011.


It can take a while to find the appropriate bit of the embed code described in step 2.  But once you’ve found it, steps 3 and 4 are straightforward and the results are a success.

Since setting up the lab in St Andrews I’ve consistently run into a DICOM Import Error that causes the process to terminate about half-way through. I finally fixed the problem today after a quick search of the SPM mailing list.

The error I was receiving was as follows:

Running 'DICOM Import'
Changing directory to: D:\Akira Cue Framing 2011\PP03
Failed 'DICOM Import'
Error using ==> horzcat
CAT arguments dimensions are not consistent.
In file "C:\spm8\spm_dicom_convert.m" (v4213), function "spm_dicom_convert" at line 61.
In file "C:\spm8\config\spm_run_dicom.m" (v2094), function "spm_run_dicom" at line 32.

The following modules did not run:
Failed: DICOM Import

??? Error using ==> cfg_util at 835
Job execution failed. The full log of this run can be found in MATLAB command window, starting with the lines (look for the line
showing the exact #job as displayed in this error message)
——————
Running job #[X]
——————

Error in ==> spm_jobman at 208

??? Error while evaluating uicontrol Callback

This was a little mysterious, as the appropriate number of nifti files appeared to be left after the process terminated unexpectedly.

The following link suggested an SPM code tweak that might fix it:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind1106&L=SPM&P=R49499&1=SPM&9=A&J=on&d=No+Match%3BMatch%3BMatches&z=4

The proposed fix from John Ashburner simply requires changing line 61 of spm_dicom_convert.m from:

out.files = [fmos fstd fspe];

to:

out.files = [fmos(:); fstd(:); fspe(:)];

Works like a charm!
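For the curious, the failure is easy to reproduce in isolation: horzcat needs the three lists of converted files to have matching row counts, whereas forcing each to a column with (:) and concatenating vertically always works. A toy illustration (the file lists here are made up, not what SPM actually produces):

% Toy illustration of the bug -- file lists made up for demonstration.
fmos = {};                        % e.g. no mosaic files converted
fstd = {'s01.nii'; 's02.nii'};    % 2x1 cell array
fspe = {'spec.nii'};              % 1x1 cell array

% Original line 61 fails with 'CAT arguments dimensions are not consistent':
% out.files = [fmos fstd fspe];

% Ashburner's fix works -- (:) forces each list into a single column:
out.files = [fmos(:); fstd(:); fspe(:)];   % 3x1 cell array of filenames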


Can the iPad2, with its 132ppi 1024 x 768 screen, be used to comfortably read pdfs without the need to zoom and scroll about single pages?

That was a question that troubled me when I was splashing out for one earlier this year. Trying to get a better idea of what a pdf viewed on only 800,000 pixels might look like was hard. Neither my attempt to resize a pdf window to the correct number of pixels (too small) nor my attempt to screengrab a pdf at a higher resolution and shrink it using GIMP (too fuzzy) was particularly informative. I just had to take the plunge and see.

While there’s enough wiggle-room (as you can see in the screenshots below) to suggest that there’s no definitive answer, I think the answer is probably yes. But that’s only if you take advantage of some nifty capabilities of pdf-reading apps, Goodreader being the one I use, mostly thanks to its almost seamless Dropbox syncing.

Below is a screengrab of a standard, US letter-size pdf, displayed unmodified on the iPad. The size, when the image is viewed inline with this text (and not in its own separate window), is approximately the same as it appears on the iPad (there is some loss of resolution which can be recovered if you click on the image and open it in its own window).

Click on the image to simulate holding the iPad close to your face whilst squinting.

The screengrab above demonstrates that virgin pdfs aren’t great to read. The main body of the text can be read at a push, but it’s certainly not comfortable.

Thankfully, the bulk of the discomfort can be relieved using Goodreader’s cropping function, which allows whitespace around pdfs to be cropped out (with different settings for odd and even pages, if required).  A cropped version of the above page looks like this:
A marked improvement which could be cropped further if you weren't too worried about losing the header information. Click on the image to see the screengrab with no loss of resolution.

The image above demonstrates that cropping can be used to get most value from the rather miserly screen resolution (the same on both the iPad and iPad2, though almost certainly not on the iPad3, when that’s released).

But, cropping doesn’t solve all tiny text traumas.  There are some circumstances, such as with particularly small text like the figure legend below, that necessitate a bit of zooming.

The figure legend is a little too small to read comfortably, even when the page is cropped.

I don’t mind zooming in to see a figure properly, but that’s probably a matter of personal taste.

If you’re used to using an iPhone4, with its ridiculous 326ppi retina display, then you’ll find reading pdfs on a current model iPad a bit of a step back. But, it’s passable and I certainly don’t mind doing it. It certainly beats printing, carrying and storing reams of paper.

The non-breaking space looks like a normal space, but prevents an automatic line-break from occurring between the two text items it connects.

Wikipedia link: Non-breaking space

I use it when I want words not to get separated from their inline bullet-type markers [(i), a), – etc.]. This is useful in grant application documents where you might want to use lists but space is at a premium.

To type a non-breaking space:
Windows: Alt+255 (using the numeric keypad)
Mac: Option+Space

SPM will, by default, show you 3 local maxima within each cluster displayed when you click ‘whole brain’ within Results. To change the number of local maxima displayed in the output table, edit spm_list.m and change the value of the variable ‘Num’ (line 201 in the spm_list.m supplied with SPM8). I currently have it set to 64.

You can also edit the variable ‘Dis’ in the same .m file (line 202 in SPM8) to change the minimum distance between peak voxels displayed.
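For reference, the edited lines in my copy of spm_list.m look like this (line numbers and the defaults in the comments are from SPM8; check your own version before editing):

% spm_list.m (SPM8), around lines 201-202:
Num = 64;    % local maxima reported per cluster (SPM8 default: 3)
Dis = 8;     % minimum distance between reported peaks, in mm (SPM8 default: 8)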