Can the iPad2, with its 132ppi 1024 x 768 screen, be used to comfortably read pdfs without the need to zoom and scroll about single pages?

That was a question that troubled me when I was splashing out for one earlier this year. It was hard to get a sense, in advance, of what a pdf viewed on only 800,000-odd pixels might look like. Neither my attempt to resize a pdf window to the correct number of pixels (too small) nor my attempt to screengrab a pdf at a higher resolution and shrink it using GIMP (too fuzzy) was particularly informative. I just had to take the plunge and see.

There’s enough wiggle-room (as you can see in the screenshots below) to suggest that there’s no definitive answer, but I think the answer is probably yes. That’s only if you take advantage of some nifty capabilities of pdf-reading apps, though; Goodreader is the one I use, mostly thanks to its almost seamless Dropbox syncing.

Below is a screengrab of a standard US letter-size pdf, displayed unmodified on the iPad. When the image is viewed inline with this text (and not in its own separate window), it appears at approximately the same size as it does on the iPad (there is some loss of resolution, which can be recovered by clicking on the image to open it in its own window).

Click on the image to simulate holding the iPad close to your face whilst squinting.

The screengrab above demonstrates that virgin pdfs aren’t great to read. The main body of the text can be read at a push, but it’s certainly not comfortable.

Thankfully, the bulk of the discomfort can be relieved using Goodreader’s cropping function, which allows whitespace around pdfs to be cropped out (with different settings for odd and even pages, if required).  A cropped version of the above page looks like this:
A marked improvement which could be cropped further if you weren't too worried about losing the header information. Click on the image to see the screengrab with no loss of resolution.

The image above demonstrates that cropping can be used to get most value from the rather miserly screen resolution (the same on both the iPad and iPad2, though almost certainly not on the iPad3, when that’s released).

But, cropping doesn’t solve all tiny text traumas.  There are some circumstances, such as with particularly small text like the figure legend below, that necessitate a bit of zooming.

The figure legend is a little too small to read comfortably, even when the page is cropped.

I don’t mind zooming in to see a figure properly, but that’s probably a matter of personal taste.

If you’re used to using an iPhone4, with its ridiculous 326ppi retina display, then you’ll find reading pdfs on a current model iPad a bit of a step back. But, it’s passable and I certainly don’t mind doing it. It certainly beats printing, carrying and storing reams of paper.

Speed

Here’s a link to some pretty useful tweaks for getting the most speed possible out of a Windows Remote Desktop connection:

http://www.tech-recipes.com/rx/11235/how-to-improve-remote-desktop-protocol-performance/

The end result isn’t too much faster than when using the default settings for a slow connection, but the difference is noticeable.
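For reference, the same sorts of options can be baked into a saved .rdp connection file directly. Below is a sketch of the low-bandwidth choices (open your own saved .rdp file in Notepad and compare; I wouldn’t treat these exact values as gospel):

session bpp:i:16
compression:i:1
disable wallpaper:i:1
disable full window drag:i:1
disable menu anims:i:1
disable themes:i:1
bitmapcachepersistenable:i:1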

Multiple Connections to Computers behind a Router

This is for when you want to Remote Desktop into more than one computer sharing an internet connection through a router.

http://www.pchell.com/support/useremotedesktoptoaccessmultiplecomputers.shtml

You’ll need to:
1) change the default port used by the Remote Desktop connection on each computer locally (the default is 3389; a sketch follows this list);
2) set up port-forwarding appropriately on your router;
3) make sure that whoever runs your network allows external access not only to the default Remote Desktop port on the IP address occupied by your router, but also to the additional ports you have specified for each additional target computer.  This is a particularly important step if you’re doing this on a work connection, where external access to most ports is blocked by default.
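To sketch steps 1 and 2 in practice (the port number and address below are made up): on each target machine, the listening port lives in the registry at

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber

Change it (as a decimal value) to, say, 3390 on the second computer and reboot. Then, with the router forwarding port 3390 to that machine, you connect from outside with:

mstsc /v:your.home.ip.address:3390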

Whilst Windows easily copies lots of data, it struggles when you ask it to copy lots and lots and lots of data.  Teracopy is a neat file copying utility that provides peace of mind as you transition from copying gigabytes of data to terabytes of data.

In order to get my fMRI data from St. Louis to St. Andrews, I have embarked upon the somewhat arduous task of copying everything to a portable hard-drive.   After a few attempts that ended in the failure to copy a file or two, seemingly at random, I lost faith in using the standard drag-and-drop copy in Windows, and searched for alternatives.  The command line option seemed fastest, but I didn’t want to bring the server down for everyone else for a few hours whilst I did my copying.  Then I found Teracopy.

Teracopy in action (Image via Wikipedia)

Teracopy (freeware) is a straightforward utility that improves upon the Windows interface in a number of ways.  Copying is (apparently) faster and it certainly seems more reliable than the standard Windows approach.  One very nice feature is that it allows you to pause and resume your copying for when you need to free up system resources temporarily.

download Teracopy

So far I have copied close to a terabyte of data onto my portable hard-drive with no problems.  Now all that remains is to check it all with another utility (Windiff) to make sure all my files really did get copied successfully, and to actually transport my hard-drive without banging or dropping it.
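For that verification step, Windiff will compare two directory trees if you point it at the source and the copy, along the lines of (paths invented):

windiff L:\fmri_data F:\fmri_data

Anything that differs, or didn’t make it across, gets flagged in the results list.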

Below (indented) is a straightforward MATLAB/SPM/Marsbar script for generating separate Marsbar ROI and Nifti ROI spheres (user-defined radius) from a set of user-defined MNI coordinates stored in a text file (‘spherelist.txt’ saved in the same directory as the script).  ROI files are saved to mat and img directories that the script creates.
I use this script to generate *.mat files as seeds for resting connectivity analyses.  Once a *.mat ROI is generated, Marsbar can be used to extract raw timecourses from this region to feed into connectivity analysis as the regressor of interest.  Because MRIcron doesn’t read *.mat Marsbar ROI files, I render the equivalent *.img seed regions on a canonical brain when I need to present them graphically.

% This script loads MNI coordinates specified in a user-created file,
% spherelist.txt, and generates .mat and .img ROI files for use with
% Marsbar, MRIcron etc.  spherelist.txt should list the centres of the
% desired spheres, one per row, with coordinates in the format:
% X1 Y1 Z1
% X2 Y2 Z2 etc.
% .mat sphere ROIs will be saved in the script-created mat directory.
% .img sphere ROIs will be saved in the script-created img directory.
% The SPM toolbox Marsbar should be installed and started before running this script.

% specify the radius of the spheres to build, in mm
radiusmm = 4;

% load the list of sphere centres into the variable 'spherelist'
load('spherelist.txt')

% specify output folders for the two sets of ROIs (.img format and .mat format)
roi_dir_img = 'img';
roi_dir_mat = 'mat';

% make an img and a mat directory in which to save the resulting ROIs
mkdir('img');
mkdir('mat');

% go through each set of coordinates loaded from spherelist.txt
spherelistrows = size(spherelist, 1);
for spherenumbers = 1:spherelistrows
    % maximum is the centre of the sphere, in mm, in MNI space
    maximum = spherelist(spherenumbers, 1:3);
    sphere_centre = maximum;
    sphere_radius = radiusmm;
    sphere_roi = maroi_sphere(struct('centre', sphere_centre, ...
        'radius', sphere_radius));

    % define the sphere's name using its coordinates
    coordsx = num2str(maximum(1));
    coordsy = num2str(maximum(2));
    coordsz = num2str(maximum(3));
    spherelabel = sprintf('%s_%s_%s', coordsx, coordsy, coordsz);
    sphere_roi = label(sphere_roi, spherelabel);

    % save the ROI as a MarsBaR ROI file
    saveroi(sphere_roi, fullfile(roi_dir_mat, sprintf('%dmmsphere_%s_roi.mat', ...
        radiusmm, spherelabel)));

    % save the ROI as an image
    save_as_image(sphere_roi, fullfile(roi_dir_img, sprintf('%dmmsphere_%s_roi.img', ...
        radiusmm, spherelabel)));
end

UPDATE: WordPress messed with the characters in the above script, so here is a link to the script file and an example spherelist.txt file.
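For completeness, here is roughly what the timecourse-extraction step mentioned above looks like once the spheres exist. It’s a sketch using Marsbar’s maroi/get_marsy/summary_data functions; the seed filename, coordinates and scan filter below are hypothetical placeholders, so adapt them to your own naming:

roi = maroi(fullfile('mat', '4mmsphere_0_-52_26_roi.mat')); % a script-generated seed (hypothetical coordinates)
P = spm_select('FPList', 'path/to/funcdata', '^swra.*\.img$'); % your preprocessed functional scans
mY = get_marsy(roi, P, 'mean'); % summarise the ROI by its mean voxel value
y = summary_data(mY); % raw timecourse, one value per scan, ready for the connectivity model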

Dropbox is a fantastically versatile piece of software based on seamless integration of user-defined folders with ‘the cloud’.  Much has been written about how it can be used for general computing, e.g. from Lifehacker:

…Dropbox instantaneously backs up and syncs your files over the internet and to any computer. After you install the application, it will create a Dropbox folder on your hard drive. Any file you put inside that folder will automatically be synced and monitored for changes, and each time a change is saved, it backs up and syncs the file again. Even better, Dropbox does revision history, so if you accidentally saved a file and wanted to revert to an old version or deleted a file, Dropbox can recover any previous version.

It also has some nice collaborative features that allow you to share documents you’re working on, pushing updated versions out to all synchronised dropbox directories as changes are made.  Crucially, whenever dropbox detects a connection to the internet, it synchronises all the files contained within the cloud-synched directories, but it doesn’t require an internet connection to work on those files.  This feature has revolutionised the way I programme, debug and transport participant data from psychological experiments.

Programming Experiments: I, like most psychologists I know, don’t programme my experiments on the machines on which I test participants.  I have a workhorse desktop machine on which I programme, and I use a number of lower-specced machines to gather data.  For instance, I have an fMRI-dedicated laptop which I take to and from the scanner, from which I present stimuli to participants, and on which I store their behavioral data in transit.

I don’t programme on the fMRI laptop because I don’t like spending lots of time working on the cramped keyboard, touchpad and small screen, and because I try to keep the number of applications installed on the machine to a minimum.  A problem arises when I need to test my programming to make sure that: a) it runs on the fMRI laptop; and b) what I have programmed on my 1920×1200 monitor translates well to presentation at 1024×768, the resolution I set the laptop to for compatibility with the scanner projection system.

It’s easy enough to save the experiment and all of its associated files onto a memory stick and transfer them to the laptop whenever I need to test it, but it’s a hassle; one that dropbox eliminates.

I have dropbox installed on both my desktop machine and the fMRI laptop.  When programming, I just set the laptop up next to my desktop keyboard and create an ‘experiment’ directory in my dropbox using my desktop.  I then programme as normal, using my desktop to edit picture stimuli using GIMP and generate instruction slides using Powerpoint, saving everything into the ‘experiment’ directory.  When it comes to testing the experiment, I simply turn to the laptop, where I find all the experiment files have been updated in real-time over wifi.  Perfect! If I run the experiment on the laptop and find that some of the images I’m using are the wrong size I can simply resize them on the desktop and try again, no memory stick required.  I’m able to debug a lot more efficiently like this – it places far less working memory load on me as I can make the required changes as I notice them, rather than once I’ve run through the entire experiment.

Running the Experiment and Transporting Participant Data: I’ve already mentioned that I turn off the laptop’s wifi connection at the scanner.  I also exit the dropbox application (which runs in the background and coordinates the cloud-synching).  The beauty of the system is that all your dropbox files remain available locally when internet connections aren’t available.  I can still run my Matlab scripts and collect data into the dropbox directory.  As soon as I’ve finished testing, I start the dropbox application, enable the wifi connection, and the new participant data gets uploaded to the cloud and pushed out to my desktop machine.  Just like that, the data is backed-up and transported to my desktop.  Again, no memory stick required.

This is just one domain of my day-to-day work that dropbox has changed for the better.  I also use the dropbox ‘Public Link’ capability to make my CV available on the web.  Instead of sending my CV to each web-site that wants to host it (e.g. the Dobbins lab website), I now provide a link to the CV in the ‘Public’ folder of my dropbox.  Whilst this difference might seem trivial, it enables me to update my web-based CV in real-time without having to e-mail the administrator of each web-site with an attachment each time I want a change pushed out.

I’m sure there are many other uses I have yet to discover and that’s the beauty of such a straightforward yet polished technology.

WARNING: When you install dropbox, you give it control over everything you put in your dropbox folder – you have to be aware that all changes made to files in your dropbox on one machine will instantaneously get pushed out to all your other machines.
Use antivirus software. If a virus makes its way into any file on your dropbox, it will get pushed out to all the other computers synced to your account.
Be disciplined about backing up your files, even cloud-synced files. Protect yourself against the accidental deletion of files in your dropbox.  Once they are deleted on one machine, they will get deleted on all your other dropbox-synced computers.  I have a recurring nightmare where I lose my experiment data because it gets deleted on the fMRI laptop (e.g. it gets stolen and, prior to selling it on, the thief deletes everything in the ‘My Dropbox’ folder).  Because I have a nightly backup running, this wouldn’t be terminal, but the prospect of it happening is still scary.



fMRI scans produce an awful lot of data.  Depending on whether you get your scanner to output 3D or 4D data, you can end up with one file per measurement, or one file per scan.  If you’re dealing with 3D data (the one-file-per-measurement option), you’re in for a long night if you decide that you don’t like the existing file-naming convention and want to replace it with something a little simpler, e.g. if you want to remove unique identifiers to allow for batch processing.  For example, the experiment I’m testing right now collects 292 measurements in each of its five functional scans; that’s 1460 files in total.  If it takes me only three seconds to rename a file manually, it’ll still take me over an hour to finish, and that’s just for one participant.

Of course, you could script something to do this pretty quickly, but there’s an even easier way: Ant Renamer.
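(For the record, if you did fancy the scripting route, a minimal Matlab sketch might look like the following. The leading-identifier pattern is an invented example, so adjust the regular expression to your own naming convention.)

% strip a hypothetical leading identifier (e.g. 'SUBJ001ABC_') from
% every .img filename in the current directory
files = dir('*.img');
for i = 1:numel(files)
    oldname = files(i).name;
    newname = regexprep(oldname, '^[A-Za-z0-9]+_', ''); % drop everything up to the first underscore
    if ~strcmp(oldname, newname)
        movefile(oldname, newname); % rename in place
    end
end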

With a few clicks, you can select all the files within a folder (and its subfolders) and carry out a pretty versatile range of file renaming tasks in seconds – it takes about 5 seconds to rename 1460 files once I tell Ant Renamer what to do.  I use it to remove unique identifiers from fMRI data filenames, but the options are there to delete the first x characters from a filename, change the extension, add incremental numbers to filenames, even use MP3 metadata to rename MP3s.

Ant Renamer is well worth investigating if you ever find yourself daunted by the prospect of file renaming in bulk – I’ve found it so useful that I keep a portable copy (no install required) on my dropbox so that I can access it on the move.

One of the most annoying and stressful things that can happen during an fMRI experiment is for system notifications, pop-ups or even the Windows taskbar to suddenly appear on the screen on which you are presenting stimuli to participants.  Here I outline a few things that I do to minimise the likelihood of this sort of disruption when running Matlab on a Windows XP machine.

1) Turn off your wireless network adapter. This reduces the processing burden on your system – crucial if you’re interested in measuring response times – and stops a lot of annoyances (Flash updates, Windows updates etc.) being pushed to your system.  My laptop has a manual switch on the exterior that I can flick to turn it off.  Alternatively, the wireless network can be disabled within Windows by navigating to Network Connections, right-clicking on the wireless network, and selecting ‘disable’.

2) Disable Real-Time Antivirus Protection and Windows Automatic Updates. This again reduces the burden on your system  and stops annoying notifications popping up.  Whatever it is, it can wait.  However, disabling real-time protection  will probably lead to an ugly warning in your system tray, but no-one needs to see that if you…

3) Turn off the ‘always on top’ property of the Windows Taskbar. Once you do this, Matlab will sit entirely on top of the taskbar, and the taskbar shouldn’t ever become visible at inopportune moments (something I inexplicably struggled with when designing my latest fMRI experiment).  Right click on the taskbar, select Properties, and untick the ‘Keep the taskbar on top of other windows’ checkbox.

4) Disable balloon tips in the notification area. Whilst you could turn off the system tray altogether, that shouldn’t be necessary if you’ve already followed step 3.  (One reason I like to keep the system tray visible is that I find it a handy way to manage wireless networks, Dropbox, etc., and I don’t want to lose that functionality entirely.)  However, to reduce the chances of anything else you haven’t already thought of ‘helpfully’ reminding you of something mid-experiment, turn off bubble notifications, as detailed in this Microsoft TechNet article.
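As one last belt-and-braces measure on the Matlab side, you can make the stimulus window itself cover everything. If you’re presenting with standard Matlab figures (Psychtoolbox manages its own fullscreen window, so this doesn’t apply there), a sketch like this hides the window chrome:

% open a black, borderless figure covering the whole screen, so anything
% that does sneak through stays hidden behind the stimulus window
fig = figure('MenuBar', 'none', 'ToolBar', 'none', ...
    'Units', 'normalized', 'OuterPosition', [0 0 1 1], ...
    'Color', 'k', 'NumberTitle', 'off');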

That should give you the best crack at getting through an experiment without an ugly, flickering Windows interruption.  Now that you’ve covered your bases, all you need to do is make sure that your Matlab coding doesn’t give you any grief – easier said than done.

UPDATE: These steps aren’t exclusive to Matlab stimulus presentation, either.  They could give you peace of mind before hooking your laptop up to give a formal presentation or job talk in Powerpoint… I’ve seen too many talks interrupted by pesky Windows Update notifications and ‘Found new wireless network’ bubbles.

Below is a list of tools I find useful or interesting.

Caveat: I use Windows XP on my current work machine.  Some of the listed tools are available only for Windows, others are geared at bringing some of the functionality of Windows 7 and Mac OS to my XP machine.

Academic Tools

Google Scholar (web)
Google’s academic search engine is a fantastic tool when you know bits and pieces about an article you’d like to get hold of, but don’t have a comprehensive reference.  I use it in combination with Wash U’s excellent library site whenever I’m after a pdf I’ve seen referenced in a talk or at lab meeting.

Publish or Perish (desktop)
A handy application that allows you to quickly explore an academic’s publication metrics.  I use it to keep track of citations, though it’s not the most reliable source of this information as it uses Google Scholar’s liberal citation counts.

GPower (desktop)
An application with a number of nifty functions related to power analyses.  I can see this being very useful when it comes to grant application time.

Effect Size Calculator (web)
Online tool for quick and dirty effect size calculation.

Gimp (desktop)
Powerful image editing desktop application.  I use this for everything from fine-tuning figures for manuscripts to adjusting the resolution of instruction slide images for experiments.  I’m not an image-editing power-user, so I tend to create the images in Powerpoint (or Statistica for graphs) before exporting them as high-resolution tiffs to touch up using Gimp.

General Windows Tools

Dropbox (web & desktop)
Dropbox has changed my whole approach to synching files across multiple computers.  I don’t need to worry about CDRs or memory sticks anymore; I can just drag something into my Explorer-integrated dropbox and see it pop up on all the other machines on which I have dropbox installed.  I recently designed a new experiment on my desktop machine, and found that I could test each iteration of the development on the laptop on which it will be run as soon as I saved the desktop version.  Even if your internet connection goes down, you still have access to your files, as local copies are always available and only updated from the cloud.  You get 2GB for free, with incremental additional space available for certain forms of software evangelism (i.e. converting your friends) and subscription access to more space.  Wonderful!

Logmein (web & desktop)
A great alternative to remote desktop.  I use this to tunnel into my work PC from home if I need to check anything or gain access to resources on the work network.

Chrome (desktop)
Google’s lightning-fast browser.  I made the switch from Firefox when the Mozilla browser started getting very slow.  Although Firefox still wins the addons/extensions war, Chrome is making vast strides here.  Extensions I use include Adblock, Google Mail Checker Plus and ForecastFox.  (I’ve also tried the even faster Opera, though it’s still a little too prone to crashes for my liking.)

Microsoft Security Essentials (desktop)
Excellent background antivirus software from Microsoft.

Revo Uninstaller (desktop)
A comprehensive uninstaller that finds and removes traces of applications missed by the built-in application uninstaller.  I use the free version with no complaints at all.

RocketDock (desktop)
A mac-style application launcher.  It’s endlessly customisable and much more useful than the Windows XP taskbar.  I’m not sure I’ll persist with it once I get onto a Windows 7 machine, though.

Stalled Printer Repair (desktop)
Portable tool that allows you to quickly flush a printer queue following a stall.  A vast improvement on the built-in printer queue manager’s efforts.

Novapdf (desktop)
I got this free from Lifehacker a while ago and it’s proved so useful I will be buying it for my next machine. It allows you to create pdfs from anything you can print – pretty useful if you’re putting documents online.

Aquasnap (desktop)
A utility that emulates Windows 7’s Aerosnap functionality.  Great if you have a large monitor and often have applications open side-by-side.

Taskbar Shuffle (desktop)
Drag and drop taskbar applications into new positions.  Not essential, but handy if you’re into organising things.

Fences (desktop)
Another organisation utility, this time for the desktop itself.  I tend to use my desktop as a holding pad for anything and everything – Fences allows me to fence sections of it off for certain types of files, e.g. pdfs, portable applications, data I’ve been sent that I need to look at etc.  Also, if you double-click on the desktop, it all disappears, giving you the illusion of being ultra-organised.

Occasionally, it’s nice to look under the bonnet and see what’s going on during any automated process that you take for granted.  More often than not, I do this when the automaticity has broken down and I need to fix it (e.g. my computer won’t start), or when I need to modify the process to make its product more useful to me (e.g. installing a TV card to make my computer more ‘useful’ to me).  This is especially true with tools such as SPM.

One of the greatest benefits of using SPM is that it’s all there, in one package, waiting to be unleashed on your data.  You could conduct all of your analyses using SPM alone, and you would never need to know how SPM makes the pretty pictures that indicate significant brain activations according to your specified model.  That’s probably a bad idea.  At the very least, you need to know that SPM is conducting lots and lots of statistical tests – regressions – as discussed in the previous post.  If you have a little understanding of regression, you’re aware that whatever isn’t fit by your regression model is called a ‘residual’, and that there are a few interesting things you can do with residuals to establish the quality of the regression model you have fit to your data.  Unfortunately, with SPM this model fitting happens largely under the bonnet, and you could conduct all of your analyses without ever seeing the word ‘residual’ mentioned anywhere in the SPM interface.
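In Matlab terms, the core idea is compact. For a design matrix X (one column per regressor) and a single voxel’s timecourse y, the fit and the leftover are just:

beta = X\y;             % least-squares estimate, one beta per regressor
residual = y - X*beta;  % the part of the timecourse the model can't explain

SPM is effectively doing this (with high-pass filtering and autocorrelation corrections on top) at every voxel in the brain.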

Why is this?  I’m not entirely sure.  During the process of ‘Estimation’, SPM writes all of your residuals to disk (in the same directory as the to-be-estimated SPM.mat file) as a series of image files:

ResI_0001.img ResI_0001.hdr
ResI_0002.img ResI_0002.hdr
ResI_0003.img ResI_0003.hdr
...
ResI_xxxx.img ResI_xxxx.hdr
(xxxx corresponds to the number of scans that contribute to the model.)

Each residual image will look something like this when displayed in SPM. You can see from the black background that these images are necessarily subject to the same masking as the beta or con images.

SPM then deletes these images once estimation is complete, leaving you to formulate a workaround if you want to recover the residuals for your model.  One reason SPM deletes the residual image files is that they take up a lot of disk space – nearly 400MB (in our 300-scan model) for each participant – which is a real pain if you’re estimating lots of participants and lots of models.

If you’re particularly interested in exploring the residual images (for instance, you can extract the timecourse of residuals for the entire run from an ROI using Marsbar), you need to tweak SPM’s code.  As usual, the SPM message-board provides information on how to do this.

You can read the original post here, or see the relevant text below:

… See spm_spm.m, searching for the text “Delete the residuals images”.  Comment out the subsequent spm_unlink lines and you’ll have the residual images (ResI_xxxx.img) present in the analysis directory.
Also note that if you have more than 64 images, you’ll also need to change spm_defaults.m, in particular the line
defaults.stats.maxres   = 64;
which is the maximum number of residual images written.
There are a few steps here:
1) open spm_spm.m for editing by typing
>> edit spm_spm
2) Find the following block of code (lines 960-966 in my version of SPM5):
%-Delete the residuals images
%==========================================================================
for  i = 1:nSres,
spm_unlink([spm_str_manip(VResI(i).fname,'r') '.img']);
spm_unlink([spm_str_manip(VResI(i).fname,'r') '.hdr']);
spm_unlink([spm_str_manip(VResI(i).fname,'r') '.mat']);
end
and comment it out so it looks like:
%-Delete the residuals images
%==========================================================================
%for  i = 1:nSres,
%    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.img']);
%    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.hdr']);
%    spm_unlink([spm_str_manip(VResI(i).fname,'r') '.mat']);
%end
3) open spm_defaults.m for editing by typing

>> edit spm_defaults

4) Find the following line (line 35 in my version of SPM5):

defaults.stats.maxres   = 64;

and change to:

defaults.stats.maxres   = Inf;

5) Save both files and run your analysis.

Make sure that once you no longer need to see the residual images, you revert these modifications, otherwise you’ll run out of hard-disk space very, very quickly!
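Once the ResI images survive estimation, extracting an ROI’s residual timecourse follows the same Marsbar pattern sketched in the sphere-ROI post above; the only real change is pointing the file filter at the residuals (the directory and ROI filenames here are placeholders):

P = spm_select('FPList', 'path/to/analysis', '^ResI_.*\.img$'); % the preserved residual images
res = summary_data(get_marsy(maroi('my_seed_roi.mat'), P, 'mean')); % residual timecourse for the ROI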