Here’s an interesting wikibooks page detailing how you can make SPM faster.

http://en.wikibooks.org/wiki/SPM/Faster_SPM

Some of the tweaks simply involve adjusting spm_defaults.m so that model estimation makes use of the amount of RAM you have installed.  Others involve the more costly (and potentially hugely beneficial?) purchase of the Parallel Computing Toolbox, which lets you use many cores in a single machine, or many machines coordinated by a server.  I’ll certainly be taking a look at these tweaks in the coming weeks and months.
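As a rough illustration of what the Parallel Computing Toolbox buys you, here is a sketch (not SPM’s actual code) of farming per-subject jobs out across cores; `first_level_job` is a hypothetical placeholder for whatever batch you would otherwise run in a plain for loop, and older MATLAB releases use `matlabpool open`/`matlabpool close` instead of `parpool`:

```matlab
% Hypothetical sketch: distribute per-subject first-level analyses across
% workers with the Parallel Computing Toolbox.
subjects = {'sub01', 'sub02', 'sub03', 'sub04'};

parpool;                       % start a worker pool (one worker per core by default)
parfor s = 1:numel(subjects)   % iterations run concurrently on the pool
    first_level_job(subjects{s});   % placeholder for your actual batch function
end
delete(gcp('nocreate'));       % shut the pool down when finished
```

Because each subject’s first-level analysis is independent of the others, this kind of loop parallelises with essentially no changes beyond swapping `for` to `parfor`.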

EDIT: Changing the exponent in the defaults.stats.maxmem parameter from its default value of 20 (i.e. 2^20 bytes = 1MB) to 33 (2^33 bytes = 8GB; as outlined in the screengrab from the wikibooks site below) looks to have sped up model estimation by roughly a factor of 10.

“A defaults variable ‘maxmem’ indicates how much memory can be used at the same time when estimating a model. If you have loads of memory, you can increase that memory setting in spm_defaults.m.”

Assuming you have a large amount of RAM to utilise, this is a HUGE time-saving tweak.  Even if you don’t, you can likely still speed things up considerably by specifying a value greater than the meagre 1MB (2^20 bytes) SPM allocates by default.
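Concretely, the edit in spm_defaults.m looks like this (the 2^33 figure assumes you have at least 8GB of RAM to spare; pick an exponent that fits your machine):

```matlab
% In spm_defaults.m -- maximum memory (in bytes) SPM will use at once
% during model estimation.  The shipped default in the version discussed here:
% defaults.stats.maxmem = 2^20;   % 1 MB

defaults.stats.maxmem = 2^33;     % 8 GB; try 2^31 (2 GB) or 2^32 (4 GB) on smaller machines
```

The value is a power of two, which is why the post talks about changing “20” to “33” rather than the byte counts themselves.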

SEE ALSO: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=SPM;863a049c.1105, which has the following recommendation:

[change] line 569 in spm_spm from:

nbz = max(1,min(zdim,floor(mmv/(xdim*ydim)))); nbz = 1; %-# planes

to:

nbz = max(1,min(zdim,floor(mmv/(xdim*ydim)))); %-# planes

[i]n order to load as much data as possible into the RAM at once.
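To see why that one-line change matters: nbz is the number of z-planes spm_spm loads per pass, and the hard-coded “nbz = 1” immediately overwrites the computed value, forcing plane-by-plane reads no matter how large maxmem is. A rough worked example (the dimensions are made up, and this is not SPM’s exact bookkeeping; mmv stands for the number of voxel values that fit in the memory budget):

```matlab
% Illustrative only -- not SPM's exact internal accounting.
xdim = 64; ydim = 64; zdim = 40;     % made-up image dimensions
mmv  = 2^33 / 8;                     % values fitting in 8 GB at 8 bytes (double) each

nbz = max(1, min(zdim, floor(mmv/(xdim*ydim))));   % z-planes loaded per pass
% Here floor(mmv/(xdim*ydim)) vastly exceeds zdim, so nbz = zdim and the whole
% volume is read in a single pass; the removed 'nbz = 1' would instead have
% forced 40 separate passes over the data.
```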
