Only 80% of scheduled participants produce datasets where data from all runs is usable.  That’s the conclusion I have drawn from my limited experience of scanning participants for research here at Washington University.

My running totals:
Study 1: Usable data from 19 participants of 24 booked – 79%
Study 2: Usable data from 14 participants of 17 booked – 82%
Study 3 (so far): Usable data from 6 participants of 8 booked – 75%

That’s 39 from 49, approx. 80% overall.

Reasons for unusable data include script and scanner problems, participants performing at or below chance, participants falling asleep, participants needing to end the experiment early, and participants failing to show up at all. The no-show scenario isn’t too much of a problem if you are billed only for the scanner time actually used (which is what happens here), though it is rearing its ugly head for me now that I am coming to the end of my time at Washington University – every absent participant reduces my final sample size by one.

All of which means the 20 slots I was counting on for Study 3 should yield a sample of about 16 – I reckon that’s on the low end of fine, but still fine. We’ll see.
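As a quick sanity check, the running totals above can be tallied in a few lines of Python (all figures are taken from the post; the rounding to a whole participant for the projection is my assumption):

```python
# Booked and usable participant counts, per study, from the running totals above
booked = {"Study 1": 24, "Study 2": 17, "Study 3 (so far)": 8}
usable = {"Study 1": 19, "Study 2": 14, "Study 3 (so far)": 6}

# Per-study usable-data rates
for study in booked:
    rate = usable[study] / booked[study]
    print(f"{study}: {usable[study]}/{booked[study]} = {rate:.0%}")

# Overall rate across all studies
total_usable = sum(usable.values())
total_booked = sum(booked.values())
overall = total_usable / total_booked
print(f"Overall: {total_usable}/{total_booked} = {overall:.0%}")

# Projected final sample for Study 3, given 20 booked slots at the overall rate
print(f"Expected usable datasets from 20 slots: {round(20 * overall)}")
```

Running this reproduces the numbers in the post: roughly 80% overall (39/49), and about 16 usable datasets from 20 booked slots.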
