Tuesday, September 11, 2012

Post Hoc Power Analysis Article


A collection of insights on the pointlessness of using post-hoc power analysis to try to salvage a non-significant result from (insert virtually any Fisher-style significance test here).

I've seen this style of argument made regularly over the past few years at the honors presentations... it goes something like this.

".... I did a (massively complex research design) and then ran a MANCOVA/ANOVA etc which failed to show any significant relationships between any of my variables due to tiny effect sizes.  So I did a power analysis showing that if only I had x more participants it would have worked out differently...."

It always bugged me.  Firstly, because they have the usual infatuation with testing the significance of the variance between derived numbers based on (often) questionable data processing .... while being oblivious to the issue of effect magnitude.
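The circularity of the "if only I had more participants" move can be shown directly: the so-called observed power, computed by plugging the observed effect back in as if it were the true effect, is just a deterministic transformation of the p-value. A minimal pure-Python sketch for a two-sided z-test (the 1.96 critical value assumes alpha = 0.05; the function name is my own):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def observed_power(z, z_crit=1.96):
    """Post-hoc 'observed' power of a two-sided z-test, treating the
    observed statistic z as if it were the true effect size.
    z_crit = 1.96 is the two-sided critical value for alpha = 0.05."""
    return phi(z - z_crit) + phi(-z - z_crit)

# A result sitting exactly on the significance boundary (z = 1.96,
# p = 0.05) always has observed power of about 0.50, and any
# nonsignificant result is "underpowered" by construction.
print(round(observed_power(1.96), 2))  # 0.5
print(round(observed_power(1.00), 2))  # 0.17
```

So the power analysis tells the student nothing they didn't already know from the p-value itself: a nonsignificant result will always look underpowered when you assume the observed effect is real.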

I have come to like confidence intervals as a better way to present arguments about whether or not an effect really occurred.  I just wish some more attention was paid to effect magnitudes.
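To illustrate why, here is a minimal sketch with made-up data: rather than a bare p-value, the interval reports the size of the difference together with the uncertainty around it. (Welch-style standard error; the t critical value is hard-coded as an approximation for these degrees of freedom.)

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical scores for two groups (invented data for illustration).
a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7]
b = [4.9, 4.6, 5.1, 4.8, 4.5, 5.0, 4.9, 4.4]

diff = mean(a) - mean(b)                                 # effect magnitude
se = sqrt(stdev(a)**2 / len(a) + stdev(b)**2 / len(b))   # Welch standard error
t_crit = 2.145  # approx. two-sided 95% t critical value for ~14 df

lo, hi = diff - t_crit * se, diff + t_crit * se
print(f"difference = {diff:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The reader immediately sees both how big the effect is and how precisely it was estimated, which is exactly the information a bare significance verdict throws away.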
