Just stumbled on this terrific “ironic” guide for peer reviewers who want to reject a paper but lack the statistical expertise to come up with valid criticisms. It appeared in a neuroscience journal, but it’s universally relevant (there’s no technical neuroscience material). It’s a great compilation of some very common statistical misunderstandings (e.g., if you think that statistically significant results from small samples are especially unsafe, you’ve got it precisely backwards), and it’s also a very sharp, funny, and more than a little true satire on the back-and-forth of the peer review process. An absolute must-read.
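To see the intuition behind that small-samples point, here’s a minimal sketch (my own illustration, not taken from the linked guide): for a simple one-sample z-test with known standard deviation 1, a result can only be significant if the observed mean clears a threshold that shrinks as the sample grows. So a significant result from a *small* sample necessarily reflects a large observed effect, whereas a huge sample can reach significance on a trivially small one.

```python
# Illustration (my assumption, not the guide's own example): with a
# one-sample z-test, sd = 1, two-sided alpha = 0.05, significance
# requires |sample mean| > z_crit / sqrt(n). The smallest effect that
# can possibly be significant is therefore LARGER for small n.
from math import sqrt
from statistics import NormalDist

alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96

for n in (10, 100, 1000):
    min_effect = z_crit / sqrt(n)  # smallest |mean| that reaches significance
    print(f"n = {n:4d}: minimum significant |effect| = {min_effect:.3f}")
```

The threshold falls roughly as 1/√n, which is why significance in a tiny study is, if anything, evidence of a big effect rather than an especially flimsy one (setting aside selection and publication-bias issues, which the guide also skewers).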