The stir caused by E. O. Wilson’s editorial on mathematics is nothing compared to the explosive news in the economics blogosphere this week. National governments around the world have been cutting spending in an attempt to bring down debt, and justifying this policy by appeal to a recent paper by two famous Harvard economists, Carmen Reinhart and Ken Rogoff. The paper has been widely interpreted (including by Reinhart and Rogoff themselves in various editorials) as showing that sufficiently high debt causes slow economic growth. That interpretation of the paper’s empirical results has always been controversial (there’s a very strong case that causality mostly runs the other way, so that trying to cut your way out of debt is actually counterproductive). But it turns out that that’s almost beside the point, because “Reinhart-Rogoff” (as the paper’s known) is rife with really basic, inarguable, out-and-out errors. Like an Excel spreadsheet goof that accidentally omitted a bunch of data from the analysis, data that totally changes the results if it’s included! The best part? The errors were discovered by a graduate student who was assigned the task of replicating Reinhart & Rogoff’s analyses as an exercise for a class.
Why am I telling you all this? Because I think it’s relevant to ecology, in several ways.
First, advocates of routine data sharing just got their go-to illustration of why data sharing matters. Never mind the abstract possibility that someone might someday want to make use of your data in a meta-analysis or whatever. There’s the very real possibility that you might have totally screwed up, and the rest of the world needs to be able to check your work!
Second, advocates of conducting all analyses using reproducible programming (say, in R), along with mandatory sharing of the code, just got their go-to illustration of why that matters. I admit that I still occasionally use Excel to do analyses. I always fill my spreadsheets with safeguards like checksums, which apparently makes me better at Excel than Reinhart and Rogoff. But yeah, I probably shouldn’t be using Excel for anything I might actually publish.
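The checksum idea above deserves a concrete sketch. Here's a hypothetical illustration in Python (not a spreadsheet, and the country labels and growth numbers are made up, not Reinhart and Rogoff's actual data): compute the same aggregate two independent ways and insist they agree, so that an accidentally omitted row makes the analysis fail loudly instead of silently skewing the result.

```python
# Hypothetical "checksum" safeguard: cross-check an aggregate so a
# dropped row (the Reinhart-Rogoff-style spreadsheet goof) raises an
# error instead of quietly changing the answer.
from statistics import mean

growth_by_country = {          # made-up illustrative numbers
    "A": 2.1, "B": 1.4, "C": -0.3, "D": 3.0, "E": 0.8,
}

EXPECTED_N = 5  # how many countries the analysis is supposed to cover

values = list(growth_by_country.values())

# Safeguard 1: did we actually include every row we meant to?
assert len(values) == EXPECTED_N, (
    f"expected {EXPECTED_N} countries, got {len(values)}; "
    "some rows are missing from the analysis!"
)

# Safeguard 2: recompute the mean from a running total and compare it
# against the library's answer; any disagreement flags a coding slip.
total = 0.0
for v in values:
    total += v
mean_manual = total / len(values)

assert abs(mean_manual - mean(values)) < 1e-12

print(f"mean growth across {len(values)} countries: {mean_manual:.2f}")
```

The point isn't the arithmetic, which is trivial; it's that every summary statistic gets an independent cross-check, which is exactly the habit that would have caught the omitted-rows error.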
Third, Reinhart and Rogoff’s paper, while published in a leading journal, wasn’t actually peer-reviewed (it was published in a special issue dedicated to conference proceedings). I leave it to you to argue about the lessons here for pre-publication peer review, post-publication review, and the risks and benefits of publishing non-peer-reviewed papers in a citable form. Because I could see drawing various lessons from this incident; it’s a bit of a Rorschach blot, I think. ;-)
UPDATE: For discussion of these first three points, see The Monkey Cage, where statistician Victoria Stodden argues that reproducing computational results needs to become a routine part of peer review. She discusses what it would take to make that happen.
Fourth, Reinhart and Rogoff ~~responded in model fashion, admitting their errors and withdrawing their paper~~ embarrassed themselves by continuing to stand by their results (see here and here). Can't say I'm surprised to see prominent people who are heavily invested in a claim defend it long past the point when they really should've given it up, as the same thing happens in ecology. So to students: don't ever take anybody's word for anything just because they're famous. And don't ever assume that just because something's been published, it must be true and you're entitled to rely on it without a second thought. Don't understand something? Ask! Think something you read or heard doesn't sound right? Check it! Think somebody famous may have made a serious mistake, even a boneheaded one? Well, maybe they did!