Major screw-up in economics, and what it means for ecology (UPDATED)

The stir caused by E. O. Wilson’s editorial on mathematics is nothing compared to the explosive news in the economics blogosphere this week. National governments around the world have been cutting spending in an attempt to bring down debt, and justifying this policy by appeal to a recent paper by two famous Harvard economists, Carmen Reinhart and Ken Rogoff. The paper has been widely interpreted (including by Reinhart and Rogoff themselves in various editorials) as showing that sufficiently high debt causes slow economic growth. That interpretation of the paper’s empirical results has always been controversial (there’s a very strong case that causality mostly runs the other way, so that trying to cut your way out of debt is actually counterproductive). But it turns out that that’s almost beside the point, because “Reinhart-Rogoff” (as the paper’s known) is rife with really basic, inarguable, out-and-out errors. Like an Excel spreadsheet goof that accidentally omitted a bunch of data from the analysis, data that totally changes the results if it’s included! The best part? The errors were discovered by a graduate student who was assigned the task of replicating Reinhart & Rogoff’s analyses as an exercise for a class.

Why am I telling you all this? Because I think it’s relevant to ecology, in several ways.

First, advocates of routine data sharing just got their go-to illustration of why data sharing matters. Never mind the abstract possibility that someone might someday want to make use of your data in a meta-analysis or whatever. There’s the very real possibility that you might have totally screwed up, and the rest of the world needs to be able to check your work!

Second, advocates of conducting all analyses using reproducible programming (say, in R), along with mandatory sharing of the code, just got their go-to illustration of why that matters. I admit that I still occasionally use Excel to do analyses. I always fill my spreadsheets with safeguards like checksums, which apparently makes me better at Excel than Reinhart and Rogoff. But yeah, I probably shouldn’t be using Excel for anything I might actually publish.
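To make that concrete, here is a minimal sketch in R of the sort of scripted safeguard I have in mind. The file and column names are hypothetical, invented purely for illustration:

```r
# Minimal sketch, assuming a hypothetical CSV with columns
# "growth" and "debt_category" (names invented for illustration)
dat <- read.csv("growth_by_country.csv")

# The analysis: mean growth within each debt category
means <- aggregate(growth ~ debt_category, data = dat, FUN = mean)

# The safeguard: confirm every row got counted, so that no data can be
# silently dropped from the analysis (the Reinhart-Rogoff goof)
stopifnot(sum(table(dat$debt_category)) == nrow(dat))

print(means)
```

In a script like this every step is written down and re-runnable, so anyone with the data can check the work.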

Third, Reinhart and Rogoff’s paper, while published in a leading journal, wasn’t actually peer-reviewed (it was published in a special issue dedicated to conference proceedings). I leave it to you to argue about the lessons here for pre-publication peer review, post-publication review, and the risks and benefits of publishing non-peer-reviewed papers in a citable form. Because I could see drawing various lessons from this incident; it’s a bit of a Rorschach blot, I think. 😉

UPDATE: For discussion of these first three points, see The Monkey Cage, where statistician Victoria Stodden argues that reproducing computational results needs to become a routine part of peer review. She discusses what it would take to make that happen.

Fourth, Reinhart and Rogoff ~~responded in model fashion, admitting their errors and withdrawing their paper~~ embarrassed themselves by continuing to stand by their results (see here and here). Can’t say I’m surprised to see prominent people who are heavily invested in a claim defend it long past the point when they really should’ve given it up, as the same thing happens in ecology. So to students: don’t ever take anybody’s word for anything just because they’re famous. And don’t ever assume that just because something’s been published, it must be true and you’re entitled to rely on it without a second thought. Don’t understand something? Ask! Think something you read or heard doesn’t sound right? Check it! Think somebody famous may have made a serious mistake–even a boneheaded mistake? Well, maybe they did!

27 thoughts on “Major screw-up in economics, and what it means for ecology (UPDATED)”

  1. There is an aspect of scientific integrity that was, perhaps very wisely, almost not touched on in this post. In medical research there is a simply remarkable level of correlation between the economic interests funding privately funded drug research and the efficacy results demonstrated. In this clear case of economics malpractice and, IMHO, scientific prostitution, the funding came at least in part from the Peterson Institute and the American Enterprise Institute, the latter founded in part by the Koch brothers. One funder is a deficit-obsessed billionaire, and the other is backed by billionaire libertarian industrialists and anti-tax zealots. Sadly, I have to class these scientific prostitutes far lower than sexual prostitutes, whom I find to be at least morally ambiguous.

    • Yes, I intentionally avoided getting into these sorts of issues. This isn’t the best occasion for raising them. Reinhart and Rogoff have done careful work in the past; they aren’t conservative hacks. And as far as I know they aren’t in the pay of any funder with a vested interest. Even if they were, the relevance to ecology would be questionable, since the vast majority of ecology research is not paid for by funders with vested interests.

      • Jeremy – the vast majority of the ecology that you are interested in and that gets published, maybe. Great summary, and I agree on all points except this comment! The articles in the journals that you read are not funded by interested parties. But a huge amount of ecology is funded by interested parties; the work just never gets past a technical document. For example, Walmart wants to build a new store, needs an environmental impact assessment done at the site, and farms this out to an environmental consulting company. And of course a lot of governmental work in fisheries and wildlife management is cross-funded, and even if government funded, there is intense pressure to get certain results. And like the studies that reveal sex discrimination by blinding judges to the candidates’ sex, I’m sure we all think we are not prone to these influences, but that is just our minds making us feel better.

      • Yes Jeff, you’re right that there is a lot of contract research and consulting in ecology, the outputs from which often don’t end up in the peer-reviewed literature. And yes, that work does matter, and certainly can be influenced by the interests of whoever’s funding it. Good point.

  2. This Excel error doesn’t really strike me as a good example of why we shouldn’t use that program. The error in this case is equivalent to getting the limits of a for-loop wrong or making a mistake with the indexing of a vector in R, C, BUGS or whatever program I might use. That kind of error can occur regardless of the platform; the solution is to make the data and analysis available, which can still be done even when using Excel.

    The bigger problems with using Excel are the documented cases in which Excel itself, not the person, made the computational error.

    By the way, as an example of making data and code available, see Geoff Heard’s recent post:

    http://gwheardresearch.wordpress.com/2013/04/21/code-and-data/

    • Yes, I agree with Michael. I’ve seen people using this on twitter to argue for R’s superiority, but, to me, this case doesn’t make that argument. As Michael said, one can just as easily make a mistake indexing in R as one can in Excel.

      • I disagree. A good R script will use smart indexing that is based on the data and, thus, changes with new data. This is very easy to implement properly. A similar type of smart indexing might be possible in Excel, but it is not nearly as easy (as far as I know). Incidentally, Excel does warn you when a formula omits adjacent cells containing data, but the warning is pretty weak.

        Certainly, all kinds of other mistakes are possible. But in this specific instance, a simple R script (even a poorly constructed one) really could have made a difference; a minimal sketch of the contrast follows.
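        Here’s that sketch, with made-up numbers (the Excel analogue of the hard-coded version would be an AVERAGE formula over a fixed cell range):

        ```r
        # Made-up growth figures for five hypothetical countries
        growth <- c(2.1, 1.8, -0.4, 3.0, 2.5)

        # Hard-coded indexing: silently ignores any values beyond the first
        # three, just like a fixed cell range in a spreadsheet formula
        mean(growth[1:3])

        # Data-driven indexing: the range adapts to the data, so newly
        # added values can never be silently left out
        mean(growth)
        mean(growth[seq_along(growth)])  # equivalent, indexing made explicit
        ```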

    • Good point, although I’ve seen it argued both ways in the econ blogosphere. It basically seems to come down to “you can make a mistake in any software environment” vs. “it’s especially easy to make, and fail to detect, certain sorts of errors in Excel”. I’m not a good enough user of any software to have strong views on which side is more right.

      Note that econ and finance types are primed to see Excel as the problem here because of another newsmaking Excel screwup a few months back. A trader in London for one of the big international investment banks lost something like $10 billion *on one trade*, in large part because of a typo in an Excel spreadsheet.

  3. Very well said – I was recently planning to use a modelling framework for an analysis and, when trying to replicate it, found out that it unfortunately had some serious bugs! Fortunately, unlike in the above case, the authors have openly admitted their mistakes, and are more than happy for me to go ahead and publish my own (extended) version of their framework to supersede theirs in the literature. Interestingly, if the computational results had been replicated by reviewers, the mistakes would almost certainly never have made it through peer review!

    So, there’s another side to this – it’s always worth looking closer because if you do, not only does it help the cause of science, but (more cynically) it can mean getting a relatively easy publication for your efforts!

    • Good point re: catching an error in someone else’s work being a good way to get an easy paper.

      Indeed, that’s sort of how I got my paper on the intermediate disturbance hypothesis in TREE recently. That paper doesn’t say anything new. All it does is point out to a broad audience something that should’ve already been widely known: the IDH has been refuted, theoretically and empirically. That was an easy paper to write; it just involved saying in plain language things that had already been said in the literature.

  4. As a young grad student, I recall almost the exact moment when it dawned on me – just because it’s published doesn’t make it true. A few years later, I watched as a young MSc student complained about how such-and-such numbers could be included without sufficiently detailed supporting methods describing the data collection protocols and the calculations of population size. For her, that was the “a-ha” moment when she realized that a published paper is not perfect. Instilling this notion in (under)graduate students, teaching them to think critically and to be knowledgeable enough to call BS when they see it, should be a pillar of (under)graduate programs.

  5. Interesting post, as usual (I have been a silent reader for quite a while now).
    This post and the comments remind me of a story.
    Let’s say that a Young and Naive Researcher (YNR) replicates, for fun, part of a very influential paper from 10 years ago. Let’s say that, after 100 checks, YNR still cannot replicate the figure that goes with the paper and contacts the Very Influential Authors (VIA) of the aforementioned very influential paper.
    After a dozen emails exchanged, the VIA write “oh of course, you forgot to [insert operation not mentioned in the paper]”. What should YNR do?
    Well, of course YNR thought it was too late to write a follow-up paper after 10 years and > 100 citations, as the figure was – after all – not really a big deal. Maybe YNR’s duty would have been to report the error? But where?
    How often does that happen, I wonder? I can’t help thinking this type of check-recheck of details, even trivial ones, should have a forum in science.

    • Good questions, to which I don’t have good answers.

      Further, your scenario is in some ways not even the most difficult case. In your scenario, VIA corresponds with YNR, and eventually VIA reveals the source of the failure to replicate. Much more likely, I think, are scenarios in which VIA just ignores YNR, or says he’ll get back to YNR but then never does, or sends YNR some information that doesn’t actually address YNR’s question, and so on. What does YNR do in that case? In particular, at what point does YNR start to worry that he’s getting the runaround because there’s something wrong with the paper that VIA knows about but doesn’t want revealed? I don’t have an easy answer.

      Indeed, something like this is what happened with Reinhart-Rogoff for a while. Their paper is three years old, but despite the journal’s data-sharing policies, Reinhart & Rogoff did not make the original data and spreadsheet available until just recently, and only after many people (faculty and students) had written to them asking for it.

  6. For anyone interested in debating issues of pre- and post-publication peer review, what our attitude towards preprints should be, etc., you’ll probably find this new Paul Krugman post of interest:

    http://krugman.blogs.nytimes.com/2013/04/22/understanding-the-nber/

    He talks about how Reinhart-Rogoff was originally disseminated as a working paper, and about the attitude towards working papers in economics (where preprints are a much more important part of scholarly communication than they are in ecology).

  7. Another aspect of this incident, which I didn’t mention in the post. The Excel goof was merely the most glaring problem with Reinhart-Rogoff, but far from the only one. Another was a very questionable (at best) weighting scheme, which ended up weighting a single year of bad growth + high debt in one country as heavily as many years of decent growth + high debt in another country. Reinhart & Rogoff described their weighting scheme in their paper–but in an ambiguous way. Indeed, the most natural reading of the ambiguous passage in their methods is as a description of a much more obvious and reasonable weighting scheme. (A toy illustration of how much the choice of weighting scheme matters follows at the end of this comment.)

    This sort of ambiguity is the kind of thing a good pre-publication peer reviewer will catch (but which probably won’t be caught otherwise, since unless you’re reviewing a paper you generally don’t read it as carefully as a reviewer does). More than once as a reviewer for leading ecology journals I’ve run into ambiguously phrased methods. And more than once, authors have either clarified in a way that confirmed my worst fears, or been unable or unwilling to clarify to my satisfaction (which is another way of confirming my worst fears…)
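    Here’s the toy illustration promised above, a minimal R sketch with made-up numbers (the real dataset’s values differ, but the arithmetic works the same way):

    ```r
    # Hypothetical high-debt observations: Country A has 19 years of decent
    # growth, Country B a single very bad year (all numbers invented)
    growth_A <- rep(2.5, 19)
    growth_B <- -7.9

    # Weight every country-year equally: the single bad year barely matters
    mean(c(growth_A, growth_B))              # 1.98

    # Reinhart-Rogoff-style: average within each country first, then across
    # countries, so B's one bad year counts as much as all 19 of A's years
    mean(c(mean(growth_A), mean(growth_B)))  # -2.7
    ```

    Same made-up data, opposite-signed answers for average growth under high debt.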

  8. Pingback: Python Compliments R’s Shortcomings | Climate Change Ecology

  9. Pingback: Zombie ideas in ecology: the local-regional richness relationship | Dynamic Ecology

  10. Pingback: Friday links: gaming a game theory exam, “alpha females” in academia, and more | Dynamic Ecology

  11. Pingback: Python Complements R’s Shortcomings | spider's space

  12. Pingback: A proposal for replicating published statistical analyses in ecology & evolution | Dynamic Ecology

  13. Pingback: The downside of data sharing: more false results | Dynamic Ecology

  14. Pingback: Does scientific controversy help or hurt scientific careers? | Dynamic Ecology
