In an attempt to combat publication biases, “fishing” for statistical significance, and other common practices that skew the published literature as a record of the science we’ve conducted (see this old post of mine), a political science group called EGAP is proposing to establish a registry of planned political science experiments. The registry would oblige authors to specify in advance the purpose of the experiment, the data to be collected, the hypotheses to be tested, the analyses to be conducted, and the conditions under which the hypotheses would be accepted or rejected. The idea is that, once the experiment is complete and the paper written, editors and reviewers could compare the paper to the version of the study described in the registry to see if the authors actually did what they originally planned, and if not, why not (the registry information would only become public after publication, or after some specified period of time). See The Monkey Cage for discussion.
On a related note, in psychology the journal Psychological Science is discussing a number of proposals to reduce statistical biases in the literature, including requiring authors to disclose details of the research process, such as the results of unreported analyses. For discussion see here and here. Simmons et al. (authors of the now-famous paper on which my old post commented) have a new, short follow-up arguing forcefully in favor of disclosure requirements, which they compare to food labels.
Do we need a “planned experiment registry” or “research process disclosure statements” for ecology? Basically, it’d be a way to strongly encourage experimentalists to use classical, best-practice approaches to experimental design and statistical analysis, in particular pre-specification of hypotheses and tests. I say “strongly encourage” rather than “force” because authors would still be free to process and analyze the data however they wanted, to address whatever hypotheses they wanted. They’d just have to convince reviewers and editors that any deviation from their original plan was justified. One advantage of a disclosure requirement over a registry is that the disclosure requirement is more easily implemented.
As someone who just reviewed a paper based on an experiment that was originally conducted for one purpose, and which was subsequently used by the authors for a completely different purpose, I think there’s a lot of merit to the idea of a registry or disclosure statements. The original version of the paper was nearly impossible to interpret in key places because many of the design choices made no sense, given the stated purpose of the experiment. It took two rounds of review before I finally felt I had enough background information to understand what had actually been done and why. This is an extreme example, but not a rare one. It’s common for authors (including me) to only decide what analyses to do, and for what purpose, after they’ve completed the experiment and looked at the data. I emphasize that this sort of thing isn’t unethical. I’m just saying that, as a reviewer, if you don’t know about this sort of thing, you’re not in a position to fully evaluate the paper.
Again, this isn’t about requiring anyone to do their analyses or write their paper a certain way. It’s not about banning exploratory analyses. It’s simply a requirement of openness (a point Simmons et al. also stress in their follow-up). It’s a requirement to put your cards on the table. We teach our undergraduate students that the Methods section is where you describe what you did, so that your experiment can be evaluated by others and (in principle) replicated. Insofar as the Methods section omits lots of stuff, it doesn’t fulfill its purpose.
Could registries or disclosure statements be gamed or falsified? Sure. But as Simmons et al. point out in their follow-up, food labels can be gamed or falsified too, yet we trust them and find them useful. And keep in mind that, if you register an experiment, or fully disclose your methods, and then analyze it as planned and get statistically compelling results, that’s something that should really impress reviewers and editors. So there’s a potential upside to you as an author in registering your experiments, or in disclosing that your methods were exactly as stated in the paper, with nothing left on the “cutting room floor”.
Many ecology journals now require post-publication deposition of raw data in some sort of repository. Maybe it’s time to do something similar with our plans to collect and analyze those data in the first place.