I’ve written in the past about the growing movement in social science for transparency in research, particularly the idea of pre-registration of planned studies and their statistical analyses, so as to combat fishing for statistical significance (intentional or otherwise). Here’s a new series of articles on this movement. Worth checking out, and maybe something we should be thinking about in ecology too.
Writing in Science this week, here’s ace ecologist Tad Fukami on his prize-winning undergraduate course integrating inquiry-based learning and real research. Great stuff, although I have to say that not everyplace has the resources required to develop and teach this sort of course the way Tad did (he’s at Stanford). But the basic approach could be adapted to other contexts, I’m sure.
Faculty of 1000 Research is a publishing service based on post-publication review rather than pre-publication review. Or rather, it’s based on post-publication “review”: there’s strong evidence that reviewers don’t read the papers carefully. Tim Vines explains why, and discusses the likely consequences. I agree.
More on how to determine authorship: Adam Marcus and Ivan Oransky, who run the Retraction Watch blog, discuss authorship rules put forward by big publishers like Elsevier, and by the International Committee of Medical Journal Editors (ICMJE). One controversial issue, which we discussed in a recent post, concerns whether you should be an author if your only role was to secure funding for the work, and/or provide general supervision of the research group. The ICMJE says “no”.
Special issue of Nature this week on open access publishing and other changes in scientific publishing practice. I found much of the material familiar, but this piece was thought-provoking. It argues that many current arguments about where to publish papers, how best to filter the published literature, how to determine authorship, etc., will be moot in the future because papers themselves will no longer be a primary research output. Indeed, there will no longer be a distinction between the process of research and the output. Could well turn out to be right, though when it will turn out to be right is another question (I’d say we’re a generation away at least).

Although, as I’ve argued before and will argue again in a forthcoming post, when it comes to filtering information, I think the brave new world of crowd-sourced, algorithmic filtering will look rather more like our current one than many people would like to believe. And the author actually seems to more or less agree, although his choice of words is sometimes unfortunate (his notion that currently we identify authoritative scholars “subjectively”, whereas algorithms that quantify “the wisdom of the crowd” do so objectively, indicates unfortunate confusion about what “subjective” and “objective” mean…)

The other thought I had was that blurring the distinction between scientific process and scientific product seems to me to be in a bit of tension with the growing push for pre-registration of planned studies. There are very strong statistical arguments for deciding every aspect of study design and analysis in advance, for building a wall between exploratory and confirmatory analyses, or between hypothesis generation and hypothesis testing. It’s the only really sure way to avoid getting fooled by randomness, getting fooled into seeing “patterns” or “signals” that are really just noise.
Is a world in which the distinction between scientific “process” and “product” is blurred also a world in which the distinction between hypothesis generation and hypothesis testing gets blurred (the hypothesis test being the “final product” that arises from the hypothesis-generation process)? Because if so, that’s a really serious problem.
Andrew Gelman on why he disagrees with “classical” or “strong” Bayesian philosophies of statistical inference. Much of it he’s said before, but it’s still worth a look. I agree with much of it, although not quite all. Andrew likes to emphasize that no effect is ever truly zero, and attaches a lot of importance to this, but I think his point of view here reflects the social science problems on which he focuses. I think “true zeroes” aren’t all that rare in science, and that ruling out a null hypothesis of “zero effect” or “no signal” often is of scientific interest. For instance, here’s Deborah Mayo discussing the search for the Higgs boson and how the success of that search ultimately comes down to standard frequentist statistical tests of null hypotheses of “zero effect”.
Another door opens for MOOCs in the US. The US Dept. of Education seems to be encouraging universities to apply for Title IV funding to develop MOOCs and other programs that certify that students have knowledge of a certain subject, or certain skills.
A sobering personal story of how difficult it is to find a postdoc, faculty position, or really any job doing ecology research in Canada these days. Even for the most highly-qualified people.