tl;dr: Making scientific debate faster can be a good thing, but only in combination with other good things. But "speed plus other good things" may not be a stable combination, because changes in technology and norms of scientific practice that promote speed also tend to inhibit those other good things.
I have various old posts noting that we’re living through a culture change–and thus a culture clash–when it comes to post-publication “review” (see here, here, and here). The latest example is online discussion of a high-profile PNAS paper by Case & Deaton, looking at variation in human mortality rates across different countries, and across different ethnic groups in the US. Andrew Gelman drove a lot of this discussion (here’s a recent post in his series).
But here I’m more interested in the associated discussion about norms of scientific communication. Andrew got upset with the paper’s authors, who complained in passing that the online discussion of their paper was moving too fast to be an effective scientific discussion.
I appreciate where Andrew’s coming from. But I think he went a bit too far on this one; Case & Deaton have a point. Noah Smith of Noahpinion articulates that point pretty well, as do many of Andrew’s commenters (who seem to disagree with him more on this one than they usually do). But I think Jeff Leek of Simply Statistics articulates it best. It’s a great summary of the culture clash, and includes thoughtful advice on what to do if there’s a fast-moving online discussion about your latest paper. So I’m going to outsource this to him, with a few comments of my own at the end:
It is much, much easier to critique a paper than to design an experiment, collect data, figure out what question to ask, ask it quantitatively, analyze the data, and write it up. This doesn’t mean the critique won’t be good/right; it just means it will happen much much faster than it took you to publish the paper because it is easier to do…
The first thing to keep in mind is that the internet wants you to “fight back” and wants to declare a “winner”. Reading about amicable disagreements doesn’t build audience. That is why there is reality TV. So there will be pressure for you to score points, be clever, be fast, and refute every point or be declared the loser. I have found from my own experience that is what I feel like doing too. I think that resisting this urge is both (a) very very hard and (b) the right thing to do. I find the best solution is to be proud of your work, but be humble, because no paper is perfect and that’s ok. If you do the best you can, sensible people will acknowledge that…
I think this route can be the most scientific and productive if executed well. But this will be hard because people will treat that like “you didn’t have a good answer so you didn’t respond immediately”. The internet wants a quick winner/loser and that is terrible for science.
I agree with all of this, and I think Andrew Gelman would too.* As I read him, Andrew wants quick back-and-forth discussion. But he wants open-ended discussion, not necessarily leading to a quick resolution, except perhaps with regard to purely technical errors. And he doesn’t want to declare individuals “winners” or “losers”, taking the sensible view that everyone is fallible.
Trouble is, I’m not sure Andrew, Jeff Leek, and I can have what we want. Because the norms of post-publication review suggested by Jeff Leek and Andrew (and me) aren’t the only ones in circulation. And I’m not sure which norms, if any, are going to get widely established in the long run. I say that in part because people adopting more aggressive norms tend to dominate the conversation, which has the long-run effect of crowding out those who’d prefer to adopt less aggressive norms.
Note also that, if you don’t give people the quick response they want in an online discussion, some of them won’t stop at presuming that you don’t have a good response. They’ll also accuse you of having outdated, immoral norms of professional behavior that hold back science as a whole. And since science as a whole is much bigger than any individual scientist, in their minds holding back science as a whole gives them license to attack you personally as well as critiquing your work. Anecdotally, this seems like an increasingly common move in online debates about appropriate professional practice in science: characterizing practices you disagree with, and the people who adopt those practices, as immoral or unethical.
It’s for these reasons that I much prefer online debates about broad issues and widespread practices (e.g., statistical machismo) to debates about individual papers. Not only are broad issues and widespread practices usually more important than individual papers, but discussion about them tends to be more impersonal and slower moving. No one feels singled out by criticism of a widespread practice, and no one feels rushed to respond instantly. Not that that’s a panacea, obviously–I’m sure you can think of online debates about widespread practices that have become heated and unproductive.
Bottom line: I can certainly understand authors who’d prefer to discuss their papers in a “slower” way, via a process governed by agreed norms and procedures and overseen by an editor to enforce those norms.
It’s not all bad news on the normative front, of course. The developing norm to share one’s data and code encourages post-publication debate to be data-driven. That’s a good thing. But it’s not clear to me that data-drivenness crowds out other, less desirable norms of post-publication debate. For instance, data and code sharing didn’t prevent both of the post-publication debates I discussed here from becoming very heated and personal. Indeed, to the extent that data and code are readily shared, that only increases the speed with which “the crowd” can try to pick apart published work, and the speed with which authors are expected to respond.
I struggle with this in part because I think that more and faster-moving discussion, criticism, and debate in science would be a good thing (e.g., this post and links therein). But not if it’s governed by the wrong norms, or no norms at all. Because then the debate, criticism, and disagreement won’t serve the purposes they should serve in a healthy scientific communication ecosystem.
*As an aside, it’s worth noting that elsewhere Andrew has said he strongly prefers blogs to Twitter. Blogs are of course slow compared to Twitter. So apparently, everyone thinks there’s an optimal speed at which scientific debate should be conducted, and it’s whatever speed they personally happen to prefer. 🙂 By the way, I say that as someone who much prefers blogs to Twitter as a medium of discussion and debate, for the same reason Andrew does. But I try to remember that my reasons for preferring blogs to Twitter are other people’s reasons for preferring peer-reviewed papers to blogs.