So on Friday, a group I’ve been working with (Maria Dornelas, Anne Magurran, Nick Gotelli, Hideyasu Shimadzu and others) came out with a paper in Science. We took 100 long-term monitoring datasets across 6 continents and many taxa and looked to see if there was a consistent trend in local alpha diversity over time. To our surprise there wasn’t – on average across datasets the slope was zero, and most datasets were close to zero (and the ones that weren’t cancelled each other out). We also found that temporal beta diversity (turnover in species composition within one local community) changed much faster than any reasonable null model would predict.
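(For concreteness, here is roughly what the alpha-diversity half of that analysis looks like in code. This is a toy sketch in Python on made-up richness series – not our actual data or pipeline – just to show the shape of the calculation.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the real data: 100 hypothetical time series of local
# species richness (alpha diversity), of varying lengths.
datasets = [rng.poisson(lam=30, size=rng.integers(10, 40)) for _ in range(100)]

def richness_slope(richness):
    """OLS slope of richness against year (species per year)."""
    years = np.arange(len(richness))
    return np.polyfit(years, richness, 1)[0]

slopes = np.array([richness_slope(r) for r in datasets])
se = slopes.std(ddof=1) / np.sqrt(len(slopes))
print(f"mean slope = {slopes.mean():+.3f} species/yr (SE = {se:.3f})")
# With no trend built into these toy series, the mean slope hovers near
# zero; the paper's result is that the *real* data look like this too.
```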
From a “the earth is doomed” prophecy point of view these are mixed results. Local alpha diversity is not looking as bad as we expected (also see this recent paper by Vellend et al), but species composition is churning really fast (and although we didn’t measure this, there is a reasonably good chance that it’s already-widespread species moving in to replace rarer species that is driving this). This is probably bad news for those who care about the state of the planet. And a finding of no trend in alpha diversity does NOT contradict declining global diversity (scale matters!). But all of us authors can quickly imagine certain subsets of the public cherry-picking results and trumpeting “scientists prove claimed modern mass extinction not occurring”.
So I want to expand beyond this specific finding to the more general question: when scientists are working in domains that have strong implications for broad policy debates, how should they handle and think about how their work will play in the policy context versus how they should do their science? This plays out in questions of extinction, invasion, climate change, etc. It played out very vividly in “climategate” and, before that, in creationism, where Stephen Jay Gould, testifying in court about whether evolution was well understood and widely agreed upon by scientists, had to back off claims he had made in the academic world that his theories of punctuated equilibrium were revolutionizing and overturning traditional views of how evolution worked.
One view of the relationship of science to the general public is that the public cannot be trusted, and so we scientists all have to band together and not show any internal disagreement in public. If we reveal even one crack in the edifice we are building, the whole thing will be pried apart. They note that there are vested interests who don’t play fair and will take things out of context. They note that modern 30-second sound bites and 140-character tweets don’t leave room for communicating complexity. This means dissent, nuance, exceptions to the rule, etc. should not be published in a way the general public will notice (it’s OK to bury them in obtuse language in the middle of discussion sections). And indeed, colleagues had told me of the paper I described above that “you can’t publish something like that”. Lest you think I exaggerate, Mark Vellend shared with me a quote from a review of his aforementioned paper (published in PNAS, but this quote is from a prior review at Nature) that I reproduce here with Mark’s permission:
I can appreciate counter-intuitive findings that are contrary to common assumptions. However, because of the large policy implications of this paper and its interpretation, I feel that this paper has to be held to a high standard of demonstrating results beyond a reasonable doubt … Unfortunately, while the authors are careful to state that they are discussing biodiversity changes at local scales, and to explain why this is relevant to the scientific community, clearly media reporting on these results are going to skim right over that and report that biological diversity is not declining if this paper were to be published in Nature. I do not think this conclusion would be justified, and I think it is important not to pave the way for that conclusion to be reached by the public.
This quote is actually a perfect example of the attitude I am trying to summarize, operating right at the center of the peer review process.
This is definitely a common view, and reasonable people can disagree, but I just can’t get on board with this “united front” approach for a number of reasons:
- Ethically, a scientist is obligated to be honest. This includes not just honesty by commission (the things we say are true), which 99% of us practice. It also includes honesty by omission (not leaving unsaid things we know to be true but inconvenient). Indeed, this might be central to the definition of what it means to be a scientist, instead of, say, a lobbyist or maybe even a philosopher.
- Practically, a scientist is most likely to be seen as an honest broker by the public when, at least some of the time, things contrary to the mainstream thinking get published. Or if not contrary, at least nuancing (the general belief isn’t true under these particular conditions). Nobody believes somebody is objective when they can’t see and deal with evidence contrary to their own beliefs. If we sound like a PR machine staying on message, we won’t be trusted.
- Psychologically, an ecologist is most likely to be heard and paid attention to when they talk about good news related to the environment as well as all the bad news. Nobody can/wants to pay attention to a doomsayer.
For all of these reasons, I think it is a mistake to bury evidence that is contrary to the general narrative that biodiversity is headed towards imminent destruction in every possible way in every corner of the earth. It’s actually a good thing, for all of the ethical, practical and psychological reasons given above, to have scientists themselves putting out a more complex, nuanced view.
I can already hear the skeptics saying the public cannot handle complex and nuanced. But I think looking at climate change is informative. Take the original IPCC reports and how careful they were to break out all the different aspects of climate change and the different levels of uncertainty around them. And then look at what got perceived as a “united front” and “win” attitude that came out in climategate and how strongly the public reacted (NB: climategate was an overblown tempest in a teapot, but it speaks exactly to my point about how the public perceives scientists – or more to the point, how the public wants to perceive scientists and how upset they get when the honest-broker role appears inaccurate). The public CAN hear and accept complexity, uncertainty, etc. (barring an extreme fringe that will always exist). It just takes a LOT of work to communicate it. But I don’t think we as scientists have any other choice.
Brian, I agree with you entirely. I also think your arguments extend to the context of teaching. For example, when teachers can’t bring themselves to say “I’m not sure” to their students, what comes next is either misleading their students or blowing the question off. Equally bad is when teachers can offer a thought-provoking explanation but choose to give their students a simplified key phrase to memorize, instead (e.g., “genetic drift happens when one population gets split into two isolated populations by a change in landscape”).
Congrats on the paper.
I agree with you. I have also come across such “don’t rock the boat” views and judgements and consider them wrongheaded.
Researchers deal with what is uncertain, and that is why we need research. We need critical voices, skeptics and mavericks to give fresh thinking and to keep us honest.
Community self-censorship is a slippery slope (specific codes of practice may sometimes be justified to address specific threats, though I cannot see that being necessary in ecology outside of certain areas of applied epidemiology).
How to communicate uncertainty in media etc. raises many issues, but I think transparency, honesty and willingness to change your mind with evidence are all key to credibility.
Was thinking a bit more on this and realized that there may be other, arguably “more legitimate”, justifications for unified community “self-censorship” in certain cases. I imagine this is an area where many of us feel a bit uncomfortable, so a discussion is useful. That could be a topic for a blog post in itself… and all of this is a tangent to your main point, Brian, but my point is about where we draw the line.
So examples:
1) when revealing information about locations may increase threat to endangered species
2) when information may stigmatize an organism (and increase persecution)
I suspect some kind of ethical code of conduct would be useful in some very specific cases.
Would be interested to hear what others think.
Hi Douglas – I agree with you that there are cases such as those you mention where non-publication is appropriate. Your #1 is quite common practice these days. Your #2 I’ve never heard of but I could imagine.
Well said Brian. And judging from Mark’s experience, someone really needed to come out and say this.
It’s to your credit that you’re reading that reviewer of Vellend et al. in what I think is the most charitable and defensible way possible–as being concerned with the public’s ability to digest complexity and nuance. So let me be the bad cop here and suggest a less charitable reading.
The reviewer can be read as saying that scientific papers should be held to different standards in peer review based on guesses about how the public, media, and policymakers might react. Which I find appalling. The reviewer can be read as saying “No matter how you phrase this, it might be misunderstood or used to support policies I don’t like, therefore it shouldn’t be published at all.” On this reading, the reviewer’s ultimate concern isn’t about the ability of the media and public to digest complexity and nuance. Nor is the reviewer’s ultimate concern that high profile papers should be held to higher standards of proof. Those are both concerns, obviously, but they’re not the ultimate bottom line. On this reading, the reviewer’s ultimate bottom line is pure politics. The reviewer is basically saying “Politics is so overwhelmingly important that it trumps other considerations. So if suppressing certain science would lead to politically-desirable outcomes, well, then that science should be suppressed. At least, we ought to give very serious consideration to suppressing it.”
I freely admit that my uncharitable reading isn’t the only one possible–your more charitable reading is at least equally possible. But I don’t think my reading is so uncharitable as to be unfair, or to amount to a deliberate distortion of what the reviewer wrote. After all, the reviewer explicitly *praises* Vellend et al. for carefully explaining their results and the conclusions those results do or do not support. The reviewer then goes on to say that’s neither here nor there, and clearly implies that the best thing for public understanding would be to not publish the paper at all. On my admittedly-uncharitable reading, the reviewer isn’t ultimately worried that the public will reach an oversimplified, non-nuanced conclusion. Ultimately, the reviewer is worried that the public will reach a *politically undesirable* conclusion.
Put it this way: if Vellend et al. had found that local biodiversity was indeed declining on average, do you think this reviewer would’ve worried about the public’s or the media’s inability to grasp the nuances? Do you think the reviewer would’ve implied that the result shouldn’t be published, lest the public draw the overly-strong conclusion that biodiversity is collapsing everywhere? It’s a hypothetical, so we’ll never know the answer–but color me skeptical.
And the irony is that the “united front” approach isn’t only bad for science and bad professional ethics; it’s likely also bad from the perspective of cynical political tactics. After all, what chance do scientists have of having any influence on politics at all except by having some sort of standing as honest brokers?
In the end, Vellend et al. did get published, and your paper got published as well, so there’s an argument here for not getting too worked up over a couple of anecdotes. But man, I find these sorts of anecdotes scary.
As you say, I think it is scary on two fronts: 1) it is obviously bad for science to let politics drive science, and 2) it is long-term bad for the ability of science to influence policy if we are perceived as just another biased interest. The “honest broker” role is really the only role in which science is more than just another seat at the policy table. And the more we scientists do to undermine the public view of us as honest brokers, the less we can claim to have a special role in the policy debate.
That said, it is VERY important to note that this agenda-inconvenient message has in fact gotten published in Science and PNAS, two of the top journals in the field. So anybody looking for a story about scientific censorship has, despite a few twists and turns, found the opposite here.
I want to second your last point above – I think it is quite important to note that both papers got published (and in high visibility arenas). And I find that reassuring.
On the comparative reading of the reviewer’s comments on Mark’s paper… I’d like to submit a slightly different take still – and here I want to draw attention to the reviewer’s remark that the article shouldn’t be published in ‘Nature’. I couldn’t discern from the quote you provide whether the reviewer felt it should not be published, period… or just that it shouldn’t be in ‘Nature’. If the former, the reviewer has grossly overstepped. If the latter, then an entirely different light (to me) shines on what this reviewer’s image of the journal is, and its place or role in the public debate.
With that as background, I’d like to nitpick a little here with Brian’s two assertions – that 1) politics shouldn’t drive science, and 2) that science, done well, should be able to influence politics (have a special place at the table).
I think politics has and likely always will drive science. We are political animals. We can make some very good efforts to be independent, objective, and unbiased. But at the end of the day we have to eat, protect ourselves from the elements, and find happiness in our lives. To accomplish these things and still have time to do science we need a paycheck, and our employers need overhead. Limited resources have to be allocated. Markets will influence choices, but the ‘invisible hand’ is only a metaphor… human hands (visible ones) actually write the checks. And human hands are connected to human brains. Looking around your own department you appreciate there are office politics at play. It’s natural. It may not be desirable, but it’s life as we know it. So – I’m agreeing with the principle that science should be independent, objective, and unbiased. But until science can be done by some otherworldly being I expect politics will participate – for good or ill.
The notion that scientists should somehow hold a special place in policy debate is less persuasive to me. I agree that scientists have a vested interest in being ‘honest brokers’, and an interest in being perceived as such. But I think it reflects a certain arrogance on our part if being an ‘honest broker’ should somehow entitle us to “…more than just another seat at the policy table.” A seat at the table should be sufficient; not allowing others to have ‘more’ than just another seat is a matter to take up next.
Indeed, congratulations on the paper! I entirely agree that whether studies agree or disagree with a particular preconception of the scientific community, or “harm” some sort of agenda, should never be a criterion in reviewing a paper. I can, however, imagine grounds why finding no consistent change in alpha diversity, as in your study or the one by Vellend, could be considered less impressive than revealing a consistent change: it is less conclusive. It now depends on whether beta diversity drops when global diversity declines, which is discussed in the paper and the perspectives piece but was not the subject of the study. If alpha diversity had been consistently reduced, it would have been quite convincing evidence of a drop in global biodiversity (I am not a macroecologist, but this is how it seems to me). This is definitely not meant to criticize your paper! Just to say that one could argue that one finding might contain more “information” than another, even regardless of whether it supports expectations.
Thanks for your comments. I think your understanding is exactly right. If we believe global diversity is declining (as I think most ecologists do), then the first suspect is that local alpha diversity is declining. We have now looked and it would appear it isn’t. This points us now in less obvious directions like beta diversity, which we now need to test empirically. I think this is exactly how science should work.
The new post up at Crooked Timber is relevant: http://crookedtimber.org/2014/04/21/tu-quoque/.
It’s a great paper, and I completely agree that you should report what you find, not what everybody else thinks they would have found, had they looked.
However, I do think there’s a potential technical problem. It’s unlikely that the measures of temporal beta diversity you used are perturbation-invariant. In other words, if you have two time intervals, over which each species experiences the same mean proportional population growth rate, you (almost certainly) won’t get the same rate of change in temporal beta diversity under your measures. I haven’t checked this out for the measures you used, but I have looked at lots of others (see Figure 1 in http://arxiv.org/pdf/1404.0942.pdf for examples). Lack of perturbation invariance leads to all sorts of logical inconsistencies, including rates of change that are not constant when everything about the environment remains constant, and incorrect ranking of what should be equivalence classes of changes in abundance.
I’d love to know whether the pattern holds up under a perturbation-invariant measure. There is one and it’s very simple: the among-species standard deviation of proportional population growth rates. It has a close geometric connection to the Living Planet Index, which is very widely used to measure changes in abundance of a “typical” species. It’s also proportional to the Aitchison distance, which is the right way to measure differences in relative abundance.
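To make that concrete, here is a minimal sketch of the measure and its connection to the Aitchison distance (Python, with made-up abundance vectors; it assumes no zeros, since handling colonizations and extinctions is a separate and harder problem):

```python
import numpy as np

def sd_log_growth(n1, n2):
    """Among-species standard deviation of proportional (log) population
    growth rates between two abundance vectors (no zeros allowed)."""
    r = np.log(n2 / n1)  # per-species proportional growth rates
    return r.std(ddof=0)

def aitchison_distance(n1, n2):
    """Aitchison distance: Euclidean distance between centred-log-ratio
    transformed abundance vectors."""
    clr = lambda n: np.log(n) - np.log(n).mean()
    return np.linalg.norm(clr(n1) - clr(n2))

n1 = np.array([10.0, 20.0, 5.0, 40.0])
n2 = np.array([12.0, 15.0, 9.0, 33.0])

print(sd_log_growth(n1, n2))                          # the measure itself
print(aitchison_distance(n1, n2) / np.sqrt(len(n1)))  # same number: proportional

# Multiplying every species at time 2 by a common factor shifts all growth
# rates equally, so their spread (and hence the measure) is unchanged.
print(sd_log_growth(n1, 3.0 * n2))
```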
Hi – thanks for your comments. I agree that what you call “perturbation-invariant” is A desirable trait for metrics. But there are other desirable properties out there (and familiarity to the reader is not low on the list of desirable traits). In my experience in these debates there is no such thing as the one best metric (e.g., the long search for the “best” measure of evenness). In any case I can’t see how it would affect our results, as a) we evaluated multiple beta diversity indices and the answer was identical in each case, and b) our main claim is to compare empirical rates vs. two null models, not to achieve some perfect absolute measure.
I have an old post on the general issue of choosing among different measures or indices of the “same” thing, such as beta diversity:
https://dynamicecology.wordpress.com/2012/05/02/advice-on-choosing-among-different-indices-of-the-same-thing/
I think the issue is that time and space are not interchangeable, and hence “temporal beta diversity” (the thing that’s measured in Brian’s paper) is different from the spatial kind. So we aren’t really talking about choosing among different measures of the same thing. This has been more or less neglected in multivariate community ecology (in which you often see the same kinds of analyses applied to time series as to sets of points in space).
At a point in space (and assuming for now that the community is closed), you get from the relative abundance vector at time t1 to the relative abundance vector at time t2 through population growth. If, in two different cases, it takes the same amount of growth to get from one vector to another, then to be consistent with population dynamic principles, a measure of change should take the same value in each case. If your measure doesn’t have this property, then you don’t know whether an apparent change in the rate of change is real or an artefact. On the other hand, if you’re comparing two different points in space (again assuming for now that communities are closed), then you don’t get from one relative abundance vector to another by growth, and your measure doesn’t need to be consistent with population dynamic principles.
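Here is a tiny numerical illustration of that consistency requirement, using Bray-Curtis dissimilarity as a stand-in for a conventional turnover measure (hypothetical abundances; the point is only that identical per-species growth produces different Bray-Curtis values but the same SD of log growth rates):

```python
import numpy as np

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    return np.abs(x - y).sum() / (x + y).sum()

def sd_log_growth(n1, n2):
    """Among-species SD of log proportional growth rates."""
    return np.log(n2 / n1).std(ddof=0)

# The same per-species proportional growth (x2, x1, x0.5) applied to two
# different starting communities.
growth = np.array([2.0, 1.0, 0.5])
a1 = np.array([10.0, 10.0, 10.0]); a2 = a1 * growth
b1 = np.array([30.0, 5.0, 10.0]);  b2 = b1 * growth

print(bray_curtis(a1, a2), bray_curtis(b1, b2))      # ~0.231 vs ~0.304: differ
print(sd_log_growth(a1, a2), sd_log_growth(b1, b2))  # identical in both cases
```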
It’s more complicated in communities where changes are the result of a combination of population dynamics and dispersal (which is likely to be the case in Brian’s paper). If the numbers of dispersing organisms are small, then “all” you need is a way to measure colonizations and extinctions on a scale that’s consistent with the proportional scale for changes resulting from population dynamics (but that’s actually quite difficult).
As Brian rightly points out, if you just want to know whether your empirical results are different from some null model, it doesn’t matter too much what measure you use, provided that it’s capable of reflecting the kinds of differences you care about. But if your empirical results do turn out to be different, then it is important to understand exactly what you’re measuring.
As you say, it all depends on context. And certainly one needs to think carefully before equating space to time (which we didn’t do). But in our case, since we’re comparing against a null model, getting order-of-magnitude differences between empirical and null, and using both abundance-based and presence-absence metrics (all of which gave the same answer), I think this particular paper is on safe ground. Thanks for pointing out your paper.
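For readers wondering what a null-model comparison of this sort looks like mechanically, here is a minimal sketch on presence-absence data (illustrative only: hypothetical data and a deliberately simple year-wise shuffling null that preserves each year’s richness, not the two null models actually used in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def jaccard_dissimilarity(a, b):
    """Jaccard dissimilarity between two boolean presence-absence vectors."""
    shared = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1.0 - shared / union

def mean_turnover(matrix):
    """Mean Jaccard dissimilarity between consecutive years
    (matrix: years x species, boolean presence-absence)."""
    return np.mean([jaccard_dissimilarity(matrix[t], matrix[t + 1])
                    for t in range(len(matrix) - 1)])

# Hypothetical community: 20 years x 30 species.
comm = rng.random((20, 30)) < 0.4
observed = mean_turnover(comm)

# Null: within each year, shuffle which species are present, preserving
# that year's richness and drawing from the same species pool.
null = np.array([mean_turnover(np.array([rng.permutation(row) for row in comm]))
                 for _ in range(999)])

p = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"observed = {observed:.3f}, null mean = {null.mean():.3f}, p = {p:.3f}")
```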
Fascinating. The “public cannot handle the truth” tension between liberal and conservative minds rears its head again! Ron Bailey has a really enlightening article on evolution beliefs among conservative intellectuals that describes this tension beautifully: http://reason.com/archives/1997/07/01/origin-of-the-specious. Also, I would describe SJG as a political genius who was more interested in rhetoric than truth. His tendentious essays on the modern synthesis and his writings/testimony on evolution v. religion had very different goals and rhetorical tactics. What may seem inconsistent (changing his mind on modern evolutionary theory) is really consistent (using rhetoric at the expense of facts to persuade).
Brian,
I thought this was a *very* interesting paper, and you guys did a wonderful job of linguistically gilding the lily, as it were. The balance and thoughtfulness you brought to what was sure to be a controversial conclusion was really nice, and left me and my lab group feeling very thoughtful after our discussion of it today. The last comment of our discussion was “Well, this is clearly only a first pass. I can’t wait to see what they do next!”
I think you’re spot on – if any of us, after careful analysis and consideration – really do think the correct answer is one that maybe isn’t one that would be popular for one side of a policy-related argument, that should have no bearing on whether it is published or not. Our job as scientists is to adhere to the highest ethical standards, and let the data tell the story of how the world works.
That said, I actually did have a question. We were interested in looking at the data in a more fine-grained way (marine studies and Spalding’s biogeographic provinces), but noticed that the linked data files, while they have study ID, don’t actually have what study goes with what ID (the Supplement has a reference number, but not a study ID, so it cannot be divined from that, either). Was there a missing file, or will that Rosetta Stone be part of a data push at KNB, Dryad, or somesuch?
A lot to mull over, and I look forward to what your group turns out next! The mind boggles at the possibilities – many of which address some of the fundamental questions raised by the paper!
Interesting write-up Brian, but I find it hard to agree with you. I do not doubt that often in science, it is difficult to publish something that goes against the dogma. And I do not doubt that there is a publication bias that is sometimes stacked against those challenging the status quo. But there are several additional things to consider:
1. There is a tendency among journals, especially those with high impact factors like Science, to prioritize controversy and readership over scientific rigor. You were far more likely to have your paper published in Science than to have it pass the review process in a disciplinary journal. I would, in fact, argue that journals like Science and Nature would be highly unlikely to publish another paper on extinction even though they would jump at the opportunity to publish something that stirs the pot and stimulates controversy. This runs counter to your suggestion about scientific suppression of controversy.
2. Both your Science paper, and the one by Vellend et al. have been highly controversial. And many individuals, including myself, have questioned whether or not the studies chosen to populate those datasets are actually representative of biodiversity change. For example, neither your group nor Vellend et al. summarized studies of how habitat loss or over-exploitation influence biodiversity. When your studies don’t consider the most pervasive drivers of biodiversity loss on the planet, it’s a little hard to swallow the idea that your studies are even representative of biodiversity change on the planet.
3. While your group and Vellend’s may have faced difficulty getting results published, it is far more difficult for researchers to publish their criticisms of your paper in an attempt to achieve balance and draw attention to the study limitations. As Jeremy has blogged about before, post-publication discussion and dissent are very much discouraged in science, and largely ignored by journals. And as was true with Mark’s paper, where numerous responses and criticisms were summarily rejected by Peter Kareiva without review, I suspect the same will be true of the criticisms in response to your Science paper.
4. Big claims that overturn considerable bodies of data also require big proof. There are many instances where a maverick researcher or group publishes some hot paper claiming to overturn egads of data and challenge existing dogma… only to be shown demonstrably wrong or short-sighted a year or two later when people take a closer look at their data or analyses. Unfortunately, the damage is usually done by that point, since people pay far more attention to the ‘overly-sensationalized’ big publications that are contrarian than they do to the corrections (think of that utterly awful paper by Worm et al. Science 2006 that was overturned by re-analysis of data in Worm et al. Science 2009).
Because of this, I can only partly agree with your suggestion that researchers have an ‘obligation’ to publish their unpalatable papers. Yes, we absolutely need to publish data that shows the unexpected and that challenges thinking. At the same time, researchers have an obligation to (a) make sure they are correct, (b) explain how and why their results contrast with the previous bodies of literature, (c) forego sensationalism and publish contrarian results in the appropriate journals where data can first hold up to scrutiny by disciplinary experts before it goes to the general public, and (d) make sure they are clear about the appropriate caveats and limitations of their work, and not extrapolate beyond the studies used to draw conclusions. Unfortunately, I don’t think your paper, nor that of Vellend et al., appropriately applied these criteria.
Brad – I’m not very clear on what you’re saying. But taking your a, b, c, d at the end, I’m pretty sure I’ve just been accused of a) publishing a paper that I have doubts about whether it is right, b) completely ignoring a large body of literature containing meta-analyses showing alpha diversity is declining (please cite this body of work), c) just aiming to be sensationalist, and d) being irresponsible. On top of that you have made sweeping claims about our dataset’s flaws without actually spending much time looking at our actual datasets and where they came from.
Have I got that right?
You complain a lot about how wrong our analysis is and how you aren’t given a platform to critique it. But you are more than welcome to do the hard work of doing your own meta-analysis, subjecting it to peer review and disproving us. I would welcome such a scientific contribution! Whether Science will take it as a rebuttal letter or not isn’t up to me, but I’m certain you can get it in a very good journal.
Heck, I’d even welcome a concrete suggestion of what data you think should be analyzed. To be a good scientist there has to be some set of data that would convince you you are wrong if it points the other way. Please tell me what data, if analyzed, would have that effect?
Until then, if you don’t have something more substantive (you know, like scientifically based with data or at least multiple, non-cherry-picked citations) and can’t stop being ad hominem, as you already were to Mark Vellend on this blog and now have been to me and my coauthors, please take it somewhere else.
I hope you are aware of the irony in posting this comment on this blog post?
Hi Brad,
As I’ve indicated in the past in private correspondence, I can’t for the life of me understand why you’re so upset about these two papers. As best I can tell, your substantive concerns are just the garden-variety sorts of concerns and questions that most everybody has about most every paper that’s ever published in any selective journal. And I’m afraid your comment doesn’t make things any clearer. Nor do I see why you think public personal attacks on your colleagues are justified. And this is the second time now you’ve done this on Dynamic Ecology.
“There is a tendency among journals, especially those with high impact factors like Science, to prioritize controversy and readership over scientific rigor.”
This from a guy with 11 papers in Nature, Science, and PNAS, if I counted correctly. Do you even listen to yourself?
“I would, in fact, argue that journals like Science and Nature would be highly unlikely to publish another paper on extinction even though they would jump at the opportunity to publish something that stirs the pot and stimulates controversy.”
If that’s right, then it’s a decision that those journals just made in, oh, the last month or so:
http://bit.ly/1f7b9Kh
http://bit.ly/1hoT1tX
Plus, Nature rejected Vellend et al. Which I’m guessing you’d just cite as evidence of Vellend et al. being fatally flawed or something. I think I’m seeing the principle here. When a journal rejects a paper that you personally don’t like, it’s being rigorous. When a journal accepts a paper that you personally don’t like, it’s publishing sensationalist crap.
Perhaps you’d like to put your money where your mouth is and make a wager as to the number of news articles and papers Nature, Science, and PNAS will publish on extinction and its negative consequences over, say, the next 12 months, vs. news articles and papers taking contrarian positions on those topics?
“Both your Science paper, and the one by Vellend et al. have been highly controversial.”
Proof by authority much?
Oh, and what should we make of the fact that multiple papers of yours have been the subject of published comments, and so have been “controversial” by your own standards? Should people doubt those papers of yours purely on the grounds that somebody out there disagrees with them? Again, I’m seeing a pattern here: controversy surrounding papers you don’t like is a sure sign those papers are fatally flawed. Controversy surrounding your own papers presumably is…just nitpickers carping from the sidelines, I guess?
“And many individuals, including myself, have questioned whether or not the studies chosen to populate those datasets are actually representative of biodiversity change.”
Oh, please. Two studies compile *every high quality, long-term dataset they can find*, and you complain that the data aren’t representative? This from someone who’s published numerous meta-analyses on biodiversity-ecosystem function relationships and other topics—all of which were limited in scope by the available data, which in every case comprised a highly non-representative sample of all species and systems on earth (e.g., being skewed towards experiments with grassland plants in the case of much BDEF work). And none of which went out of their way to harp on the non-representativeness of the available data (which I know in part because I’m a co-author on two of them). It really is one rule for you, another rule for everybody else, isn’t it?
“Big claims that overturn considerable bodies of data also require big proof.”
Um, you did read the parts of Brian’s paper and his post where he notes that his results *don’t* overturn the claim that, at a global level, lots of species are going extinct rapidly? But you didn’t specify the work you think these papers are contradicting or overturning, so I can’t really tell what you’re talking about here.
And while you don’t specify what you mean by “big proof”, one plausible meaning in this context is “analyzing all available datasets of sufficient length and quality, comprising hundreds or thousands of datasets from all over the world.” Which is exactly what these two papers did. Now, are the data so extensive as to comprise the last word? Obviously not, but no one (including the authors) ever claimed that. As other commenters have noted, these results now provide a jumping-off point for further research. That’s how science works.
In your final paragraph, you falsely accuse the authors of these two papers of not checking the correctness of their own results, ignoring the previous literature, being wilfully sensationalist, choosing to submit to Science and PNAS in order to avoid rigorous peer review, and not mentioning caveats and limitations of their work. Apparently you think that publicly accusing one’s colleagues of being incompetent and unethical, with zero evidence, is an appropriate way to conduct what ought to be a professional discussion. At least, I don’t see any other plausible reading of your final paragraph. And I totally disagree. With respect, these sorts of implied personal attacks are out of line.
You’ve now published two comments on Dynamic Ecology in which you’ve made very strongly-worded yet ungrounded attacks on work you don’t like, including implicit attacks on the competence, motives, and professional ethics of those who performed the work. This is after we had to block you from *explicitly* attacking the competence, motives, and ethics of others. Your correspondence with me and others has been similarly problematic, but we’ve let you have your say because we believe in open and vigorous debate, and in my case because I know you personally and consider you a friend. We’ve given you a lot of rope–and you’ve used it to hang yourself. At this point, you’ve had your say, we’ve had ours, and it’s clear we’re going to have to agree to disagree both on the science, and about who is behaving in a professional and ethical way. We’re going to default to blocking any further comments from you on this topic (which is why I removed the comment you made a few moments ago, while I was drafting this comment). We may decide to let through further comments from you on a case-by-case basis, but you’ve now got a high bar to jump.
Oh, and if you’re worried about your ability to publicly criticize individuals and work you don’t like, there’s a solution to that.
I’m not an ecologist, but I lurk here frequently because your blog has many excellent posts that relate to science in general. This is one such post. I’m encouraged to see people chipping in to support the idea that “disconcerting” results need to be discussed explicitly.
A couple of other reasons for doing so from my experience in my discipline (policy relevant or not):
First, burying “disconcerting” results in the discussion won’t hide them. If a paper is relevant to a controversial topic, someone will find those results and out them for what they are.
Second, your private thoughts, the prevailing opinions, and the social context in which you write won’t be visible after a few years, but your paper will be. The literature is forever. A buried contrary result is what people will see when that temporary social context is no longer apparent.
Third, when people bury “disconcerting” results, it raises the question of whether or not they even understood the importance or relevance of those results. There have been several instances in my career where I’ve found an older paper with a glaring omission or downplayed point in the discussion and wondered how the authors could have possibly missed such an obvious point of critique.
Great blog, folks. Cheers.
Welcome! And thanks for the thoughtful comment.
Don’t think you could have stated it any better Brian, great post. In fact you more or less took the words right out of my mouth.
I have no use whatsoever for the “united front” approach to science, and in fact few things make me more suspicious than when I detect it is occurring. It’s frequently (but not always) subtle, reinforced by the use of certain terminology that overstates the degree of actual agreement among scientists. The fact that a reviewer would openly say something like what you quote there is disturbing and just depresses me greatly. It shows the underlying attitude that we are up against.
We shouldn’t be contrarians just for the sake of it, but we should definitely be so if we think something important has been gotten wrong in some field, and in that respect we can be perceived as renegades. But science can be viewed as a renegade activity when viewed in the larger scope of various human belief systems.
You called the climategate incident largely correctly IMO. However, there is in fact some truth to the suspicion that some scientists have overstated (and continue to overstate) the certainty of the state of knowledge and have attempted to bend the IPCC message in its various reports. Nor do I really trust the CRU as a whole, and I definitely do not trust a couple of individual scientists who work there. I can name names but I won’t. A whole number of climate scientists and policy advocates appear not to understand the importance of this general issue, and they continually get themselves in trouble because of it.