Friday links: RIP Not Exactly Rocket Science, ecologists vs. multiple working hypotheses, and more

Also this week: The Trump administration’s war on facts begins, does “question first” science lead to bandwagons?, David Attenborough on your phone, great teaching vs. great research, RIP Beall’s list of predatory publishers, and more.

From Jeremy:

The latest evidence that blogs are dying: Ed Yong is shutting down Not Exactly Rocket Science, at one time one of the most popular science blogs in the world (especially among other science bloggers). And if you say “No, blogs aren’t dying, they’re just being replaced by other things that are sort of like blogs in some ways but very different in others,” well, you say potato, I say potahto…

Speaking of Ed Yong, here's his piece in The Atlantic on a newly formed group to help scientists run for political office in the US (ht Stephen Heard, via Twitter). Jacquelyn Gill is quoted. Interesting initiative. Back when I was in grad school at Rutgers, former physicist and now AAAS CEO Rush Holt was the Congressman for the next district down the road. It seemed like he did a really good job, and not just on scientific issues. I do worry that the group will only back Democratic candidates. I worry for various reasons, one being that there are outstanding ecologists whom I greatly respect, scientifically and personally, who happen to be Republicans, and for whom I would consider voting if they ran for office even though I don't necessarily agree with them on all political issues (for instance, see some of the posts here). Semi-related: there are various ways for scientists to get involved in politics that go beyond voting, contacting one's representatives, and making donations, but that are nonpartisan and compatible with maintaining a scientific career. My Canadian colleague Rees Kassen is a good example of this sort of involvement. Not saying that sort of political involvement is better than running for office–it's just different.

In the unlikely event you couldn’t guess why I linked to that Ed Yong piece, here’s a rundown of the new Trump administration’s gag orders to US government scientific agencies, and a second piece specific to the gag order at the EPA. On a more hopeful note, here’s a reminder that previous administrations that attempted to manipulate or silence the EPA backed down in the face of public outcries, here’s a rundown of how scientists are organizing to protest the Trump administration’s policies, and here’s The Ringer (of all places) on small gestures of resistance to Trump from the social media accounts of the National Park Service and other government agencies heavily involved in science.

Canada recently went through something like this under the Harper administration. Here's a good thread from a leading Canadian scientist on what went down there, with some ideas on how to push back (ht Meg, via Twitter). The argument that taxpayers and voters have the right to know what government agencies are doing (particularly the results of research that the taxpayer paid for) strikes me as a winner both on the merits and politically, though what do I know? I'll add one more small bit of (tentative) practical wisdom from the Canadian experience of resistance to the Harper government's policies on the gathering, dissemination, and use of factual information: I suspect it helped a bit that the resistance came from academics across the political spectrum. For instance, even many conservative social scientists were appalled when the Harper government axed the long-form census, which helped to undermine the Harper government's attempt to tar the opposition as mere left-wing political partisanship (e.g. here, here, here).

Betini et al. (2017; open access) randomly sample ecology papers to show that ecologists rarely have multiple working hypotheses. Even when they do, they typically test very few predictions per hypothesis. Interestingly, "pattern-motivated" studies tend to test particularly few predictions per hypothesis (usually just one), confirming an anecdotal impression of mine. Betini et al. have a great discussion of why ecologists don't often use the method of multiple working hypotheses, with lots of ideas for how to do better. Here are old posts from me and Charlie Krebs on the same topic. Indeed, the timing is such that I'm wondering a little if Betini et al. were prompted to look into this by one of those blog posts? If so, that would be awesome. 🙂 Also, I'm really glad to see this paper because I was literally just about to embark on the exact same exercise as background research for the book I'm working on. Now I don't have to go to all that trouble. 🙂 (ht Caroline Tucker, who has some good discussion).
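
(For concreteness, here's a minimal, purely illustrative sketch of one common way to operationalize multiple working hypotheses: encode each hypothesis as a statistical model and confront all of them with the same data, e.g. via AIC. This is not from Betini et al.; the variable names and data below are made up, and model selection is just one of several ways to do this.)

```python
# A toy sketch, not from Betini et al.: all names and data here are made up.
import numpy as np

rng = np.random.default_rng(42)
n = 100
temperature = rng.normal(15, 3, n)    # hypothetical predictor 1
rainfall = rng.normal(800, 150, n)    # hypothetical predictor 2
abundance = 5 + 0.8 * temperature + rng.normal(0, 2, n)  # simulated response

def aic_ols(y, X):
    """AIC for an ordinary least squares fit (Gaussian likelihood)."""
    X = np.column_stack([np.ones(len(y)), X])   # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                          # coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

# Each "working hypothesis" becomes a candidate model, and all of them are
# confronted with the same data.
candidates = {
    "H1: temperature": temperature[:, None],
    "H2: rainfall": rainfall[:, None],
    "H3: temperature + rainfall": np.column_stack([temperature, rainfall]),
}
for name, X in candidates.items():
    print(f"{name}: AIC = {aic_ols(abundance, X):.1f}")
# Lower AIC = more support. The point is to pit hypotheses against each
# other rather than "test" a single favored hypothesis in isolation.
```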

Mark McPeek with an interesting argument that encouraging graduate students to do "question first" science has two serious drawbacks: it fails to set them up for a sustainable long-term research program (because it sets them up to have to switch systems every time they change questions), and it encourages bandwagon-jumping. Mark definitely has a point, though not an unanswerable one. I'd say that he's correctly identified two of the main "failure modes" of question-first research (the third being that you'll make a serious mistake because you don't really know your chosen system), but "system-first" research has its own failure modes, some of which, ironically, are the same as those for question-first research. For instance, trendy bandwagons in ecology often are based on some new, broadly-applicable approach that purportedly allows one to infer process from pattern (aside: those bandwagons basically never pan out). They become bandwagons in the first place not just because "question first" researchers jump on, but because "system first" researchers also jump on. System-first researchers always are on the lookout for things they can do in their own system that will be of interest to ecologists working in other systems. An approach that you can easily apply in any system, that purportedly lets you infer something about processes all ecologists care about, is just the ticket. The phylogenetic community ecology bandwagon, for instance. (Sorry Mark, couldn't resist that example!)

Biology For Fun on prediction in ecology.

A new Brookings report uses 7 years of data on first-year undergrads at Northwestern University, the scholarship of their 170 tenured profs, and the undergrads' subsequent choices of major and academic performance, and finds no correlation between a prof's scholarship quality and teaching ability. The report argues that the estimated correlation is precisely zero, not merely an artifact of low power. I haven't read the full report, but based on the executive summary I don't think the results are very informative. The measures of teaching quality are very crude and affected by many biases and confounding variables, and I question whether statistically controlling for the confounds fixes these issues. But I pass on the link in case you want to have a closer look. (ht Economist's View)
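
(If the "precisely zero vs. low power" distinction seems subtle: it comes down to the width of the confidence interval around the estimated correlation. A tight interval centered on zero says "no relationship"; a wide interval spanning zero says only "we can't tell." Here's a toy sketch in Python with simulated data; it has nothing to do with the actual Brookings analysis, and the Fisher z-transformation below is just the standard textbook way to put a confidence interval on a correlation.)

```python
# A toy illustration with simulated data; nothing to do with the real
# Brookings analysis. The hypothetical "quality" measures are independent
# by construction, so the true correlation is exactly zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000                                 # large n -> narrow confidence interval
research_quality = rng.normal(size=n)    # hypothetical measure
teaching_quality = rng.normal(size=n)    # hypothetical, independent measure

r, _ = stats.pearsonr(research_quality, teaching_quality)

# Standard 95% CI for a correlation via the Fisher z-transformation.
z = np.arctanh(r)
se = 1.0 / np.sqrt(n - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
print(f"r = {r:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
# A tight interval centered on zero supports "no relationship"; a wide
# interval spanning zero would only mean "we can't tell" (low power).
```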

And finally, this kind of thing is why I only got a smartphone recently and reluctantly. On the other hand, next time you have trouble falling asleep, you can console yourself with the thought that at least what happened in the linked story won’t happen to you. 🙂

From Meg:

The BBC has released a Story of Life app, letting you “explore more than 1000 of Sir David Attenborough’s most memorable moments from his 60-year career exploring the natural world”.

Librarian Jeffrey Beall has taken down his list of predatory journals, for reasons that aren’t totally clear.

Given the events of the past week, American scientists might be interested in reading up on what life was like for scientists in Canada under the Harper administration, and how scientists fought back. Here's a tweet thread; this has a detailed chronology and many, many links; this post and this article both focus on what Americans can learn from Canada's war on science; and this is a survivor's guide to being a muzzled scientist. Finally, I know that many folks are finding this guide on how to keep yourself healthy (physically and emotionally) while resisting to be helpful. (I see that Jeremy and I have overlapping links this week. Not surprising, given this week!)

Hoisted from the comments:

Brian and I had an interesting discussion this week of the "creative ambiguity" at the core of macroecology. Intrigued? Start here. It includes a bit early on where we kind of talked past each other, but stick with it; we sorted it out, and the most interesting bit is at the end.

12 thoughts on “Friday links: RIP Not Exactly Rocket Science, ecologists vs. multiple working hypotheses, and more”

  1. Re: Betini et al. (2017; open access)
    I would like to first apologize (as one of the authors) for not acknowledging Charlie's and your posts on this topic – if we had come across them we surely would have, because both echo a lot of what we said (in fact, our MS was written while both Gustavo and I were in the midst of our PhDs, and then found its way to the bottom of the to-do list for a few years…).
    Personally, I believe the strongest driver of our failure to evaluate multiple hypotheses is publication bias (we touch on that in the paper); a paper describing and evaluating multiple hypotheses is (on average) more complex, more laden with details, and less conclusive than one that does not – not exactly Science/Nature/PNAS material (and fairly tricky to publish in any ‘high-impact’ ecological journal). We teach our students that they should evaluate multiple hypotheses, but our system for sharing our findings and evaluating our performance as scientists (i.e., based on peer-reviewed publications) is biased against this approach.

    • Thanks for the comments, Tal, and no apologies needed–I didn't mean to imply that I think you should've cited those blog posts. Sometimes people independently think of the same idea, which is totally fine.

      Re: publication bias being the biggest obstacle to publishing papers evaluating multiple hypotheses, hmm. Not sure about that. I doubt that if you did the same analysis on randomly chosen *submissions* to Ecology, Evolution, et al., you'd find different results. I don't think lots of papers testing multiple hypotheses are getting rejected from those journals for not telling simple conclusive stories. Back when I was an editor at Oikos, I can tell you I wasn't handling many papers with multiple competing hypotheses. And I don't think lots of people are avoiding writing such papers, or avoiding submitting them to leading journals, out of fear they'll get rejected. But I'm just going on my own experience here; I don't have data to go on.

      • Not serving as an editor, I might have a narrower perspective, but it seems to me that the mere word limit set by most journals would serve as a strong filter against the lengthier multiple-hypotheses MSs (the ‘fallacy of factorial design’ gets you twice – once in obtaining the data, and again in publishing its necessarily longer description).
        As a side note, in your post on this topic from June 1st you mentioned that “It would be very interesting to enlarge this sample and write a paper about what you found. That paper would definitely have a shot in a top journal”; this might just be down to the content of our particular paper, but every ecological journal we submitted this to rejected it as being ‘too philosophical’…

      • “seems to me that the mere word-limit set by most journals would serve as a strong filter against the lengthier multiple-hypotheses MSs”

        Nah, not with online supplements (https://dynamicecology.wordpress.com/2015/10/26/online-supplements-have-ruined-nature-and-science-papers/)

        Disappointed to hear that leading ecology journals didn’t want this paper. In retrospect my assessment that it would have a shot in a top journal may have been optimistic. But that depends on the journal too. It’s true that Ecology, Ecology Letters, Am Nat, JAE, JEcol, GEB, and Ecography don’t really publish this sort of paper. But TREE does, and so does Oikos.

  2. Thanks for taking the time to read our paper. One of our goals when preparing the manuscript was to write something that ecologists and evolutionary biologists would read. And I am glad we are having this conversation.

    I think it is difficult to find a single main factor to explain why most ecologists and evolutionary biologists do not test multiple hypotheses. It seems to be a classical multifactorial problem. Publication bias is one factor, but I believe that cognitive bias also plays a big role. As scientists, we should talk more about how to avoid these biases. Besides the classical “null model”, I particularly think that “work with the enemy” and “blind analysis and crowdsourcing” are very useful approaches.

    • Re: “work with the enemy” papers, that's an idea I've had in the past too. But in practice, I'm not sure how well it works. I think the example you cite of proponents vs. opponents of niche construction is probably the example that's worked best (so far). But Abrams and Ginzburg's “here's where we agree, here's where we disagree” review of ratio-dependent predation theory has fallen apart. Ginzburg co-authored a book in which (according to Abrams) he willfully misrepresented what that “work with the enemy” paper actually said, and so they're back to arguing the same points with no resolution in sight. There's a famous “consensus” review paper in Science on BEF from the early oughts, co-authored by people who'd been at odds. Some of the authors immediately backslid on it, though, continuing to stick to the positions they'd taken before the consensus paper was written. And in psychology a while back, we linked to a case of opposing researchers who agreed in advance on what decisive experiment would settle their differences (including every detail of the methods, how the data would be analyzed, etc.), and collaborated to do the experiment. Except that when the experiment clearly supported one researcher's hypothesis, the other one backslid and insisted on writing a separate discussion section for their joint paper, in which he said that, no, on further reflection this actually wasn't the decisive experiment at all. And further back, think of the Ehrlich-Simon wager, which Ehrlich lost, and to which he responded by claiming (falsely) that he was “snookered” into taking the wager in the first place, and that the wager was on matters of trivial importance (one wonders if he'd have thought it so trivial had he won).

      My conclusion is that “work with the enemy” papers basically have no power to resolve disagreements between committed intellectual opponents. At best, they can clarify some issues for other, “neutral” researchers, so that the rest of the field can move forward. And even then, they may not be able to do that, not if the authors backslide and try to muddy the waters retroactively.

      Some old posts on this:

      https://dynamicecology.wordpress.com/2012/12/03/want-to-bet/
      https://dynamicecology.wordpress.com/2015/01/22/book-review-the-bet-by-paul-sabin/
      https://dynamicecology.wordpress.com/2015/04/09/peter-abrams-on-ratio-dependent-predation-as-a-zombie-idea/

      • This is true: collaboration is hard, especially with people you disagree with. I believe that some of the problems you experienced/reported could be minimized if we built long-term collaborations. A nice example is the work done by psychologists Gary Klein and Daniel Kahneman. They have opposite views about whether or not we should trust our intuition to make decisions. They collaborated for 7-8 years and, according to Kahneman, they “almost blew up more than once” (Thinking, Fast and Slow, Chapter 22, p. 235). But they eventually published a paper together (Conditions for intuitive expertise: A failure to disagree. http://dx.doi.org/10.1037).

      • Yes, it can work if the intellectual “enemies” are personal friends. Andy Gonzalez and Andrew Hendry are friends who got a paper out of their friendly disagreement about local adaptation.

  3. As a confirmed luddite, I still have my candybar phone. And I have a 6-year-old. (Well, actually, today he’s 7, but whatevs.) So I can use that link to repel my family and friends who insist I need a smartphone. Thanks for that.

  4. I’m hopeful that Beall’s list will be replaced with a list generated using objective and transparent criteria, potentially with the possibility for journals to be removed from the list after a period of good behavior. I think his motives were in the right place and the list served its purpose well for a long time, but it is also time to move on to something better. The article seems to suggest there will be a replacement, which hopefully will pick up where Beall left off. I’m curious what other academics think.
