Friday links: RIP Daniel Dennett, rewilding the internet, and more

Also this week: reviving cybernetics, Spinoza vs. Meghan, no one buys books (because they don’t want to), no one buys future onions (because they’re not allowed to), McSweeney’s makes fun of Jeremy, and much, much more. Oh, and 1957 called, it wants its panic about public trust in science back.

Continue reading

On the joy of field work–and lab work

Chris Mantegna has a nice piece in Nature on the joys of field work, and the importance of giving students from historically underrepresented groups opportunities to experience field work themselves. The piece resonates with the growing literature on positive field experiences as key to attracting students from historically underrepresented groups into ecology, and retaining them (e.g., Race et al. 2021, Armstrong et al. 2007, Bingham & Torres 2008; note that “positive” is a key word there, because without that it’s possible for field experiences to function as a barrier rather than a gateway, see Bowser & Cid 2021, Morales et al. 2020, Woods et al. 2023) (update Apr. 25, 2024: citations updated to remove a miscitation, and add additional citations).

At the risk of not doing justice to the entire piece, I wanted to talk about one small bit from early on. It’s a bit that I disagreed with when I first read it. But then I read on and discovered that, taken in the context of the whole piece, I don’t actually disagree with it after all (at least, not enough to be worth talking about). I’ve seen versions of this bit expressed many times in other pieces by other ecologists, and with those versions I do disagree. So I’ll quote the bit, talk about why I (thought I) disagreed with it, and then talk about why it turns out that I don’t.

Here’s the bit in question:

Continue reading

Live peer review, leadership roles, and handling hard better

Last week, I had a post about the nuts and bolts (and emotions!) of responding to reviewers. In it, I talk about how my initial reaction to constructive criticism of my manuscripts is for my brain to fall out, leaving me unable to process anything for a while. I still think of myself as someone who is not very resistant to criticism, but who is resilient – I often feel pretty flattened by negative feedback, but bounce back fairly quickly. My approach is to give myself some time, then use a variety of strategies (post-it notes, responding to X number of comments before taking a break, lots of chocolate, etc.) to revise the manuscript in response to the reviewer feedback. And, as I hope I made clear in that earlier post, the manuscript always ends up better as a result. The process is totally worth it in the end, but it’s definitely a process.

What I want to cover in this post is something that I’ve been trying to figure out now that I have an administrative role: when I get feedback on something (e.g., a revised policy related to teaching), it feels like a live version of peer review. I’m realizing that many of the strategies I’ve used successfully to deal with the emotional parts of responding to peer reviews of manuscripts don’t work as well in the live setting, so I’ve been trying to develop some additional approaches. This is definitely still a work in progress for me, and I’d love to hear how others deal with this!

Around the same time that I was thinking about this, I stumbled across the Handle Hard Better impromptu speech given by Kara Lawson, the coach of the Duke women’s basketball team. It’s less than 3 minutes, so it won’t take long to watch it. This is the key part (to me, at least):

Make yourself a person that handles hard well, not someone that’s waiting for the easy. Because if you have a meaningful pursuit in life, it will never be easy.

I had a whole variety of reactions when I first watched this – some of them complicated – but my overall takeaway is that this idea of trying to figure out how to handle hard better is a really helpful framing. The key then is figuring out how to actually do that! I’m still figuring it out, but right now one thing that is helping me is thinking about potatoes.

Continue reading

The nuts and bolts (and emotions!) of responding to reviewers

Almost 10 years ago, I wrote a post about writing a response to reviewer comments. It focused on the overall structure of a response to reviewers, with suggestions on what to include and how to address things like what to do if reviewers disagree. That post – which I think is still relevant – focused on the response itself. In this post, I want to focus more on my process for actually writing the response to reviewers and making the revisions. As I said in the earlier post, I’ve generally had the good fortune of responding to reviews that are thoughtful and constructive. Even with that, it can be… an emotional journey. So much so that, when I saw this cartoon by Liz Fosslien,* it immediately made me think of responding to reviewers:

Continue reading

A very brief interview with ecoevojobs.net organizer Anonymous Potato

ecoevojobs.net is the crowdsourced jobs board for faculty positions (and some other positions, such as postdocs) in ecology and evolution. It’s a Google Sheet with various tabs. It’s nearly comprehensive in its coverage of EEB faculty job listings in the US and Canada, and has many listings in other countries too. It’s also a popular forum for anonymous discussion of topics related to the EEB faculty job market. It’s a tremendously useful resource for EEB faculty job seekers, although you’d surely get different answers from different people as to how useful some bits of it are.* It’s also a useful resource for others to build on. Without ecoevojobs.net, I wouldn’t have been able to compile all the data I’ve compiled on the ecology faculty job market.

The organizer of ecoevojobs.net is the pseudonymous Anonymous Potato (“AP”). AP sets up a new Google Sheet every year, sets up the form by which users can add new job listings to the sheet, troubleshoots the sheet (e.g., when material accidentally gets erased), moderates the comment threads, implements new features (often at the suggestion of users), and more. This is even more work than it probably sounds like. ecoevojobs.net is underpinned by a lot of complex spreadsheet functions that push Google Sheets coding about as far as it’ll go.
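As an aside, for anyone who wants to build on the sheet programmatically (as I’ve done for my job market data compilations), here’s a minimal sketch in Python. The sheet ID, tab gid, and column name below are all placeholders, not the real ones (which change every year anyway when AP sets up a new sheet), and it assumes the sheet is link-shared:

```python
import pandas as pd

# Placeholder ID and tab gid -- AP sets up a new sheet every year,
# so the real values change annually.
SHEET_ID = "PLACEHOLDER_SHEET_ID"
GID = "0"  # gid of the faculty jobs tab

# Link-shared Google Sheets can be exported as CSV without authentication.
url = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv&gid={GID}"
jobs = pd.read_csv(url)

# e.g., tally listings by a hypothetical "Subject Area" column
print(jobs["Subject Area"].value_counts())
```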

I know who AP is.** For a little while now I’ve been trying to get AP to do an email interview. I’m guessing I’m far from the only one who is curious about the history of ecoevojobs.net and AP’s involvement in it, AP’s motivations, AP’s workflow, and more. AP is amenable to an interview, but also busy. So far I’ve only received brief answers to a few questions. They’re questions that AP suggested, although they’re questions I would’ve asked anyway. Here they are. My hope is that posting this brief interview, and then getting some comments from y’all, might prompt AP to say a bit more in future.

The questions are in bold, AP’s answers are italicized. Both the questions and answers have been slightly edited for clarity.

Continue reading

Highlights from recent comment threads

Even our regular readers often don’t read the comments on our posts. But if you don’t read the comments, you’re really missing out. Not just on insightful discussions of the posts, but also on interesting side conversations, funny jokes, and more. Our commenters are the best! So, to encourage you to read the comments, here’s the first of an occasional series of posts linking to some highlights from our recent comment threads.

harisridhar points us to the fascinating story of Carel ten Cate’s replication of Niko Tinbergen’s classic animal behavior experiments.

Bethann Garramon Merkle (‘CommNatural’) shares tools to help you say ‘no’, so you don’t just reflexively say ‘yes’ to everything someone asks you to do.

Stephen Heard reveals that the genus Magnolia is named after a person. Betcha didn’t know that!

Andre de Roos has some spicy opinions on population ecology.

cmacmac shares one person’s experience as a remote postdoc.

Bri Ollre on the personal factors that shape students’ choices of whether to go to graduate school, and what to study there.

Jeff Clements shares some data suggesting that maybe the best way to spur future research on topic X is to…undermine previous research on topic X.

Angela Moles on the reasons for the decline of big ideas in ecology–and also a reason to question whether it’s a real decline.

Jeff Ollerton and Stephen Heard identify some great (and steamy!) novels by ecologists.

Carl Boettiger and Derek Jones both push back (in different ways) against my claim that, as a researcher, you should emulate Bill Murray in Groundhog Day.

Poll results: contrary to what most of our readers think, sample sizes in ecology have not increased over time

Recently, I polled y’all on whether ecological studies have improved over time in one specific, quite basic respect: sample size. Here are the poll results, along with the answer, both of which are given away in the post title: most poll respondents think that sample sizes have increased over time in ecology. Most poll respondents are wrong. (Sorry, most poll respondents!)

Continue reading

How rigorous are the arguments in favor of “rigor-enhancing” scientific practices?

This was going to be a linkfest item, but I decided to turn it into a mini-post.

Here’s Jessica Hullman on whether “rigor-enhancing practices” (e.g., preregistration, large sample sizes, open data) are a distraction from–or at least, an ineffective gateway to–thinking hard about tougher problems like “what are you even measuring, really?” Includes a link to Devezer et al. 2021, which claims that most arguments for improving methodological rigor in the sciences aren’t themselves all that rigorous.

I have mixed feelings on this.

On the one hand, I’m sympathetic to the argument that lack of statistical or methodological rigor isn’t all that big a deal in the grand scheme of things. I agree that the most widespread and important problems in scientific research are hard to fix, precisely because they’re not amenable to narrowly-focused technical or procedural fixes. There’s no checklist you can follow to do good science.

On the other hand, I’m reflexively suspicious of arguments against incremental, doable reforms, on the grounds that incremental, doable reforms are a distraction from attacking more important and challenging problems. I’m reflexively suspicious for two reasons. One, these sorts of arguments make the perfect the enemy of the good. Two, I think it’s generally just not true that effort being put into narrow, doable reform X can be redirected towards solving big, intractable problem Y. Those two things aren’t interconvertible substitutes, I don’t think. At least, not to most people. I’m reflexively suspicious when someone assumes that time, effort, or money being put towards X could be put towards Y instead. You need to establish that X and Y are in fact substitutes in most people’s eyes. Or else show that somebody has the power to incentivize or force people to substitute Y for X.

Does publication of a meta-analysis (or other review paper) encourage or discourage publication of further studies? The cases of local-regional richness relationships and metacommunity variance partitioning.

Shorter title, which contravenes Betteridge’s Law: Is spurring research interest in topic X like Daffy Duck’s magic trick?

Yes, of course you’re going to have to read on to find out what I mean by that! 🙂

***

A few years ago, I asked whether publication of a meta-analysis (or really, any review paper) encourages or discourages publication of further studies on the topic. One could imagine it going either way.

On the one hand, authors of meta-analyses and other review papers often pitch their work as identifying important gaps in the literature that ought to be filled by future research. And in ecology specifically, meta-analyses tend to find enormous variance in effect size, the vast majority of which reflects heterogeneity–true among-study variation in mean effect size–rather than sampling error (Senior et al. 2016). That implies that ecologists ordinarily will need a lot of primary research papers in order to get reasonably precise estimates of both the mean effect size, and the variance around the mean. They’ll also need a lot of primary research papers to have any hope of identifying moderator variables that explain some of that heterogeneity in effect size. But they don’t usually get a lot of papers. The median ecological meta-analysis only includes data from 24 papers (Costello & Fox 2022). So as Alfredo Sánchez-Tójar suggested in the comments on an old post here, it seems bad if publication of an ecological meta-analysis discourages publication of further studies. We ecologists need all the studies we can get!
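To put a rough number on that precision point, here’s a minimal simulation sketch. All the numbers in it are purely illustrative–nothing below is taken from Senior et al. 2016 or Costello & Fox 2022. The point is just that when among-study heterogeneity dominates sampling error, the standard error of the pooled mean effect shrinks only as 1/√k, where k is the number of studies:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative values: true among-study heterogeneity (tau2) dwarfs
# the average within-study sampling variance (v), as is typical in
# ecological meta-analyses.
mu, tau2, v = 0.3, 0.25, 0.05

def sd_of_pooled_mean(k, n_sims=5000):
    """Spread of the random-effects pooled mean across simulated
    meta-analyses of k studies (tau2 treated as known, for simplicity)."""
    theta = rng.normal(mu, np.sqrt(tau2), size=(n_sims, k))  # true study effects
    y = rng.normal(theta, np.sqrt(v))                        # observed effect sizes
    return y.mean(axis=1).std()  # equal weights, since all studies share tau2 + v

for k in (10, 24, 100):
    print(f"k = {k:3d} studies: SD of pooled mean ≈ {sd_of_pooled_mean(k):.3f}")
# Analytically, SE = sqrt((tau2 + v) / k); halving it takes 4x as many studies.
```

Under these made-up (but not crazy) numbers, a median-sized meta-analysis of 24 studies gets a standard error of roughly a third of the assumed mean effect–which is the sense in which we need all the studies we can get. Estimating the heterogeneity itself, or moderators, is harder still.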

On the other hand, publication of a meta-analysis or other review paper usually (not always!) indicates that there are already enough papers on the topic to be worth reviewing/meta-analyzing. Further, meta-analyses and other review papers do ordinarily draw at least some tentative scientific conclusions. Nobody ever publishes a meta-analysis or other review paper for which the only conclusion is “there aren’t enough studies of this topic yet to draw any conclusions whatsoever; everybody please publish more studies!” So it seems only natural if publication of a meta-analysis, or other review, is read by others as a signal that the topic in question is now pretty well-studied. At least, well-studied compared to other topics that haven’t yet been subject to meta-analysis or other review. So researchers who want to do novel work might prefer to avoid working on topics that have recently been reviewed.

On the third hand, most researchers ordinarily have their own future research plans. Those plans usually reflect all sorts of factors. They aren’t ordinarily going to be changed by publication of any one paper, be it a review paper or something else. Especially because any gaps in the literature identified by a meta-analysis or other review probably exist for a reason and so aren’t easily filled. Common example in ecology: a comparative paucity of studies from Africa. Not ideal, obviously–but equally obviously, not a gap that’s likely to be filled just because a meta-analyst calls for future studies to be conducted in Africa. On this view, publication of a meta-analysis or other review won’t have much effect either way on publication of further studies.

As I noted in my old post on this topic, it’s difficult to study causality here, because controlled replicated experiments are impossible. How can you tell how many primary research papers would’ve been published on a given topic, if only a meta-analysis or other review hadn’t been published?

So we’re stuck with the next best thing: eyeballing unreplicated observational time series data and speculating! Which is what I’ll do in the rest of this post. I’ll walk through two case studies in which we have good time series data on the publication of primary research papers on a given topic, extending from well before until well after publication of at least one meta-analysis or other review paper. That’s at least a bit of an improvement over the time series data I’ve looked at in the past, which only extended up to the publication of a meta-analysis, not beyond it.
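For readers who’d like something one notch more formal than eyeballing, the standard descriptive tool here is an interrupted time series (“segmented”) regression. Here’s a minimal sketch with made-up annual counts–not the real data from either case study below, and emphatically still not a causal test, for the reasons just given:

```python
import numpy as np
import statsmodels.api as sm

# Made-up annual counts of primary research papers on some topic,
# with a (hypothetical) meta-analysis published after year 10.
years = np.arange(20)
counts = np.array([2, 3, 4, 4, 6, 7, 9, 10, 12, 13,         # pre-review
                   14, 13, 12, 14, 11, 12, 10, 11, 9, 10])  # post-review

post = (years >= 10).astype(int)             # level change after the review
time_since = np.where(post, years - 10, 0)   # slope change after the review

X = sm.add_constant(np.column_stack([years, post, time_since]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.summary())
# A clearly negative coefficient on time_since (x3) would be consistent
# with -- though not proof of -- a post-review slowdown in publications.
```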

The first case study concerns local-regional richness relationships, as a tool for inferring whether local species interactions limit local community membership. Here’s how I summarized the idea in an old post:

Continue reading