Anyway, in case you’re interested, I have a new paper out in Ecology and Evolution. It uses meta-meta-analysis (a meta-analysis of meta-analyses) to ask “how much does the typical ecological meta-analysis overestimate the magnitude of the mean effect size?” The answer is “by about 10%, but occasionally by much more if it’s a small meta-analysis”. Some of you will recall an old post in which I trailed some of the ideas in this paper. The comments on that post really helped me flesh out and implement my ideas, so thank you again to our commenters.

Coincidentally, there’s a new post at Data Colada criticizing a recent high-profile meta-analysis in psychology for being too broad–lumping together unrelated studies in the same meta-analysis, and so estimating a scientifically meaningless mean effect size. If this argument is right, it applies even more so to my new paper. After all, my meta-meta-analysis lumps together studies of almost every topic ecologists have ever studied! How could it possibly be scientifically meaningful, or statistically useful, to combine unrelated studies into the same analysis? That’s a very good question, for which I think I have a very good answer. You’ll have to read the paper to see my answer, and decide if you buy it.

p.s. It’s only after I thought of writing this paper that I remembered that there’s an xkcd cartoon making fun of it.

I don’t have any inside info as to why the deadline has been extended. I do know that the search committees haven’t yet started evaluating applications. So if you’ve already applied, (i) thanks! and (ii) don’t worry. If I had to guess, I’d speculate that the deadline has been extended because we haven’t gotten that many applications yet. Whatever the reason for the extension, I’d encourage you to take advantage of it and throw your hat in the ring, even if you aren’t sure if you’re a fit, or aren’t sure if you’d take the position if offered.

I’m not on either search committee, but if anyone wants to ask questions about the positions, the department, the city, etc., email me and I’ll do my best to answer them (jefox@ucalgary.ca).

One thing I will say is that, while we’re legally obliged to give preference to Canadian citizens and permanent residents, that does *not* mean that others shouldn’t bother applying! I wasn’t a Canadian citizen or permanent resident when I was hired at Calgary back in 2004. Just last year, we hired a non-Canadian. And we’ve hired other non-Canadians over the years. So don’t take yourself out of the running on the mistaken assumption that we’re sure to hire a Canadian. You can’t predict what the applicant pool will look like–neither can we! So if you think you might want one of these jobs, apply!*

*p.s. If you want the job, but aren’t sure whether it’s worth the effort to apply, given how long it takes you to put together an application, well, are you sure you need to do all that customization of your application materials? We have an old post with data on how much customization EEB faculty job applicants do, and how much customization search committee members *want* EEB faculty job applicants to do.

The deadline is tomorrow (Aug. 25, 2022), but my understanding is it might be extended for a week. Sorry for the super-short notice, but I only just heard about it. If I’d heard about it earlier, I’d have posted the ad earlier.

I’m at the ESA-CSEE joint meeting in Montreal right now. If you’re interested and are at the meeting as well, please reach out! jefox@ucalgary.ca

I’m struck by both the similarities and differences to the Pruitt case.

An incomplete list of similarities:

-repeated data fabrication across numerous papers over many years, often taking the form of duplicated sequences of observations indicative of copying and pasting data

-current and former trainees of the accused were crucial to the investigation, going above and beyond to reveal the truth.

An incomplete list of contrasts:

-Dixson was given away in part because of the physical impossibility of her methods. It just wasn’t physically possible for her to have collected the data she claimed to have collected, in the time frame she claimed to have collected it, using the methods she claimed to have used. In contrast, I’m not aware of any instances of the Methods sections of Pruitt’s papers describing any physical impossibilities.

-Pruitt had no public defenders of any consequence, save for his own lawyers. In contrast, Dixson has–indeed, continues to have!–very vocal public defenders, including her own doctoral and postdoctoral supervisors and other prominent marine ecologists. Those defenders have defended Dixson not by addressing the specifics of the allegations against her (e.g., “Here’s why duplicated data X in paper Y don’t actually indicate fabrication”), but rather by (i) imagining that the whistleblowers have bad motives and attacking them for those purported bad motives, and (ii) talking about how hard-working, dedicated, and smart Dixson is. It’s immensely to the credit of Pruitt’s many former friends, trainees, and collaborators that all of them followed the evidence where it led.

-The University of Delaware’s institutional investigation into Dixson was *much* faster than McMaster University’s investigation into Pruitt.

I don’t know what larger lessons to draw from these similarities and differences, or even if any larger lessons should be drawn. I just find them striking.

In the unlikely event that you have no idea what this is about, start here and say goodbye to your day.

I may blog about this later, or maybe not.

UPDATE: Nature has a new piece on the ongoing consequences of the Pruitt case for Pruitt’s trainees and collaborators. The linked piece illustrates that institutional investigations of scientific misconduct and other bad behavior aren’t designed to give closure to the main victims of misconduct (here, Pruitt’s current and former trainees and collaborators). I wish I had good ideas about how to change that, but I don’t. The piece also contains a bit of news that’s surprising to me–McMaster is going to continue the formal hearing process that surely would’ve resulted in Pruitt being fired, even though Pruitt has already resigned. The linked piece also has some new details on Pruitt himself, in case you care (personally, I don’t). Apparently he’s a high school science teacher at a Catholic school in Florida now. If you feel the urge to joke sarcastically about what he’ll do if he catches a student cheating on a test, well, you’re not alone. And, hilariously, Nature claims that it’s still investigating Pruitt’s Nature paper. That paper has yet to be retracted (it carries an expression of concern), despite overwhelming evidence of data fabrication. Yeah, *sure* you’re still investigating. /end update

Outstanding undergraduate Laura Costello and I decided to revisit the prevalence of decline effects in ecological research, using my quite comprehensive compilation of all the data from 466 ecological meta-analyses. We’re very excited that the paper is now online at Ecology. You should click through and read it (of course, I would say that!). But the tl;dr version is that the only common decline effect in ecology is in the decline effect itself. The truth no longer “wears off” in ecology, if it ever did. Decline effects might’ve been ubiquitous in ecological meta-analyses back in the 1990s, but they aren’t any more. Only ~3-5% of ecological meta-analyses exhibit a true decline in mean effect size over time (as distinct from regression to the mean, which happens even if effect sizes are published in random order over time). Read the paper if you’re curious about our speculations as to why decline effects are now rare in ecology.

This is the third paper of mine that grew out of a blog post, which is my tissue-thin justification for sharing news of the paper in a blog post.

I think we need shrinkage estimation for mean effect sizes in ecological meta-analyses. That is, I think many ecological meta-analyses provide very imprecise estimates of the unknown “true” mean effect size, so that, in aggregate, those estimated mean effect sizes would be improved if they were shrunk towards the mean. Here, see for yourself:

The x-axis of that figure shows the mean effect sizes from every meta-analysis in my pretty-comprehensive compilation of over 460 ecological meta-analyses. The y-axis shows their standard errors.* Notice the funnel shape. Precisely estimated mean effect sizes (so, low on the y-axis) are small in magnitude; they’re clustered near zero on the x-axis. The funnel gets wider as you go up the y-axis. Imprecisely estimated mean effect sizes vary from massively negative to massively positive. And as you’d expect, the imprecisely estimated mean effect sizes come from small meta-analyses (i.e. those based on few primary research papers):

Figure 2 shows that the mean effect sizes from small meta-analyses are all over the map, whereas those from large meta-analyses are clustered much closer to zero. Which indicates that, if ecologists were to conduct additional studies on the topics of the small meta-analyses, the mean effect sizes would tend to shrink towards zero. Of course, we can’t actually go and conduct many hundreds of additional studies on every topic on which ecologists have already published a meta-analysis. At least, we can’t do so very quickly or easily. But surely there’s some statistical way to estimate what would happen if we did. Surely there’s some statistical “shrink ray” we could use to shrink all these meta-analytic means towards zero by some appropriate amount.**

But how, exactly? The most obvious way, at least to me, is via a hierarchical random effects meta-*meta-*analysis. That is, take all effect sizes from all ecological meta-analyses that use (say) the log-transformed response ratio as the effect size measure, and throw them all into a massive hierarchical random effects model estimating variation in effect size among meta-analyses, among primary studies within meta-analyses, and among effect sizes within primary studies. The hierarchical random effects structure of the model will shrink estimated meta-analytic mean effect sizes towards the grand mean.
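As a toy illustration of the kind of shrinkage such a model induces, here’s a minimal empirical-Bayes sketch (not the full hierarchical model, just the core idea, applied to hypothetical means and standard errors). Each meta-analytic mean gets pulled towards the grand mean by an amount that depends on how imprecise it is:

```python
import numpy as np

def eb_shrink(means, ses):
    """Shrink noisy meta-analytic means toward the precision-weighted grand mean.

    Method-of-moments empirical Bayes: estimate the between-meta-analysis
    variance tau^2 from the excess spread of the means beyond their sampling
    error, then weight each mean by its reliability tau^2 / (tau^2 + SE^2).
    """
    means = np.asarray(means, dtype=float)
    var = np.asarray(ses, dtype=float) ** 2
    grand = np.average(means, weights=1.0 / var)
    # Spread of the means beyond what sampling error explains (floored at 0).
    tau2 = max(np.mean((means - grand) ** 2 - var), 0.0)
    shrink = tau2 / (tau2 + var)  # reliability weight in [0, 1]
    return grand + shrink * (means - grand)

# Hypothetical numbers: two imprecise extreme means, two precise small ones.
means = [0.9, -0.7, 0.1, 0.05]
ses = [0.5, 0.4, 0.05, 0.04]
shrunk = eb_shrink(means, ses)
```

The imprecise extreme estimates move a long way towards the grand mean, while the precise estimates barely move, which is exactly the behavior the funnel in the figure above calls for.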

Or rather, it would if you could actually fit the model, but you can’t. At least, I can’t. I tried to do it on my reasonably new and powerful laptop, and R gave me a hilarious error message I’ve never seen before. Something about being unable to allocate a 92 Gb vector or something. Apparently, you can’t fit a hierarchical random effects model involving many tens of thousands of effect sizes from thousands of primary research papers, at least not when the effect sizes and their sampling variances vary over such huge ranges.

My googling has not turned up any workable alternative solutions. If fitting the model with REML is computationally infeasible, it’s sure as hell not going to be feasible with MCMC. Is there some other approach I’m unaware of, that would make it computationally feasible to fit this model? (empirical Bayes? some other Bayesian approach?) But the only other computationally feasible approaches that occur to me are all quite ad hoc, and mostly involve throwing away a lot of data.

For instance, one thing I’ve done is use actual linear regressions to quantify regression to the mean in the small subset of ecological meta-analyses that include 100+ primary research papers. For those meta-analyses, you can work out the estimated mean effect size based only on the data from the first 100 primary research papers included in the meta-analysis (i.e. the first 100 to be published). Do the same thing for the first X primary research papers, where X is some integer <100. Then regress mean effect size after publication of the first 100 papers on mean effect size after publication of the first X papers. The slope of the regression will be <1 due to regression to the mean, just like in the parent-offspring regressions that led Galton to coin the term “regression”. The flatter the regression line is, the more regression to the mean there is. As you’d expect, the smaller X is, the more regression to the mean there is.*** One could use those estimated regression lines to shrink all the mean effect size estimates from all the meta-analyses. But those regression lines are estimated from only a small subset of all ecological meta-analyses. And you’d be assuming that all meta-analyses involving the same number of primary research studies should be regressed to the mean using the same regression line, which is a pretty crude assumption.
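A quick simulation (hypothetical parameters, not my actual compilation) shows why the slope of such a regression falls below 1, and why it gets flatter as X shrinks: the running mean after X papers contains more sampling noise than the running mean after 100 papers, and that noise doesn’t persist:

```python
import numpy as np

rng = np.random.default_rng(42)
n_meta, n_papers = 2000, 100

# Each simulated meta-analysis has a true mean effect size; each primary
# paper reports that true mean plus independent sampling noise.
true_means = rng.normal(0.0, 0.3, size=n_meta)
papers = true_means[:, None] + rng.normal(0.0, 1.0, size=(n_meta, n_papers))

mean_after_100 = papers.mean(axis=1)            # "final" estimate
mean_after_10 = papers[:, :10].mean(axis=1)     # early, very noisy estimate
mean_after_50 = papers[:, :50].mean(axis=1)     # later, less noisy estimate

# OLS slope of the final mean on the early mean; np.polyfit returns
# [slope, intercept]. A slope < 1 is regression to the mean.
slope_10 = np.polyfit(mean_after_10, mean_after_100, 1)[0]
slope_50 = np.polyfit(mean_after_50, mean_after_100, 1)[0]
```

Both slopes come out below 1, and the X = 10 slope is much flatter than the X = 50 slope, mirroring the pattern described above: the smaller X is, the more regression to the mean there is.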

Another ad hoc approach that involves throwing away a lot of data: fit a hierarchical random effects meta-meta-analysis to only a randomly chosen subset of meta-analyses. Repeat with lots of different randomly chosen subsets. Somehow average together the answers.

If anyone has any ideas or pointers, put them in the comments or drop me a line (jefox@ucalgary.ca). I’m all ears!

*Both sets of numbers are from hierarchical random effects meta-analyses estimating variation in effect size due to variation among primary research papers, variation among effect sizes within research papers, and sampling error.

**The appropriate amount of shrinkage would of course vary among meta-analyses, for various reasons. For instance, different meta-analyses include different numbers of effect sizes from different numbers of primary research papers, and those effect sizes have different sampling variances.

***And of course, there could be further regression to the mean even beyond 100 primary research papers. But it’s hard to get at that possibility, because so few ecological meta-analyses include >100 primary research papers.

Relatedly, Jeff Clements and colleagues just published a major new meta-analysis of effects of ocean acidification on fish behavior, revealing an absolutely massive decline effect. That is, early studies reported big effects, but subsequent studies have found basically squat. Further, those early studies reporting big effects are all by the same lab group, of which Danielle Dixson was a member. Drop the studies from that one lab group, and you’re left with studies that mostly report small or zero effects. Speaking as someone who just co-authored a paper that looks systematically for decline effects in 466 ecological meta-analyses, and mostly fails to find them (Costello & Fox, in press at Ecology), I can tell you that the decline effect in Clements et al. is *enormous*. I couldn’t find anything close to a comparable decline effect anywhere else in ecology. Nor do any of the other, weaker decline effects I found have such a strong association with the work of one lab group. Clements et al. is a great paper. It’s very thorough; they check, and reject, a bunch of alternative explanations for their results. Even if you’re not a behavioral ecologist, you should read it.