Friday links: a remarkable cv, why be an EiC, Andrew Gelman vs. Richard Levins, and more

Also this week: the de-internationalization of US higher education, realized (blogging) niches, and more.

From Jeremy:

So, you know how your US college or university is looking to international students (and the high tuition they’re often charged) to balance its books? Good luck with that. At least until Trump isn’t President any more. (ht @noahpinion)

Andrew Gelman, following Uri Simonsohn, argues that robustness checks (alternative ways of translating the same scientific hypothesis into a statistically testable claim) are a joke. Basically, the argument is that, if your scientific hypothesis is too vague to really be testable, robustness checks just paper over the vagueness and fool you into thinking it’s testable. As someone who, following Richard Levins, very much believes in (and uses) robustness checks in a different context, I found this interesting. Now I’m thinking about the circumstances in which robustness checks are helpful vs. harmful. A key issue seems to be that “robustness check” can mean lots of different things; Gelman’s and Simonsohn’s criticisms apply to only one sort of robustness check.

The field of US academic economics remains heavily male-skewed and progress toward gender balance (at least in top departments) has stalled in the last 20 years. Here’s a deep dive into the relevant data (unreviewed preprint). As an aside, the paper documents considerable heterogeneity among fields in how gender balance has changed over time. Here’s a bit of relevant data for recently-hired ecology profs. I wish we had as much data about people’s individual career trajectories in ecology as economists seem to have about individual career trajectories in their field.

Following on from the previous, here is a link to economist Kasey Buckles’ data-based review of what interventions work to attract and retain women in economics at every career stage. Emphasis is on focused, practical interventions implementable by individuals, departments, and faculties. Note that not all of these necessarily generalize to other fields. For instance, her review highlights the importance of interventions at the K-12 level, since in economics the male skew starts before college and doesn’t change all that much (in either direction) after that. In contrast, in the life sciences in the US undergraduate degree recipients are about 60% women and have been for many years. In ecology, the proportion of women subsequently drops slightly at every stage through the postdoc stage, but then rises back up to close to 60% among recently-hired asst. profs.

Laura Deming with a very interesting remark (ht Marginal Revolution):

One of my biggest personal fears is working in the wrong field to achieve the goal I care about. If you were around pre-1900s, and wanted to contribute to biology, you should have been a physicist (Robert Hooke, a physicist discovers the first cell, making a better microscope is a major driver of progress). In which field should you work to maximize progress in biology today?…

But something interesting happened around the 1950s. If you look at the most important techniques in biology, in the second half of the 1900s, they’re all driven by tools discovered in biology itself.

Why agree to be the EiC of a leading journal?

Stephen Heard encroaches on my niche by reviewing Deborah Mayo’s new book on the philosophy of statistical inference.

From Meghan:

My goodness, this is a remarkable CV. Here’s the tweet that led me to find it, but it’s really worth clicking through for the whole CV.

2 thoughts on “Friday links: a remarkable cv, why be an EiC, Andrew Gelman vs. Richard Levins, and more”

  1. It was an odd piece from Gelman, as many commenters noted. As I wrote in the comments, I wouldn’t even consider what Simonsohn criticized to be a robustness check: if you think that checking the stability of a regression coefficient in an observational design by including/excluding a few covariates estimates the robustness of that coefficient to confounding, then you don’t understand omitted variable bias. This may have been Simonsohn’s point (“the hypothesis is wrong”). Raftery developed model-averaged coefficients partly for this specific goal (despite the many protests I get that model averaging is only properly used for prediction), but for the reasons I give in the first link, model-averaged coefficients are also a false path to “robustness”.

    • I should add that a sensitivity check by including/excluding predictors in the context of explanatory modeling might make sense only when one has a small number of similar models with extremely good *a priori* evidence that the models are approximately correct. Sander Greenland and Tim Lash have done sensitivity analyses of this sort in epidemiology.
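The commenter’s point about omitted variable bias can be made concrete with a small simulation (my own sketch, not from the post; all variable names are hypothetical). Here the true causal effect of x on y is zero, but an unobserved confounder u drives both. The estimated coefficient on x is rock-stable as we add or drop the observed covariates z1 and z2, so a naive “robustness check” would pass, even though the estimate is badly biased:

```python
# Sketch: a stable coefficient across specifications is no evidence against
# confounding, because the omitted variable (u) is never in the model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

u = rng.normal(size=n)             # unobserved confounder
z1 = rng.normal(size=n)            # observed covariate, independent of u and x
z2 = rng.normal(size=n)            # another observed covariate
x = u + rng.normal(size=n)         # "treatment", driven by the confounder
y = 0.0 * x + 2.0 * u + 0.5 * z1 + rng.normal(size=n)  # true effect of x = 0

def ols_coef_x(*covariates):
    """OLS slope on x, with an intercept and the given covariates included."""
    X = np.column_stack([np.ones(n), x, *covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# The "robustness check": include/exclude observed covariates.
b0 = ols_coef_x()          # x only
b1 = ols_coef_x(z1)        # x + z1
b2 = ols_coef_x(z1, z2)    # x + z1 + z2
print(b0, b1, b2)          # all near 1.0 and nearly identical, yet true effect is 0
```

The three estimates barely move (they hover near 1.0), which is exactly the kind of stability such checks reward; the bias comes entirely from u, which no specification built from the observed covariates can remove.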
