Friday links: the incredible shrinking George Washington University, The Far Side is coming back (?!), and more

Also this week: wildlife photographer of the year, the history of peer review, modeling scientific progress, a rough month for machine learning, tell me again what a statistical population is, quantitative theory vs. replicability, the greatest 19th century scientist you’ve never heard of, and SO MUCH MOAR. Stick around until the bitter end to read Jeremy’s half-silly, half-serious idea for improving graphical abstracts.

From Jeremy:

I’m months late to this, but here’s Anne Marie Whelan’s and David Schimel’s very detailed report on author gender and peer review outcomes at ESA journals. Largely encouraging results, very similar to those reported last year by Charles Fox and colleagues for BES journals and some evolution journals. One result that puzzles and bothers me is that women continue to be substantially underrepresented among all submitting authors to ESA journals (and among first authors too), relative to their representation among ecology graduate students, postdocs, and faculty. IIRC, that same result holds at BES journals and Evolution too. I don’t know why that is and look forward to learning from your comments.

Some of the best images from the Wildlife Photographer of the Year competition.

The latest on the efforts of Plan S funders to assuage concerns that a mandatory switch to open access publication will damage scientific societies by eliminating subscription revenues.

Charley Krebs makes a very good point about randomly sampling from the population of interest, one that I try to drum into my intro biostats students:

Defining the ‘population’ under consideration should perhaps be rule # 1, but that is usually left as a vague understanding in many statistical studies. As an exercise consult a series of papers on ecological field studies and see if you can find a clear statement of what the ‘population’ under consideration is.

George Washington University plans to shrink undergraduate enrollment by 20% over the next five years–even though its enrollment and finances currently are healthy. The stated rationales are to (i) improve academic, residential, and social experiences for the remaining students, particularly those who aren’t currently well-supported, and (ii) expand STEM programs. I’m speculating here, but I wonder if the university isn’t also trying to get out ahead of projected declining demand for higher education in the US. I also wonder if the university is trying to get out ahead of the trend for fewer students to enroll in humanities and social science fields that aren’t viewed as leading directly to a well-paying career. The university denies the changes are a way to climb the US News & World Report rankings, but it’s hard not to wonder about that, too. No announcement has yet been made as to how the university plans to address the lost tuition revenue, but the university has refused to rule out faculty layoffs.

This looks super-fun: a toy model of the dynamics of a scientific community trying to discover how the world really works. Among the interesting conclusions are that replicability of results is insufficient to guarantee convergence on the truth, and that “epistemic diversity” among scientists aids the scientific process. I can imagine lots of ways to extend this model. (ht Dan Simpson, who comments).

I learned a lot from this news piece about the rapidly growing private sector and government demand not just for climate forecasts, but for evaluations of climate vulnerability and adaptive capacity. I hadn’t realized it had already grown so far beyond insurance companies forecasting the odds of extreme events and disasters. My question is: how do the end users evaluate the accuracy and value of the information and advice they’re paying for? There are areas of life in which consultants are infamous for not being worth their high cost; how do end users make sure “climate solutions” isn’t one of them? The linked article addresses this a bit at the end, but I found myself wanting to hear more.

The history of peer review. Very interesting interview, even though I knew the broad outlines already. Two choice quotes from the interview subject, Melinda Baldwin:

I think my peer review project started when I discovered something really unexpected about Nature: that it hadn’t employed systematic external refereeing until 1973!

[P]eer review also helps secure a certain amount of autonomy for the scholarly community, which I think is an underappreciated function…

A reader of Andrew Gelman’s blog, Sandro Ambuehl, asks a very interesting question:

I was wondering whether there’s…any empirical evidence on whether empirical investigations based on precise theories that simultaneously test multiple predictions are more likely to replicate than those without theoretical underpinnings, or those that test only isolated predictions.

My first instinct is to say, yes, there is, but it comes from physics and chemistry. For instance, general relativity makes various quantitative predictions, has passed all the tests to which it’s been subjected, and has repeatedly passed the same tests. That’s a much better track record of replicability than a typical vaguely-theorized hypothesis in psychology or ecology or whatever. But if the question is really about comparisons among studies within the same field, rather than across fields, I don’t know the answer and will need to think more about it. Related old post. And another.

Tim Harford’s list of good popular books on statistical bullshit. I have not read them, but Tim Harford has and he’s a sharp guy, so I bet it’s a good list.

I’m always interested in essays by people who’ve changed their minds or developed mixed feelings about some core idea or belief of theirs. Even when it’s about an idea or belief I don’t share. I just like trying to “get” where people different than me are coming from, even though I’m sure I don’t always manage it. And purely anecdotally, I find that people who’ve changed their minds or have mixed feelings about a belief or idea often are good at helping others “get” why someone might find that belief or idea attractive. That preamble explains why I found this interesting. (ht @mattyglesias)

THE FAR SIDE IS COMING BACK!!!!11!!11!! Well, apparently. Maybe. The announcement seems to be deliberately cryptic. I wonder how many people reading this are too young to know what the Far Side even is. Let’s find out!

A former grad student in theoretical physics shares his experience in grad school. Thoughtful, articulate, brave, and moving piece, which I’m sure will speak to people in other fields besides theoretical physics. Though whether it mirrors your own experiences will of course be a very personal matter. (ht Emanuel Dearman)

Charles Sanders Peirce, the greatest(?) 19th century scientist and thinker you’ve probably never heard of. Very philosophical, but there are some bits of interest to the practicing scientist. For instance the suggestion that Peirce’s ideas correct misinterpretations of recent experiments on animal cognition. (ht Emanuel Dearman)

The viral equation mathematicians hate. An anecdotal illustration of the challenges of going “viral” with some bit of science or math in a way that also draws people into deeper engagement with the subject. I’m sure that many of you have more experience than me talking about science to the general public on social media (because I have none). Very curious to hear your thoughts on this one.

A review of a new, very realistic board game in which you play a queen bee charged with shepherding the hive through the year. Now I know what to get Jeff Ollerton for his birthday. 🙂

Why you can’t use machine learning to discover the laws of nature.

Heck, before machine learning methods start taking on the laws of nature, maybe they should quit getting trounced at time series forecasting by dead-simple classical statistical methods. Machine learning methods overfit the data. Here’s what should’ve been the graphical abstract.* 😛 See Brian’s old post for discussion of a closely-related result in an ecological context. (ht @noahpinion)

*I’m only half-kidding. I personally don’t find “graphical abstracts” useful (is that just me?). So if an abstract has to have a visual, I would find a meme more entertaining, and maybe even more useful.
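The overfitting point in that blurb can be seen in a toy sketch. This is not from the linked study; it’s just a minimal, made-up illustration (synthetic noisy trend, arbitrary degree-12 polynomial) of how a flexible model can hug the training window and still forecast worse out of sample than a dead-simple baseline:

```python
# Toy illustration: flexible model vs. naive baseline on a synthetic time series.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(40)
y = 0.5 * t + rng.normal(0, 3, size=t.size)  # noisy linear trend

train_t, train_y = t[:30], y[:30]  # fit on the first 30 points
test_t, test_y = t[30:], y[30:]    # forecast the last 10

# "Classical" baseline: naive forecast, just repeat the last observed value.
naive_pred = np.full(test_y.size, train_y[-1])

# Overly flexible model: high-degree polynomial fit to the training window.
coeffs = np.polyfit(train_t, train_y, deg=12)
poly_pred = np.polyval(coeffs, test_t)

def mae(pred):
    return float(np.mean(np.abs(pred - test_y)))

# The polynomial extrapolates wildly beyond the training window, so its
# out-of-sample error dwarfs the naive forecast's.
print(f"naive MAE: {mae(naive_pred):.2f}, polynomial MAE: {mae(poly_pred):.2f}")
```

The high-degree polynomial fits the training data more closely than the naive rule ever could, which is exactly why its extrapolation blows up; that’s the same basic pathology behind the forecasting-competition results linked above.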

14 thoughts on “Friday links: the incredible shrinking George Washington University, The Far Side is coming back (?!), and more”

  1. With 48 votes so far, should I be horrified that 19% of respondents don’t know what the Far Side is, or thrilled that 81% do know what it is? #glass19percentempty #glass81percentfull

  2. Okay, not an ecologist, but we see something that might be similar in Astronomy, where women stay in astronomy as much(ish) as men, and get hired for faculty positions at the same(-ish) rate as men.

    But one study that was mainly about citations, but which assembled a lot of data on people, gender, and publications, also found that, on average, women were publishing fewer papers than men (about 20% fewer). And it’s almost certainly a submitting-papers thing, because the acceptance rate for papers in astronomy (outside of Nature/Science) is pretty much 100% (not quite, and not from cranks, but I’ve authored 30 papers – first or otherwise – and never had a paper rejected, which is pretty normal. I don’t think I know anyone who’s had more than one paper rejected).

    It’s possible it’s an in-bin effect (if say, women are more likely to be instrumentalists, and observers publish more than instrumentalists, this might explain it). But I don’t *think* that’s likely. The only thing I can figure that makes any sense is that women are (on average) ending up at more teaching-focused institutions (and correspondingly publishing and applying for money and such less). But I don’t think the data exists among the astronomy data sets I’ve seen to test it – maybe your job market data has that for ecology?

    • Yes, in ecology, women are more likely than men to end up as faculty in more teaching-intensive institutions (that’s true in all scholarly fields, IIRC). And newly-hired women faculty in ecology publish less than men on average, but that difference goes away almost completely once you control for the teaching-intensiveness of the hiring institution.

    • The figure I’ve seen/heard for acceptance rates at the main astronomy journals is anywhere from 75 to 90%.

      For example, Helmut Abt looked at the numbers for about 250 submissions to The Astrophysical Journal in 2006, and found an acceptance rate of 88%, pretty much identical to what it had been in 1984 for that journal, The Astronomical Journal, and Publications of the Astronomical Society of the Pacific.

      I seem to recall reading a 2005 American Institute of Physics report on women in astronomy and physics which suggested that there was a higher female faculty percentage at institutions that did not have Ph.D. programs in physics, which is potentially consistent with your suggestion. But I don’t know what the current situation is.

      • That sounds about right for ApJ, and I think AJ, MNRAS, and A&A (which publish the vast majority of astronomy research) are around the same. I *think* this makes us very outlier-y as an academic discipline, but the important point is that it makes it very tough for publication rates to be driven by rejection, when papers rarely get rejected.

        If you’ve both seen this result, then perhaps it’s correct, and it’s why women are underrepresented among submitting authors. I’m always very cautious when sociology-type research gives “the expected” result, though. It’s very easy to consciously or unconsciously let your biases and expectations creep into the experiment design (or guide respondents to the answers you’re expecting).
