Also this week: wildlife photographer of the year, the history of peer review, modeling scientific progress, a rough month for machine learning, tell me again what a statistical population is, quantitative theory vs. replicability, the greatest 19th century scientist you’ve never heard of, and SO MUCH MOAR. Stick around until the bitter end to read Jeremy’s half-silly, half-serious idea for improving graphical abstracts.
I’m months late to this, but here’s Anne Marie Whelan’s and David Schimel’s very detailed report on author gender and peer review outcomes at ESA journals. Largely encouraging results, very similar to those reported last year by Charles Fox and colleagues for BES journals and some evolution journals. One result that puzzles and bothers me is that women continue to be substantially underrepresented among all submitting authors to ESA journals (and among first authors too), relative to their representation among ecology graduate students, postdocs, and faculty. IIRC, that same result holds at BES journals and Evolution too. I don’t know why that is and look forward to learning from your comments.
Some of the best images from the Wildlife Photographer of the Year competition.
The latest on the efforts of Plan S funders to assuage concerns that a mandatory switch to open access publication will damage scientific societies by eliminating subscription revenues.
Charley Krebs makes a very good point about randomly sampling from the population of interest, one that I try to drum into my intro biostats students:
Defining the ‘population’ under consideration should perhaps be rule # 1, but that is usually left as a vague understanding in many statistical studies. As an exercise consult a series of papers on ecological field studies and see if you can find a clear statement of what the ‘population’ under consideration is.
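Krebs's point can be made concrete with a small simulation (hypothetical numbers throughout, my own toy example): suppose the stated population is "all quadrats in the study area," but sampling is restricted to the easy-to-reach quadrats near roads, where density happens to differ. The convenience sample gives a precise answer to a question nobody asked.

```python
import random

random.seed(42)

# Hypothetical population: 10,000 quadrats in a study area.
# Plant density happens to be higher near roads (the easy-to-reach quadrats).
population = []
for i in range(10_000):
    near_road = i < 2_000  # 20% of quadrats are near a road
    density = random.gauss(30 if near_road else 10, 3)
    population.append((near_road, density))

true_mean = sum(d for _, d in population) / len(population)

# A random sample from the *whole* stated population...
random_sample = random.sample(population, 200)
random_mean = sum(d for _, d in random_sample) / 200

# ...versus a convenience sample drawn only from roadside quadrats.
roadside = [q for q in population if q[0]]
convenience_sample = random.sample(roadside, 200)
convenience_mean = sum(d for _, d in convenience_sample) / 200

print(f"true mean: {true_mean:.1f}")
print(f"random-sample estimate: {random_mean:.1f}")
print(f"convenience-sample estimate: {convenience_mean:.1f}")
```

The random sample lands near the true mean; the convenience sample is badly biased, not because the sample is small or the measurements are noisy, but because it was drawn from a different population than the one named in the question.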
George Washington University plans to shrink undergraduate enrollment by 20% over the next five years, even though its enrollment and finances currently are healthy. The stated rationales are to (i) improve academic, residential, and social experiences for the remaining students, particularly those who aren’t currently well-supported, and (ii) expand STEM programs. I’m speculating here, but I wonder if the university isn’t also trying to get out ahead of projected declining demand for higher education in the US. I also wonder if it’s trying to get out ahead of the trend for fewer students to enroll in humanities and social science fields that aren’t viewed as leading directly to a well-paying career. The university denies the changes are a way to climb the US News & World Report rankings, but it’s hard not to wonder about that, too. No announcement has yet been made as to how the university plans to address the lost tuition revenue, but it has refused to rule out faculty layoffs.
This looks super-fun: a toy model of the dynamics of a scientific community trying to discover how the world really works. Among the interesting conclusions are that replicability of results is insufficient to guarantee convergence on the truth, and that “epistemic diversity” among scientists aids the scientific process. I can imagine lots of ways to extend this model. (ht Dan Simpson, who comments).
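For flavor, here is a minimal sketch in the spirit of such models (my own toy, closer to a classic Zollman-style bandit model than to the linked paper's setup): scientists choose between two rival theories, pool all of their results, and greedily work on whichever theory the pooled evidence currently favors.

```python
import random

random.seed(0)

# Two rival theories with unknown true success rates; theory B is in fact better.
TRUE_RATES = {"A": 0.5, "B": 0.6}
N_SCIENTISTS = 10
N_ROUNDS = 500

# Pooled community evidence: [successes, trials] for each theory.
counts = {t: [0, 0] for t in TRUE_RATES}

def estimate(theory):
    """Laplace-smoothed estimate of a theory's success rate."""
    s, n = counts[theory]
    return (s + 1) / (n + 2)

for _ in range(N_ROUNDS):
    for _ in range(N_SCIENTISTS):
        # Each scientist works on whichever theory currently looks best.
        choice = max(TRUE_RATES, key=estimate)
        success = random.random() < TRUE_RATES[choice]
        counts[choice][0] += success
        counts[choice][1] += 1

print({t: round(estimate(t), 3) for t in TRUE_RATES})
print("community settled on:", max(TRUE_RATES, key=estimate))
```

Because everyone sees everything and follows the current leader, a run can lock in on the worse theory after an unlucky early streak. Premature consensus of that sort is exactly the failure mode that "epistemic diversity" is claimed to mitigate.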
I learned a lot from this news piece about the rapidly growing private sector and government demand not just for climate forecasts, but for evaluations of climate vulnerability and adaptive capacity. I hadn’t realized it had already grown so far beyond insurance companies forecasting the odds of extreme events and disasters. My question is: how do the end users evaluate the accuracy and value of the information and advice they’re paying for? There are areas of life in which consultants are infamous for not being worth their high cost; how do end users make sure “climate solutions” isn’t one of them? The linked article addresses this a bit at the end, but I found myself wanting to hear more.
The history of peer review. Very interesting interview, even though I knew the broad outlines already. Two choice quotes from the interview subject, Melinda Baldwin:
I think my peer review project started when I discovered something really unexpected about Nature: that it hadn’t employed systematic external refereeing until 1973!
[P]eer review also helps secure a certain amount of autonomy for the scholarly community, which I think is an underappreciated function…
A reader of Andrew Gelman’s blog, Sandro Ambuehl, asks a very interesting question:
I was wondering whether there’s…any empirical evidence on whether empirical investigations based on precise theories that simultaneously test multiple predictions are more likely to replicate than those without theoretical underpinnings, or those that test only isolated predictions.
My first instinct is to say, yes, there is, but it comes from physics and chemistry. For instance, general relativity makes various quantitative predictions, has passed all the tests to which it’s been subjected, and has repeatedly passed the same tests. That’s a much better track record of replicability than a typical vaguely-theorized hypothesis in psychology or ecology or whatever. But if the question is really about comparisons among studies within the same field, rather than across fields, I don’t know the answer and will need to think more about it. Related old post. And another.
Tim Harford’s list of good popular books on statistical bullshit. I have not read them, but Tim Harford has and he’s a sharp guy, so I bet it’s a good list.
I’m always interested in essays by people who’ve changed their minds or developed mixed feelings about some core idea or belief of theirs. Even when it’s about an idea or belief I don’t share. I just like trying to “get” where people different than me are coming from, even though I’m sure I don’t always manage it. And purely anecdotally, I find that people who’ve changed their minds or have mixed feelings about a belief or idea often are good at helping others “get” why someone might find that belief or idea attractive. That preamble explains why I found this interesting. (ht @mattyglesias)
THE FAR SIDE IS COMING BACK!!!!11!!11!! Well, apparently. Maybe. The announcement seems to be deliberately cryptic. I wonder how many people reading this are too young to know what the Far Side even is. Let’s find out!
A former grad student in theoretical physics shares his experience in grad school. Thoughtful, articulate, brave, and moving piece, which I’m sure will speak to people in other fields besides theoretical physics. Though whether it mirrors your own experiences will of course be a very personal matter. (ht Emanuel Dearman)
Charles Sanders Peirce, the greatest(?) 19th century scientist and thinker you’ve probably never heard of. Very philosophical, but there are some bits of interest to the practicing scientist. For instance the suggestion that Peirce’s ideas correct misinterpretations of recent experiments on animal cognition. (ht Emanuel Dearman)
The viral equation mathematicians hate. An anecdotal illustration of the challenges of going “viral” with some bit of science or math in a way that also draws people into deeper engagement with the subject. I’m sure that many of you have more experience than me talking about science to the general public on social media (because I have none). Very curious to hear your thoughts on this one.
A review of a new, very realistic board game in which you play a queen bee charged with shepherding the hive through the year. Now I know what to get Jeff Ollerton for his birthday. 🙂
Heck, before machine learning methods start taking on the laws of nature, maybe they should quit getting trounced at time series forecasting by dead-simple classical statistical methods. Machine learning methods overfit the data. Here’s what should’ve been the graphical abstract.* 😛 See Brian’s old post for discussion of a closely-related result in an ecological context. (ht @noahpinion)
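For intuition about the overfitting point, here is a toy illustration (my own, not the linked study's actual setup, which pitted real ML models against classical methods like exponential smoothing): an overparameterized polynomial fit to a short noisy series gets beaten out-of-sample by a plain linear trend.

```python
import numpy as np

rng = np.random.default_rng(1)

# A short noisy series with a simple linear trend.
t = np.arange(30)
y = 0.3 * t + rng.normal(0, 2, size=30)
train_t, test_t = t[:20], t[20:]
train_y, test_y = y[:20], y[20:]

# Overparameterized model: degree-12 polynomial fit to 20 points.
# It chases the noise in the training window.
poly_coeffs = np.polyfit(train_t, train_y, deg=12)
poly_pred = np.polyval(poly_coeffs, test_t)

# Dead-simple classical method: a linear trend.
lin_coeffs = np.polyfit(train_t, train_y, deg=1)
lin_pred = np.polyval(lin_coeffs, test_t)

def mse(pred):
    return float(np.mean((pred - test_y) ** 2))

print("degree-12 polynomial test MSE:", round(mse(poly_pred), 1))
print("linear trend test MSE:", round(mse(lin_pred), 1))
```

The high-degree fit hugs the training data, then extrapolates wildly outside the training window, so its test error dwarfs that of the boring linear trend. The real forecasting comparisons are more subtle than this, of course, but the basic failure mode is the same.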
*I’m only half-kidding. I personally don’t find “graphical abstracts” useful (is that just me?). So if an abstract has to have a visual, I would find a meme more entertaining, and maybe even more useful.