Also this week: there’s no such thing as very old people, the USDA vs. science, betting on replication, Ecography switches to author-pays OA, diversity vs. groupthink, and more. Lots of good stuff this week!
Michelle Smith and Natasha Holmes at Cornell are recruiting a postdoc who will “take a leading role in the design and evaluation of a new assessment for students’ critical thinking skills in ecology lab and field courses”. I think having that assessment would be really neat (and am on the advisory board for the project), so wanted to make sure folks know about the position! Here’s the link to the full job ad.
Are supercentenarians mostly superfrauds? Usually we think of removing untrustworthy observations from your data as something you do before you do statistics. But here’s an interesting example of using statistics to detect untrustworthy observations. Good fodder for an intro biostats course; I’ve added it to my list of statistical vignettes.
Sticking with Andrew Gelman, this looks fun: predict which of thousands of social and behavioral science experiments will replicate. 200 of the candidate experiments will then be randomly selected for a preregistered replication. If your predictions are accurate enough to place you in the top 500 contestants, you win money! Prediction markets have previously been shown to predict the outcomes of preregistered replications in psychology, but this looks like by far the biggest such prediction market yet. Based on previous prediction market results, my understanding is that “bet against priming studies” is one good rule of thumb. I continue to be fascinated watching the norms and practices of psychologists change so fast.
Speaking of confronting beliefs with data…The Gates Foundation just spent 5 years and $500 million, in three big US school districts and four charter school organizations, to measure teaching effectiveness, help teachers improve, and take teaching effectiveness into account in teacher hiring, firing, and promotion. The result of which was…precisely zero student improvement. Kudos to the Gates Foundation for doing this and not burying or soft-pedaling the results. Anyway, the lesson I take away from this as a teacher is that I should teach as best I can, but not be too hard on myself if whatever pedagogical changes I implement don’t seem to make a difference to student mastery of the material. Because what I do in the classroom is only one among many factors affecting learning outcomes, and often far from the most important one. (UPDATE: Brian read the report, and knows a lot about the broader issues here. See his great comments.)
Lord’s paradox. If you’re testing for a difference between a treatment and a control in terms of their effects on weight gain, should you do an ANCOVA on final weight, with initial weight as a covariate? Or should you first calculate weight gain/loss for each subject and then test for a difference in mean weight gain between treatments? The two approaches don’t always give the same answer! So what’s the right approach? Statistician Stephen Senn does a deep dive into a simple-seeming but tricky inferential problem.
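To see why the two approaches can disagree, here’s a small simulation sketch (all numbers are made-up assumptions, not from Senn’s piece): two groups start at different baseline weights, and neither group’s weight distribution truly changes. The gain-score analysis finds no group difference, while ANCOVA does, because the within-group regression of final on initial weight has slope less than 1.

```python
# Illustrative simulation of Lord's paradox. All numbers are invented for
# illustration: two groups with different baseline means, and NO true change
# in either group's weight distribution.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
b = 0.5  # within-group slope of final on initial (< 1: regression to the mean)

def simulate(group_mean):
    initial = rng.normal(group_mean, 5, n)
    # Final weight regresses toward the group mean; expected gain is zero.
    final = group_mean + b * (initial - group_mean) + rng.normal(0, 2, n)
    return initial, final

init_a, final_a = simulate(60.0)
init_b, final_b = simulate(70.0)

# Analysis 1: difference in mean weight gain between groups (near zero).
gain_diff = (final_b - init_b).mean() - (final_a - init_a).mean()

# Analysis 2: ANCOVA -- regress final weight on intercept, group dummy,
# and initial weight; the group coefficient is the "adjusted" effect.
y = np.concatenate([final_a, final_b])
group = np.concatenate([np.zeros(n), np.ones(n)])
initial = np.concatenate([init_a, init_b])
X = np.column_stack([np.ones(2 * n), group, initial])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
ancova_effect = coef[1]  # roughly (1 - b) * (70 - 60) = 5 here

print(f"gain-score difference: {gain_diff:.2f}")    # close to 0
print(f"ANCOVA group effect:   {ancova_effect:.2f}")  # close to 5
```

Same data, two defensible analyses, two different answers; which one is “right” depends on the causal question being asked, which is exactly Senn’s point.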
ESA President Laura Huenneke on ESA’s increasing efforts to fundraise from members, to support ecology students and for other goals.
Athene Donald on diversity vs. groupthink.
Longtime USDA crop plant physiologist Lewis Ziska has quit, because department officials questioned his work on the consequences of climate change for rice nutrient content and tried to minimize publicity surrounding the paper.
Tell me again what exactly “tenure” is, and whether British academics have it? And how exactly does “tenure” (whatever it is) map onto “job security”? Useful piece both for British and non-British academics. I knew most of this, having done a postdoc in London, but I’m guessing it will be news to many of you. And even I didn’t realize just how much the details varied among British universities. (ht Jeff Ollerton)
Matthew Holden with a bunch of tips for organizing an inclusive academic conference. Includes a new (to me) way to encourage attendees to not only go to the poster session, but visit many posters.
An experimental approach to understanding why statistical methods behave as they do. Unreviewed preprint; looks interesting. (ht Andrew Gelman)
I hadn’t realized that genome sequencing costs stopped plummeting years ago. (ht Marginal Revolution)
Ecography is switching to author-pays OA; here’s the accompanying editorial from the editors. I have a bunch of thoughts and questions:
- It’s striking that the editorial board is going along with this change only reluctantly.
- It sounds like the main (?) motivation for the change is to align the journal with Plan S, on the assumption that Plan S is going to pass in some form. My sense from conversations with people who know better than me is that NSF and NIH are very much not on board with Plan S (anyone heard differently?). And I haven’t heard anything about the Tricouncil agencies in Canada joining Plan S (again, anyone heard differently?). So it looks like there is a bifurcation in scientific publishing coming, with Europe going one way and N. America another. No idea which way funding agencies outside N. America and Europe are leaning.
- It’s interesting that Ecography will offer discounts on the OA fee to people who review for the journal. That’s a single-journal, real-money version of Owen Petchey’s and my old idea of PubCreds. And of course, many others before and after us have independently proposed variants on this idea.
- Ecography also will offer discounts on the OA fee to anyone without the means to pay, though few details are provided. I wonder if that would include me? I have a research grant that could in principle be used to pay OA fees; I’ve used it to do so in the past. But if I say (truthfully!) that at the moment all the remaining money is already committed to pay research costs, would I qualify for an OA fee discount?
- Between the discount for authors without means to pay, discounts for reviewers, and the fact that Ecography is a fairly selective journal, I wonder how the economics of this are going to work out. You can’t collect OA fees from rejected papers, and discounted papers bring in less. Which is why the OA fees at other selective journals are sky high, and why discounts at other selective journals often are designed so that few authors qualify for them. I guess Ecography’s relying heavily on Plan S passing in Europe, so that European authors won’t qualify for discounts due to inability to pay? Or maybe some of the discounts won’t be very big?
- Ecography is only one of multiple Nordic Society Oikos journals, all of which the society contracts with Wiley to publish and distribute. So I don’t quite grok why the editorial says that “subscription income from Ecography is insufficient to cover the costs of publication.” How can they isolate the subscription income and publication costs of Ecography from those of Oikos and the other Nordic Society Oikos journals?
There may now be life on the moon. Specifically, dormant tardigrades. At least when I drop a culture vessel in my lab, the microbes just end up on the floor, not on the moon. 🙂 The linked story contains some eye-opening (to me) information about private space rocket…hobbyists? Entrepreneurs? Visionaries? Cowboys? I’m not sure what the right word is, honestly. Question: is there any governing legal framework here, or is anybody with enough money free to launch whatever payloads they want to the moon? In the comments, tell us: if you could afford it, what would you launch to the moon? 🙂 +1000 Internet Points for funny/interesting answers. (ht @dandrezner)
And finally, ending on a serious note, I know this is way outside our usual domain, and linking to it feels a bit like an empty gesture in the wake of the mass shootings in Dayton and El Paso. But here’s rituals of childhood.
Senn’s point – that the DAG of Lord’s paradox is not sufficient for inferring cause because of pseudoreplication – is spot on. It is also precisely the criticism of the two-species comparison for inferring adaptation from Garland and Adolph: https://www.journals.uchicago.edu/doi/abs/10.1086/physzool.67.4.30163866?journalCode=physzool
Per your comments on Ecography, I think the 4th bullet (waiver policy for lack of funds) will be key.
Most waiver policies are very weak (e.g. they only apply to countries that combined make up less than 1% of the publications – i.e. parts of Africa, Asia, Central America & Oceania but not e.g. most of South America, South Africa, South Asia).
On the other hand Diversity and Distributions has a very robust need based waiver as do many society journals (typically for page charges rather than OA but same basic idea).
I think this question is the crux of how painful or not the growing OA journal base will be for authors.
Agree, the analogy to page charge waivers seems apt.
Or the peer review coupons. One could, for instance, ask authors to commit to reviewing manuscripts in the near future in exchange for the waiver.
Yes. I wonder what would be the enforcement mechanism, though, to make sure people do the reviews they’re promising to do in the near future? Probably you wouldn’t often need enforcement; most people would probably do what they promised. But when they don’t, I guess maybe you ban them from submitting any mss to the journal (even as a co-author), until they do the review they promised to do?
Regarding Ecography’s flip from subscription based to author-funded:
If Nordic Society Oikos is losing money on Ecography, flipping a selective journal to author-funded isn’t going to help the economics, unless Ecography becomes non-selective, much like Ecosphere in the ESA portfolio. This relates to the discussion on Brian’s post on OA being a lower priority for most authors than other considerations.* There’s not much transparency on publishing costs and revenues, even from nonprofit society publishers. Interesting remark in the linked editorial, that
“under Plan S, it should ultimately be the funders, not the individual authors, who pay the Open Access fees.” My belief is that the Plan S architects mostly had the biomedical literature in mind, where study costs are high enough that OA fees may be lost in the rounding error. In contrast, for grad students in ecology and other natural-science ’ologies, the US $3K for OA fees could be as much as their summer field budget.
Thanks again for these good reads.
Re: Ecosphere being unselective, ESA just released data showing that Ecosphere rejects almost 50% of submissions. That seems high for an “unselective” journal.
My guess is that Ecosphere *is* unselective, but that as an unselective journal it attracts a high proportion of technically-flawed submissions, hence the surprisingly high rejection rate.
Re: the people driving Plan S mostly thinking of the biomedical literature, I don’t know but I suspect that’s right.
The math on selectivity and OA charges is nonlinear, so the drop from 100% to 50% is not as big as further drops.
Take a ballpark number of $500 to get a paper through peer review (that number is probably good for the labor – editorial assistants – and their overheads, but not the review software, which would add more). Then OA APC charges have to be greater than $500/acceptance rate (I say “greater” because you have to cover per-published-paper costs too, like production and archiving). So Ecosphere covers review for $1000. But a journal with a 10% acceptance rate would need to charge $5000 to cover that same process. And for many society journals (and, wild speculation, for Ecography) at about 25-33% acceptance, you would need $1500-$2000.
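That arithmetic can be sketched in a few lines (the $500 per-submission review cost is the assumed ballpark from the comment above, not a measured figure):

```python
# Back-of-envelope APC arithmetic: if it costs roughly $500 (an assumed
# ballpark) to take one submission through peer review, an OA journal must
# recover review costs for ALL submissions from accepted papers only, so
# the required APC scales as cost / acceptance_rate (before adding
# per-published-paper costs like production and archiving).
COST_PER_SUBMISSION = 500  # assumed ballpark, in US dollars

def minimum_apc(acceptance_rate):
    """Minimum APC needed just to cover review costs of all submissions."""
    return COST_PER_SUBMISSION / acceptance_rate

for rate in (1.0, 0.5, 0.33, 0.25, 0.10):
    print(f"acceptance rate {rate:>4.0%}: APC >= ${minimum_apc(rate):,.0f}")
```

The inverse relationship is why the required fee climbs steeply as journals get more selective: 50% acceptance needs $1000, 25-33% needs $1500-$2000, and 10% needs $5000 on these assumptions.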
If you start comparing those numbers to actual APC charges (and throw in a healthy chunk of profit at for-profit journals), you will see that this review-cost piece is a healthy chunk of total publication costs, which means selectivity is a central driver of APC charges (as both Chris & I have pointed out before).
Need a better word than “non-selective” for journals such as Ecosphere and PLOS One, since “non-selective” could imply junk. They publish technically sound papers but aren’t supposed to weigh novelty, or vague judgments of advancing the science, very heavily. I was on a paper that recently got rejected by Eco Apps for failing to advance the science but was accepted by Ecosphere without further review. There were no complaints on the technical merits, just that we weren’t advancing the science enough for Eco Apps’ rarefied pages. PLOS One and Ecosphere (and a lot of mid-IF subscription journals, society or otherwise) are certainly selective in reviewing whether the science is technically sound, whether the writing and presentation are appropriate, and, at least for PLOS, whether data are available. So what’s a good shorthand for all the solid-citizen journals that screen for quality but aren’t super selective on notions of novelty or supposed major advances?
Hmm. I kind of feel like most readers know that “unselective” means “unselective regarding novelty/interest/importance”. But maybe not? So yeah, I can see where it might be safer to have a different term.
But I’m having trouble coming up with a good term off the top of my head. “Soundness-only” journals? Ick. Anyone have a better idea?
Brian, I disagree with the idea that the most selective journals need to charge 5k OA fees to be sustainable. I think when you start getting to 10% or less, a lot of papers are being desk-rejected after only a few minutes of consideration by the EIC, and not accumulating the same costs as papers being seriously considered for review. Conservation Letters, which is probably pretty close to a 10% acceptance rate, has an OA charge of $1850 for non-society-members and $1450 for members.
Do you think a desk reject that never even makes it to a handling editor costs $500? I’d suspect that, no matter how selective the journal, most journals accept more than 25% of papers that make it to a handling editor. So no journal needs to be charging OA fees higher than $2,500, no matter how selective.
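This argument amounts to a two-stage cost model. A hedged sketch (all dollar figures and paper counts below are illustrative assumptions, not actual journal data): desk handling is cheap, full review is expensive, so a journal’s overall acceptance rate matters less than the acceptance rate among papers that actually get reviewed.

```python
# Two-stage cost model for a selective journal: every submission gets a
# cheap desk look, only a subset gets costly full review, and both costs
# are recovered from accepted papers. All numbers are illustrative
# assumptions, not actual journal costs.
def required_apc(n_submitted, n_reviewed, n_accepted,
                 desk_cost=50.0, review_cost=500.0):
    """APC needed to cover desk handling of every submission plus full
    review of the subset sent out, spread over accepted papers only."""
    total_cost = n_submitted * desk_cost + n_reviewed * review_cost
    return total_cost / n_accepted

# Hypothetical journal: 1000 submissions, 300 sent to full review,
# 100 accepted -- a 10% overall acceptance rate, but a 33% acceptance
# rate among papers that reached reviewers.
apc = required_apc(1000, 300, 100)
print(f"required APC: ${apc:,.0f}")
```

On these made-up numbers a 10%-acceptance journal needs roughly $2,000 per accepted paper rather than $5,000, because the cheap desk-reject stage absorbs most of the selectivity.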
Matthew – I agree an editorial reject is cheaper than a full review. But it’s not free. Among other things, editorial assistants perform checks on the manuscript for conformance to basic journal standards before an editor ever sees it. Also, most senior editors are paid. At my journals they’re academics and not paid that much. But at other places like Science or Nature they are highly skilled professionals who are paid well. And the submission still has to get uploaded and start its path through software that has to be purchased/developed and maintained; I didn’t include that cost in my $500. There are plenty of ways to argue the number down, but plenty of ways to argue it up as well.
But I agree $5000 is harder to understand than $1500-$2500.
Re the gates foundation report on teaching,
Key issues were:
1) Teaching effectiveness (the whole point) was primarily evaluated by student test scores and principal walk-by evaluations on a rubric. The latter are known to be of limited predictive value, especially because in many states the teacher has to be notified in advance. For test scores, the goal was to estimate how much a teacher elevated test scores (not raw end-of-year scores), which is a statistically challenging problem. Obvious alternatives like peer, student, and parent evaluations were used some, but given very little weight in the overall score (0-15%). An example from outside the Gates system found that the scale used for merit pay rated 98% of teachers effective or highly effective, thereby undermining the whole notion of merit pay. So: if you can’t assess something with high consistency, how can you manage for it?
2) Only about 1% of teachers were fired. Meaning all this teacher evaluation primarily affected only additional professional development and merit-pay incentives. That’s not going to hugely change the quality of teachers in 5 years (unless you think a little on-the-job professional development can achieve much more than 4 years of getting a teaching BA and an internship).
The one conclusion NOT to draw from the Gates report is that quality of teachers we put in classrooms should be a low priority. Major meta-analyses show that aside from student home-life, teacher effectiveness is about the largest factor, and is a much larger effect than curricular or pedagogy innovations. The conclusion one SHOULD draw from the Gates report is that evaluating teachers and then retaining good teachers and shedding bad teachers is much harder than one might think.
Great comment, will update the post to point to it. That’ll teach me to just link to something based on reading the summary.
FWIW, the data from Hattie that I cited in the last paragraph largely confirm your take-home for one’s own teaching. It basically says effect size of student background >> teacher quality >> pedagogy techniques & curriculum.
So the “failure” of a new pedagogical technique not showing substantial test score improvement is to be expected. You should just focus on being a great teacher (knowledgeable, excited, kind, motivating) which I know you already are. And one should innovate pedagogy & curriculum because it keeps you fresh and engaged, not because it is going to have large direct effects for students. The latter doesn’t appear to be how education works.
The Gates program didn’t tackle the wrong problem; short of fixing poverty and inequality, they were tackling the right problem (to a much greater degree than, e.g., when they got involved in the common core or class or school size). It is just that the problem they took on is extraordinarily recalcitrant in modern US educational systems, even more so in the large urban district systems they were working in.