Also this week: signing science, why Stephen Heard should’ve been a philosopher, gender and racial diversity of economics seminar speakers, “that baboon does not love magic”, and more. Lots of good stuff this week!
From Jeremy:
Years ago Meghan suggested two-stage peer review: review of the introduction and methods before the study is conducted, review of the results and discussion after the study is conducted. Some Nature journals have now implemented a version of this in the form of “registered reports“. The main difference from Meghan’s suggestion is that registered reports drop the second stage of review: the journal commits to publishing the paper based only on review of the proposed introduction and methods.
Interesting story on how HHMI and the Chan Zuckerberg Initiative are scaling up the Meyerhoff Scholars Program, a very successful–and very expensive–program to prepare minority undergraduate students for careers in academic research. The program’s success seems to reflect both the kind of students it enrolls–it targets high-achieving students already interested in academic careers–and the support and opportunities it provides those students. I bet y’all have a…wide range of opinions on whether that combined approach–selectivity plus support–is a Good Thing or a Bad Thing. Speaking personally, and emphasizing that I know nothing about the Meyerhoff Scholars Program beyond what’s in the linked article, I think the combined approach makes sense, given the goals of the program.
The story of the deaf British undergrad who has invented over 100 new British Sign Language signs for technical scientific terms.
Time series data on the gender and racial diversity of invited economics seminar speakers at 11 US colleges and universities. Since 2014, the percentage of women among economics seminar speakers has bounced around between 20% and 30% (i.e. the same as, or a touch higher than, their representation among full-time US economics faculty), with no long-term trend. The percentage of black/Hispanic/Native American speakers is very low (even lower than their representation among all full-time US economics faculty), but seems to be increasing slowly. Interesting project; I’d be curious to see a similar exercise in ecology. In fact, I’m so curious I’ve already done it! Keep your eyes out for a future post asking you to guess the results… (ht Marginal Revolution)
Am Nat’s instructions to its editorial board members. You should read this even if you’ll never submit to Am Nat. I wish more editorial boards worked this way. Am Nat’s editors (I’m one) are thoughtful and act like additional reviewers, even on papers that don’t get sent out for review.
The 2019 Shanghai global rankings of universities in various subject areas have been published. Here are the ecology rankings; the University of Montpellier is #1. I link to this mainly to prompt y’all to share your thoughts on these ranking exercises. How well do the Shanghai ecology rankings line up with your own mental ranking of the world’s “top” ecology programs? Are you surprised by how highly many European universities rank (contrast the very US-dominated sociology rankings)? Perhaps more importantly, do you even have a mental ranking of the world’s “top” ecology programs? If so, do you think it’s useful, for any purpose? For instance, in some social science fields program reputation matters a lot for faculty hiring decisions. But that’s not the case in ecology, so is there some other useful purpose that program rankings serve in ecology? (ht Org Theory)
While Stephen Heard has been fighting valiantly to get scientists to allow single whimsical footnotes into their papers, philosophers have been writing entire papers intended to be sung to the tune of “My Favorite Things”. I leave it to you to decide if this is a point in favor of science, or a point in favor of philosophy. 🙂 (ht @kjhealy)
The Harvard prof and the paternity trap. I hesitate to link to this. It’s a wild story and I worry that people might try to draw broader lessons from it. (ht @dandrezner)
This week in opinions that I share, that are obviously, inarguably correct. 🙂 (ht @dandrezner)
And finally, this week in public science outreach:
🙂 (ht seemingly everyone on the intertubes)
Any purposes? Getting people to comment under a blog post worked ;).
No, I do not have a mental list and I do not see the usefulness. If I see this correctly, it is more or less just the number of publications in a certain topic.
But if someone had asked me where a lot of good ecology (outside Germany) is done, my first two responses would be UC Davis and Wageningen, and then probably the CNRS & INRA institutes located in Montpellier. If only German universities are considered, the ranking is more or less consistent with the reputation for ecology I remember from a few years ago when I was a student.
Well, Am Nat is just reiterating what should already be a standard of operation. What they and other journals should be aiming for is to suppress cognitive bias. An effective way of doing this is triple-blinding, in which the editors, reviewers, and authors have no knowledge of one another, including authors’ affiliations. The submission would have a separate box for authors’ names and institutions. These would not be made visible to the editors (including chief editors) until the final decision has been made. Unknowns make people cautious and attentive. Let’s practice that for a decade and see what happens.
I’m not stating something new here, of course. What I don’t understand is why journals are not moving in that direction, instead of parroting those ineffective pointers over and over again.
Looking forward to a link to the evidence that this would make a difference.
I’m also unclear how papers can be assigned to reviewers if the editors don’t know the reviewers’ identities. You aren’t suggesting assigning reviewers at random, or via an algorithm, are you?
Here’s the evidence on the effects of various author attributes on peer review outcomes at Functional Ecology and other EEB journals, along with reviews of the evidence on the effects of double-blind review at other journals. I don’t see any problems here that triple blinding would fix. Note that these studies have a lot of statistical power to pick up even very small effects:
https://dynamicecology.wordpress.com/2015/11/18/gender-and-peer-review-at-functional-ecology/
https://onlinelibrary.wiley.com/doi/pdf/10.1002/ece3.4993
Functional Ecology also is running a randomized experiment on double-blind review, which should be very useful: https://besjournals.onlinelibrary.wiley.com/doi/10.1111/1365-2435.13269
Can I be controversial? I like the HHMI program. But the real hope for minorities has to come from a much broader effort.
Under-repped groups would benefit substantially if their superstar athletes and musicians would stop selling them music and sports camps and start pushing education. LeBron & Russell are doing the right thing, but they could use a lot more help, especially from entertainers. IMO there’s a sense abroad in these groups that their options are limited. Celebs need to change that.
@Jim
You are not controversial. I’ve heard that line of reasoning before. Caring about others has nothing to do with your membership or perceived membership in the society. Things are the way they are because of the common attitude that it’s their problem and the way we uncritically judge people who are less fortunate than us.
People abroad are no dunces. Have you read the news lately? Isn’t psychological torture a part of the problem?
Jim, are you seriously suggesting that sports camps and concerts are important causes of historical black underrepresentation in academia? Because come on.
“are you seriously suggesting that sports camps and concerts are important causes of historical black underrepresentation in academia?”
I’m claiming – not suggesting – that the celebrities that are idolized by young people in underrepresented communities have done little or nothing to encourage or support education. And yes, I do think that’s a significant contribution to underrepresentation of some communities throughout society, not just in academia.
There’s nothing wrong with music or sports per se. They build character and cooperation. I play guitar. I love music. But if your family has very limited resources, investing time and money in sports is an extremely poor investment. Chances are almost zero that it will pay off. Even for people who have reduced opportunities, education is a much better investment of both time and money.
The celebs that these kids look up to need to tell them that: sports/music? Cool, fun. Education? An absolute necessity.
LeBron has probably done more than anyone in the NBA or NFL to encourage people to get educated. That’s what these communities need.
Jim, with respect, drop it. I don’t know what your intent is, but at a minimum you are skirting the edge of some very nasty stereotypes. I’m not going to have that in our threads. Further comments along these lines will be blocked.
My intent is to help disadvantaged people. I’m not sure what your intent is in misguiding them.
Ok, we’ve had enough. We’ve given you a lot of rope over the years to make unproductive comments. But in this thread you’ve stepped over the line and then pushed back after being informed you’ve stepped over the line. So you’re banned. Don’t ever comment again.
@Jeremy
My mistake. It’s not really important if the editors know the identity of the reviewers.
“Looking forward to a link to the evidence that this would make a difference”
Absence of evidence is not evidence of absence. You and I can’t pretend that biases from pedigree, or the lack of it, do not creep into peer review processes. My point is, there are cases where manuscripts have been hastily dismissed for lack of “novelty” etc. because the authors are from lesser-known institutions or labs, while others by comparison generally have an easier time because of who they are or the institutions they are affiliated with. I call it “cognitive bias” because it’s not something that we are conscious of, and being human is a part of the problem.
I note the post and those studies. My point is not along gender lines. It’s about fairness for all. Triple blinding will fix the cognitive bias that may result in hasty dismissal, which is becoming more common. For instance, some journals claim that they accept about 20% of the manuscripts that they receive. I find it hard to believe that 80% of manuscripts submitted are not worthy, and is there an objective way of deciding who and what is worthy of being read? Removing those identifiers that influence editors and reviewers would make things a little fairer, or at least give people a sense of fairness. That is, every paper would have a blind chance of being accepted or rejected.
Off-topic: The “Nominalistic things” paper is flawed. The scene is c. 1937, yet Maria speaks of “towering trees by Galadriel planted” – even though Galadriel was created much later! Wonder if any of the reviewers had noticed this. 🙂
Heh.
On the Shanghai ecology ranking, I was also surprised to see in 19th position Montpellier III, which is an arts and social science university: https://en.wikipedia.org/wiki/Paul_Val%C3%A9ry_University,_Montpellier_III. But then that university is also part (via one research unit) of CEFE, the large ecology research center with massive publication output.
I don’t maintain a mental ranking of ecology programs, and I think that universities using that ranking to attract undergrads would be rather misleading; research output and teaching quality do not necessarily relate (at least in my experience of European universities). For PhD candidates wanting to pursue an academic career, these rankings might be useful to ensure that you get a decent enough load of publications to get you started.
Yes, I’m sure that’s the right explanation for Montpellier III’s ranking.
I’m sure you’re right that many prospective grad students do use rankings of some sort in identifying candidate programs to apply to. Profs working in programs with strong reputations attract more inquiries from prospective grad students, and inquiries from prospective grad students with stronger on-paper qualifications.
p.s. Whether prospective grad students *should* care about graduate program ranking, or some correlate of program ranking, is another question. I bet you’d get different answers if you ask different people.
There are several reasons why prospective grad students might care about program strength. In a “strong” graduate program, there might be more opportunities for collaboration and side projects. More strong students with whom you share interests and with whom you can share ideas. There might be more/better funding or funding opportunities. Also more and better backup plans. If you’re in a smaller or less “strong” program, you might worry that if you get along badly with your supervisor, there won’t be anyone else whose lab you could switch into.
All of this is much like the reason why we see clustering in many industries–think Silicon Valley for the tech industry, or Hollywood for films, or New York City for fiction publishing, finance, and banking. So at the broadest-brush level, one might think of program rankings as a way to identify “clusters” of good people working in a given field. And I do think there are good reasons why people working in a given field might want to be at the same place as a critical mass of other people in the same field.
Shanghai Rankings: why even bother giving them oxygen?
Like all rankings, it doesn’t really make much sense to discuss them without also discussing the underlying methodology. Shanghai ranks are based on: 1) alumni and staff winning Nobel Prizes and Fields Medals, 2) number of highly cited researchers in the previous calendar year, 3) papers published in Nature and Science in the prior 5 years (with a pretty silly method for fractional allocation of credit), 4) papers indexed in Science Citation Index-Expanded in the prior year, and 5) the above weighted by number of academic staff (what they call the “per capita academic performance of an institution”).
If anyone thinks those are the best (and only) criteria needed to rank institutions, well then…shrug. I lean towards “meh, bordering on useless.”
The complete methodology is here: http://www.shanghairanking.com/ARWU-Methodology-2018.html
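For what it’s worth, the mechanics of a ranking like this are just a weighted sum of indicator scores. Here’s a toy sketch of that idea; the indicator names loosely follow the criteria listed above, but the weights and institution data are entirely made up for illustration (they are not the actual ARWU values):

```python
# Toy sketch of a weighted composite ranking score, in the spirit of the
# Shanghai/ARWU methodology. Weights and data below are hypothetical.

# Hypothetical indicator weights (sum to 1.0)
WEIGHTS = {
    "awards": 0.2,          # alumni/staff winning major prizes
    "highly_cited": 0.2,    # highly cited researchers
    "nature_science": 0.2,  # papers in Nature/Science
    "indexed_papers": 0.3,  # papers indexed in citation databases
    "per_capita": 0.1,      # the above scaled by academic staff size
}

def composite_score(indicators: dict) -> float:
    """Weighted sum of indicator scores (each assumed pre-scaled to 0-100)."""
    return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)

# Fabricated example institutions
institutions = {
    "Univ A": {"awards": 80, "highly_cited": 70, "nature_science": 60,
               "indexed_papers": 90, "per_capita": 50},
    "Univ B": {"awards": 40, "highly_cited": 85, "nature_science": 75,
               "indexed_papers": 95, "per_capita": 70},
}

ranked = sorted(institutions,
                key=lambda u: composite_score(institutions[u]),
                reverse=True)
```

The point of the sketch is that the entire ranking is driven by the choice of indicators and weights, which is exactly why the methodology deserves scrutiny: change a weight and the order can flip.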
“Shanghai Rankings: why even bother giving them oxygen?”
I thought I answered that question in the post.
“If anyone thinks those are the best (and only) criteria needed to rank institutions, well then…shrug.”
I don’t think that. I doubt many people do. But for instance, if you follow the link to the commentary on their sociology rankings, you’ll find a smart sociologist saying that the Shanghai rankings correlate pretty well with his own informal sense of the “top” sociology programs. That seems to me to call for some reflection. Like you, I find the methodology behind the Shanghai rankings pretty arbitrary and debatable. But yet, it does seem to capture *something*. So I think it behooves us to reflect on why that is, rather than pretending that it’s not the case, or that it doesn’t matter that it is the case.
As for whether these rankings have any use, well, one use is by prospective grad students, to decide where to apply. Many–far from all, but many–prospective grad students in ecology decide where to apply by identifying “top” programs and then looking for potential supervisors within those top programs. Now, maybe they’re not using the Shanghai rankings to identify “top” programs. Maybe they’re using some other ranking, or asking their advisers to suggest “good” programs, or using their own informal sense of the university’s “overall” reputation (e.g., in Canada, there’s McGill/Toronto/UBC, then every place else is widely regarded as at least one step down). One can certainly bemoan that any prospective grad students go about their searches this way. I bemoan it for purely selfish reasons–I’m sure I’d find it easier to attract graduate students if I was at McGill/Toronto/UBC! But I also have no idea how one would engineer a world in which no prospective grad student ever uses institutional rank/reputation as a first cut when deciding where to apply.