Friday links: View From the Park lives (!), gifs vs. the government shutdown, and more

Also this week: data vs. campus free speech, no sign yet of peak research university, and more.

From Jeremy:

VIEW FROM THE PARK IS BACK! Well, for one month anyway. It’s John Lawton’s reflections on his famous monthly column, 20 years on. John, if you’re reading this: I devoured View From The Park in grad school. My ambition for this blog was that it would be a modern-day View From The Park. I hope Dynamic Ecology has lived up to your example.

Measuring how well US colleges and universities enroll low-income students. Excellent commentary here.

Time series data on various measures of “free speech controversies” on US college and university campuses. Useful context for the next time a free speech controversy hits the news. Note that I’m not familiar with the organization that compiled the data, so I can’t vouch for its quality. FWIW, the author of the linked post looked for and corrected a few errors in the dataset. Note as well that I’m wary of over-interpreting time series counts of rare extreme events (here, things like speaker disinvitations and faculty being fired for political speech). There’s probably more than one plausible story one could spin about the linked data. Finally, data on other variables–some of which would be harder to compile–would be useful to have, to get a more complete picture. But still, some context is better than none. Perhaps the most useful take-home message to draw from these data is that free speech controversies extreme enough to make the news remain quite rare. Which doesn’t make them unimportant. But in general, it’s easy to read too much into changes in the frequency of rare events.
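(A toy illustration of that last point–not anything from the linked post, just a made-up example: assume newsworthy controversies arrive at a low, constant rate, say about 5 per year nationwide. Plain Poisson sampling noise then produces year-to-year swings that can look like trends.)

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical setup: "newsworthy" controversies occur at a constant
    # underlying rate of ~5 per year; nothing about the process changes.
    true_rate = 5
    counts = rng.poisson(true_rate, size=10)   # ten simulated years of counts
    print(counts)

    # Year-over-year percent changes look dramatic even though the true rate is flat.
    pct_change = 100 * np.diff(counts) / np.maximum(counts[:-1], 1)
    print(np.round(pct_change))

Run that a few times and you’ll regularly see apparent “doublings” or “collapses” that are nothing but sampling noise–which is the sense in which year-to-year changes in counts of rare events are easy to over-read.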

The 2018 Carnegie classifications–widely used by US colleges and universities, and others, to define groups of “peer” institutions–are out. The definitions of the classes have changed since 2015. Because of the new definitions, and because of institutions changing their degree offerings, there are now many more “research universities” than there used to be even a decade ago.

Always good to see college students making good life choices. 🙂

From Meghan:

An entertaining and informative twitter thread about what life was like for one US National Science Foundation program officer on the first day back at work after the shutdown.

A new preprint reports a study finding that the number of scale points used for student evaluations of teaching influences the size of the gender gap: there’s a notable gap on a 10-point scale but not on a 6-point scale. I haven’t read the study carefully yet, but it’s worth noting that it was a pre-registered study:

Before conducting the study, we preregistered these predictions as well as the planned sample size, the exclusion criteria, and the intended statistical analyses (https://aspredicted.org/blind.php?x=r6hz6x)

Here’s a twitter thread by one of the authors explaining the findings.

2 thoughts on “Friday links: View From the Park lives (!), gifs vs. the government shutdown, and more”

  1. “there’s a notable gap on a 10-point scale but not on a 6-point scale”

    I read parts of the article, and many of the points that first come to mind, such as attitudes changing over time, seem to have been addressed pretty well.

    Just the same, the idea that simply changing the ratings scale has a dramatic impact on evaluations of competence seems preposterous. It’s bothersome that the researchers put it forth so confidently (“demonstrate…can powerfully affect …”), without the least acknowledgement of how unlikely it is. My inclination is to suspect some other, unidentified source of systematic bias, which I would presume is there until an exhaustive search and multiple additional studies have been done.

    One potential approach to verification would be to seek instances that should show the opposite effect – what might happen if an institution went from six choices to ten choices? Or can we compare institutions that offer six choices with those that offer ten choices and identify an effect? If this is real, we should be able to.

    Forgive me for thinking that the authors would suggest approaches for further verification of such a surprising outcome. They don’t. Instead, they confidently assert the implications for evaluations (“showing that rating systems are also important drivers of workplace inequalities”). That’s hugely premature. This has to be validated through multiple studies and approaches before it’s accepted as fact, much less implemented. Suppose this practice is implemented, but there *is* an unidentified systematic bias in this study that’s inadvertently *not* repeated during implementation – then the apparent jump in evaluated competence disappears, and the lower ratings have to be viewed as real, since the method is “known” to give reliable results.

    IMO the confidence with which these researchers assert their conclusions is emblematic of the larger problem in social science that Gelman is targeting. The efforts to control for other factors are excellent at face value, but with such a vast array of potentially complicating factors, most of which are almost certainly unidentified, it’s hard to imagine that this study is the final word, and just as hard to fathom how the researchers could put it forth with such confidence.
