Friday links: adverbs loom large in science writing (unsurprisingly), and more

From Jeremy:

Quantifying the use of adverbs (and adjectives) in scientific abstracts. Curious which journal publishes the most “remarkable” results (at least according to the authors of its abstracts)? Or the most “unfortunate” ones? Ever wonder which adverb is used most often to start an abstract? Want to gape in awe at some of the words that scientists have converted to adverbs by sticking “ly” on the end? Click through for the answers! (HT Chris Klausmeier)

This is old, but I forgot to link to it at the time. And it’s still timely, because we’ve been talking a lot about how easily our statistical inferences can be compromised (e.g., by conflating exploratory and hypothesis-testing analyses). Steve Walker has an old post on this problem, which can be broadly termed “model selection bias”. There are data from physics on how serious it is: over the years, the 98% confidence intervals that physicists have put on the values of fundamental physical constants like the speed of light have failed to contain what is now known to be the true value 20-40% of the time. That is, those “98%” confidence intervals were really 60-80% confidence intervals. As Steve suggests, the problem seems likely to be even worse in ecology. But there may be a solution: Steve points to a recent paper in theoretical statistics claiming to derive a correction for all forms of model selection bias. That is, it tells you exactly how much to widen your confidence intervals (or increase your P-values) to correct for the model selection bias involved in your study. As Steve notes, this would be completely game-changing if it worked, which in itself is a reason to be skeptical that it can work. Unfortunately, the paper is highly technical (if the math is beyond Steve, it’s definitely beyond me). Any mathematically inclined reader care to take a shot at “translating” (and evaluating) it for the rest of us?
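To give a feel for the problem, here’s a quick simulation sketch (my own toy example, not from Steve’s post or the paper): ten candidate predictors, none of which actually does anything, screened for the one with the biggest t-statistic. Report that winner’s naive 95% confidence interval as if you’d planned to fit that model all along, and its actual coverage drops to roughly 60%.

```python
# Toy illustration of model selection bias (my sketch, not the paper's method).
# All true slopes are zero, but we screen 10 predictors, keep the one with the
# largest |t|, and report its naive 95% CI. Coverage falls well below 95%.
import numpy as np

rng = np.random.default_rng(42)
n, p, nsim = 50, 10, 2000
z = 1.96  # normal approximation to the t critical value
covered = 0

for _ in range(nsim):
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)  # pure noise: every true slope is zero

    best_t, best_ci = -1.0, None
    for j in range(p):  # "model selection": screen each predictor, keep the winner
        x = X[:, j]
        slope = (x @ y) / (x @ x)  # OLS slope, no intercept
        resid = y - slope * x
        se = np.sqrt((resid @ resid) / (n - 1) / (x @ x))
        t = abs(slope) / se
        if t > best_t:
            best_t = t
            best_ci = (slope - z * se, slope + z * se)

    covered += best_ci[0] <= 0.0 <= best_ci[1]

print("Nominal coverage: 95%")
print(f"Actual coverage of the selected predictor's CI: {covered / nsim:.1%}")
```

The particular numbers don’t matter; the point is that the selection step invalidates the nominal coverage, which is exactly what a general correction would have to undo.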

The NSF Division of Environmental Biology blog crunches the numbers on award size and duration, in the process dispelling several “myths” about what sort of size or length of award the NSF DEB likes to fund.

Terry McGlynn asks whether one side effect of changes in how people work and parent is a decline in “field station culture”. He suggests that the days when (almost invariably male) senior researchers could head off to a field station for months at a time while leaving their spouses home with the kids are mostly gone. His post isn’t a lament–he himself prefers to spend only brief periods at La Selva Biological Station in order to spend more time with his family, even at some cost to his research. He’s just suggesting that the way field stations worked in the past may have been a reflection of the times, and that those times are gone (with some exceptions he discusses).

Writing in Slate, statistician Andrew Gelman on how scientific practices widely considered to be reasonable turn science into “a sort of machine for producing and publicizing random patterns.” He uses a paper from psychology as an example, but I’m sure you could think of similar examples from your own field.
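To see the mechanism Gelman describes in miniature, here’s a little toy simulation (mine, not his): the “effect” being studied is pure noise, but the analyst has a few defensible-sounding choices about which outcome to test and whether to look within subgroups. Reporting whichever comparison comes up significant inflates the false positive rate well above the nominal 5%.

```python
# Toy "forking paths" sketch (my own illustration, not Gelman's example).
# The data are pure noise, but the analyst gets several reasonable-sounding
# analysis choices and reports whichever comparison "works".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, nsim = 30, 2000
hits = 0

for _ in range(nsim):
    # two outcome variables for a "treatment" and a "control" group; all noise
    treat = rng.standard_normal((n, 2))
    ctrl = rng.standard_normal((n, 2))
    # an arbitrary subgroup label (say, sex) for each group
    g_t = rng.integers(0, 2, n)
    g_c = rng.integers(0, 2, n)

    pvals = []
    for k in range(2):                      # fork 1: which outcome to analyze
        pvals.append(stats.ttest_ind(treat[:, k], ctrl[:, k]).pvalue)
        for s in (0, 1):                    # fork 2: overall, or within a subgroup
            pvals.append(stats.ttest_ind(treat[g_t == s, k],
                                         ctrl[g_c == s, k]).pvalue)

    hits += min(pvals) < 0.05               # report whatever came out significant

print(f"Chance of at least one p < 0.05 from pure noise: {hits / nsim:.1%}")
```

Six mostly correlated tests, none pre-registered, and a substantial fraction of runs hand you something publishable-looking. That’s the “machine” at work.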

Research vs. teaching in universities. Has the former been improving at the expense of the latter? In Australia at least, one can argue that they’ve actually both been improving hand in hand. Any Australian readers care to comment? Anyone know of similar datasets for other countries? (HT Economist’s View)

3 thoughts on “Friday links: adverbs loom large in science writing (unsurprisingly), and more”

  1. Another important paper that I’m currently trying to digest, in the same vein as the (broken) linked paper on error: http://www.stat.columbia.edu/~cook/sander2.pdf. It attempts to improve interval estimation by modeling sources of error other than the i.i.d. random error modeled by OLS. These other sources of error not only widen the intervals but also (potentially) shift them (because of biased estimates).

    • After a probably too-brief scan, Steve’s linked paper seems to address a much more limited kind of error than the Greenland paper I linked to (that is, the Greenland paper attempts to model ALL sources of error, not simply the error arising from model selection).
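    A toy sketch of what that broader error modeling looks like (my own gloss, not Greenland’s actual machinery, and all the numbers are hypothetical): treat a systematic error source, like confounding bias, as a distribution informed by outside knowledge rather than assuming it’s zero, and propagate it through. The resulting interval is both wider and shifted relative to the conventional one.

```python
# Toy Monte Carlo sensitivity analysis (illustrative only; hypothetical numbers).
# Conventional CIs model only i.i.d. sampling error; here we also model a
# systematic bias term, which both widens AND shifts the interval.
import numpy as np

rng = np.random.default_rng(1)
est, se = 0.50, 0.10          # hypothetical conventional estimate and its SE
ndraw = 100_000

# conventional 95% interval: random (sampling) error only
naive = (est - 1.96 * se, est + 1.96 * se)

# suppose external knowledge suggests the estimate is biased upward by
# roughly 0.1, with uncertainty of about 0.1 (made-up values for illustration)
bias = rng.normal(0.1, 0.1, ndraw)
sampling = rng.normal(0.0, se, ndraw)
adjusted = est - bias + sampling
lo, hi = np.percentile(adjusted, [2.5, 97.5])

print(f"Random-error-only interval: ({naive[0]:.2f}, {naive[1]:.2f})")
print(f"Bias-adjusted interval:     ({lo:.2f}, {hi:.2f})")
```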
