Friday links: what to do when a journal calls you “Miss”, pointless altmetrics, and more (UPDATED)

From Meg:

For American readers: NSF-DEB has issued a new program solicitation for its core programs. There don’t appear to be any real changes — they are keeping the preproposals, and the limit of two preproposals per year.

And, for people who get frustrated about being called by the wrong title (which definitely includes me at times), here is perhaps the perfect response. The last line of this (written in response to an academic being called “Miss” by a journal) is excellent.

From Jeremy:

Bit late on this, but still wanted to link to it. Bob O’Hara has a nice post asking what’s the point of altmetrics? “Impact”, “influence”, and “quality”, important as they all are, are just not the sorts of things that one can easily summarize in one number. Nor does that mean they can be summarized in several different numbers instead. I love numbers, but there are lots of things in life that you can’t put numbers on, not for lack of the right data or the right summary metrics, but because they’re just not the sort of thing you can put numbers on. At least not good enough numbers to be worth caring all that much about. My love for my family and friends, my worth as a person–and the quality and influence of my science. And no, imperfect or incomplete numbers are not necessarily better than no numbers at all–they could well be worse, for various reasons. (For a contrary view, here’s the website of a recent PLoS altmetrics development workshop.) (UPDATE: My remarks on altmetrics are deliberately provocative, but on re-reading the post I think they’re also a bit unclear and imprecise. Let me try to clarify. What Bob and I are basically questioning is what “latent variable”, if any, citation metrics are measuring, either individually or collectively. It’s fine to measure how often a paper’s been cited, or downloaded, or tweeted, or whatever, and I can imagine reasons why you might want to measure those things. What I have a problem with is thinking of those measures, individually or collectively, as some kind of index of a very important but very loosely-defined latent variable such as “influence”, “quality”, or “importance”. Things like “scientific influence”, important as they are, can’t be defined precisely enough to be usefully thought of as “latent variables” that we might capture with one or a few numerical indices. So when talking about altmetrics, I prefer to think about them at a literal level–as measures of how often papers are cited, or downloaded, or tweeted, or whatever.)

SpotOn London (Science Policy, Outreach, and Tools ONline) is holding its annual conference this weekend (Nov. 11-12, 2012). Many of the sessions will be streamed live online, including this interesting-looking one (co-organized by the above-mentioned Bob O’Hara) on the future of scientific journals. The session asks whether the advent of “megajournals” like PLoS ONE is likely to lead to a two-tier journal system, how to filter the literature in a world where journal “brand” no longer helps, and more. The first question is something I wondered about last week. For reasons I also discussed last week, I suspect the answer to the second question is “journal ‘brand’ will always be an important filter, but in the future only the top-tier journals in any field will have ‘brands’ anyone cares about”.

This one’s rather far from our usual territory here at Dynamic Ecology, but too provocative to resist. Think linking disastrous weather events like Hurricane Sandy to climate change will greatly affect US public opinion or policy on climate change? Well, as these two pictures illustrate, good luck with that.

Also far from our usual territory, but too amazing to resist. The winners of the annual Wildlife Photographer of the Year contest, run by London’s Natural History Museum, have been announced. I have fond memories of going to see the annual exhibit of the winning pictures when I was a postdoc in London. To see some of the winning images, go here, here, or here (the last one is a link to the full gallery of winners, which should be in a single big slideshow or image gallery but isn’t). This image from the exhibit is an illustration of what happened when I expressed skepticism about the value of live-tweeting talks. 😉 And here’s what it would have looked like if someone had photographed me while I was writing my first zombie ideas post. 😉

From the archives:

What’s the funniest scientific talk you’ve ever heard?

13 thoughts on “Friday links: what to do when a journal calls you “Miss”, pointless altmetrics, and more (UPDATED)”

    • Actually I have no idea who he is–climate change research and politics isn’t really my specialty. I thought it was entertainingly provocative, and flagged it as such in the post. I mostly threw it up there for fun and don’t want to get into a massive debate. But for balance, and so no reader gets the wrong idea and takes it more seriously than intended, I’d welcome a link or two that pushes back against Pielke Jr. here. Can you suggest something? This is your area much more than mine.

      • Just do a search for Pielke Jr. at RealClimate or Rabett Run and you will get all kinds of stuff to read.

        In short:
        He fancies himself the “Honest Broker” who “objectively” looks at the social science of climate change issues on which everyone else is wrong. He doesn’t understand even basic statistical principles, which results in him very publicly and repeatedly accusing those who actually do of “cherrypicking” data, pre-conceived political agendas, etc. Among other issues. You cannot have an honest discussion with the guy.

    • Yes, I’ve seen that. I don’t quite get the joke, and I think that’s because it’s a rare case of xkcd getting something wrong. I think, though I’m not sure, the joke on frequentists here is based on a widespread fallacy (well, widespread among Bayesians of a certain stripe) concerning the logic of frequentist inference. At some point I’m going to do a post summarizing Deborah Mayo’s work on “severe testing” as the central idea of frequentist statistical inference (and indeed, all scientific inference). Part of that post will talk about the fallacy that I think is the basis of that cartoon.

    • And I see that Bayesian Andrew Gelman also thinks the cartoon is unfair and based on a misunderstanding of how frequentist statistics works:

      Don’t get me wrong, I love xkcd just like everyone else I know. And as I can attest from personal experience, everybody who posts lots of stuff on the internet is going to make some mistakes. But this cartoon is a good illustration of how easy it is to make (feeble) jokes making *anything* sound stupid, just by summarizing it badly enough. Think of the silly joke that a Bayesian is someone who, expecting to see a horse and seeing a donkey, decides he has seen a mule. Or imagine saying to Charles Darwin, “So Mr. Darwin, you’ve shown that…things change. Hardly a very profound insight!”

      UPDATE: Gelman’s post was up briefly, without a title, but now seems to have been taken down. Don’t know why–maybe to fix the title? In comments on another post at his blog today, Andrew did say he agrees with me 100% that the cartoon is unfair, so perhaps he just decided his agreement wasn’t worth devoting a whole post to?

      UPDATE #2: Ah, further comments from Andrew indicate his post will go up tomorrow. So my link above probably won’t work, but folks can go to his blog to find the post once it’s up.

  1. Ok Jim, Roger, thank you both for providing some context for any readers who want it.

    As I noted above, the link in the post just struck me as entertainingly provocative, the sort of fun thing I sometimes like to toss out in Friday linkfests. As I’m sure you both (and other interested readers) recognize, this comment thread isn’t the right place for folks to engage in a substantive debate on the topic or any related issues.

  2. Pingback: Climate change confusion: an example | Ecologically Orientated

  3. When I was an editor at Oikos, I’d refer to every corresponding author as “Dr.”, even if I had strong reason to suspect that the author was a student. I figured any student I mistakenly called “Dr.” wouldn’t be offended.

  4. Pingback: Friday links: grant writing advice, skill vs. luck, and more | Dynamic Ecology
