From Jeremy:
Missed this last week: Nature has a great profile of ecology PhD student Diane Orihel, who put her nearly-complete PhD on hold to protest the closure of the Experimental Lakes Area. And not “protest” in the sense of “start a Facebook group” or “write a few blog posts” or “sign a petition” (and I confess the latter two are pretty much all I did). “Protest” in the sense of “contact local politicians, write press releases, set up a website, brief the opposition political parties, do grass-roots organizing, do media interviews, travel the country giving speeches, and more”. All from a self-described shy introvert with no experience in activism who says “I made myself act unlike myself.” Wow! ELA isn’t fully saved–but it still has a shot, for which Diane Orihel deserves a lot of thanks.
How did we first figure out that birds migrate? The Lab and Field has the fascinating history. It involves birds that survived being speared, and hypothesized migrations to the moon!
The frequency with which the phrase “marginally significant” occurs, as a function of the associated P-value. Yes, this figure is real; the data are from a Google Scholar search (although really the data should be displayed as a histogram). The comments embedded in the figure are of the “it’s funny because it’s true” variety (“Wait, P=0.1 is a level of significance, right?”). And it’s potential fodder for a discussion of how to do and report statistics. Comments from Andrew Gelman here. He thinks that people consciously tweaking their stats or how they discuss them in order to turn “marginally significant” results into “significant” ones is a real but minor problem. Far more worrisome in his view is the frequency with which reasonable-seeming analyses can yield statistical significance when nothing at all is truly going on, even when the researcher isn’t trying to put a thumb on the scale. If I understand correctly (and it’s quite possible I don’t, so what follows is very tentative), he thinks the big problem is with our goals of making discoveries and drawing firm conclusions (e.g., this post of his). In a noisy world in which strong effects that always run in a particular direction are rare, trying to discover real effects and draw firm, final conclusions about their direction and magnitude just isn’t the best way to learn about how the world works. There are just too many researcher degrees of freedom, and the statistical populations of interest are just too heterogeneous. I think I see his point, but I’m not sure I’m willing to go so far as that. I’m just not sure what the scientific enterprise would look like if it comprised nothing but suggestive exploratory analyses, with no one ever declaring discoveries or drawing firm, final conclusions. I doubt even Brian, who’s argued eloquently for the value of exploratory statistics, would go that far. Because as soon as you’re prepared to countenance any hypothesis-testing and conclusion-drawing at all, aren’t you right back to the issue of how best to do it? Which means talking about how best to prevent researcher degrees of freedom from compromising our hypothesis tests and conclusions (e.g., this old post of mine, and this one, and this one and its excellent comment thread). Deborah Mayo also has some relevant philosophical ideas here, on how to test hypotheses and draw conclusions using frequentist statistics, but without treating frequentist statistics as a routinized means of making accept-reject decisions (e.g., this post). I may write about this more in the future, once I have more coherent thoughts. (HT Jeremy Yoder, via Twitter)
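If you want to see the “reasonable-seeming analyses can yield significance from pure noise” point for yourself, here’s a toy simulation (mine, not anything from Gelman’s post): two groups with no true difference, a handful of plausible analysis “forks”, and the researcher reports whichever fork “worked”. The sample size, number of trials, and the particular forks are all made-up choices for illustration.

```python
# Toy sketch of researcher degrees of freedom: how often does at least one
# "reasonable" analysis of pure noise reach p < 0.05? All settings here
# (n = 30, 10,000 trials, the specific forks) are arbitrary illustrations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, trials = 30, 10_000
false_positives = 0

for _ in range(trials):
    x = rng.normal(size=n)  # "treatment" group, pure noise
    y = rng.normal(size=n)  # "control" group, pure noise
    pvals = []

    # Fork 1: plain t-test on everything
    pvals.append(stats.ttest_ind(x, y).pvalue)

    # Fork 2: drop "outliers" beyond 2 SD first
    pvals.append(stats.ttest_ind(x[np.abs(x) < 2], y[np.abs(y) < 2]).pvalue)

    # Fork 3: nonparametric test instead
    pvals.append(stats.mannwhitneyu(x, y).pvalue)

    # Fork 4: look only at the first half of the data (a "subgroup")
    pvals.append(stats.ttest_ind(x[: n // 2], y[: n // 2]).pvalue)

    if min(pvals) < 0.05:  # report whichever analysis "worked"
        false_positives += 1

print(f"Nominal rate: 5%; observed: {100 * false_positives / trials:.1f}%")
```

The exact number doesn’t matter; the point is that picking among several defensible analyses after seeing the data pushes the false-positive rate well above the nominal 5%, with no deliberate cheating required.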
Hope Jahren sure can write–but she won’t write the word “f*cking”. (UPDATE: link fixed) Because it would be contrary to her religious faith. We’ve been talking a lot lately around here about the hidden biases scientists (and others) have. But one of the not-at-all hidden biases many scientists have is against religious faith of any sort, so it’s good to see a scientist talking about her own faith. I very much agree with Hope’s post and like her I’m quite content for everyone to find their own Path. Though I confess that, for me, learning about evolution wasn’t quite as moving as the other extremely-moving experiences to which she compares it. 🙂
Semi-relatedly: I liked this post from Ingrid Robeyns at Crooked Timber on epistemic humility. About how difficult it is to walk a mile in someone else’s shoes and really understand where they’re coming from. Even someone seemingly not all that different from you. Relevant to a bunch of recent posts here. Reminded me a bit of a very thoughtful book by Sam Fleischacker, a wonderful former prof of mine.
Joan Strassmann is currently chairing an evolutionary biology search committee at Washington University. On her blog, she’s been posting a lot of information about how the search committee is operating, and a lot of general advice to applicants for faculty jobs. See here for one recent post, and then scroll down and up for many more. Here’s my own recent post on this topic.
Has the notion of humanity’s “ecological footprint” outlived its usefulness? Writing in PLoS Biology, Blomqvist et al. say yes (note that Nature Conservancy chief scientist Peter Kareiva is among the “et al.”). Coverage of the resulting debate here. Looks like this debate would be good fodder for undergraduate ecology and conservation biology classes. (HT Trevor Branch)
How blogs can help you develop ideas.
Ethan Perlstein with a data-based snapshot of the current status of science crowdfunding. (HT NeuroDojo)
Somebody has built a website that tracks the h-indices of 35,000 scholars, relative to the averages for their fields. (UPDATE: link fixed) Which seems to me like a pretty silly thing to do. If you think this is a good idea in principle, and that the only problem is figuring out how to appropriately normalize the index, then I’m sorry, but I think you’re solving the wrong problem. And I say that in full awareness that universities and funding agencies often have to make difficult decisions based on comparing apparently-incomparable things–deciding to buy guns vs. butter, or fund ecology vs. something else. I just don’t think those decisions get made any more easily or any better if we base them on these sorts of indices. Let’s just come out and admit we have to rely on professional judgment to make such decisions and take responsibility for our judgments. Rather than trying to come up with indices and then pretending that we’ve thereby made things “objective” (as if the decisions to use indices, and what indices to use, weren’t themselves professional judgments).
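(For concreteness, here’s a minimal sketch of the sort of thing such a site presumably computes: an h-index from a list of per-paper citation counts, plus a crude “relative to field average” ratio. The citation counts and the field average below are made-up numbers, and the site’s actual normalization scheme may well differ.)

```python
# Minimal, hypothetical sketch: compute an h-index and compare it to a
# (made-up) field-average h-index. Not the website's actual method.
def h_index(citations):
    """h = largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

my_citations = [48, 33, 30, 22, 15, 9, 7, 4, 2, 0]  # hypothetical record
field_average_h = 12                                 # hypothetical field mean

h = h_index(my_citations)
print(h, round(h / field_average_h, 2))  # prints 7 and 0.58: "below field average"
```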
And finally, just as you weren’t bitten by a brown recluse spider, a false widow spider did not eat your dog. 🙂
The link to the h-index database leads to a login page for the University of Calgary Library (the link to the Nature article is hidden in the URL, though).
Thanks, fixed. That’s the second one I messed up in this post! #amateurhour 🙂
With the Gelman post on false positives, I’m taking 2 main points from it (similar to, but slightly different from, your points):
1) Our question about effects in many (but not all) situations shouldn’t be “Does this exist?” but instead “Does this generalize?”
2) The more implicit point (which I pick up from reading his other stuff) is that the false-positive framework suppresses the expression of uncertainty in your findings.
But I don’t know where that leaves me on hypothesis testing. To my mind, though, you’re definitely asking the right questions about it!
Thank you for the link to the blogs-as-catalysts post. I think we can go one step beyond that. It feels to me like a lot of academics think that the purpose of science is to produce papers, or at least they act that way. The goal, as far as I understand it, is to produce knowledge, and it can just as well be communicated through blogs or Q&A sites as through preprints and published papers. I hope to see this become a bigger part of researchers’ workflows in more fields (beyond economics, math, and cstheory).