From Meg:
I really liked this post from Potnia Theron on when to decide something is “good enough”, even if it’s not perfect. It includes the fantastic line that “Scientific papers are never finished, they are merely abandoned to publication.” I love that! And I also like the more general message that there are non-linear relationships between effort in and quality out, and that the shape of the curve and the optimal place to call it quits will depend on the particular project.
Based on conversations I had with some colleagues this week, I also wanted to link to the idea of stereotype threat. I am hoping to write a post on this at some point, because I think it's important for us, as educators, to realize that there are things that influence how our students do on exams that are not related to how much they actually know. This affects all students, but in science classes it would be expected to have the strongest negative impact on students who are women or underrepresented minorities.
And, for the US grad students, the DEBrief blog has a reminder that the DDIG deadline has been moved earlier this year, to October 10.
From Jeremy:
Will piracy cause the textbook industry to go the way of the music industry?
Old but good: I missed this at the time, but Larry Wasserman of Normal Deviate discusses an interesting study showing that, in each of three leading psychology journals, published papers report a statistically significant excess of P values just below 0.05, given the frequency of P values in other ranges. One obvious explanation for this result is publication bias. But it could also represent a signal of "researcher degrees of freedom": researchers consciously or unconsciously tweaking their data handling and analysis so as to get P values that are (barely) nominally significant. Anybody up for doing the same analysis for ecology journals?
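(If anyone does take that up, here's a minimal sketch of one simple way to start: a "caliper test" that asks whether P values pile up just below 0.05 relative to just above it. To be clear, this is my own illustration rather than the method from the study Wasserman discusses, and the P values in it are invented.)

```python
# Minimal sketch of a caliper test for an excess of P values just
# below 0.05. The P values below are invented for illustration; a
# real analysis would harvest them from published ecology papers.
from scipy.stats import binomtest

p_values = [0.031, 0.048, 0.049, 0.052, 0.047, 0.044, 0.056, 0.046,
            0.058, 0.049, 0.012, 0.043, 0.051, 0.047]

width = 0.005  # caliper half-width on each side of the 0.05 threshold
below = sum(0.05 - width <= p < 0.05 for p in p_values)
above = sum(0.05 <= p < 0.05 + width for p in p_values)

# Under the null of no selective reporting, a P value falling in this
# narrow window should land on either side of 0.05 about equally often.
result = binomtest(below, n=below + above, p=0.5, alternative="greater")
print(f"{below} just below vs. {above} just above 0.05; P = {result.pvalue:.3f}")
```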
Also old but good: hard truths about university economics and central administration. From a prof turned associate dean, who’s seen university operations from multiple angles. Will at least give you a better appreciation for what the dean’s job is and why it’s so difficult. Best line: “It’s not enough (in some cases) to put the carrot in front of the donkey. You have to point to the carrot, tell the donkey it is a carrot, and that he can eat it. And work out marginal revenue and marginal cost for the donkey too. And repeat this several times.”
Psychological and human behavioral studies by US researchers are more likely than those by non-US researchers to report extreme effects in the predicted direction. But the same is not true for non-behavioral studies, which allows one to rule out some potential explanations, such as a file drawer effect. Over at Retraction Watch, the study authors speculate that it’s down to unintentional, unconscious biases on the part of researchers (“researcher degrees of freedom” again), interacting with the long-standing publish-or-perish culture in the US. As I’ve said before, I think the analytical approach the authors are using here is quite an interesting one. Somebody ought to try it out with ecological datasets.
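(Again, purely as an invitation: here's a hypothetical sketch of the simplest version of such a comparison, with invented counts of studies that did vs. didn't report support for their primary hypothesis, split by US vs. non-US authorship. The actual study's methods are more involved; this just shows the basic contingency-table logic.)

```python
# Hypothetical sketch: do US studies report "supportive" primary
# outcomes more often than non-US studies? All counts are invented.
from scipy.stats import fisher_exact

#                supportive, non-supportive
us_counts     = [80, 20]
non_us_counts = [65, 35]

# One-sided Fisher's exact test on the 2x2 table of counts.
odds_ratio, p_value = fisher_exact([us_counts, non_us_counts],
                                   alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.4f}")
```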
Survey data show that most scientists talk to reporters at least occasionally, and most who do find the experience to be a positive one. So where did the myth that "serious scientists don't talk to reporters" come from? Reading this got me wondering how many other stereotypes about scientists (or specific subsets of scientists, like ecologists) not only aren't true, but are actually the opposite of true, so that those who fit the stereotype are a small minority. This relates to an old post on the culture of ecology.
Rich Lenski on a classic evolution paper ("the single greatest experiment in the history of biology"), Luria and Delbrück (1943). If you don't know, that's the paper that demonstrated that mutations are random with respect to their fitness effects, thereby killing off Lamarckism in microbiology and launching the research that eventually led to the deciphering of the genetic code. Rich doesn't just summarize the paper; he talks about his personal reactions. How being taught about the paper gave him an "Aha!" moment as an undergrad. How he didn't actually read it himself until, as a postdoc, he became frustrated with his inability to ask the questions he wanted to ask in his study system and began casting about for a different direction. And how the paper became famous and influential in evolutionary biology (as opposed to genetics) only after a lag, due to evolutionary biologists' preference for working with macroscopic organisms. Here's hoping for more such reflections on classic papers, both from Rich and others. (Actually, to judge from the comments on Rich's post, such reflections are common in the evolutionary blogosphere. But maybe not so much in ecology? Or am I embarrassingly unaware of ecology bloggers writing such posts?)
Hoisted from the comments:
Speaking of Rich Lenski, here he is on his "experimental" foray into blogging and tweeting. It comes at the end of a good discussion among Simone Vincenzi, Meg, and me on whether Rich's decision to blog and tweet will change how many ecologists and evolutionary biologists view social media. We quickly branched out into a broader discussion of fame, authority, and "leadership vs. independence" in science.
Commenter Sean notes and links to Platt 1964, a classic statement of how to do rigorous science. Platt describes and argues for an approach he calls “strong inference”. Everyone should know this paper. Looking for something for your reading group to read? Something that will really get people thinking and talking (and that’s an easy read as well)? Look no further!
Meg, thanks for linking to our post. We appreciate getting the word out and also want to share a critical update:
We're aware of, and have the NSF web support team looking into, an error with the DDIG opportunity posted on Grants.gov, where both the current revision (NSF 13-568) and the old version with the outdated November deadline (NSF 12-590) are listed as open calls. Please DO NOT attempt to submit to NSF 12-590. This error appears to be only in Grants.gov, not in Fastlane.nsf.gov. If you know people using Grants.gov for this, please share this info. H/T to the grad student who spotted this and wrote in to us.
Meg et al., I wasn't aware of the term "stereotype threat," but I'm fully aware of the concept. My university is both a Hispanic-serving and minority (African-American)-serving institution. When you step onto our campus for the first time, you can't help but notice that it's exceptionally diverse, and also filled with students who arrive underprepared from our local high schools. I can't even come close to imagining how hard it must be to perform well in the face of the reduced expectations that many people harbor. That's got to be a lot of stress, consistently defying expectations both locally and on a societal scale.