Also this week: the ethnography of Wikipedia, why statistically significant parameter estimates are biased estimates, and more. Plus a hilarious prank instructors can play on their TAs! For some value of “hilarious”.
From Jeremy:
Lots of sensible discussion in the paleo blogosphere this week about the need for clear policies on live-tweeting of conference talks, after a speaker asked the audience not to live-tweet her talk and a late-arriving audience member did so. See here, here, here, and here. I don’t have much to add, except that this seems to me one more example of the culture clashes we’re living through. (Full disclosure: I personally am fine with people live-tweeting my talks, taking the view that it’s no different from people talking about my talks or journalists writing about them, and that it’s vanishingly unlikely that anyone would try to scoop me on the basis of tweets, or be able to if they tried. But I’m an ecologist; if I were in some other field I might feel differently.) I do think it’s interesting to see people who like live-tweeting nevertheless calling for conferences to impose rules for everyone to abide by. I haven’t often seen this sort of call in other areas in which new online tools are unsettling established expectations and practices. In my admittedly anecdotal experience, it’s more common for advocates of new online tools to downplay the importance of agreed rules for the appropriate use of those tools. Not sure why.
Interesting piece on the need for more theory in neuroscience, along with suggestions for how to promote theory and theory-data linkages. I always like reading about how folks in other fields see issues that also crop up in ecology. (ht Not Exactly Rocket Science)
Speaking of the need for theory, here’s a really nice post on the dangers of “measurement before theory”. If you don’t know exactly what you’re trying to measure and so just go with some plausible-seeming index, there are going to be tears before bedtime. I once tried to get at this in an old post, but didn’t say it as well. It also provides a nice cautionary tale, suitable for undergraduate introductory stats courses, on the limitations of using covariates to try to control for extraneous sources of variation. Note that the linked post is about economics, but it’s totally accessible and you’ll be able to think of the ecological analogues very easily. For instance, think of the fruitless debate over different indices of the “importance” of competition in community ecology. (ht Economist’s View)
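To make the covariate point concrete, here’s a toy example in Python. It’s my own sketch, not the linked post’s example, and every name and number in it is made up: the true effect of x on y is zero, but because we can only adjust for a noisy proxy of the confounder, “controlling” for it leaves a large spurious effect.

```python
# Toy illustration (my own, hypothetical setup): regression adjustment with a
# noisy proxy of a confounder only partially removes the confounding.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

confounder = rng.normal(size=n)
x = confounder + rng.normal(size=n)        # "treatment" driven by the confounder
y = 2.0 * confounder + rng.normal(size=n)  # outcome driven by the confounder; x has NO effect
proxy = confounder + rng.normal(size=n)    # the plausible-seeming index we actually measured

# OLS of y on an intercept, x, and the proxy.
X = np.column_stack([np.ones(n), x, proxy])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated effect of x after 'controlling' for the proxy: {coef[1]:.2f}")
# Prints roughly 0.67 rather than the true 0: the noisy covariate "controls for"
# only part of the confounding.
```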
Here’s a really nice figure from Andrew Gelman, illustrating the expected distribution of estimated effect sizes for a low-powered study in which the true effect size is positive but only slightly different from zero. The estimates that attain statistical significance are those that are much larger in absolute magnitude than the true effect, and they often have the wrong sign. I’ve read some of Gelman’s writings about “type M” (magnitude) errors and “type S” (sign) errors before, but this really clarified his point for me. Still mulling it over.
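Since the phenomenon is easy to simulate, here’s a minimal sketch of it in Python (my own, not Gelman’s code; the true effect size of 0.1 and standard error of 1 are made-up numbers chosen to make the study badly underpowered):

```python
# Minimal simulation of Gelman's "type M" (magnitude) and "type S" (sign)
# errors. Assumed numbers: true effect 0.1, standard error 1.0 -- illustrative
# only, not taken from Gelman's figure.
import numpy as np

rng = np.random.default_rng(42)
true_effect = 0.1   # small positive true effect
se = 1.0            # standard error of each study's estimate (low power)

# Each simulated "study" yields one normally distributed estimate of the effect.
estimates = rng.normal(true_effect, se, size=100_000)

# Two-sided test at alpha = 0.05: significant if |estimate| > 1.96 * se.
significant = np.abs(estimates) > 1.96 * se
sig_estimates = estimates[significant]

print(f"Power (share of studies reaching significance): {significant.mean():.3f}")
print(f"Type M: mean |significant estimate| is "
      f"{np.abs(sig_estimates).mean() / true_effect:.0f}x the true effect")
print(f"Type S: share of significant estimates with the wrong sign: "
      f"{(sig_estimates < 0).mean():.2f}")
```

With these illustrative numbers, significant estimates overstate the true effect roughly twenty-fold and have the wrong sign about 40% of the time, which is exactly the pattern Gelman’s figure shows.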
Apparently we all need to check our Google Scholar profiles for fake papers. Yes, really. Call me old-fashioned, but this is an illustration of why I prefer to rely on Web of Science.
The ethnography of Wikipedia. Confirms my impression from previous discussions we’ve had. (ht Marginal Revolution)
And finally, a teaching prank: start an analogy, and then leave the TA to finish it. 🙂
From Meg:
I love this post from SciCurious in response to my post on keeping perspective and the #myworstgrade hashtag. I wish all the students struggling in my class right now would read it!
Most popular link so far: the need to check your Google Scholar profile for fake papers.