Lots of good stuff this week. C’mon, click all the links! Like you have anything better to do!🙂
From Brian (!):
R is the most popular language among data scientists (61%) and still growing. Python is second, followed by SQL. Matlab, C, and Java each come in at around 10%.
And a great find by Steve Walker from Ellner & Guckenheimer’s book on modelling (I agree with Steve that it is one of the best): to make a good model, you should lie, cheat, and steal.
I loved this post by sciwo at Tenure She Wrote on moving from one tenure track position to another. This is something I did a year ago, and I agree with much of what she wrote. Some of the specifics are different (especially that I did not need to reset the tenure clock when I moved, for which I am very grateful!), but overall I found myself nodding along in agreement. It can be hard to explain some of the oddities of moving to a new place (such as being very happy in my new position while also very much missing friends and colleagues at my old one), and I think she does a very good job. It also contains information that would be of interest to folks just starting their first tenure track position.
Terry McGlynn had a post this week on how research institutions are better suited to mentoring undergraduates than are undergraduate institutions, which includes his thoughts on “multi-level mentoring”. With this approach, faculty mentor postdocs who mentor senior grad students who mentor junior grad students who mentor senior undergrads who mentor junior undergrads . . . all the way down to toddlers mentoring infants. (Okay, I’m kidding about that last part.) I don’t have quite that system set up in my lab, but I definitely agree with his point that postdocs and grad students should be genuine mentors to undergrads, and that faculty should help their lab members learn how to mentor others.
And, finally, there was a commentary in the Chronicle of Higher Education calling for academics to charge publishers for their reviews and for publishers to pay authors to publish their articles. (It specifies that these fees could be waived for society journals.) It’s an interesting idea (and not a new one), but it seems unlikely to be adopted any time soon, in my opinion.
And, I almost forgot, Lego has introduced a female scientist figure! It’s kind of sad that that is newsworthy, but it is. (UPDATE from Jeremy: Want more female Lego scientists? Who look like ecologists rather than like chemists? Terry McGlynn and his 10 year old son have you covered!)
Liberal Arts Ecologists is a new blog from three ecologists who teach and do research at small liberal arts colleges. As a graduate of a small liberal arts college myself, and a strong proponent of their virtues, I’ll be looking forward to following what they have to say. Judging from their first post, it looks like they’ll be talking a lot about their research–how it’s real research as opposed to just “hobby science” or training exercises for undergrads, how research and teaching can be integrated, how one goes about running a real research program without grad students and while teaching 2-3 courses/term, and more. (p.s. You might be surprised how many active researchers in ecology and evolution, including some really famous ones, got their undergraduate degrees from small liberal arts colleges. Rich Lenski for instance.) (HT Terry McGlynn, via Twitter)
Did the US drop Colorado potato beetles on East Germany in the 1950s to sabotage East German crops? No–but the East German government waged a major propaganda campaign claiming they did. And while the propaganda wasn’t true, it wasn’t totally implausible–at various times governments have at least briefly considered using crop pests as weapons. The BBC has the fascinating story. And I predict that every reader who works on crop pests or invasive species is going to start seeking out copies of the posters the East German government produced as part of the propaganda campaign (this one is my favorite, but these are good too).
Statistician Stephen Senn has a nice post on researcher degrees of freedom (which he calls “multiplicity”) and how to deal with it in the context of drug trials. The problem has long been widely recognized in this context, which is why government regulations require that analyses be completely pre-specified before data are unblinded. Senn isn’t a fan of this, seeing it as an attempt to circumscribe the inferences scientific posterity might make. He thinks pre-specification of analyses is very valuable, but suggests that researchers be obliged to report every analysis they conduct rather than obliged to only conduct pre-specified analyses. This issue of registries vs. disclosure requirements is one I’ve discussed before. Senn goes on to note that the exact same issues crop up with analyses of openly-shared data: if you don’t pre-specify your analyses, and/or disclose the results of every analysis you conducted, your reported results will be biased, whether you’re analyzing shared data or newly-collected data. Senn further notes that, at least in the context of drug trial data, it’s hard to see how to require pre-specification of analyses from those wishing to re-analyze openly-shared data. But I think disclosure requirements could still work. As discussed in that old post of mine, disclosure requirements would be pretty easy for journals to implement, I think. And I don’t see any reason why journal disclosure requirements wouldn’t work equally well for papers based on newly-collected data and shared data. Of course, one big limitation of disclosure requirements associated with published papers is that they don’t address publication bias.
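The bias from undisclosed analyses is easy to demonstrate with a toy simulation (my own illustration, not from Senn’s post): if p-values from null data are modeled as uniform, then running many analyses and reporting only the “best” one inflates the false-positive rate far above the nominal 5%.

```python
import random

random.seed(1)

def false_positive_rate(n_analyses, n_trials=2000, alpha=0.05):
    """Fraction of pure-noise datasets in which at least one of
    n_analyses independent tests comes out 'significant'."""
    hits = 0
    for _ in range(n_trials):
        # each analysis of null data yields a uniform p-value
        pvals = [random.random() for _ in range(n_analyses)]
        if min(pvals) < alpha:
            hits += 1
    return hits / n_trials

print(false_positive_rate(1))   # ~0.05: one pre-specified analysis
print(false_positive_rate(20))  # ~0.64: report only the best of 20
```

With one pre-specified analysis you get the advertised 5% error rate; with 20 tries and selective reporting, roughly 1 − 0.95²⁰ ≈ 64% of pure-noise datasets yield a “significant” result. Disclosure of every analysis conducted is what lets a reader correct for this.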
Economics graduate student Carola Binder says there are four ways to answer questions:
- categorically (e.g., yes or no)
- analytically (e.g., by defining or redefining terms, by saying “it depends” and then elaborating)
- with a counter-question
- by ignoring the question or declining to answer
She then has some fun answering common economics questions in each of the four ways. I thought it might be amusing to try the same thing in ecology. For instance, in response to Tony Ives’ question (“Should ecology be about the study of general laws?”), one could answer:
- No. (That’s what 2/3 of Tony’s audience answered)
- Well, it depends what you mean by “law”. If you mean a statistical pattern or regularity, such as the species-area curve, then ecology has many laws which everyone agrees are central to the discipline. But if by “law” you mean…
- Why ask this question at all? Does the answer even matter for how we do ecology?
- [rolls eyes, goes back to thinking about data]
Or take the question “Should the intermediate disturbance hypothesis be abandoned?”:
- It depends what you mean by the intermediate disturbance hypothesis, and what it means to abandon a scientific theory as opposed to further developing and modifying it.
- There’s no point in asking this question. People are just going to do whatever research they want, no matter what anyone else thinks they “should” do.
- [shambles away]
Post your own questions and sets of answers in the comments!🙂
At the recent annual meeting of the American Political Science Association, there were two panels on women in political science. The panels addressed issues that seem relevant to ecology too. As in ecology, women in political science are underrepresented among senior faculty. They’re also cited less than men, even after controlling for seniority, paper topic, and other factors. That’s in (small) part because male political scientists self-cite more than women. Click the link for a good report on the panel discussions, in particular regarding the issue of who should change. That is, given that there are some practices in which men engage more than women, should women seek to emulate those practices (e.g., self-citation)? Or seek to change those practices, and the men who engage in them? And there was discussion of other issues, ranging from the adequacy of “stop the clock” policies for tenure, to whether the relative prestige of different subfields reflects gender bias. (HT The Monkey Cage)
Andrew Gelman with a story of post-publication peer review. Specifically, how for the vast majority of papers it mostly consists of people just linking to the paper without commenting, or only commenting casually. It’s very rare for someone to actually dig into the methods, in this case leading to the discovery of clear-cut and very serious statistical flaws. I have an old post that draws the same conclusion (and in a follow-up post Andrew was kind enough to quote that old post of mine). Pre-publication review isn’t perfect. But with rare exceptions, the only time people read papers like pre-publication reviewers is when they’re acting as pre-publication reviewers. So exposing papers to the “crowd” post-publication hardly ever results in meaningful “review”.
Larry Wasserman says that, for a minority of statisticians who use Bayesian methods (and he does emphasize it’s a minority), Bayesian inference is like a religion. I think Larry’s choice of words is unfortunate (he says “religion” when really what he means is something like “cult”; plenty of religious people aren’t cultists). And I’m not so interested in the behavior of a minority of Bayesians specifically. But Larry’s post does raise some larger issues which I think are interesting, and which are voiced by a commenter on his post. First, can you find small groups of “fanatics” associated with any approach or idea in science, or do certain ideas or approaches tend to attract fanatical adherents while others don’t? Second, do such “fanatics” serve a useful purpose in science, by pushing unconventional ideas and by making clear the compromises and trade-offs made by more pragmatic types?