Lots of good stuff this week. C’mon, click all the links! Like you have anything better to do! 🙂
From Brian (!):
R is the most popular language among data scientists (used by 61%) and still growing. Python is second, followed by SQL. Matlab and the lower-level languages C and Java each sit at around 10%.
And a great find by Steve Walker from Ellner & Guckenheimer’s book on modelling (I agree with Steve that it’s one of the best): to make a good model, you should lie, cheat, and steal.
From Meg:
I loved this post by sciwo at Tenure She Wrote on moving from one tenure track position to another. This is something I did a year ago, and I agree with much of what she wrote. Some of the specifics are different (especially that I did not need to reset the tenure clock when I moved, for which I am very grateful!), but overall I found myself nodding along in agreement. It can be hard to explain some of the oddities of moving to a new place (such as that I am very happy to be in my new position but also very much miss friends and colleagues at my old one), and I think she does a very good job. It also contains information that would be of interest to folks just starting their first tenure track position.
Terry McGlynn had a post this week on how research institutions are better suited to mentoring undergraduates than are undergraduate institutions, which includes his thoughts on “multi-level mentoring”. With this approach, faculty mentor postdocs who mentor senior grad students who mentor junior grad students who mentor senior undergrads who mentor junior undergrads . . . all the way down to toddlers mentoring infants. (Okay, I’m kidding about that last part.) I don’t have quite that system set up in my lab, but I definitely agree with his point that postdocs and grad students should be genuine mentors to undergrads, and that faculty should help their lab members learn how to mentor others.
And, finally, there was a commentary in the Chronicle of Higher Education calling for academics to charge publishers for their reviews and for publishers to pay authors to publish their articles. (It specifies that these fees could be waived for society journals.) It’s an interesting idea (and not a new one), but it seems unlikely to be adopted any time soon, in my opinion.
And, I almost forgot, Lego has introduced a female scientist figure! It’s kind of sad that that is newsworthy, but it is. (UPDATE from Jeremy: Want more female Lego scientists? Who look like ecologists rather than like chemists? Terry McGlynn and his 10-year-old son have you covered!)
From Jeremy:
Liberal Arts Ecologists is a new blog from three ecologists who teach and do research at small liberal arts colleges. As a graduate of a small liberal arts college myself, and a strong proponent of their virtues, I’ll be looking forward to following what they have to say. Judging from their first post, it looks like they’ll be talking a lot about their research–how it’s real research as opposed to just “hobby science” or training exercises for undergrads, how research and teaching can be integrated, how one goes about running a real research program without grad students and while teaching 2-3 courses/term, and more. (p.s. You might be surprised how many active researchers in ecology and evolution, including some really famous ones, got their undergraduate degrees from small liberal arts colleges. Rich Lenski for instance.) (HT Terry McGlynn, via Twitter)
Did the US drop Colorado potato beetles on East Germany in the 1950s to sabotage East German crops? No–but the East German government waged a major propaganda campaign claiming they did. And while the propaganda wasn’t true, it wasn’t totally implausible–at various times governments have at least briefly considered using crop pests as weapons. The BBC has the fascinating story. And I predict that every reader who works on crop pests or invasive species is going to start seeking out copies of the posters the East German government produced as part of the propaganda campaign (this one is my favorite, but these are good too).
Statistician Stephen Senn has a nice post on researcher degrees of freedom (which he calls “multiplicity”) and how to deal with it in the context of drug trials. The problem has long been widely recognized in this context, which is why government regulations require that analyses be completely pre-specified before data are unblinded. Senn isn’t a fan of restricting researchers to pre-specified analyses, seeing it as an attempt to circumscribe the inferences scientific posterity might make. He thinks pre-specification of analyses is very valuable, but suggests that researchers be obliged to report every analysis they conduct rather than being obliged to conduct only pre-specified analyses. This issue of registries vs. disclosure requirements is one I’ve discussed before. Senn goes on to note that the exact same issues crop up with analyses of openly-shared data: if you don’t pre-specify your analyses, and/or disclose the results of every analysis you conducted, your reported results will be biased, whether you’re analyzing shared data or newly-collected data. Senn further notes that, at least in the context of drug trial data, it’s hard to see how to require pre-specification of analyses from those wishing to re-analyze openly-shared data. But I think disclosure requirements could still work. As discussed in that old post of mine, disclosure requirements would be pretty easy for journals to implement, I think. And I don’t see any reason why journal disclosure requirements wouldn’t work equally well for papers based on newly-collected data and shared data. Of course, one big limitation of disclosure requirements associated with published papers is that they don’t address publication bias.
Economics graduate student Carola Binder says there are four ways to answer questions:
- categorically (e.g., yes or no)
- analytically (e.g., by defining or redefining terms, by saying “it depends” and then elaborating)
- with a counter-question
- by ignoring the question or declining to answer
She then has some fun answering common economics questions in each of the four ways. I thought it might be amusing to try the same thing in ecology. For instance, in response to Tony Ives’ question (“Should ecology be about the study of general laws?”), one could answer:
- No. (That’s what 2/3 of Tony’s audience answered)
- Well, it depends what you mean by “law”. If you mean a statistical pattern or regularity, such as the species-area curve, then ecology has many laws which everyone agrees are central to the discipline. But if by “law” you mean…
- Why ask this question at all? Does the answer even matter for how we do ecology?
- [rolls eyes, goes back to thinking about data]
Or take the question “Should the intermediate disturbance hypothesis be abandoned?”:
- Yes.
- It depends what you mean by the intermediate disturbance hypothesis, and what it means to abandon a scientific theory as opposed to further developing and modifying it.
- There’s no point in asking this question. People are just going to do whatever research they want, no matter what anyone else thinks they “should” do.
- [shambles away]
Post your own questions and sets of answers in the comments! 🙂
At the recent annual meeting of the American Political Science Association, there were two panels on women in political science. The panels addressed issues that seem relevant to ecology too. As in ecology, women in political science are underrepresented among senior faculty. They’re also cited less than men, even after controlling for seniority, paper topic, and other factors. That’s in (small) part because male political scientists self-cite more than women. Click the link for a good report on the panel discussions, in particular regarding the issue of who should change. That is, given that there are some practices in which men engage more than women, should women seek to emulate those practices (e.g., self-citation)? Or seek to change those practices, and the men who engage in them? And there was discussion of other issues, ranging from the adequacy of “stop the clock” policies for tenure, to whether the relative prestige of different subfields reflects gender bias. (HT The Monkey Cage)
Andrew Gelman with a story of post-publication peer review. Specifically, how for the vast majority of papers it mostly consists of people just linking to the paper without commenting, or only commenting casually. It’s very rare for someone to actually dig into the methods, which in this case led to the discovery of clear-cut and very serious statistical flaws. I have an old post that draws the same conclusion (and in a follow-up post Andrew was kind enough to quote that old post of mine). Pre-publication review isn’t perfect. But with rare exceptions, the only time people read like pre-publication reviewers is when they’re acting as pre-publication reviewers. So exposing papers to the “crowd” post-publication hardly ever results in meaningful “review”.
Larry Wasserman says that, for a minority of statisticians who use Bayesian methods (and he does emphasize it’s a minority), Bayesian inference is like a religion. I think Larry’s choice of words is unfortunate (he says “religion” when really what he means is something like “cult”; plenty of religious people aren’t cultists). And I’m not so interested in the behavior of a minority of Bayesians specifically. But Larry’s post does raise some larger issues which I think are interesting, and which are voiced by a commenter on his post. First, can you find small groups of “fanatics” associated with any approach or idea in science, or do certain ideas or approaches tend to attract fanatical adherents while others don’t? Second, do such “fanatics” serve a useful purpose in science, by pushing unconventional ideas and by making clear the compromises and trade-offs made by more pragmatic types?
Speaking of small colleges and science, Jeremy: I went to Oberlin College, class of 1977. Oberlin had, as I recall, about 2800 students then, with ~800 in the conservatory of music and ~2000 in the college. So perhaps there were 500 or so in the college’s graduating class. Biology was a large major, but so were history, sociology, government, English, psychology, etc., etc. I doubt there were even 100 biology majors in my class. But from that small pool, there are many who are now on university faculties. Off the top of my head, and just in the fields of ecology and evolution, here are four from the class of 1977: Deborah Gordon (behavioral ecology, Stanford), Joe Graves (life history evolution, NC A&T), Kurt Schwenk (functional morphology, U Conn), and myself. Ruth Shaw (quantitative genetics, U Minnesota) was one year ahead in the class of 1976. I don’t know how many of us had research experiences; I, for one, did not. However, I had some great teachers and classmates. And by the way, I got hooked on biology after a non-majors course.
On the other side of things, here at MSU, the classes are huge and professors don’t get to know most of the students in their classes. Still, there are opportunities for undergrads to get research experience, though they have to be motivated and probably lucky, too. Right now, a half-dozen undergrads are in my lab doing real research. Two former undergrads were co-authors on a 2012 Science paper (Meyer et al.) demonstrating the evolution of a bacteriophage’s ability to exploit a new host receptor, and many others have co-authored papers over the years.
Both settings – small colleges and major universities – have their strengths and weaknesses, creating both opportunities and problems.
One more thought, this one related to Meg’s post on mentoring undergraduates: To make an undergraduate research project a “win-win” for the undergraduate and the grad student or postdoc who usually does the hands-on mentoring at research universities, it’s just too late to start a project in the senior year. The grad student or postdoc will make a large investment in training, but there won’t be enough time to recoup that investment given the learning curve, errors, distractions, etc. that are a part of science and student life. So I’d never (or rarely) take on a senior to do research; in fact, we often get freshmen and sophomores, who – if they like it and work out – might then work with us for 3 years. Those are the real “win-wins” for the hands-on grad or postdoc mentor!
My own experiences resonate with yours, Rich. I went to Williams College (and considered Oberlin; I remember the campus visit well…). Williams had about 2,200 undergrads. Biology was a large major, but there were other large ones. I recall English, economics, political science, and chemistry were big, though I could be misremembering. And my class and the nearby classes produced several ecologists and evolutionary biologists, despite the fact that most biology majors were premed. Besides me, there was Paul Hohenlohe a year ahead of me, Dan Bolnick and Justin Wright a year behind me, another fellow a year behind me who became an ornithologist, and others a bit further ahead or behind me. We did all have research experiences, as far as I can recall. My very first publication came out of my undergrad honors project. And now that I’m at Calgary I have undergrads in my lab every summer, and I have multiple papers with undergrad co-authors.
Likewise, a number of my fellow students from Occidental College have moved on to do great research in ecology and related fields, too. We had some terrific mentors, and a great environment to be able to focus on learning.
That said, I had an even higher ‘yield rate’ from the students that worked with me in grad school, compared to what I’ve done as a faculty member at a teaching institution.
I imagine that the students entering the Oberlin class of ’77 probably were going to become seriously great scientists the moment they set foot in the door – and their decision to go to Oberlin might have meant just as much as what happened there. I am familiar with Oberlin, and recognize that it’s a spectacular school, so of course the school helped.
As you say, it’s both the students and their environment, and the interaction between them (and of course, a key part of the environment for any given student is the other students).
As noted in that old post of mine, graduates of certain liberal arts colleges go on to obtain PhDs at even higher rates than graduates of Ivy League universities. That points to an effect of environment, since the students entering both sorts of institutions are comparable.