Elliott Sober on the present and future of philosophy of biology

Back in September I was fortunate to attend a philosophy of science “summit” at the University of Calgary, with talks by a bunch of the world’s top philosophers of science. I thought I’d share my notes from Elliott Sober’s talk on the present and future of philosophy of biology. As I’m sure most of you know, Sober is a top philosopher of evolutionary biology; his book The Nature of Selection is a classic. I found his talk very interesting for several reasons. He talked about the state of philosophy of biology and its place within philosophy more broadly, and I always have an anthropological interest in hearing how people see the state of their own fields. He had a lot of advice about how to do philosophy of science, much of which encouraged philosophers to engage in scientific debates. And he made some passing remarks on how scientists in various fields perceive philosophers (apparently we ecologists are unusually receptive to philosophical input!). I don’t know enough about philosophy to evaluate all of Sober’s remarks, but I enjoyed mulling them over.

My notes follow. I did the best I could, but obviously any errors or omissions are mine.*


Philosophy of biology today seems to have less and less connection to the rest of philosophy, and seems to have little to contribute to science itself. Talking about science is science journalism; it’s not the same as contributing to science. Worried that philosophy departments will stop hiring philosophers of biology.

Philosophers seem to think that philosophy of science, and philosophy of biology, are now less central to philosophy than was the case 20-30 years ago. Why?

Public controversies about biology which had philosophical elements (e.g., sociobiology) used to have a high public profile. Not so much anymore. Gifted popularizers of biology also used to talk about philosophical issues (Dawkins, Gould, Lewontin). Again, not so much anymore.

Sociologist Kieran Healy (aside from me: hey, I’ve heard of him, I read his blog!) has done citation analyses of changes in philosophy, rankings of philosophy depts., centrality of different disciplines. Philosophy of science is not central to philosophy (though not peripheral either).

Biology is relatively hospitable to philosophy of science. Half-joking: 99% of physicists think philosophy is bullshit, it’s only 95% for biologists. (aside from me: Wonder what the number is for ecologists? I bet it’s fairly low, which maybe means ecology is especially fertile ground for philosophy of science? But in that case, why does ecology seem to get much less attention from philosophers than evolutionary biology? Presumably because evolution has an agreed-on core set of ideas and questions that give philosophers a handle to latch onto? Whereas ecology is kind of a mess, so that it’s hard for outsiders to develop a road map of the field and figure out where they might profitably contribute philosophical insight?)

Overspecialization and the “regionalist turn” in philosophy of science (Jean Gayon–the view that nothing of interest in philosophy of science can be done except in within-discipline work). Reason to doubt this—methods of reasoning and inference are not subject-matter specific. Philosophers of biology sometimes unaware that their questions had been addressed in general philosophy of science. Unfortunate because a good trick for developing your career is to use well-developed ideas from one area to solve problems in another area (aside from me: Yup! That’s not just good advice for philosophers, it’s good advice for anyone. That’s what a good chunk of my own career consists of, anyway–shamelessly stealing ideas from one area and applying them to a different area: applying the Price equation to ecology, applying modern coexistence theory to the IDH, half the blog posts I write. Or think of neutral theory in ecology, or MaxEnt, or using ideas from economics to understand resource trade mutualisms…)

Philosophers seem to be retreating from making normative statements about the practice of science. Describing science without critiquing it. We critique creationists, why not scientists? Scientists make normative judgements of one another’s methods, so why can’t philosophers do so too? Or think of statistics—gives normative advice on how to proceed, given epistemic goals and empirical facts.

Clarifying a concept or the logic of a line of argument is clearly recognized as “real” philosophy by philosophers outside of philosophy of science. That’s not just purely descriptive work.

Retreat from normativity also due to influence of history of science on philosophy of science? Historians think of normative judgements as anachronistic and hubristic.

Rational reconstruction of historical scientific arguments—is this legit? Helpful? If the scientists themselves didn’t use the logical and mathematical tools used in your reconstruction, isn’t that anachronistic? (aside from me: Deborah Mayo would say yes. She calls rational reconstructions–e.g., trying to show that a piece of pre-Bayesian scientific reasoning was “really” Bayesian–“painting by numbers”. Just because you can come up with a paint by numbers picture of the Mona Lisa, in what sense does that let you understand how the Mona Lisa was painted? But Sober likes the approach and uses it himself.)

One way for philosophers to make normative contributions without fear—find a scientific controversy and engage in it. Of course need to identify controversies that do have a philosophical component. Which means you need to reject or at least not take too seriously Quine’s claim that philosophy is continuous with science. (aside from me: speaking as a scientist, this is great advice. There are a lot of scientific controversies that are really philosophical, but that aren’t recognized by the participating scientists as philosophical. Or even if they are, the scientists lack the philosophical expertise to properly resolve them. If any philosophers are reading this and want some suggestions for ecological topics that could use some proper philosophical attention, drop me a line!)

Another way to be interestingly normative—find a proposition scientists accept uncritically, and identify scientifically interesting conditions under which it would be true or false. Note that these conditions need not be highly probable. For instance, think of Felsenstein’s demonstration that cladistic parsimony is statistically inconsistent. Turns out that parsimony does in fact make implicit substantive assumptions.

Practical tips to get scientists to pay attention—collaborate with scientists and publish in scientific journals. (aside from me: many of the philosophers I admire have done this. Sober himself. Deborah Mayo. William Wimsatt. Samir Okasha. It would be cool if someone like Chris Eliot were to publish some philosophy of ecology in an ecology journal. I’ll bet Oikos would take philosophy of ecology in its Forum section, and the right paper might fly as a Synthesis & Perspectives piece in Ecology. Ideas in Ecology & Evolution would go for it too, though they’re less widely-read.)

Normative problems are often discarded as dead ends when the question could be revised in a fruitful way. Example: what is the best Bayesian definition of degree of confirmation? Change the question to “how do Bayesian and frequentist accounts of testing differ and which is better under which circumstances?” The latter question is of practical relevance for science—journals have policies on what statistics are appropriate. What’s the right criterion for empirical significance? Change the question to “What does it mean for a theory to be testable, relative to background info?” Does Ockham’s razor depend on the assumption that nature is simple? Change the question to “Does cladistic parsimony work only when evolution is parsimonious?”

All research programs experience diminishing returns. But when it happens with normative problems, that doesn’t mean you should stop asking normative questions, you just need fresh ones.

After the talk, the discussion was kicked off by some designated commenters, some of whom were scientists (aside from me: I thought this was an interesting way to structure the symposium; I’d be curious to see a similar structure tried at an ESA symposium). Ford Doolittle remarked that molecular biologists and genomicists often don’t even realize they have a philosophy. Which leads to disputes that are thought to be empirical, but are not (think of the debate over ENCODE and whether 80% of the genome is ‘functional’). Another example: the debate over whether there’s a tree of life is really a debate over “what do you mean by ‘tree’”? As opposed to evolutionary biologists and ecologists, who are open to philosophy. To reach biomedical and molecular types, Doolittle suggested that philosophers will need to publish in Nature and Science and PNAS, presumably in collaboration with biologists.

*Sorry, no time for real posts at the moment, we’re all swamped. But my grant deadline will soon be past and I’m now done with my teaching for this term, so normal service from me will resume shortly.

What math should ecologists teach

Recently Jeremy made the point that we can’t expect ecology grad students to learn everything useful under the sun and asked in a poll what people would prioritize and toss. More math skills was a common answer of what should be prioritized.

As somebody with an undergraduate (bachelor’s) degree in mathematics, I often get asked by earnest graduate students what math courses they should take if they want to add to their math skills. My usual answer is: none – the way math departments teach math is very inefficient for ecologists; you should teach yourself. But it’s not a great answer.

In a typical math department in the US, the following sequence is the norm as one seeks to add math skills (each line is a 1-semester course, taken roughly in the sequence shown):

  1. Calculus 1 – Infinite series, limits and derivatives
  2. Calculus 2 – Integrals
  3. Calculus 3 – Multivariate calculus (partial derivatives, multivariate integrals, Green’s theorem, etc)
  4. Linear algebra – solving systems of linear equations, determinants, eigenvectors
  5. Differential equations – solving systems of linear differential equations, solving engineering equations (y”+cy=0)
  6. Dynamical systems – y_{t+1} = f(y_t) and variations, including chaos
  7. Probability theory (usually using measure theory)
  8. Stochastic processes
  9. Operations research (starting with linear programming)

That’s 7 courses over and above 1st year calculus to get to all the material that I think a well-trained mathematical ecologist needs! There are some obvious problems with this. First, few ecologists are willing to take that many classes. But even if they were, this is an extraordinary waste of time, since over half of what is taught in those classes is pretty much useless in ecology even if you’re going deep into theory. For example – path and surface integrals and Green’s theorem are completely irrelevant. Solving systems of linear equations is useless, thereby making determinants more or less useless. Differential equations as taught – useless to ecologists (very useful to physicists and engineers). Measure-based probability theory – useless. Linear programming – almost useless.

Here’s my list of topics that a very well-trained mathematical ecologist would need (beyond a 1st year calculus sequence):

  1. Multivariate calculus simplified (partial derivatives, volume integrals)
  2. Matrix algebra and eigenvectors
  3. Dynamical systems (equilibrium analysis, cycling and chaos)
  4. Basic probability theory and stochastic processes (especially Markov chains with brief coverage of branching processes and master equations)
  5. Optimization theory focusing on simple calculus based optimization and Lagrange multipliers (and numerical optimization) with brief coverage of dynamic programming and game theory
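To give a flavor of topic 2, here’s a minimal sketch (plain Python; the matrix entries and function names are my own invention for illustration) of power iteration on a Leslie matrix – repeated projection and renormalization converges on the dominant eigenvalue, which is the population’s asymptotic growth rate, and on the stable stage distribution:

```python
def mat_vec(m, v):
    # Multiply matrix m (a list of rows) by vector v.
    return [sum(a * x for a, x in zip(row, v)) for row in m]

def dominant_eigen(m, iters=500):
    # Power iteration: repeatedly project and renormalize. For a
    # Leslie matrix (nonnegative, primitive) this converges to the
    # dominant eigenvalue and eigenvector by Perron-Frobenius.
    v = [1.0] * len(m)
    lam = 1.0
    for _ in range(iters):
        w = mat_vec(m, v)
        lam = max(w)              # renormalize by the largest entry
        v = [x / lam for x in w]
    return lam, v

# A hypothetical 3-stage Leslie matrix: top row is stage fecundities,
# subdiagonal entries are stage-to-stage survival probabilities.
leslie = [[0.0, 1.5, 2.0],
          [0.5, 0.0, 0.0],
          [0.0, 0.4, 0.0]]

lam, stable = dominant_eigen(leslie)  # lam is roughly 1.06: slow growth
```

The same ten lines of code double as a demo of why eigenvectors matter biologically: project any starting population forward and it converges to `stable` regardless of initial composition.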

Now how should that be covered? I can see a lot of ways. I could see all of that material covered in a 3 semester sequence (#1/#2, #3, #4/#5) if you want to teach it as a formal set of math courses. And here is an interesting question. We ecologists often refuse to let the stats department teach stats to our students (undergrad or grad) because we consider it an important enough topic that we want our spin on it. Why don’t we have the same feelings about math? Yet as my two lists show, math departments are clearly focused on somebody other than ecologists (mostly, I think, on other mathematicians in upper-level courses). So should ecology departments start listing a few semesters of ecology-oriented math among their courses?

But I could see less rigorous, more integrative ways to teach the material as well. For example, I think in a year-long community ecology class you could slip in all the concepts. Dynamical systems (and partial derivatives) with logistic/Ricker models and then Lotka-Volterra. Eigenvectors and Markov chains with Horn’s succession models or age-stage structure, then eigenvectors returning in the Jacobian of predator-prey models. Master equations with Neutral Theory. Optimization with optimal foraging and game theory. Yes, the coverage would be much less deep than a 3 semester sequence of math-only courses, but it would, I think, be highly successful.
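The dynamical-systems piece of this is especially easy to demo in class. Here’s a minimal sketch (plain Python; parameter values and function names are my own choices) of the Ricker map, y_{t+1} = y_t·exp(r(1 − y_t/K)), which settles to the carrying capacity for small r and goes chaotic for larger r:

```python
import math

def ricker_step(y, r, k=1.0):
    # One step of the Ricker map: y_{t+1} = y_t * exp(r * (1 - y_t / k))
    return y * math.exp(r * (1.0 - y / k))

def trajectory(y0, r, steps, k=1.0):
    # Iterate the map from y0 and return the whole time series.
    ys = [y0]
    for _ in range(steps):
        ys.append(ricker_step(ys[-1], r, k))
    return ys

# Stable case: for 0 < r < 2 the equilibrium y* = k is attracting,
# so the trajectory settles down to the carrying capacity.
settled = trajectory(0.1, r=1.5, steps=500)[-1]

# Chaotic case: by r = 3 the trajectory bounces around irregularly
# between very low and very high densities.
tail = trajectory(0.1, r=3.0, steps=500)[-100:]
```

Plotting `trajectory(0.1, r, 200)` for a handful of r values between 1.5 and 3 walks students from equilibrium through cycles to chaos in a single lab session.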

I say “I think” because I don’t know anywhere that teaches the math this way. I teach a one-semester community ecology grad class and try to get a subset of the concepts across, but certainly don’t come anywhere close to covering everything that I wish were covered (i.e. my list above). And I know a lot of places have a one-semester modelling course for grad students. But departments teaching their own math courses, or a math-intensive ecology sequence, I haven’t come across.

What do you think? Have I listed too much math? Or left your favorite topic out? How should this be taught? How many of our students (undergrads, all grads, or only a subset of interested grads) should this be taught to?

Friday links: new p-hacking data, grant lotteries, things ecologists will never experience, and more

Also this week: a blogging anniversary, betting on replication, Shakespeare vs. dead animals, Brian and Jeremy have a link fight, and more. Also terrible philosophy puns.

From Brian (!):

Do the countries of the researchers you coauthor papers with affect the impact factor of the journal you get in? Apparently yes: see this piece from Emilio Bruna.

In the always entertaining and provocative Ecological Rants blog, there is a quote from Thomas Piketty’s book (which is setting the economic world on fire on the topic of income inequality for its careful empirical compilation of historical data). The quote is pretty harsh about economists’ obsession with little toy mathematical models that don’t inform about the real world. Krebs argues this critique applies to ecology as well (and cites no less than Joel Cohen, one of the great theoretical ecologists, who regularly chides ecologists for their physics envy). While I am an advocate for more math education in biology, I have to confess a certain sympathy with the quote. We’re so busy obsessing over equilibrium math models and small-scale manipulative experiments that we’re missing a lot of the story that is sitting in front of us in the massive amounts of data that have been and could be assembled. (There’s a controversial statement to make you sit up on a Friday.)

Following up on my post about NSF’s declining acceptance rates, there is a well-argued post by coastalpathogens suggesting we should just revert to a lottery system (one of my suggestions, but not one that received a lot of votes in the poll).

From Meg:

Things ecologists are unlikely to learn firsthand: it’s hard to fly with a Nobel Prize. (Jeremy adds: is it hard to fly with the Crafoord Prize?)

The Chronicle of Higher Education had an article on increasing scrutiny of some NSF grants by Congressional Republicans (subscription required).

From Jeremy:

Link war! Brian, I’ll see your Thomas Piketty quote, and raise you Paul Krugman. Krugman’s long advocated the value of deliberately simplified toy models as essential for explaining important real-world data, making predictions, and guiding policy. See this wonderful essay on “accidental theorists” (and why it’s better to be a non-accidental theorist), this equally-wonderful essay on how badly both economists and evolutionary biologists go wrong when they ignore “simple” mathematical models, and this one in which Krugman explains his favorite toy model and how it let him make several non-obvious and very successful predictions about the Great Recession. Oh, and as important as Piketty’s empirical work is, it’s worth noting that even very smart and sympathetic readers have had a hard time figuring out what his implicit model is. If your model’s not explicit (and if you don’t care much for doing experiments), then your big data might as well be pig data. While I’m at it, I’ll raise you R. A. Fisher too.*

Statistician Andrew Gelman has been blogging for 10 years. I was interested to read his comments that there used to be more back-and-forth among blogs 10 years ago, and that these days that only happens in economics. I share the impression that economics is the only field that has a blogosphere. I also share Andrew’s view that Twitter is no substitute for blogs. Twitter has its uses. But “in depth conversation and open-ended exploration of ideas” is not one of them.

Speaking of Andrew Gelman, he passes on a link to a new preprint on the distribution of 50,000 published p-values in three top economics journals from 2005-2011. I’ve skimmed it, and it seems like a pretty careful study, which avoids at least some of the problems of similar studies I’ve linked to in the past. The distribution has an obvious trough for marginally non-significant p-values, and an obvious bump for just barely-significant p-values. The authors argue that’s evidence not just of publication bias, but of p-hacking (e.g., choosing whichever of a set of alternative plausible model specifications gives you a significant result). They estimate that 10-20% of marginally non-significant tests are p-hacked into significance. The shape of the distribution is invariant to all sorts of factors–the average age of the authors, whether any of the authors were very senior, whether a research assistant was involved in the research, whether the result was a “main” result, whether the authors were testing a theoretical model, whether the data and/or code were publicly available, whether the data came from lab experiments, and more.

One more from Gelman: You can now bet real money on whether a bunch of replication attempts in psychology will pan out. I think it would be really fun, and very useful, to have something like this in ecology.

Most tenure-track jobs do not have 300+ applicants (and even the few that do tend to have an unusually-high proportion of obviously-uncompetitive applicants).

Speaking of tenure-track job searches: soil ecologist Thea Whitman with a long post on what it was like to interview (successfully!) for a tenure-track job. Go read it, it’s full of win.

Shakespearean insult or animal on display at Harvard’s Museum of Natural History?

Philosophy student karaoke songs.

*I’m guessing that Brian saw this response from me coming from 10 miles away, but I figure he (and y’all) would have been disappointed if I didn’t actually follow through and provide it. My boring predictability, er, clockwork reliability is one of my most endearing features. That, and my refusal to take second place to any ecologist when it comes to making half-baked analogies with economics. [looks over at Meg, sees her rolling her eyes, coughs awkwardly] :-) In seriousness, I actually do see what Brian means and probably don’t disagree with him that much here. And for what it’s worth, I think current trends in ecology are mostly running in the direction Brian would like to see them run (e.g., away from MacArthur-style toy models of a single process.)


How to get a postdoc position (guest post)

Note from Jeremy: This is a guest post by Margaret Kosmala, a postdoc in Organismal and Evolutionary Biology at Harvard. It’s the first in a planned series on life as a postdoc.


I did not start thinking about getting a postdoc position until it was almost too late. I was focused on my dissertation research and finishing up before I ran out of money. About six months from defending, I suddenly realized that I would be unemployed once I did defend. I knew that I had to start trying to find a postdoc position right away. And then I realized I had no idea how to go about doing so. This was at the beginning of last summer and so I spent the next months talking to as many people as possible. Here is what I learned.

There are essentially two ways of obtaining a postdoc. The first is to write your own. The second is to apply for a job with someone who already has a project.

To write your own postdoc may be the best option if your objective is a future research career. However, you need to start early. Assuming you already know what sort of research you want to do, you have three potential methods of obtaining the funding to support yourself. You can co-write a proposal with your future postdoc mentor, you can look for fellowship opportunities, or you can look for a postdoc advisor with deep pockets.

If you know who you want to work with and what you want to do, co-writing a successful major grant proposal can be great experience and look stellar on your CV or in a letter of recommendation. If you want to try this route, you should start contacting prospective postdoc advisors a couple years before you expect to defend.

Yes, I said a couple years.

Why a couple years? Most organizations have just one or two funding cycles per year. For example, if you expect to defend in May 2016, and you would like to be funded on an NSF DEB grant, you would need to have that grant funded by January 2016. In order to do that you would need to submit your pre-proposal in January 2015. And in order to submit in January, you would need to start working on the proposal this fall (2014). Which means that you should probably have established a rapport with your future postdoc advisor by now.

Defending before May 2016? Fellowships are your thing? You can look for postdoctoral fellowships offered by funding organizations such as NSF, by research centers like SESYNC and NIMBioS, and by private entities like the McDonnell Foundation. Generally speaking, you will need to have a postdoc advisor in mind.

A less well-known source of fellowship funding is universities themselves. Some universities offer institution-wide fellowships on a competitive basis. At other universities there are research centers focused on environmental issues that also offer fellowship opportunities. Finding out which universities provide these opportunities can be tedious, however, so it’s often best to ask potential postdoc advisors what opportunities, if any, are offered at their institutions.

If you’re looking for postdoc fellowships offered through large agencies or foundations, they often have just one or two deadlines per year, which means that you may need to write a competitive proposal about a year in advance. When I started thinking about a postdoc position six months ahead of defending, I was too late for almost all postdoc fellowships.

Which brings me to the third method for writing your own postdoc. Some professors have, at times, a pot of money they can use to hire a postdoc. It may be in the form of an endowed professorship, start up funds, prize money, etc. If you’ve only got six months or so before defending, you might start asking around to see if anyone you know – or anyone those people know – expects to have money to fund a postdoc in the next year or so. Sometimes researchers get money they weren’t expecting and need to use it relatively quickly, so keep your ears open. You’ll want to be able to pitch an exciting idea to your prospective postdoc advisor and have a handful of references (friends of the prospective advisor are ideal) who are willing to attest to your awesomeness.

Finally, the remaining way of obtaining a postdoc: applying for advertised positions. I won’t say too much about this method, since it’s pretty straightforward and there are other websites which give guidance as to where to look for job ads and how to best position yourself. In a nutshell: you find a position that looks like it would fit you, send in an application, perhaps get an interview (often by phone or Skype), and sign a contract if you’re offered the position and accept it. In applying, you should do smart things like read the webpage(s) and some recent publications of the job offerer. If you’re offered the position, interview other postdocs and grad students in the lab before accepting; you should like your work environment as much as the research itself. And you might take a glimpse at the benefits package to make sure it’s sufficient.

Hurray! You’ve got a postdoc position. Now tell everyone you know, save up a couple thousand dollars or raise the limit on your credit card in preparation for your move, and say goodbye to your friends. Check out ESA’s new Early Career Ecologist Section. Oh, and definitely finish that dissertation.

Friday links: dump the Canadian CCV, unreclaiming zoology, billion dollar grant, and more

Also this week: new videos for teaching ecology, social media as professional development, the pluses and minuses of minority-focused conferences, the best ecology blog you’ve (well, I’d) never heard of, and more.

From Meg:

I added two fun, deep sea-related new videos to my collection of videos for teaching ecology: 1) a massive deep sea mussel bed; in the video, they have the robotic arm play with the solid methane hydrate that has formed near the mussels (ht: Deep Sea News), and 2) a video of a whale fall community, complete with footage of a shark tearing into the whale (ht: Joshua Drew). Fun! And, while we’re talking about whale falls, this is a neat article about them; among other things, it talks about snotworms (yes, there really are things called “snotworms”).

Conservation Biology is the latest journal to go to double blind peer review. I love the opening line of this announcement: “To have biases is human, to fight them, while not divine, is at least worth attempting.” (ht: David Shiffman)

SciWo had a Tenure, She Wrote post on social media as professional development. I really enjoyed it. As she summarizes near the end:

Being on-line does take some time, but so does everything worth doing in life. I’ve never seen any convincing data to show that strategic use of social media is any worse investment of my time and energy than any other thing I could do with those random moments of brain weariness or distraction when I find myself refreshing my Twitter feed or reading a blog post. Instead, the benefits I’ve listed above seem to make a compelling case for engaging with your academic peers on-line – just as you would outline benefits if you encourage networking in person at conferences.

From Jeremy:

Here’s a petition I can get behind! Tell NSERC to dump the Canadian Common CV. For you non-Canadians: last year NSERC and other Canadian funding agencies started requiring researchers applying for grants to provide their CVs using a ridiculous online form. The petition is not exaggerating–it literally is two weeks of work to enter all the information (I know because I just did it for my grant renewal application). Which the software then prints in a horrendously organized and butt-ugly format that makes it very difficult for the people who are evaluating your application to find the information they want. But hey, at least we get…um, actually there’s no upside. Unfortunately, researchers have already been protesting the CCV since it was introduced, and all we’ve gotten in response is minor software updates, so I doubt this petition will go anywhere. The next time an institution admits a mistake and drops enterprise software it had previously adopted will be the first time.

Caroline Tucker asks a good question: What would you do with a billion dollar grant? The inspiration is a billion dollar EU-funded project to recreate the human brain with supercomputers. As that example and others illustrate, the way to attract a big slug of money to a field these days often is with some very ambitious project. Click through for Caroline’s nice discussion of what a billion dollar ecology project might look like (a more expensive version of NEON isn’t the only possibility). Semi-related discussions of the trade-offs between centrally-coordinated science and individual investigator science, and between expensive science and cheap science, here, here, and here, and see here for a relevant historical discussion of the IBP.

Ray Hilborn on “faith-based fisheries”. An entertaining and provocative polemic from 2006. I’m not qualified to evaluate it, but thought it worth passing on. (ht a correspondent, via email)

Un-reclaiming the name “zoologist”. Just one of many interesting posts from EcoEvo, the group blog of the ecology & evolution students (and faculty?) at Trinity College Dublin. It is long-running, but I just stumbled across it last week. Apparently I should’ve noticed them much earlier, as they’ve just been named “Best Science and Technology Blog in Ireland”. I added them to our blogroll.

For instance, here’s Natalie Cooper from EcoEvo on her experience of having to work very hard to organize a gender-balanced plenary session for a specialized conference. Includes lots of practical suggestions for overcoming the usual excuses for lack of gender balance (which as she notes aren’t merely excuses–they’re often real problems). Kudos to her for putting in all that effort and I’m glad it was rewarded in the end. Though had it not been rewarded, I hope she wouldn’t have beaten herself up. See this old post of Meg’s for related discussion.

Last year the NSF DEB and IOS surveyed the community about their views of the new preproposal system. The results are in. The headline result is that people like the preproposal system but don’t like being able to apply only once/year. Note that fears that the new system would disproportionately affect certain groups have not been borne out. Group representation among awardees is the same under the new system as under the old system. (ht Sociobiology)

Terry McGlynn is torn over whether it’s useful for minority students to attend a minority-focused conference if that conference doesn’t include many people working in their field. Somewhat related to my old post arguing that students mostly shouldn’t bother attending student-focused conferences.

Zen Faulkes wonders whatever happened to the annual Open Lab anthology of the best online science writing, and what its apparent demise says about the changing face of online science writing.

Amy Parachnowitsch on the many benefits of writing a review paper (or three at once, in her case!)

The Chronicle has picked up ecologist Stephen Heard’s piece (noted in a previous linkfest) on the value of humor in scientific writing.

In 2006 Germany started following the lead of many other countries and began charging tuition at its public universities. They’ve now reversed that decision.

And finally: Happy Canadian Thanksgiving! :-)

Poll results: what should ecologists learn less of?

Here, for what they’re worth*, are the results so far from yesterday’s poll asking readers to name the most important thing for ecologists to learn more of, and the thing they should learn less of in order to free up time. We’ve gotten 165 responses so far, and based on past experience the results won’t change much if we wait any longer.

Results first, then some comments:

[Bar charts of poll results: vote tallies by topic for "more of" and "less of"]

  • No consensus for either question. And not only that, every topic got at least one vote as the most important thing for ecologists to learn more of, and at least one vote as the most important thing for ecologists to learn less of! I think this is a useful reminder of just how diverse ecologists are in terms of their background knowledge, motivations, interests, and expertise (which I think is a good thing, by the way).
  • Most popular answer to both questions was “it depends”, which I interpret as a vote in favor of flexible curricula that let different people specialize on different things according to their own needs and interests. That’s what many (not all) graduate curricula are like, of course.
  • Probably no surprise that the next two most popular choices for what ecologists should know more of were “programming” and “statistical techniques”. For obvious and very good reasons, there’s a long-term trend for all fields of science to become more quantitative, and to make heavier use of computers. Then natural history, math, and evolution.
  • Next most popular choices for what ecologists should know less of were “chemistry” and “physics”. I interpret that as a vote against the common North American practice of requiring all science majors (not just ecologists) to take introductory physics and introductory chemistry. Curious to hear discussion of this. After that were “economics” and “mathematical foundations of statistics”. Ecologists aren’t ordinarily taught anything about either of those, so I suspect that votes for these were just people’s way of identifying the least-important subjects on the list for ecologists to know, whether or not those subjects are actually part of current ecology curricula. Next was “genetics and molecular biology”, followed by “natural history” and then “philosophy of science”.
  • There were no obvious associations between what folks thought ecologists should learn more of, and what they thought ecologists should learn less of, except that most (but not all) people who said “it depends” for one question also said “it depends” for the other.
  • If for each topic you take the difference between the number of votes for more of it and less of it, you get a crude index of respondents’ net desire to see ecologists learn more of it. By this measure, the topics rank as follows: programming +20, statistical techniques +15, math +11, evolution +9, natural history +7…[skipping some]…economics -16, physics -19, chemistry -21. And the topics in the middle were those that received few votes for either question. There weren’t any hugely controversial topics that lots of people really want ecologists to learn more of and lots of other people really want ecologists to learn less of.
  • Full disclosure: I’d have answered both questions “it depends”.
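For anyone who wants to play with the numbers themselves, the net-desire index in the bullet above is just (“more of” votes) minus (“less of” votes), ranked from highest to lowest. Here’s a minimal sketch; the raw tallies below are hypothetical (the post only reports the net values), chosen so that the nets match a few of the figures quoted above:

```python
# Hypothetical raw vote tallies (illustrative only; the post reports
# just the net values: programming +20, statistics +15, chemistry -21).
more_votes = {"programming": 25, "statistical techniques": 20, "chemistry": 2}
less_votes = {"programming": 5, "statistical techniques": 5, "chemistry": 23}

# Net desire index: "more of" votes minus "less of" votes, per topic.
net = {topic: more_votes[topic] - less_votes[topic] for topic in more_votes}

# Rank topics from most-wanted to least-wanted.
ranked = sorted(net.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```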

*Probably not much

What should ecologists learn LESS of?

There are lots of things that it would be nice for ecologists to know more of. Natural history. Math. Programming. Statistical techniques. The mathematical foundations of statistics. Philosophy of science. Genetics. Evolution. Other things.

If you’re like me, you probably think ecologists should know more about at least one of those things, and don’t think ecologists should know less of any of them. After all, you often hear people say “Ecologists should know more about X”. But you never hear anyone say “Ecologists should know less about X”. Which is a problem. If you want ecologists to be trained in more of some things than they currently are, without being trained in less of anything else they are currently trained in, then you want the impossible. Well, unless you also think that undergraduate and graduate programs in ecology should last significantly longer than they do!

Don’t misunderstand, it’s fine for people to say what they think ecologists should know more of. That’s an essential part of revising curricula. But the other half–the less fun, but equally necessary, half–is deciding what to drop in order to free up time for the stuff you want to do more of. Anyone who’s taught a class has had the experience of agonizing over not being able to cover lots of fascinating and tremendously important material, because there’s just not enough time. But I think we sometimes forget that time constraints also operate at the level of entire curricula. So it’s fine to say that ecologists should know more of X. But if that’s all you say, well, that’s the curriculum design equivalent of wishing for a pony.*

Of course, when people say “Ecologists should know more of X”, they aren’t necessarily commenting on the design of ecology curricula. In my admittedly anecdotal experience, sometimes it seems like they’re really saying, “I know a lot about X, and so it really bugs me when people who know less about X make mistakes that could’ve been prevented had they known more about X.” Of course, nobody ever continues, “On the other hand, I know nothing of Y, and so am totally unaware of all the mistakes people make due to their lack of knowledge of Y, and so can’t really judge the relative importance of knowing X vs. knowing Y.” And sometimes what they’re really saying is “I know more about X than the average ecologist, which is good because the optimal amount to know about X is whatever amount I personally happen to know.” And sometimes they’re really saying something else. But for purposes of this post, I want to take statements like “Ecologists should know more about X” at face value, and think about the hard choices of curriculum design that follow from such statements.

After all, the world is changing, technology is changing, etc., so maybe ecology curricula do need to change to keep up (they’ve certainly changed in the past). Maybe we really do all need to know more about X, in which case we need to make some hard choices and figure out how to free up the time for everybody to learn more about X.

So let’s talk about those hard choices. As a conversation starter and mind-focuser, below is a little poll. It asks you to name the one thing you think it’s most important for ecologists to learn more of, and the one thing you think ecologists should learn less of, in order to free up time for them to learn more of whatever it is you think they should learn more of. Both questions are required, so you can’t complete the poll by just wishing for a pony and saying what you think ecologists should learn more of. If you don’t think ecologists need to learn more of anything, there’s an option for that (in which case you’re allowed to say they don’t need to learn less of anything either). And if you think different ecologists need to learn more of different things, or less of different things, you have that option. That’s the option you’d pick if you think ecology should involve lots of collaboration among differently-trained specialists. But reasonable as that last option might well be, I’m hoping you don’t all chicken out and take it. :-)

Note that you can think of the poll as encompassing undergraduate and graduate training collectively (which is how I think of it), or as focusing on one or the other (e.g., because you think undergraduate curricula are fine but graduate curricula need revamping).

p.s. Before anyone complains about the way the poll is structured: yes, I obviously could’ve structured it differently. But no structure would’ve pleased everyone. I went with this poll because it seemed like a fun conversation starter, which is all it’s meant to be. It’s not a scientific sample from any well-defined population. Also, this poll was easy to write; you get the polls you pay for on this blog. If you don’t like the poll, no worries, just ignore it. You can still comment on what changes you’d like to see to ecology curricula–but no wishing for ponies! :-)

*Of course, you can also argue that ecologists should learn the same things, but better or differently than they currently do. See for instance Fred Barraquand’s comment on a recent post. That’s an important point, but it’s orthogonal to this post.

Should journal editors be anonymous?

Should journal handling editors be anonymous?

Editor anonymity used to be rare or nonexistent at ecology journals. But it seems to be more common now, at least for certain decisions and at certain journals. In particular, it now seems to be fairly common for rejections without review to be anonymous.

I can understand the reasons for this. The stakes are higher these days, or at least they’re perceived to be higher, which might amount to the same thing. Many authors probably feel like they have a lot riding on every ms, and editors don’t want authors to get upset with them over rejections. Both because it’s no fun to have to deal with irate authors, and because of the fear that an author might hold a grudge against you and give you a bad review on your next grant or something. I have friends and colleagues whom I hugely respect who serve as editors and are glad to have, or wish they had, the option to remain anonymous.

But while I can understand the reasons, I think they’re outweighed by other considerations. I personally don’t like editor anonymity. I served as an editor myself at Oikos for several years, starting before I had tenure. As far as I can recall, our names went on all our decisions, including rejections without review, and I wouldn’t have had it any other way. As an editor, I felt that since I was the one with decision-making power, I needed to take responsibility for my decisions. Which for me meant being willing to sign my name to them. This is unlike being a referee, whose job is merely to provide advice to the editor. And while the final decision officially rested with the Editor-in-Chief, in practice the EiC ordinarily just rubber-stamped the decisions of the editorial board members (that’s the way it is at most ecology journals). And if that led to a senior ecologist getting upset with me (as happened to me once at Oikos), well, if you can’t take the heat stay out of the kitchen.* Once in a while, a professional decision you make might upset someone. That’s unfortunate, but that’s life.

I worry that editor anonymity undermines trust in the peer review system. Authors are more likely to respect a decision if they know who it’s coming from. Editor anonymity feeds the perception that peer review is a crapshoot at best and a rigged game at worst. Journals and their editors should fight that perception, not encourage it.

The Committee on Publication Ethics (COPE) has criticized editor anonymity. Now, in fairness their criticism focuses on the practice of editors writing anonymous reviews of the mss they handle.** But COPE’s reasons for criticizing that practice apply to editor anonymity more broadly, I think. As COPE notes, editors are the overseers–there’s nobody to oversee and evaluate them. Overseers shouldn’t be anonymous.

But I bet this is an issue on which some folks (probably including some of my friends) will disagree with me, so let’s talk about it. As an author, do you mind editor anonymity, or not? As an editor, are some or all of your decisions anonymous, and if not do you wish they were? Why? Looking forward to your comments.

*Plus, is it really that common for scientists to hold serious long-term grudges against one another, and be in a position to act on them in a way that would materially affect someone else’s career? Or is the increased competition for jobs, grants, and space in leading journals just causing people to worry more about that unlikely possibility? For instance, in an old post on a related topic, Brian notes that his very first paper as a grad student was a very high profile paper that seems to have upset a very prominent ecologist. But Brian’s career has gone just fine. As I said in a different context, I think it’s pretty rare for one little thing–like say, one editorial decision you make–to materially affect your career one way or the other. But of course, I have nothing more than anecdotes to back this view.

**When I read that, I was stunned. There are editors who do that? I’d never heard of such a ridiculous editorial practice. But that’s not what this post is about.

Friday links: the cult of “too busy”, why research fails, a love letter to National Geographic, and more

Also this week: why academics write badly, haters gonna hate (and that’s a good sign), college enrollments are declining (and that’s a good sign), combinatorics vs. the h-index, the fishy exact test, chickens CHICKENS, and more. Oh, and leopard+gravity vs. impala. And leopard vs. Marmite.

From Meg:

Scicurious had a really nice post on her love of National Geographic, which started when her grandfather gave her a subscription. She talks about moving through the years, always moving all those boxes of NatGeo. I love the solution she came up with for preserving the memories in those golden covers. We also had National Geographic growing up, and I also loved looking through them. This is the cover that stands out in my memory. Our National Geographic subscription began as a prize my oldest sister won (I think for winning a school spelling bee). My father likes to joke that it was the most expensive prize we ever got, since he renewed the subscription for many years after that gift subscription was over.

From Jeremy:

Why research fails. A brief and broad discussion of the many ways a research program can go off the rails, but still a good one for beginning grad students to read.

And another from Claudia Sahm: smart people don’t always know the right answers, but they know the right questions. The examples are from economics, but the point is broader.

Graduate students should not fall into the cult of being “too busy”. Nice post on how to “deprogram”. I particularly like the advice on knowing how to stop once some bit of work is good enough, and the advice to seize (and create) opportunities rather than passing them up because you’re “busy”. In my experience, the best graduate students (by any metric you care to name) are the ones who attend seminars, meet with visiting speakers (even those whose work is unrelated to their own), organize reading groups, take classes just because they sound interesting, etc. Here’s another nice post making the same point. And no, you don’t have to work 80 hours/week to do all that. (ht @kerryecharles)

Self-described “alpha female” Cathy O’Neil (“Mathbabe”) on how she figures she must be doing something right if people hate her.

The latest salvo in the ongoing debate over alternative rationales for conservation: Richard Conniff on how he’s tired of pretending that every species is or might be “useful”. A semi-related old post, and here are two more.

Some of you may recall a high profile PNAS paper from 2004 purporting to show that diverse teams of problem-solvers outperform less-diverse teams of experts. Yeah, not so much. A case study of how to abuse mathematics. There are plenty of good arguments for “diversity” in many contexts, but it’s a shame that this bad one has gotten a lot of play. (ht Mathbabe)

Let’s turn from that last link to a better use of math: to a good first approximation, your h-index is proportional to the square root of the number of times you’ve been cited–and you can use combinatorics to prove it. Technical in places, but well-written, and the intuition is straightforward. A good example of using math to aid interpretation of data. (ht Mathbabe)
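The square-root relationship in that last link is easy to poke at yourself. A minimal sketch (the citation counts below are made up for illustration): compute the h-index from a list of per-paper citation counts, then compare it to the square root of total citations. One part is provable rather than approximate: since the h most-cited papers each have at least h citations, h² can never exceed total citations, so h ≤ √(total).

```python
def h_index(citations):
    """Largest h such that at least h papers each have >= h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# Made-up citation counts for one hypothetical researcher.
counts = [50, 30, 22, 15, 12, 9, 7, 5, 3, 2, 1, 1, 0]
h = h_index(counts)
total = sum(counts)
print(h, total, total ** 0.5)  # h is bounded above by sqrt(total)
```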

Steven Pinker with an astute discussion of why so many academics write badly–and how they can write better.

In a new paper, philosopher of science Jonathan Birch tries to unpack the kerfuffle over kin selection prompted by Nowak et al. (2010). Very accessible. I read it with interest for the thoughtful discussion of what the Price equation can and can’t teach us about evolution. Includes interesting comments on trade-offs between different desiderata of theoretical models (e.g., between generality and predictive power, and between mechanistic explanation vs. explanation in the sense of unification). A nice example of a philosopher moving a scientific argument forward, by clarifying conceptual points that the scientists themselves had been struggling to articulate.

Nice blog post from Adam Algar discussing Helmus et al.’s recent Nature cover article. Human activities are reshaping anole biogeography in ways that are consistent with island biogeography theory. Notes that this is a nice example of fundamental research turning out to be relevant to an applied issue in ways that weren’t anticipated at the time the fundamental research was being conducted.

Key points about the Ebola outbreak.

Speaking of disease outbreaks, Ottar Bjørnstad and colleagues at Penn State are teaching a MOOC on infectious disease dynamics. It just started, apparently there’s still time to sign up.

My Calgary colleague Steve Vamosi has started blogging. His first post is on p-values and model selection.

Why declining US college enrollments are a good thing (well, a symptom of a good thing).

This week in artificial selection: today’s chicken breeds grow to 4x the size of 1950s breeds on the same diet. One of those studies that’s best summarized by a photo rather than a figure.

As a blogger, I enjoyed this: the 10 best essays since 1950. Not because I think it’s a definitive list (I haven’t read nearly enough essays to judge), but just because I always like being pointed towards good essays. Longer blog posts are like essays, or perhaps mini-essays. (ht Marginal Revolution)

An amusing online demonstration of fishing for statistical significance. (ht Andrew Gelman, though some of his commenters on this seem to have missed the joke)

Sub-optimal foraging: a leopard tries Marmite. The reaction is…hard to interpret. :-) (ht Not Exactly Rocket Science)

Presumably this is the hunting technique the leopard used to catch the Marmite. Wow! :-) (ht Not Exactly Rocket Science)

Hoisted from the comments:

A recent linkfest led to a good discussion of the ethics of lecturing, as opposed to using active learning.