Marquet et al. on theory in ecology

Marquet et al. (2014) is a very interesting new paper on theory in ecology–what theories are, why they’re valuable, and what makes for a good one (or a bad one–we’ll get to that). It’s explicitly philosophical, which is great–scientists should be explicit about their philosophy of science. But it’s also very concrete–Marquet et al. illustrate and support their general philosophical claims with detailed discussions of several familiar ecological theories.

Below are a bunch of thoughts on the paper (see Peter Keil’s blog for more thoughts). As usual, don’t think of this as “post-publication review”, it’s just me thinking out loud about a paper that’s worth thinking about.

  • Here’s a brief summary of the paper, to whet your appetite and encourage you to click through. Marquet et al. start by adopting the same distinction between theories and models I discussed here. They share my impression that models and data currently are ascendant over theories. They argue that this is bad, that we can’t do without the understanding and unifying general principles provided by good theories. They emphasize the importance of theory-data linkages. They follow philosopher Larry Laudan (1977) in saying that theory evaluation is a comparative matter, and that good theories are “efficient” in the sense of providing more or better explanations and predictions with fewer free parameters. They offer various reasons for preferring efficient theories (which I kinda wish they’d presented in a bullet list, to maximize clarity.) And they discuss examples of efficient and inefficient theories. Their examples of efficient theories: Fisher’s sex ratio theory, optimal foraging theory, the metabolic theory of ecology, MaxEnt, and neutral theory. Their examples of inefficient theories: R* and resource ratio theory, and dynamic energy budget theory.
  • I love that Marquet et al. have the courage of their convictions to criticize some very prominent theories. It really bugs me when people stake out a position but then consciously or unconsciously duck the full implications (e.g., focusing on the upsides and not the downsides). The only way you can evaluate and improve your ideas is by facing up to their full implications.
  • And I do think it’s fair to read Marquet et al. as criticizing some theories. They say that “Our strategy is not normative”, but I think it actually is. They don’t merely describe what efficient theories are, they talk at length about why theories should be efficient. Now, they recognize the value of other things besides theories, and other virtues of theories besides efficiency, and maybe that’s what they mean when they say they’re not being “normative”. But make no mistake, they think theoretical efficiency is really valuable and that inefficiency is a significant strike against a theory, even if that strike might be counterbalanced by other things.
  • I don’t agree that theory evaluation should always be a comparative matter. If all of our current theories about X are bad in some absolute sense, I think it behooves us to recognize that, rather than just sticking with (and trying to improve) the best apple of a bad bunch. And no, this doesn’t necessarily mean making the best the enemy of the good (or good enough), or giving up on the possibility of incremental improvement of inevitably-imperfect theories. Indeed, one important spur to the development of new, better theories of X is recognizing the inadequacy of all current theories of X.
  • In passing, Marquet et al. make many remarks with which I agree. Theories mostly aren’t very useful unless they’re expressed mathematically. All theories make simplifications and so are literally false, which is what makes them useful. Theories are valuable for other reasons besides making predictions. Just because a theory leaves lots of unexplained variation doesn’t necessarily mean it’s bad. Etc. Many of these points are familiar, but I liked seeing them all made in one place.
  • I’m sure there’s a lot more that could be and has been said on the philosophical side here (and I’m not the one to say it, because I’m not a philosopher, I just play one on the intertubes). Larry Laudan’s work is very influential, but is far from the last word. Still digging a bit for good overview links (will update the post if I find any), but it’s hard because there’s a big philosophical literature on issues like simplicity and unification.
  • Following on from the previous bullet, there are tough philosophical issues here to do with “explanation”. Marquet et al. want theories that explain why the world is the way it is. I want that too. But it’s not always obvious what counts as “explanatory” (see here and here for some discussion). For instance, MaxEnt provides explanations in terms of “constraints”. Given the constraints (e.g., that you have X species, and that mean abundance per species equals Y), it tells you that the species-abundance distribution (or whatever) will be the smoothest distribution consistent with those constraints. But what if those constraints aren’t exogenously determined? What if they’re endogenous, determined by the same underlying forces that also determine the things MaxEnt is trying to predict? Is MaxEnt then “explaining” the things it predicts? Or is it merely showing that the constraints and the things it’s trying to predict are correlated? Or maybe it’s neither, maybe MaxEnt is just pushing the explanatory question back a step, to “What explains the values of those constraints?” Honest questions, to which I’m unsure of the answer.
  • Marquet et al. makes for a really interesting contrast with Evans et al. (2013), another recent paper on theory in ecology. For instance, Evans et al. argue that complex models are more general than simple ones (though I think they mean something different by “general” than Marquet et al.). They argue against the idea that simplicity has a single definition. They argue that simple models aren’t explanatory (for the record, I disagree). They even argue that it’s currently more difficult to publish system-specific modeling work than it is to publish general theory (I disagree with them on this too, at least if we’re restricting attention to general ecology journals, unless they’re just thinking of some very particular sort of modeling like individual-based simulations). So if you want a provocative pair of papers for your lab group or reading group, something to really get people thinking and talking, you should totally read Marquet et al. and Evans et al. (and then comment to tell us how the discussion went!)
  • It’s striking that several of the theories Marquet et al. call “efficient” are macroecological. It’s interesting to ask why that is. Maybe it’s just happenstance. Or maybe certain kinds of problems are more open to theorizing about (e.g., problems characterized by statistical symmetries)? Whereas others demand models rather than theories (e.g., questions about population dynamics or species coexistence)?
  • Marquet et al. think it’s essential to link theories to data, and so in that respect contrast with folks like Caswell 1988. Indeed, they almost leave the (accidental?) impression that what they really care about is not efficiency or generality or fundamentalness of theories, but how easy it is to test the theory. Unfortunately, they don’t talk as much as I’d have liked about the effectiveness of empirical tests. For instance, empirical tests of neutral theory often have been uninformative (McGill 2003, 2006). But that might change in future, as it seems to be for MaxEnt (White et al. in press).
  • More broadly, how many times a theory has been tested, and in what ways, and how informatively, depends not just on the theory’s efficiency but also on all sorts of other factors. I don’t know that Marquet et al. would deny that, but they sometimes give the impression that they think it’s the theory’s fault if the theory hasn’t been tested a lot.
  • Which leads to my biggest disagreement with the paper: their criticisms of R* theory and dynamic energy budget theory. I was very surprised by these criticisms, but tried my best to think hard about them because the paper as a whole is quite good and because the authors are all really smart, thoughtful ecologists. But having thought hard about it, I still think Marquet et al. are off base. They say R* theory is difficult to test because you have to measure at least three parameters for each competing species in order to test it. Sorry, no. I know this because I’ve tested it myself in experiments that involved measuring one parameter per species, namely R* values (Fox 2002). So have other people (e.g., Harpole & Tilman 2006). And if you say, well, that’s still one parameter per species, which is still a lot because after all there are lots of species in the world, well, I don’t see why that’s so different than tests of the metabolic theory of ecology or MaxEnt or neutral theory. For instance, testing even one allometric scaling exponent predicted by metabolic theory requires measuring two numbers (body size, and whatever you’re regressing on body size) in hundreds of species of widely-varying sizes. Yes, all those numbers get boiled down into an estimate of a single parameter–the allometric scaling exponent–but that doesn’t thereby make metabolic theory easy to test. Similarly, MaxEnt predicts various things based on just a few “constraints” like mean abundance per species–but to measure those constraints you have to measure various properties of all the species and then take their averages. And that’s before we even talk about how there are often ways to test theories that don’t involve “estimating all of their parameters”. So whatever the virtues of efficient theory might be, “reducing the number of things you have to measure in order to test the theory, thereby making the theory easier to test” is not one of them. Marquet et al. 
also complain that R* theory has mostly been tested with small organisms (or grassland plants, they might have added). True enough–that’s because those are the species for which R* values are easiest to measure (though not easy in an absolute sense). But why is that relevant? Doesn’t that amount to implicitly giving neutral theory, MaxEnt, and metabolic theory “extra credit” for the fact that body sizes, abundances, and metabolic rates often are pretty easy to measure or estimate, so that lots of people happen to have measured those things on lots of species already? Surely neutral theory, MaxEnt, and metabolic theory shouldn’t be given “extra credit” for having parameters that happen to be easily measurable or estimable. Any more than one should ding general relativity or the Standard Model of particle physics for having parameters that require expensive high tech equipment to measure. And I’ve tried, but I just cannot understand why Marquet et al. see empirical and theoretical work on optimal foraging theory as an example of efficient theory and strong theory-data linkages, but see R* and resource ratio theory as an example of inefficient theory and weak theory-data linkages. Because to my mind the two bodies of work are very similar in what sort of theories they are, the ways in which people have tested them (e.g., by measuring species-specific parameters), the fact that they’ve both been tested mostly with certain kinds of organisms, how they’ve been modified and extended to incorporate realistic complications to the simplest limiting cases, etc. Compare Grover (1997) on R* theory and data, and Stephens and Krebs (1986) on optimal foraging theory and data–is there really a world of difference there? Finally, I think it’s worth considering effectiveness of tests here too. Tests of R* and resource ratio theory might be hard to conduct, but I don’t think it’s an accident that most of those tests have been really good tests. 
One nice thing about a theory being hard to test is that it prevents bandwagons based on weak tests of the theory. As far as I know, nobody’s ever seen an opportunity for a quick paper in testing R* theory. So if you’re going to count number and diversity of tests against R* theory, shouldn’t you count quality of tests in its favor? I know much less about DEB theory (though I do know a bit), but I suspect similar remarks would apply. (e.g., here’s Cressler et al. 2014 linking DEB theory to data on host-parasite interactions in Daphnia).
  • I wish Marquet et al. had been a bit more precise about the various reasons why we might want a “simple” theory. For instance, a simple theory might define a limiting case which we hardly ever observe in nature (not even approximately). The “R* rule” and various optimal foraging theorems (“0-1” diet rule, ideal free distribution) are examples. The point of such theories is to focus attention on a factor of interest, whether or not that factor is more “important” (by any measure) than those omitted from the theory. Another seemingly similar but actually quite different way a simple theory can be helpful is by including the most important factor while omitting less important ones. Metabolic theory is an example–if you want to explain metabolic rates, the two most important things to know are body size and temperature. Both sorts of simple theories can be described as providing a “baseline” that helps you learn something about the factors omitted from the theory. But what you learn from such “baseline” comparisons is different when the “baseline” is an unrealistic limiting case, vs. when the “baseline” is realistic in the sense of including the most important factor. The former is a conceptual baseline, the latter is an empirical baseline. Evans et al. make this point too (their “demonstration” models are what I’m calling theories of simple limiting cases).
  • The previous two bullets illustrate how tricky it can be to apply general principles (here, general philosophical principles) to specific cases. I think the previous two bullets also illustrate a point from the philosophy of science literature: “simplicity” is an infamously slippery concept, and it’s infamously difficult to say why scientists should prefer “simpler” theories. This is something I’ve talked about before in an ecological context. See Evans et al. for further discussion.
  • Following on from the previous bullet, it’s interesting to try to put other examples into Marquet et al.’s framework. For instance, is island biogeography theory efficient or not? Metapopulation theory? Life history theory? The point of such an exercise is not to slap labels on theories, but to try to come to a better comparative understanding of what works and what doesn’t in theory development. And I’m curious how Marquet et al. would’ve looked different if it had been written by people who believe in the same general philosophical principles, but who’ve developed different theories (among the authors of Marquet et al. are people who’ve worked on several of the theories Marquet et al. praise, but not the ones they criticize). For instance, in a 1987 paper Dave Tilman himself argued for R* theory as a simple, general theory based on a small number of fundamental parameters that makes testable predictions about lots of different things, facilitating tight linkage of theory and data. So, pretty much all the same general points as Marquet et al.–but the opposite illustrative example!
  • Nitpicky aside: what is “the” metabolic theory of ecology, exactly? Is it really one theory, or is it better thought of as a whole complex of different models or theories that all involve body size and metabolic rate? Don’t misunderstand, I can totally see that metabolic theory is an integrated body of work, and it’s totally fine to refer to that body of work as “the metabolic theory of ecology”. But if you’re trying to rank theories by their efficiency and defining efficiency in terms of number of free parameters, well, what’s the total parameter count for the entire complex of ideas that together comprise the metabolic theory of ecology? I bet it’s pretty high (e.g., there’s a whole bunch of parameters just in the original West et al. 1997 paper). One could of course ask similar questions about other examples Marquet et al. raise.
  • Marquet et al. talk briefly about theories as unifying, but they miss that there’s more than one way to have unification. One way to get unification is to have a single fundamental theory that explains a lot, at least to a first approximation; that’s the sort of unification Marquet et al. have in mind. But another way to get unification is to have general theoretical frameworks that, while not making any testable predictions themselves, bring together lots of different system-specific models under a unifying umbrella. Modern coexistence theory as developed by Peter Chesson and colleagues is a prime example of this sort of unification in ecology, and the Price equation is a prime example from evolution. More broadly, see here and here for discussion of how having a bunch of system-specific models is not the same thing as just having a disunified “stamp collection” of unique special cases. Of course, those are two different senses of “unification” and there’s probably an interesting discussion to be had about whether one can substitute for the other (my tentative view is that they’re at least partially substitutable). I talked more about this in one of the first blog posts I ever wrote (and still one of the best, I think).
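The MaxEnt bullet above is easy to make concrete. Given only a range of possible abundances and a constraint on the mean, the maximum-entropy distribution takes an exponential form, and finding it reduces to solving for a single Lagrange multiplier. Here’s a minimal sketch (my own toy illustration, not METE or any published implementation; the function name and parameter choices are hypothetical):

```python
import numpy as np
from scipy.optimize import brentq

def maxent_abundance_dist(n_max, mean_abund):
    # Maximum-entropy distribution over abundances 1..n_max,
    # constrained to have the given mean abundance. The solution
    # has the form p(n) proportional to exp(-lam * n); we solve
    # numerically for the Lagrange multiplier lam that satisfies
    # the mean-abundance constraint.
    n = np.arange(1, n_max + 1)

    def mean_error(lam):
        w = np.exp(-lam * n)
        return (n * w).sum() / w.sum() - mean_abund

    lam = brentq(mean_error, -1.0, 1.0)
    w = np.exp(-lam * n)
    return w / w.sum()

# 100 possible abundance classes, mean abundance constrained to 10
p = maxent_abundance_dist(100, 10.0)
```

With the mean constrained well below the midpoint of the support, the solution is a smoothly declining abundance distribution, which is exactly the “smoothest distribution consistent with the constraints” logic described in the bullet above.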

Non-academic careers for ecologists: data science (guest post)

Note from Jeremy: This is a guest post from Ted Hart, who holds a Ph.D. in ecology and did a postdoc at the University of British Columbia, but is now a data scientist in Silicon Valley. Thanks very much to Ted for offering to share his experiences (and click through on the link above for Ted’s blog, where he discusses his career path at greater length).

This is the latest in our series of posts on non-academic careers for ecologists. For previous posts in the series, go here. And if you’re an ecologist in a non-academic career we haven’t covered yet and want to write a guest post on it, drop me a line! (jefox@ucalgary.ca)

*************************************

1. When and how did you decide to go into data science?

When I moved to San Francisco (The joke being: A data scientist is a scientist who lives in San Francisco). In all seriousness though, I was recruited by my current company and it seemed like an opportunity I couldn’t pass up. The longer answer is that I took Jeremy’s story about almost leaving science to heart and cultivated a technical skill that would make me an appealing hire outside of academia. As I further progressed in my PhD I began to realize the practical realities of getting a faculty job. While I still hoped to stay in academia at the time, I knew that I needed a contingency plan. However, I found that what started out as my contingency plan was my real interest. I enjoyed working with data, modelling, coding and doing research more than I liked other parts of academia (paper writing, grant writing, teaching, etc…).

2. Did you get advice (wanted or unwanted) from others about your non-academic career path? If so, what sort of advice did you get, and how did it affect you?

I did get some great advice that I sought out each time a post-academia opportunity came along. My PhD advisor had little advice about alternative careers at the time of my PhD, but he has been a great resource throughout all of my career decisions post-PhD. Much of the other advice I received came from personal connections I made in the ecological software development community. Whenever I was offered a job, I e-mailed the people I trusted, and asked for their opinions on my career decisions. Their responses were always thoughtful and helpful. I think having more senior people who you look up to is invaluable in guiding big career decisions.

3. Tell readers a bit about your current position, how you found it, and what attracted you to it.

I can’t speak freely about what I do these days in too much detail due to the nature of my work. However I can say generally that I work on a research team with other “academic refugees” from fields like physics, computer science and economics. We collaborate on projects in a way very similar to an academic environment. I got the job when a member of the team found me on LinkedIn and invited me to apply for the position. I think it just goes to show you never know where an opportunity will come from. As far as what attracted me, ironically it was a chance to return to doing research. Whereas my previous work was in informatics and data engineering, I felt that my current job would offer more room for intellectual growth. I think that giant data sets present a unique set of intellectual challenges. Whereas much of ecological research is about “how do I do a lot with sparse data”, industry data presents the opposite challenge: “how do I find meaning in the firehose of information?” It’s a different approach to questions.

4. In what ways do you find your current position to be a change from academia? Are there aspects of the position that are a “culture shock” or that have required some adjustment on your part?

If I had to choose one word it would be pace. Everything in industry happens faster. The deadlines are measured in weeks or days, not months or years. While some projects have a lot of lead time and I have the freedom to think deeply, others require a rapid turnaround time for a meeting with an executive or a quarterly report. You also get feedback much faster in industry. At my last grant funded job, the cycle for funding was measured in years. If we were slow to produce results it didn’t matter much from a funding perspective inasmuch as we had money for X years already. However in business if you make a bad decision and start losing money, people start to care really quickly. I think the biggest culture shock is in communication. Business has its own set of acronyms, metrics and general slang (like what is A/B testing?). Beyond semantic differences though, I face larger issues in communicating complexity in data. One of the biggest challenges is talking about uncertainty. Scientists are trained their whole careers to think about uncertainty and be comfortable with it; many others aren’t similarly trained. I can’t just give a presentation and say “here are the confidence intervals” because most people don’t know what that means. I wish I could say I’ve come up with an ideal way to explain this, but I really haven’t. I also found that academia has shaped my written language, and I’ve needed to become more colloquial in writing reports.

5. In what ways has your academic background helped you in your current position?

The training I received in modelling, data management, and the scientific method allows me to do my job every day. It has also taught me how to frame questions in a meaningful way and organize research projects effectively. Being a data scientist is very similar to doing any other kind of research. However instead of a field site and sampling protocols, I write map-reduce queries. Either way I’m creating controls and collecting data, it’s just that I’m not quite as tan as when I was a field biologist. Obviously other parts of my background haven’t helped as much, like I’m still pretty good at IDing aquatic insects and digging holes in the forest, but that doesn’t do me much good in my current job.

6. Any regrets about not pursuing an academic career path? Could you see yourself ever going back to academia at some point?

I don’t know if I’d say it’s a regret. I know that in many ways I wouldn’t enjoy aspects of a faculty job, and my current job retains parts of academia I love. However I still have romanticized notions of being an academic. Like maybe if I’d stuck with it I would have an amazing breakthrough and get featured on an episode of Radiolab. I know that will never happen now. Another part of academia I’ll miss is the travel and conferences, and the sense of dispersed community. I love grabbing beers with fellow ecologists and instantly being able to connect over the mutual friends we had in the small community. I’ll miss the frenetic energy of ESA and seeing all the friends I’ve made over the years (My liver thanks me for not doing this anymore though). As far as returning, I don’t think I could ever go back to academia if for no other reason than the highway to facultyville only has one on ramp and many exit ramps. My impression is once you leave, there’s no going back, especially at my age. However I certainly wouldn’t rule out returning to a role like the one I previously held at NEON, or some other agency if the right opportunity came my way.

7. Anything about your current position that came as a surprise to you?

I think just how much it feels the same as when I was in academia. I work with a team of really smart people all with PhD’s to tackle hard problems. We generate hypotheses and collect data to test those hypotheses. I find it just as intellectually rewarding as anything I ever did in academia. My position also affords me a lot of freedom with my time, like my hours are almost as flexible as when I was a grad student (this may be a function of the tech industry more than others though).

8. Anything else you want to say to readers considering data science, or a non-academic career path more generally?

A PhD in ecology or evolution is good preparation for a career in data science. The traits that allow someone to get a PhD also prepare them for success almost anywhere, especially in a data science position. You’re probably really smart, highly motivated, an autodidact, and that can get you far professionally. However, despite all the qualifications most PhD’s have, a big shortcoming of our training is that no one teaches us how to sell ourselves. You need to go into an interview in industry with the ability to wow not just your scientific peers, but also the upper level VP of whatever with how your scientific skills can provide concrete, actionable (<- business lingo) knowledge. On top of the scientific training and personal marketing, you’ll probably need to bolster some of your technical skills. I’ve made a shorthand list below of things you’ll need to know as a data scientist, but it’s been discussed in depth in other venues:

1). Know your databases. You’ll want to know SQL, and have a passing knowledge of a couple of NoSQL technologies like Neo4j or CouchDB. If you’ve heard about ‘big data’, you’ve no doubt heard of Hadoop. You’ll need to at least be familiar with Hadoop and its associated tools like Hive and Pig.

2). Know something other than R. R is great, but you’ll be well served to know another language(s). Python is a good second stop, followed by C, Java or Javascript if you’re so inclined.

3). Don’t rage against the machine…learning. I’m assuming that most ecologists’ frequentist statistics chops are pretty good. In the world of data science though, machine learning is everywhere. A good place to start is An Introduction to Statistical Learning with Applications in R. When I interviewed I had several comprehensive-exam-style questions about machine learning.
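To give a taste of what point 3 assumes, here’s a minimal supervised-learning example in Python (the dataset, model, and parameter choices are mine, purely for illustration):

```python
# A toy classification workflow: split the data, fit a model on the
# training portion, and score it on held-out data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # fraction correct on held-out data
```

The basic workflow (hold out data, fit, validate) is the machine-learning analogue of the model validation most ecologists already know.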

Finally, leaving academia isn’t easy or something to take lightly. You’ve probably devoted a decadeish of your life to training for a particular job, and changing course is a difficult decision. I know for me it was a long time coming. I’ve written about this in detail and in the end I made the right choice for my own happiness and my family’s.

Friday links: what significant results look like, optimal journal submission strategy, and more

Also this week: how to schedule a grad student committee meeting, PlantPopNet, Wildlife Photographer of the Year, computer science vs. women, the Canadian government vs. its own scientists, and more.

From Meg:

This piece on how to schedule a committee meeting should be required reading for grad students. The things not to do include:

1. Don’t ask me to list all my availabilities between March 15 and June 1st. I’m not going to replicate my entire calendar into an email to you.

2. Don’t give me a list of 120 possible date/time combinations and ask me to check off all the ones that don’t work. See the previous point.

3. Don’t assume my availabilities remain unchanged for more than a couple of days.

Yes, yes, and yes. (ht: Leonid Kruglyak)

NPR’s Planet Money had a story on women in computer science, focusing on how the field came to be dominated by men. A key factor they identify is how ads for early computers were marketed almost exclusively to boys and men. (Jeremy adds: Mom, dad, Meg took my link, make her give it back! ;-)  )

From Jeremy:

The Canadian Institute for Ecology and Evolution (the Canadian equivalent of NCEAS) is calling for proposals for working groups. Deadline Nov. 1. I organized their first working group; our first paper just came out.

Here’s a little simulator that could be useful to biostats instructors: it generates linear regressions with a specified sample size and p value for the null hypothesis of zero slope. Good for giving students a visual feel for what a significant regression looks like, and also for correcting common misconceptions (e.g., highly significant regressions don’t necessarily have high R^2 values).
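If you’d rather roll your own version of such a simulator, the core is only a few lines. This sketch (my own, with made-up defaults) makes the misconception concrete: with a big enough sample, a weak relationship is highly significant even though it explains almost none of the variance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_regression(n, slope=0.1, noise_sd=1.0):
    # Simulate y = slope * x + noise, fit a linear regression,
    # and return the p-value for zero slope along with R^2.
    x = rng.normal(size=n)
    y = slope * x + rng.normal(scale=noise_sd, size=n)
    fit = stats.linregress(x, y)
    return fit.pvalue, fit.rvalue ** 2

# Large n, weak true slope: highly significant, tiny R^2.
p, r2 = simulate_regression(5000, slope=0.1)
```

Wrapping that in a loop over sample sizes and slopes would let students see for themselves how significance and explained variance come apart.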

Is it ever optimal to work your way up the journal ladder rather than down? That is, revise and resubmit a rejected paper to a more selective journal rather than a less selective one? Here‘s a simple model addressing that question. Easy to see how it could be elaborated to incorporate other effects (e.g., probability of rejection without review). Note that the model parameters will depend on the paper you’re submitting as well as on the journal. This model could also be extended to determine when it’s helpful to submit to Axios Review first (I bet if you ran the numbers you’d find it’s often helpful).
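The flavor of this kind of model is easy to capture: give each journal an acceptance probability and a payoff, discount the payoff a bit for every round of delay, and compare submission orders. A toy sketch with invented numbers (my own illustration, not the linked model itself):

```python
def expected_value(journals, discount=0.9):
    # Expected payoff of submitting to a sequence of journals in order.
    # Each journal is (acceptance_prob, payoff); each prior round of
    # rejection multiplies the eventual payoff by `discount` (time cost).
    ev, p_reach, d = 0.0, 1.0, 1.0
    for accept, payoff in journals:
        ev += p_reach * d * accept * payoff
        p_reach *= (1 - accept)  # probability of being rejected so far
        d *= discount            # one more round of delay
    return ev

# Most selective journal first ("working down") vs. the reverse.
ladder_down = [(0.1, 10.0), (0.3, 5.0), (0.8, 2.0)]
ladder_up = list(reversed(ladder_down))
```

With these particular numbers, starting selective and working down beats working up; change the acceptance probabilities or the discount and the ranking can flip, which is the point of having a model at all.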

PlantPopNet is a new global-scale distributed experiment on the population biology of Plantago lanceolata. I’m a big fan of NutNet, the pioneering global-scale distributed ecological experiment, so it’s great to see more such experiments. Click the link to find out how you too can join PlantPopNet.

A nice balanced post on the necessity, and pitfalls, of exploratory data analysis. Resonates with Brian’s old post. I don’t entirely agree with the author’s vision as to what to do about it (basically, do away with papers entirely in favor of open-ended open science), but it’s thought-provoking.

The Canadian government continues to muzzle its scientists.

As Meg just told you, the male/female balance of computer science majors was improving steadily until the early 80s–then it went into reverse, for interesting reasons. Things kept improving in law, medicine, and physical sciences, although recently the male/female balance of those majors seems to have stabilized a bit short of equality. In other news, apparently Meg and I should divide the internet between us to minimize redundancy.

Some good points here about the utility of Twitter, but it’s going too far to say that using it is “vital to the success of your Ph.D.” Twitter is not (yet?) an essential tool the way, say, email is, and depending on the sort of person you are you won’t necessarily get anything out of it. (ht @smvamosi)

The winners of the Natural History Museum’s Wildlife Photographer of the Year contest have just been announced. I have fond memories of attending this exhibition when I was a postdoc in London. The winning image (taken from the linked story) is jaw dropping, as always:

And finally, I took the ESA’s survey of its members on what ecological concepts you find “useful” in your own work. When asked about the intermediate disturbance hypothesis I checked “not familiar with the concept”. :-P At the end of the survey, you have the chance to name some concepts that you find useful, but that you weren’t asked about. I’m pretty sure I blew my anonymity by writing “Price equation, metacommunity, zombie ideas”. :-)

What if NSF preproposals WERE the proposals?

I just finished my NSERC grant (hooray!), so thought I’d fire off a quick post with some thoughts on the difference between NSERC grants and NSF grants. At the end, there’s a radical suggestion for NSF grants: do away with full proposals and just go with the preproposals.

If you don’t know, NSERC is the Canadian federal government agency that funds non-biomedical research in Canada. It’s the Canadian equivalent of NSF (US) or NERC (UK). As I’ve discussed in the past, NSERC Discovery grants (DGs) are very different beasts than NSF grants (or grants for almost any other funding agency on earth, as far as I know). Briefly, DGs are 5 pages long, and you propose your entire research program for the next 5 years, not just one project. DGs are similar to NSF preproposals in terms of length, but even that’s not really a great comparison because NSF preproposals describe a single project rather than an entire research program.

As an example, here’s my previous DG from 5 years ago.* It’s not the greatest proposal (looking back, there are things I wish I’d done differently), but it’ll give you the flavor of what DGs are like. If you’re from the US and have no experience with the Canadian system, you may want to read it sitting down so you don’t hurt yourself when you faint from shock. :-) As a US colleague of mine said when he read a draft of my latest DG: “This is like three NSF proposals worth of work in five pages!”

My US colleague continued by saying, “Once NSF gives you the money, they don’t want you to have to think.” That is, NSF (or at least their reviewers) wants you to have thought through every methodological detail, so that if you get funded you can just go out and do exactly what you proposed to do. “Here’s the basic idea, I’ll figure out the details later, trust me”, or words to that effect, is not something you want to say in your full NSF proposal. Whereas that’s more or less exactly what you say in an NSERC proposal.

One way to look at it is to say that NSF wants to pre-approve your methods, whereas NSERC is happy to “outsource” review of methodological details to journal referees and others who evaluate NSERC-funded research. Personally, I prefer the NSERC system. Your methods are going to be evaluated by journal referees whether or not they’ve been pre-approved by NSF, which makes NSF’s pre-approval seem a bit redundant. Plus, it’s not like NSF actually requires people to do exactly what they proposed. Now, you might say that NSF needs to evaluate your proposed methods in detail in order to identify the best proposals and avoid wasting money on proposals that won’t work. But if that were true, wouldn’t it imply that NSERC is wasting billions on infeasible or seriously-flawed work, thereby burdening journal referees with the job of weeding out lots of crappy Canadian science? To ask that question is to answer it. If that was happening, the recent international review of the DG program would’ve slammed the program rather than praising it.** The NSERC example suggests to me that a granting agency can let investigators say “we’ll figure out the details later, trust us” and not thereby waste lots of money on inferior or infeasible science.***

So here’s an idea: what if NSF preproposals were the proposals? After all, one way to encourage reviewers to focus on the big ideas rather than picky methodological details is to not provide them with any methodological details in the first place. Somewhat like how, if you’re designing an online matchmaking system and don’t want potential dates to focus on less-important things like height or weight, you need to design a system that doesn’t provide those details.

Just spitballin’ here, curious to hear what people think of this.

p.s. In an old post Meg proposed two stage peer review: you could get your methods approved before collecting the data. In a sense, that’s what NSF-style proposal reviews are: a methods review prior to data collection.

*Note that this is just the proposal, not all the supplementary information (the budget, my cv, etc.). That supplementary information is very important to the evaluation of DGs.

**The linked report is very interesting reading. For instance, some of the panelists apparently were worried that the NSERC DG success rate is too high, so that NSERC is wasting significant money on inferior science while underfunding the best stuff. But the panel couldn’t find any evidence that that’s the case (and note that other lines of evidence point the same way). This isn’t to say that NSF could or should simply adopt the entire NSERC model whole hog (though they could make some moves in that direction). I think the most plausible interpretation of international comparative evidence on scientific funding systems is that lots of different systems can work. But anyone who thinks that NSERC “must” be wasting lots of money on inferior science while underfunding excellent work will have a tough time explaining how Canada does as well or better than other advanced countries on metrics of scientific productivity and influence.

***Note that NSERC and their referees do care about your methods to some extent, and don’t just blindly trust investigators to figure everything out later. For instance, in that old proposal of mine, I got dinged for not describing and justifying the genetics methods for one of the projects in sufficient detail. And because I had no previous experience with genetics, the referees weren’t willing to trust me to figure it out later. And honestly, they were probably right not to trust me–looking back, that bit of the proposal wasn’t sufficiently well-developed in my own mind. The NSERC example merely shows that investigators can establish that they know what they’re doing while providing much less methodological detail than NSF ordinarily expects. And no, this doesn’t mean you have to limit your proposal to methods you’ve used before–people often propose to do things they’ve never done before in their NSERC proposals. They just need to make a better case than I did that they know what they’re doing.

Elliott Sober on the present and future of philosophy of biology

Back in Sept. I was fortunate to be able to attend a philosophy of science “summit” at the University of Calgary, with talks by a bunch of the world’s top philosophers of science. I thought I’d share my notes from Elliott Sober’s talk, on the present and future of philosophy of biology. As I’m sure most of you know, Sober is a top philosopher of evolutionary biology; his book The Nature of Selection is a classic. I found his talk very interesting for several reasons. He talked about the state of philosophy of biology and its place within philosophy more broadly. I always have an anthropological interest in hearing about how people see the state of their own fields. He had a lot of advice about how to do philosophy of science, much of which encouraged philosophers to engage in scientific debates. And he made some passing remarks on how scientists in various fields perceive philosophers (apparently we ecologists are unusually receptive to philosophical input!). I don’t know enough about philosophy to evaluate all of Sober’s remarks, but I enjoyed mulling them over.

My notes follow. I did the best I could, but obviously any errors or omissions are mine.*

*************************

Philosophy of biology today seems to have less and less connection to the rest of philosophy, and seems to have little to contribute to science itself. Talking about science is science journalism; it’s not the same as contributing to science. Worried that philosophy departments will stop hiring philosophers of biology.

Philosophers seem to think that philosophy of science, and philosophy of biology, are now less central to philosophy than was the case 20-30 years ago. Why?

Public controversies about biology which had philosophical elements (e.g., sociobiology) used to have a high public profile. Not so much anymore. Gifted popularizers of biology also used to talk about philosophical issues (Dawkins, Gould, Lewontin). Again, not so much anymore.

Sociologist Kieran Healy (aside from me: hey, I’ve heard of him, I read his blog!) has done citation analyses of changes in philosophy, rankings of philosophy depts., centrality of different disciplines. Philosophy of science is not central to philosophy (though not peripheral either).

Biology is relatively hospitable to philosophy of science. Half-joking: 99% of physicists think philosophy is bullshit; it’s only 95% for biologists. (aside from me: Wonder what the number is for ecologists? I bet it’s fairly low, which maybe means ecology is especially fertile ground for philosophy of science? But in that case, why does ecology seem to get much less attention from philosophers than evolutionary biology? Presumably because evolution has an agreed-on core set of ideas and questions that give philosophers a handle to latch onto? Whereas ecology is kind of a mess, so that it’s hard for outsiders to develop a road map of the field and figure out where they might profitably contribute philosophical insight?)

Overspecialization and the “regionalist turn” in philosophy of science (Jean Gayon–the view that nothing of interest in philosophy of science can be done except in within-discipline work). Reason to doubt this—methods of reasoning and inference are not subject-matter specific. Philosophers of biology sometimes unaware that their questions had been addressed in general philosophy of science. Unfortunate because a good trick for developing your career is to use well-developed ideas from one area to solve problems in another area (aside from me: Yup! That’s not just good advice for philosophers, it’s good advice for anyone. That’s what a good chunk of my own career consists of, anyway–shamelessly stealing ideas from one area and applying them to a different area: applying the Price equation to ecology, applying modern coexistence theory to the IDH, half the blog posts I write. Or think of neutral theory in ecology, or MaxEnt, or using ideas from economics to understand resource trade mutualisms…)

Philosophers seem to be retreating from making normative statements about the practice of science. Describing science without critiquing it. We critique creationists, why not scientists? Scientists make normative judgements of one another’s methods, so why can’t philosophers do so too? Or think of statistics—gives normative advice on how to proceed, given epistemic goals and empirical facts.

Clarifying a concept or the logic of a line of argument is clearly recognized as “real” philosophy by philosophers outside of philosophy of science. That’s not just purely descriptive work.

Retreat from normativity also due to influence of history of science on philosophy of science? Historians think of normative judgements as anachronistic and hubristic.

Rational reconstruction of historical scientific arguments—is this legit? Helpful? If the scientists themselves didn’t use the logical and mathematical tools used in your reconstruction, isn’t that anachronistic? (aside from me: Deborah Mayo would say yes. She calls rational reconstructions–e.g., trying to show that a piece of pre-Bayesian scientific reasoning was “really” Bayesian–“painting by numbers”. Just because you can come up with a paint by numbers picture of the Mona Lisa, in what sense does that let you understand how the Mona Lisa was painted? But Sober likes the approach and uses it himself.)

One way for philosophers to make normative contributions without fear—find a scientific controversy and engage in it. Of course need to identify controversies that do have a philosophical component. Which means you need to reject or at least not take too seriously Quine’s claim that philosophy is continuous with science. (aside from me: speaking as a scientist, this is great advice. There are a lot of scientific controversies that are really philosophical, but that aren’t recognized by the participating scientists as philosophical. Or even if they are, the scientists lack the philosophical expertise to properly resolve them. If any philosophers are reading this and want some suggestions for ecological topics that could use some proper philosophical attention, drop me a line!)

Another way to be interestingly normative—find a proposition scientists accept uncritically, and identify scientifically interesting conditions under which it would be true or false. Note that these conditions need not be highly probable. For instance, think of Felsenstein’s demonstration that cladistics parsimony is statistically inconsistent. Turns out that parsimony does in fact make implicit substantive assumptions.

Practical tips to get scientists to pay attention—collaborate with scientists and publish in scientific journals. (aside from me: many of the philosophers I admire have done this. Sober himself. Deborah Mayo. William Wimsatt. Samir Okasha. It would be cool if someone like Chris Eliot were to publish some philosophy of ecology in an ecology journal. I’ll bet Oikos would take philosophy of ecology in its Forum section, and the right paper might fly as a Synthesis & Perspectives piece in Ecology. Ideas in Ecology & Evolution would go for it too, though they’re less widely-read.)

Normative problems are often discarded as dead ends when the question could be revised in a fruitful way. Example: best Bayesian definition of degree of confirmation? Change the question to “how do Bayesian and frequentist accounts of testing differ and which is better under which circumstances?” The latter question is of practical relevance for science—journals have policies on what statistics are appropriate. What’s the right criterion for empirical significance? Change the question to “What does it mean for a theory to be testable, relative to background info?” Does Ockham’s razor depend on the assumptions that nature is simple? Change the question to “Does cladistics parsimony work only when evolution is parsimonious?”

All research programs experience diminishing returns. But when it happens with normative problems, that doesn’t mean you should stop asking normative questions, you just need fresh ones.

After the talk, the discussion was kicked off by some designated commenters, some of whom were scientists (aside from me: I thought this was an interesting way to structure the symposium; I’d be curious to see a similar structure tried at an ESA symposium). Ford Doolittle remarked that molecular biologists and genomicists often don’t even realize they have a philosophy. Which leads to disputes that are thought to be empirical, but are not (think of the debate over ENCODE and whether 80% of the genome is ‘functional’). Another example: the debate over whether there’s a tree of life is really a debate over “what do you mean by ‘tree’”? As opposed to evolutionary biologists and ecologists, who are open to philosophy. To reach biomedical and molecular types, Doolittle suggested that philosophers will need to publish in Nature and Science and PNAS, presumably in collaboration with biologists.

*Sorry, no time for real posts at the moment, we’re all swamped. But my grant deadline will soon be past and I’m now done with my teaching for this term, so normal service from me will resume shortly.

What math should ecologists teach

Recently Jeremy made the point that we can’t expect ecology grad students to learn everything useful under the sun and asked in a poll what people would prioritize and toss. More math skills was a common answer of what should be prioritized.

As somebody with an undergraduate (bachelor’s) degree in mathematics, I often get asked by earnest graduate students what math courses they should take if they want to add to their math skills. My usual answer is nothing – the way math departments teach math is very inefficient for ecologists; you should teach yourself. But it’s not a great answer.

In a typical math department in the US, the following sequence is the norm as one seeks to add math skills (each line is a 1 semester course taken roughly in the sequence shown)

  1. Calculus 1 – Infinite series, limits and derivatives
  2. Calculus 2 – Integrals
  3. Calculus 3 – Multivariate calculus (partial derivatives, multivariate integrals, Green’s theorem, etc)
  4. Linear algebra – solving systems of linear equations, determinants, eigenvectors
  5. Differential equations – solving systems of linear differential equations, solving engineering equations (y”+cy=0)
  6. Dynamical systems – y(t+1) = f(y(t)) and variations, including chaos
  7. Probability theory (usually using measure theory)
  8. Stochastic processes
  9. Operations research (starting with linear programming)

That’s 7 courses over and above 1st year calculus to get to all the material that I think a well-trained mathematical ecologist needs! There are some obvious problems with this. First, few ecologists are willing to take that many classes. But even if they were, this is an extraordinary waste of time, since over half of what is taught in those classes is pretty much useless in ecology even if you’re going deep into theory. For example: path and surface integrals and Green’s theorem are completely irrelevant. Solving systems of linear equations is useless, thereby making determinants more or less useless. Differential equations as taught – useless to ecologists (very useful to physicists and engineers). Measure-based probability theory – useless. Linear programming – almost useless.

Here’s my list of topics that a very well-trained mathematical ecologist would need (beyond a 1st year calculus sequence):

  1. Multivariate calculus simplified (partial derivatives, volume integrals)
  2. Matrix algebra and eigenvectors
  3. Dynamical systems (equilibrium analysis, cycling and chaos)
  4. Basic probability theory and stochastic processes (especially Markov chains with brief coverage of branching processes and master equations)
  5. Optimization theory focusing on simple calculus based optimization and Lagrange multipliers (and numerical optimization) with brief coverage of dynamic programming and game theory
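As a toy illustration of the kind of material in topics 1–3 (and of how little machinery it actually requires), here’s a sketch of equilibrium and stability analysis for the discrete-time Ricker model. The parameter values are arbitrary choices of mine, purely for illustration:

```python
import numpy as np

# Ricker map: N(t+1) = N(t) * exp(r * (1 - N(t)/K))
# A standard discrete-time population model; r and K values below are
# arbitrary choices for illustration.
def ricker(n, r, K=1.0):
    return n * np.exp(r * (1.0 - n / K))

def stability_at_K(r, K=1.0, eps=1e-6):
    """Numerically estimate |f'(N*)| at the equilibrium N* = K.
    The equilibrium is locally stable when this slope is < 1."""
    slope = (ricker(K + eps, r, K) - ricker(K - eps, r, K)) / (2 * eps)
    return abs(slope)

# Analytically, f'(K) = 1 - r for the Ricker map, so the equilibrium
# is stable for 0 < r < 2 and loses stability (cycles, then chaos) beyond.
print(stability_at_K(0.5))  # ~0.5: stable
print(stability_at_K(2.5))  # ~1.5: unstable
```

The same three ingredients – find the equilibrium, linearize, check the magnitude of the slope (or the eigenvalues of the Jacobian, in multispecies models) – carry you through most of the dynamical systems an ecologist meets.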

Now how should that be covered? I can see a lot of ways. I could see all of that material covered in a 3 semester sequence (#1/#2, #3, #4/#5) if you want to teach it as a formal set of math courses. And here is an interesting question. We ecologists often refuse to let the stats department teach stats to our students (undergrad or grad) because we consider it an important enough topic that we want our own spin on it. Why don’t we have the same feelings about math? Yet as my two lists show, math departments are clearly focused on somebody other than ecologists (mostly, I think, on other mathematicians in upper-level courses). So should ecology departments start listing a few semesters of ecology-oriented math among their courses?

But I could see less rigorous, more integrative ways to teach the material as well. For example, I think in a year-long community ecology class you could slip in all the concepts. Dynamical systems (and partial derivatives) with logistic/Ricker models and then Lotka-Volterra. Eigenvectors and Markov chains with Horn’s succession models or age-stage structure, with eigenvectors returning as a Jacobian on predator-prey. Master equations on neutral theory. Optimization on optimal foraging and game theory. Yes, the coverage would be much less deep than a 3 semester sequence of math-only courses, but it would, I think, be highly successful.
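To give a flavor of the Markov chain/eigenvector pairing, here’s a minimal Horn-style succession sketch. The states and transition probabilities are invented for illustration, not taken from Horn’s data:

```python
import numpy as np

# Toy Horn-style succession model. Columns are the current state, rows the
# next state; entries are one-step transition probabilities (made-up values).
# States: grass, shrub, tree.
P = np.array([
    [0.50, 0.10, 0.05],   # -> grass
    [0.40, 0.60, 0.15],   # -> shrub
    [0.10, 0.30, 0.80],   # -> tree
])

# The long-run ("climax") composition is the eigenvector of P with
# eigenvalue 1 (each column of P sums to 1, so one exists).
vals, vecs = np.linalg.eig(P)
i = np.argmin(np.abs(vals - 1.0))
stationary = np.real(vecs[:, i])
stationary = stationary / stationary.sum()   # normalize to proportions
print(stationary)  # long-run proportions of grass, shrub, tree

# Sanity check: just iterating the chain converges to the same composition.
x = np.array([1.0, 0.0, 0.0])   # start from pure grass
for _ in range(200):
    x = P @ x
```

The point of teaching it this way is that the eigenvector stops being an abstract object and becomes “the community composition the successional process is heading toward.”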

I say “I think” because I don’t know anywhere that teaches the math this way. I teach a one-semester community ecology grad class and try to get a subset of the concepts across, but certainly don’t come anywhere close to covering everything that I wish were covered (i.e. my list above). And I know a lot of places have a one-semester modelling course for grad students. But departments teaching their own math courses, or a math-intensive ecology sequence, I haven’t come across.

What do you think? Have I listed too much math, or left your favorite topic out? How should this be taught? And to how many of our students (undergrads, all grads, only a subset of interested grads) should this be taught?

Friday links: new p-hacking data, grant lotteries, things ecologists will never experience, and more

Also this week: a blogging anniversary, betting on replication, Shakespeare vs. dead animals, Brian and Jeremy have a link fight, and more. Also terrible philosophy puns.

From Brian (!):

Does which countries your coauthors come from affect the impact factor of the journal you get into? Apparently yes: see this piece from Emilio Bruna.

In the always entertaining and provoking Ecological Rants blog, there is a quote from Thomas Piketty’s book (which is setting the economic world on fire on the topic of income inequality for its careful empirical compilation of historical data). The quote is pretty harsh about economists’ obsession with little toy mathematical models that don’t inform about the real world. Krebs argues this critique applies to ecology as well (and cites no less than Joel Cohen, one of the great theoretical ecologists, who regularly chides ecologists for their physics envy). While I am an advocate for more math education in biology, I have to confess a certain sympathy with the quote. We’re so busy obsessing over equilibrium math models and small-scale manipulative experiments that we’re missing a lot of the story sitting in front of us in the massive amounts of data that have been and could be assembled. (There’s a controversial statement to make you sit up on a Friday.)

Following up on my post about NSF’s declining acceptance rates, there is a well-argued post by coastalpathogens suggesting we should just revert to a lottery system (one of my suggestions, but not one that received a lot of votes in the poll).

From Meg:

Things ecologists are unlikely to learn firsthand: it’s hard to fly with a Nobel Prize. (Jeremy adds: is it hard to fly with the Crafoord Prize?)

The Chronicle of Higher Education had an article on increasing scrutiny of some NSF grants by Congressional Republicans (subscription required).

From Jeremy:

Link war! Brian, I’ll see your Thomas Piketty quote, and raise you Paul Krugman. Krugman’s long advocated the value of deliberately simplified toy models as essential for explaining important real-world data, making predictions, and guiding policy. See this wonderful essay on “accidental theorists” (and why it’s better to be a non-accidental theorist), this equally-wonderful essay on how badly both economists and evolutionary biologists go wrong when they ignore “simple” mathematical models, and this one in which Krugman explains his favorite toy model and how it let him make several non-obvious and very successful predictions about the Great Recession. Oh, and as important as Piketty’s empirical work is, it’s worth noting that even very smart and sympathetic readers have had a hard time figuring out what his implicit model is. If your model’s not explicit (and if you don’t care much for doing experiments), then your big data might as well be pig data. While I’m at it, I’ll raise you R. A. Fisher too.*

Statistician Andrew Gelman has been blogging for 10 years. I was interested to read his comments that there used to be more back-and-forth among blogs 10 years ago, and that these days that only happens in economics. I share the impression that economics is the only field that has a blogosphere. I also share Andrew’s view that Twitter is no substitute for blogs. Twitter has its uses. But “in depth conversation and open-ended exploration of ideas” is not one of them.

Speaking of Andrew Gelman, he passes on a link to a new preprint on the distribution of 50,000 published p-values in three top economics journals from 2005-2011. I’ve skimmed it, it seems like a pretty careful study, which avoids at least some of the problems of similar studies I’ve linked to in the past. The distribution has an obvious trough for marginally non-significant p-values, and an obvious bump for just barely-significant p-values. The authors argue that’s evidence not just of publication bias, but of p-hacking (e.g., choosing whichever of a set of alternative plausible model specifications gives you a significant result). They estimate that 10-20% of marginally non-significant tests are p-hacked into significance. The shape of the distribution is invariant to all sorts of factors–the average age of the authors, were any of the authors very senior, was a research assistant involved in the research, was the result a “main” result, were the authors testing a theoretical model, were the data and/or code publicly available, were the data from lab experiments, and more.
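The specification-shopping mechanism the authors describe is easy to see in a toy simulation (my own illustration, not from the preprint). Here every effect is truly null, and the “hacked” analyst simply runs several specifications and reports the smallest p-value. I treat the specifications as independent, which real specifications on the same data aren’t, so this overstates the inflation somewhat; but the qualitative effect is the same:

```python
import math
import random

random.seed(1)

def z_test_p(sample):
    """Two-sided p-value for H0: mean = 0, known sd = 1 (simple z-test)."""
    n = len(sample)
    z = sum(sample) / math.sqrt(n)          # z ~ N(0,1) under the null
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def study(n_specs, n=50):
    """One 'study': n_specs alternative specifications of a true-null effect;
    the analyst reports the smallest p-value found."""
    return min(z_test_p([random.gauss(0, 1) for _ in range(n)])
               for _ in range(n_specs))

n_studies = 2000
honest = sum(study(1) < 0.05 for _ in range(n_studies)) / n_studies
hacked = sum(study(5) < 0.05 for _ in range(n_studies)) / n_studies
print(honest)  # near the nominal 0.05 false-positive rate
print(hacked)  # near 1 - 0.95**5 = 0.226, despite every null being true
```

Five tries at a null effect takes you from a 5% to a roughly 23% chance of a “significant” result, which is why even a modest amount of specification shopping can carve a visible trough-and-bump into the published p-value distribution.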

One more from Gelman: You can now bet real money on whether a bunch of replication attempts in psychology will pan out. I think it would be really fun, and very useful, to have something like this in ecology.

Most tenure-track jobs do not have 300+ applicants (and even the few that do tend to have an unusually-high proportion of obviously-uncompetitive applicants).

Speaking of tenure-track job searches: soil ecologist Thea Whitman with a long post on what it was like to interview (successfully!) for a tenure-track job. Go read it, it’s full of win.

Shakespearean insult or animal on display at Harvard’s Museum of Natural History?

Philosophy student karaoke songs.

*I’m guessing that Brian saw this response from me coming from 10 miles away, but I figure he (and y’all) would have been disappointed if I didn’t actually follow through and provide it. My boring predictability–er, clockwork reliability–is one of my most endearing features. That, and my refusal to take second place to any ecologist when it comes to making half-baked analogies with economics. [looks over at Meg, sees her rolling her eyes, coughs awkwardly] :-) In seriousness, I actually do see what Brian means and probably don’t disagree with him that much here. And for what it’s worth, I think current trends in ecology are mostly running in the direction Brian would like to see them run (e.g., away from MacArthur-style toy models of a single process).


How to get a postdoc position (guest post)

Note from Jeremy: This is a guest post by Margaret Kosmala, a postdoc in Organismal and Evolutionary Biology at Harvard. It’s the first in a planned series on life as a postdoc.

**************************************

I did not start thinking about getting a postdoc position until it was almost too late. I was focused on my dissertation research and finishing up before I ran out of money. About six months from defending, I suddenly realized that I would be unemployed once I did defend. I knew that I had to start trying to find a postdoc position right away. And then I realized I had no idea how to go about doing so. This was at the beginning of last summer and so I spent the next months talking to as many people as possible. Here is what I learned.

There are essentially two ways of obtaining a postdoc. The first is to write your own. The second is to apply for a job with someone who already has a project.

To write your own postdoc may be the best option if your objective is a future research career. However, you need to start early. Assuming you already know what sort of research you want to do, you have three potential methods of obtaining the funding to support yourself. You can co-write a proposal with your future postdoc mentor, you can look for fellowship opportunities, or you can look for a postdoc advisor with deep pockets.

If you know who you want to work with and what you want to do, co-writing a successful major grant proposal can be great experience and look stellar on your CV or in a letter of recommendation. If you want to try this route, you should start contacting prospective postdoc advisors a couple years before you expect to defend.

Yes, I said a couple years.

Why a couple years? Most organizations have just one or two funding cycles per year. For example, if you expect to defend in May 2016, and you would like to be funded on an NSF DEB grant, you would need to have that grant funded by January 2016. In order to do that, you would need to submit your pre-proposal in January 2015. And in order to submit in January, you would need to start working on the proposal this fall (2014). Which means that you should probably have established a rapport with your future postdoc advisor by now.

Defending before May 2016? Fellowships are your thing? You can look for postdoctoral fellowships offered by funding organizations such as NSF, by research centers like SESYNC and NIMBioS, and by private entities like the McDonnell Foundation. Generally speaking, you will need to have a postdoc advisor in mind.

A less well-known source of fellowship funding is universities themselves. Some universities offer institution-wide fellowships on a competitive basis. At other universities there are research centers focused on environmental issues that also offer fellowship opportunities. Finding out which universities provide these opportunities can be tedious however, so it’s often best to ask potential postdoc advisors what, if any, opportunities are offered at their institutions.

If you’re looking for postdoc fellowships offered through large agencies or foundations, they often have just one or two deadlines per year, which means that you may need to write a competitive proposal about a year in advance. When I started thinking about a postdoc position six months ahead of defending, I was too late for almost all postdoc fellowships.

Which brings me to the third method for writing your own postdoc. Some professors have, at times, a pot of money they can use to hire a postdoc. It may be in the form of an endowed professorship, start up funds, prize money, etc. If you’ve only got six months or so before defending, you might start asking around to see if anyone you know – or anyone those people know – expect to have money to fund a postdoc in the next year or so. Sometimes researchers get money they weren’t expecting and need to use it relatively quickly, so keep your ears open. You’ll want to be able to pitch an exciting idea to your prospective postdoc advisor and have a handful of references (friends of the prospective advisor are ideal) who are willing to attest to your awesomeness.

Finally, the remaining way of obtaining a postdoc: applying for advertised positions. I won’t say too much about this method, since it’s pretty straightforward and there are other websites which give guidance as to where to look for job ads and how to best position yourself. In a nutshell: you find a position that looks like it would fit you, send in an application, perhaps get an interview (often by phone or Skype), and sign a contract if you’re offered the position and accept it. In applying, you should do smart things like read the webpage(s) and some recent publications of the job offerer. If you’re offered the position, interview other postdocs and grad students in the lab before accepting; you should like your work environment as much as the research itself. And you might take a glimpse at the benefits package to make sure it’s sufficient.

Hurray! You’ve got a postdoc position. Now tell everyone you know, save up a couple thousand dollars or raise the limit on your credit card in preparation for your move, and say goodbye to your friends. Check out ESA’s new Early Career Ecologist Section. Oh, and definitely finish that dissertation.

Friday links: dump the Canadian CCV, unreclaiming zoology, billion dollar grant, and more

Also this week: new videos for teaching ecology, social media as professional development, the pluses and minuses of minority-focused conferences, the best ecology blog you’ve (well, I’d) never heard of, and more.

From Meg:

I added two fun, deep sea-related new videos to my collection of videos for teaching ecology: 1) a massive deep sea mussel bed; in the video, they have the robotic arm play with the solid methane hydrate that has formed near the mussels (ht: Deep Sea News), and 2) a video of a whale fall community, complete with footage of a shark tearing into the whale (ht: Joshua Drew). Fun! And, while we’re talking about whale falls, this is a neat article about them; among other things, it talks about snotworms (yes, there really are things called “snotworms”).

Conservation Biology is the latest journal to go to double blind peer review. I love the opening line of this announcement: “To have biases is human, to fight them, while not divine, is at least worth attempting.” (ht: David Shiffman)

SciWo had a Tenure, She Wrote post on social media as professional development. I really enjoyed it. As she summarizes near the end:

Being on-line does take some time, but so does everything worth doing in life. I’ve never seen any convincing data to show that strategic use of social media is any worse an investment of my time and energy than any other thing I could do with those random moments of brain weariness or distraction when I find myself refreshing my Twitter feed or reading a blog post. Instead, the benefits I’ve listed above seem to make a compelling case for engaging with your academic peers on-line – just as you would outline the benefits of networking at in-person conferences.

From Jeremy:

Here’s a petition I can get behind! Tell NSERC to dump the Canadian Common CV. For you non-Canadians: last year NSERC and other Canadian funding agencies started requiring researchers applying for grants to provide their CVs using a ridiculous online form. The petition is not exaggerating–it literally is two weeks of work to enter all the information (I know because I just did it for my grant renewal application). Which the software then prints in a horrendously organized and butt-ugly format that makes it very difficult for the people who are evaluating your application to find the information they want. But hey, at least we get…um, actually there’s no upside. Unfortunately, researchers have been protesting the CCV since it was introduced, and all we’ve gotten in response is minor software updates, so I doubt this petition will go anywhere. The next time an institution admits a mistake and drops enterprise software it had previously adopted will be the first time.

Caroline Tucker asks a good question: What would you do with a billion dollar grant? The inspiration is a billion dollar EU-funded project to recreate the human brain with supercomputers. As that example and others illustrate, the way to attract a big slug of money to a field these days often is with some very ambitious project. Click through for Caroline’s nice discussion of what a billion dollar ecology project might look like (a more expensive version of NEON isn’t the only possibility). Semi-related discussions of the trade-offs between centrally-coordinated science and individual investigator science, and between expensive science and cheap science, here, here, and here, and see here for a relevant historical discussion of the IBP.

Ray Hilborn on “faith-based fisheries“. An entertaining and provocative polemic from 2006. I’m not qualified to evaluate it, but thought it worth passing on. (ht a correspondent, via email)

Un-reclaiming the name “zoologist”. Just one of many interesting posts from EcoEvo, the group blog of the ecology & evolution students (and faculty?) at Trinity College Dublin. It’s long-running, but I just stumbled across it last week. Apparently I should’ve noticed them much earlier, as they’ve just been named “Best Science and Technology Blog in Ireland“. I added them to our blogroll.

For instance, here’s Natalie Cooper from EcoEvo on her experience of having to work very hard to organize a gender-balanced plenary session for a specialized conference. Includes lots of practical suggestions for overcoming the usual excuses for lack of gender balance (which as she notes aren’t merely excuses–they’re often real problems). Kudos to her for putting in all that effort and I’m glad it was rewarded in the end. Though had it not been rewarded, I hope she wouldn’t have beaten herself up. See this old post of Meg’s for related discussion.

Last year the NSF DEB and IOS surveyed the community about their views of the new preproposal system. The results are in. The headline result is that people like the preproposal system but don’t like being able to apply only once/year. Note that fears that the new system would disproportionately affect certain groups have not been borne out. Group representation among awardees is the same under the new system as under the old system. (ht Sociobiology)

Terry McGlynn is torn over whether it’s useful for minority students to attend a minority-focused conference if that conference doesn’t include many people working in their field. Somewhat related to my old post arguing that students mostly shouldn’t bother attending student-focused conferences.

Zen Faulkes wonders whatever happened to the annual Open Lab anthology of the best online science writing, and what its apparent demise says about the changing face of online science writing.

Amy Parachnowitsch on the many benefits of writing a review paper (or three at once, in her case!)

The Chronicle has picked up ecologist Stephen Heard’s piece (noted in a previous linkfest) on the value of humor in scientific writing.

In 2006 Germany started following the lead of many other countries and began charging tuition at its public universities. They’ve now reversed that decision.

And finally: Happy Canadian Thanksgiving! :-)

Poll results: what should ecologists learn less of?

Here, for what they’re worth*, are the results so far from yesterday’s poll asking readers to name the most important thing for ecologists to learn more of, and the thing they should learn less of in order to free up time. We’ve gotten 165 responses so far, and based on past experience the results won’t change much if we wait any longer.

Results first, then some comments:

[Poll results chart: votes for what ecologists should learn more of]

[Poll results chart: votes for what ecologists should learn less of]

  • No consensus for either question. And not only that, every topic got at least one vote as the most important thing for ecologists to learn more of, and at least one vote as the most important thing for ecologists to learn less of! I think this is a useful reminder of just how diverse ecologists are in terms of their background knowledge, motivations, interests, and expertise (which I think is a good thing, by the way).
  • Most popular answer to both questions was “it depends”, which I interpret as a vote in favor of flexible curricula that let different people specialize on different things according to their own needs and interests. That’s what many (not all) graduate curricula are like, of course.
  • Probably no surprise that the next two most popular choices for what ecologists should know more of were “programming” and “statistical techniques”. For obvious and very good reasons, there’s a long-term trend for all fields of science to become more quantitative, and to make heavier use of computers. Then natural history, math, and evolution.
  • Next most popular choices for what ecologists should know less of were “chemistry” and “physics”. I interpret that as a vote against the common North American practice of requiring all science majors (not just ecologists) to take introductory physics and introductory chemistry. Curious to hear discussion of this. After that came “economics” and “mathematical foundations of statistics”. Ecologists aren’t ordinarily taught anything about either of those, so I suspect that votes for these were just people’s way of identifying the least-important subjects on the list for ecologists to know, whether or not those subjects are actually part of current ecology curricula. Next was “genetics and molecular biology”, followed by “natural history” and then “philosophy of science”.
  • There were no obvious associations between what folks thought ecologists should learn more of, and what they thought ecologists should learn less of, except that most (but not all) people who said “it depends” for one question also said “it depends” for the other.
  • If for each topic you take the difference between the number of votes for more of it and less of it, you get a crude index of respondents’ net desire to see ecologists learn more of it. By this measure, the topics rank as follows: programming +20, statistical techniques +15, math +11, evolution +9, natural history +7…[skipping some]…economics -16, physics -19, chemistry -21. And the topics in the middle were those that received few votes for either question. There weren’t any hugely controversial topics that lots of people really want ecologists to learn more of and lots of other people really want ecologists to learn less of.
  • Full disclosure: I’d have answered both questions “it depends”.
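If you want to play with the net-desire index yourself, it's just per-topic subtraction. Here's a minimal sketch using made-up vote counts (the post reports only the differences, not the raw tallies, so these numbers are purely illustrative):

```python
# Hypothetical per-topic vote counts (NOT the real poll tallies).
more_votes = {"programming": 30, "statistical techniques": 25, "chemistry": 5}
less_votes = {"programming": 10, "statistical techniques": 10, "chemistry": 26}

# Net-desire index: votes for "learn more of it" minus votes for "learn less of it".
net = {topic: more_votes[topic] - less_votes[topic] for topic in more_votes}

# Rank topics from most-wanted to least-wanted.
ranked = sorted(net.items(), key=lambda kv: kv[1], reverse=True)
```

With these made-up counts, `ranked` puts programming first at +20 and chemistry last at -21, matching the shape of the results above.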

*Probably not much