Friday links: what significant results look like, optimal journal submission strategy, and more

Also this week: how to schedule a grad student committee meeting, PlantPopNet, Wildlife Photographer of the Year, computer science vs. women, the Canadian government vs. its own scientists, and more.

From Meg:

This piece on how to schedule a committee meeting should be required reading for grad students. The things not to do include:

1. Don’t ask me to list all my availabilities between March 15 and June 1st. I’m not going to replicate my entire calendar into an email to you.

2. Don’t give me a list of 120 possible date/time combinations and ask me to check off all the ones that don’t work. See the previous point.

3. Don’t assume my availabilities remain unchanged for more than a couple of days.

Yes, yes, and yes. (ht: Leonid Kruglyak)

NPR’s Planet Money had a story on women in computer science, focusing on how the field came to be dominated by men. A key factor they identify is that early personal computers were marketed almost exclusively to boys and men. (Jeremy adds: Mom, dad, Meg took my link, make her give it back! ;-)  )

From Jeremy:

The Canadian Institute for Ecology and Evolution (the Canadian equivalent of NCEAS) is calling for proposals for working groups. Deadline Nov. 1. I organized their first working group; our first paper just came out.

Here’s a little simulator that could be useful to biostats instructors: it generates linear regressions with a specified sample size and p value for the null hypothesis of zero slope. Good for giving students a visual feel for what a significant regression looks like, and also for correcting common misconceptions (e.g., highly significant regressions don’t necessarily have high R^2 values).
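That visual intuition is easy to back up numerically. Here’s a minimal sketch (Python, with made-up parameter values; my own toy version, not the linked simulator) of a regression in which a weak effect in a big sample is highly significant yet explains almost none of the variance:

```python
import math
import numpy as np

def simulate_regression(n, slope, noise_sd, seed=0):
    """Simulate y = slope*x + noise, fit OLS, and return (p_value, r_squared).
    The two-sided p-value uses a normal approximation to the t statistic,
    which is fine for large n."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n)
    y = slope * x + rng.normal(0.0, noise_sd, n)
    xc, yc = x - x.mean(), y - y.mean()
    b = (xc @ yc) / (xc @ xc)                              # OLS slope estimate
    resid = yc - b * xc
    se = math.sqrt((resid @ resid) / (n - 2) / (xc @ xc))  # SE of the slope
    p = math.erfc(abs(b / se) / math.sqrt(2.0))            # two-sided p-value
    r2 = 1.0 - (resid @ resid) / (yc @ yc)
    return p, r2

# Weak effect, large sample: p is tiny while R^2 stays small.
p, r2 = simulate_regression(n=2000, slope=1.0, noise_sd=2.0)
```

With these numbers the fit is highly significant even though R^2 sits at a few percent; shrink n and you can produce the reverse, a respectable R^2 that fails to reach significance.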

Is it ever optimal to work your way up the journal ladder rather than down? That is, revise and resubmit a rejected paper to a more selective journal rather than a less selective one? Here’s a simple model addressing that question. Easy to see how it could be elaborated to incorporate other effects (e.g., probability of rejection without review). Note that the model parameters will depend on the paper you’re submitting as well as on the journal. This model could also be extended to determine when it’s helpful to submit to Axios Review first (I bet if you ran the numbers you’d find it’s often helpful).
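For concreteness, here’s the kind of toy calculation such a model involves (my own sketch with invented numbers, not the linked model itself): each journal has an acceptance probability and a payoff, each submission round costs time, and you compare the expected payoff of the two orderings:

```python
def expected_payoff(sequence, accept_prob, payoff, cost_per_round):
    """Expected payoff of submitting to journals in the given order,
    moving to the next one after each rejection. Each submission costs
    cost_per_round (lost time), paid whenever the paper reaches that rung."""
    ev, p_reach = 0.0, 1.0
    for j in sequence:
        ev += p_reach * (accept_prob[j] * payoff[j] - cost_per_round)
        p_reach *= 1.0 - accept_prob[j]  # probability of still being rejected
    return ev

# Invented parameters: acceptance probability and payoff for three journals.
accept_prob = {"selective": 0.15, "middling": 0.45, "sound": 0.90}
payoff = {"selective": 10.0, "middling": 5.0, "sound": 2.0}

down = expected_payoff(["selective", "middling", "sound"], accept_prob, payoff, 0.5)
up = expected_payoff(["sound", "middling", "selective"], accept_prob, payoff, 0.5)
```

Under these made-up numbers working down wins; the interesting question a model like the linked one addresses is which combinations of payoffs, acceptance probabilities, time costs, and rejection-without-review rates could reverse that.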

PlantPopNet is a new global-scale distributed experiment on the population biology of Plantago lanceolata. I’m a big fan of NutNet, the pioneering global-scale distributed ecological experiment, so it’s great to see more such experiments. Click the link to find out how you too can join PlantPopNet.

A nice balanced post on the necessity, and pitfalls, of exploratory data analysis. Resonates with Brian’s old post. I don’t entirely agree with the author’s vision as to what to do about it (basically, do away with papers entirely in favor of open-ended open science), but it’s thought-provoking.

The Canadian government continues to muzzle its scientists.

As Meg just told you, the male/female balance of computer science majors was improving steadily until the early 80s–then it went into reverse, for interesting reasons. Things kept improving in law, medicine, and physical sciences, although recently the male/female balance of those majors seems to have stabilized a bit short of equality. In other news, apparently Meg and I should divide the internet between us to minimize redundancy.

Some good points here about the utility of Twitter, but it’s going too far to say that using it is “vital to the success of your Ph.D.” Twitter is not (yet?) an essential tool the way, say, email is, and depending on the sort of person you are you won’t necessarily get anything out of it. (ht @smvamosi)

The winners of the Natural History Museum’s Wildlife Photographer of the Year contest have just been announced. I have fond memories of attending this exhibition when I was a postdoc in London. The winning image (taken from the linked story) is jaw dropping, as always:

And finally, I took the ESA’s survey of its members on what ecological concepts you find “useful” in your own work. When asked about the intermediate disturbance hypothesis I checked “not familiar with the concept”. :-P At the end of the survey, you have the chance to name some concepts that you find useful, but that you weren’t asked about. I’m pretty sure I blew my anonymity by writing “Price equation, metacommunity, zombie ideas”. :-)

What if NSF preproposals WERE the proposals?

I just finished my NSERC grant (hooray!), so I thought I’d fire off a quick post with some thoughts on the difference between NSERC grants and NSF grants. At the end, there’s a radical suggestion for NSF grants: do away with full proposals and just go with the preproposals.

If you don’t know, NSERC is the Canadian federal government agency that funds non-biomedical research in Canada. It’s the Canadian equivalent of NSF (US) or NERC (UK). As I’ve discussed in the past, NSERC Discovery grants (DGs) are very different beasts than NSF grants (or grants for almost any other funding agency on earth, as far as I know). Briefly, DGs are 5 pages long, and you propose your entire research program for the next 5 years, not just one project. DGs are similar to NSF preproposals in terms of length, but even that’s not really a great comparison because NSF preproposals describe a single project rather than an entire research program.

As an example, here’s my previous DG from 5 years ago.* It’s not the greatest proposal (looking back, there are things I wish I’d done differently), but it’ll give you the flavor of what DGs are like. If you’re from the US and have no experience with the Canadian system, you may want to read it sitting down so you don’t hurt yourself when you faint from shock. :-) As a US colleague of mine said when he read a draft of my latest DG: “This is like three NSF proposals worth of work in five pages!”

My US colleague continued by saying, “Once NSF gives you the money, they don’t want you to have to think.” That is, NSF (or at least their reviewers) wants you to have thought through every methodological detail, so that if you get funded you can just go out and do exactly what you proposed to do. “Here’s the basic idea, I’ll figure out the details later, trust me”, or words to that effect, is not something you want to say in your full NSF proposal. Whereas that’s more or less exactly what you say in an NSERC proposal.

One way to look at it is to say that NSF wants to pre-approve your methods, whereas NSERC is happy to “outsource” review of methodological details to journal referees and others who evaluate NSERC-funded research. Personally, I prefer the NSERC system. Your methods are going to be evaluated by journal referees whether or not they’ve been pre-approved by NSF, which makes NSF’s pre-approval seem a bit redundant. Plus, it’s not like NSF actually requires people to do exactly what they proposed. Now, you might say that NSF needs to evaluate your proposed methods in detail in order to identify the best proposals and avoid wasting money on proposals that won’t work. But if that were true, wouldn’t it imply that NSERC is wasting billions on infeasible or seriously-flawed work, thereby burdening journal referees with the job of weeding out lots of crappy Canadian science? To ask that question is to answer it. If that was happening, the recent international review of the DG program would’ve slammed the program rather than praising it.** The NSERC example suggests to me that a granting agency can let investigators say “we’ll figure out the details later, trust us” and not thereby waste lots of money on inferior or infeasible science.***

So here’s an idea: what if NSF preproposals were the proposals? After all, one way to encourage reviewers to focus on the big ideas rather than picky methodological details is to not provide them with any methodological details in the first place. Somewhat like how, if you’re designing an online matchmaking system and don’t want potential dates to focus on less-important things like height or weight, you need to design a system that doesn’t provide those details.

Just spitballin’ here, curious to hear what people think of this.

p.s. In an old post Meg proposed two stage peer review: you could get your methods approved before collecting the data. In a sense, that’s what NSF-style proposal reviews are: a methods review prior to data collection.

*Note that this is just the proposal, not all the supplementary information (the budget, my cv, etc.). That supplementary information is very important to the evaluation of DGs.

**The linked report is very interesting reading. For instance, some of the panelists apparently were worried that the NSERC DG success rate is too high, so that NSERC is wasting significant money on inferior science while underfunding the best stuff. But the panel couldn’t find any evidence that that’s the case (and note that other lines of evidence point the same way). This isn’t to say that NSF could or should simply adopt the entire NSERC model whole hog (though they could make some moves in that direction). I think the most plausible interpretation of international comparative evidence on scientific funding systems is that lots of different systems can work. But anyone who thinks that NSERC “must” be wasting lots of money on inferior science while underfunding excellent work will have a tough time explaining how Canada does as well or better than other advanced countries on metrics of scientific productivity and influence.

***Note that NSERC and their referees do care about your methods to some extent, and don’t just blindly trust investigators to figure everything out later. For instance, in that old proposal of mine, I got dinged for not describing and justifying the genetics methods for one of the projects in sufficient detail. And because I had no previous experience with genetics, the referees weren’t willing to trust me to figure it out later. And honestly, they were probably right not to trust me–looking back, that bit of the proposal wasn’t sufficiently well-developed in my own mind. The NSERC example merely shows that investigators can establish that they know what they’re doing while providing much less methodological detail than NSF ordinarily expects. And no, that doesn’t mean limiting your proposal to methods you’ve used before: people often propose to do things they’ve never done before in their NSERC proposals. They just need to make a better case than I did that they know what they’re doing.

Elliott Sober on the present and future of philosophy of biology

Back in Sept. I was fortunate to be able to attend a philosophy of science “summit” at the University of Calgary, with talks by a bunch of the world’s top philosophers of science. I thought I’d share my notes from Elliott Sober’s talk on the present and future of philosophy of biology. As I’m sure most of you know, Sober is a top philosopher of evolutionary biology; his book The Nature of Selection is a classic. I found his talk very interesting for several reasons. He talked about the state of philosophy of biology and its place within philosophy more broadly, and I always have an anthropological interest in hearing how people see the state of their own fields. He had a lot of advice about how to do philosophy of science, much of which encouraged philosophers to engage in scientific debates. And he made some passing remarks on how scientists in various fields perceive philosophers (apparently we ecologists are unusually receptive to philosophical input!). I don’t know enough about philosophy to evaluate all of Sober’s remarks, but I enjoyed mulling them over.

My notes follow. I did the best I could, but obviously any errors or omissions are mine.*

*************************

Philosophy of biology today seems to have less and less connection to the rest of philosophy, and seems to have little to contribute to science itself. Talking about science is science journalism; it’s not the same as contributing to science. Worried that philosophy departments will stop hiring philosophers of biology.

Philosophers seem to think that philosophy of science, and philosophy of biology, are now less central to philosophy than was the case 20-30 years ago. Why?

Public controversies about biology which had philosophical elements (e.g., sociobiology) used to have a high public profile. Not so much anymore. Gifted popularizers of biology also used to talk about philosophical issues (Dawkins, Gould, Lewontin). Again, not so much anymore.

Sociologist Kieran Healy (aside from me: hey, I’ve heard of him, I read his blog!) has done citation analyses of changes in philosophy, rankings of philosophy depts., centrality of different disciplines. Philosophy of science is not central to philosophy (though not peripheral either).

Biology is relatively hospitable to philosophy of science. Half-joking: 99% of physicists think philosophy is bullshit, it’s only 95% for biologists. (aside from me: Wonder what the number is for ecologists? I bet it’s fairly low, which maybe means ecology is especially fertile ground for philosophy of science. But in that case, why does ecology seem to get much less attention from philosophers than evolutionary biology? Presumably because evolution has an agreed-on core set of ideas and questions that give philosophers a handle to latch onto? Whereas ecology is kind of a mess, so that it’s hard for outsiders to develop a road map of the field and figure out where they might profitably contribute philosophical insight?)

Overspecialization and the “regionalist turn” in philosophy of science (Jean Gayon–the view that nothing of interest in philosophy of science can be done except within individual disciplines). Reason to doubt this—methods of reasoning and inference are not subject-matter specific. Philosophers of biology sometimes unaware that their questions had been addressed in general philosophy of science. Unfortunate, because a good trick for developing your career is to use well-developed ideas from one area to solve problems in another area. (aside from me: Yup! That’s not just good advice for philosophers, it’s good advice for anyone. That’s what a good chunk of my own career consists of, anyway–shamelessly stealing ideas from one area and applying them to a different area: applying the Price equation to ecology, applying modern coexistence theory to the IDH, half the blog posts I write. Or think of neutral theory in ecology, or MaxEnt, or using ideas from economics to understand resource trade mutualisms…)

Philosophers seem to be retreating from making normative statements about the practice of science. Describing science without critiquing it. We critique creationists, why not scientists? Scientists make normative judgements of one another’s methods, so why can’t philosophers do so too? Or think of statistics—gives normative advice on how to proceed, given epistemic goals and empirical facts.

Clarifying a concept or the logic of a line of argument is clearly recognized as “real” philosophy by philosophers outside of philosophy of science. That’s not just purely descriptive work.

Retreat from normativity also due to influence of history of science on philosophy of science? Historians think of normative judgements as anachronistic and hubristic.

Rational reconstruction of historical scientific arguments—is this legit? Helpful? If the scientists themselves didn’t use the logical and mathematical tools used in your reconstruction, isn’t that anachronistic? (aside from me: Deborah Mayo would say yes. She calls rational reconstructions–e.g., trying to show that a piece of pre-Bayesian scientific reasoning was “really” Bayesian–“painting by numbers”. Just because you can come up with a paint by numbers picture of the Mona Lisa, in what sense does that let you understand how the Mona Lisa was painted? But Sober likes the approach and uses it himself.)

One way for philosophers to make normative contributions without fear—find a scientific controversy and engage in it. Of course need to identify controversies that do have a philosophical component. Which means you need to reject or at least not take too seriously Quine’s claim that philosophy is continuous with science. (aside from me: speaking as a scientist, this is great advice. There are a lot of scientific controversies that are really philosophical, but that aren’t recognized by the participating scientists as philosophical. Or even if they are, the scientists lack the philosophical expertise to properly resolve them. If any philosophers are reading this and want some suggestions for ecological topics that could use some proper philosophical attention, drop me a line!)

Another way to be interestingly normative—find a proposition scientists accept uncritically, and identify scientifically interesting conditions under which it would be true or false. Note that these conditions need not be highly probable. For instance, think of Felsenstein’s demonstration that cladistics parsimony is statistically inconsistent. Turns out that parsimony does in fact make implicit substantive assumptions.

Practical tips to get scientists to pay attention—collaborate with scientists and publish in scientific journals. (aside from me: many of the philosophers I admire have done this. Sober himself. Deborah Mayo. William Wimsatt. Samir Okasha. It would be cool if someone like Chris Eliot were to publish some philosophy of ecology in an ecology journal. I’ll bet Oikos would take philosophy of ecology in its Forum section, and the right paper might fly as a Synthesis & Perspectives piece in Ecology. Ideas in Ecology & Evolution would go for it too, though they’re less widely-read.)

Normative problems are often discarded as dead ends when the question could be revised in a fruitful way. Example: best Bayesian definition of degree of confirmation? Change the question to “how do Bayesian and frequentist accounts of testing differ, and which is better under which circumstances?” The latter question is of practical relevance for science—journals have policies on what statistics are appropriate. What’s the right criterion for empirical significance? Change the question to “What does it mean for a theory to be testable, relative to background info?” Does Ockham’s razor depend on the assumption that nature is simple? Change the question to “Does cladistics parsimony work only when evolution is parsimonious?”

All research programs experience diminishing returns. But when it happens with normative problems, that doesn’t mean you should stop asking normative questions, you just need fresh ones.

After the talk, the discussion was kicked off by some designated commenters, some of whom were scientists (aside from me: I thought this was an interesting way to structure the symposium; I’d be curious to see a similar structure tried at an ESA symposium). Ford Doolittle remarked that molecular biologists and genomicists often don’t even realize they have a philosophy. Which leads to disputes that are thought to be empirical, but are not (think of the debate over ENCODE and whether 80% of the genome is ‘functional’). Another example: the debate over whether there’s a tree of life is really a debate over “what do you mean by ‘tree’”? As opposed to evolutionary biologists and ecologists, who are open to philosophy. To reach biomedical and molecular types, Doolittle suggested that philosophers will need to publish in Nature and Science and PNAS, presumably in collaboration with biologists.

*Sorry, no time for real posts at the moment, we’re all swamped. But my grant deadline will soon be past and I’m now done with my teaching for this term, so normal service from me will resume shortly.

What math should ecologists teach?

Recently Jeremy made the point that we can’t expect ecology grad students to learn everything useful under the sun and asked in a poll what people would prioritize and toss. More math skills was a common answer of what should be prioritized.

As somebody who has my undergraduate (bachelor’s) degree in mathematics, I often get asked by earnest graduate students what math courses they should take if they want to add to their math skills. My usual answer is nothing – the way math departments teach math is very inefficient for ecologists; you should teach yourself. But it’s not a great answer.

In a typical math department in the US, the following sequence is the norm as one seeks to add math skills (each line is a one-semester course, taken roughly in the sequence shown):

  1. Calculus 1 – Infinite series, limits and derivatives
  2. Calculus 2 – Integrals
  3. Calculus 3 – Multivariate calculus (partial derivatives, multivariate integrals, Green’s theorem, etc)
  4. Linear algebra – solving systems of linear equations, determinants, eigenvectors
  5. Differential equations – solving systems of linear differential equations, solving engineering equations (y”+cy=0)
  6. Dynamical systems – yt+1=f(yt) variations including chaos
  7. Probability theory (usually using measure theory)
  8. Stochastic processes
  9. Operations research (starting with linear programming)

That’s 7 courses over and above 1st year calculus to get to all the material that I think a well-trained mathematical ecologist needs! There are some obvious problems with this. First, few ecologists are willing to take that many classes. But even if they were, this is an extraordinary waste of time, since over half of what is taught in those classes is pretty much useless in ecology even if you’re going deep into theory. For example: path and surface integrals and Green’s theorem are completely irrelevant. Solving systems of linear equations is useless, thereby making determinants more or less useless. Differential equations as taught – useless to ecologists (very useful to physicists and engineers). Measure-based probability theory – useless. Linear programming – almost useless.

Here’s my list of topics that a very well-trained mathematical ecologist would need (beyond a 1st year calculus sequence):

  1. Multivariate calculus simplified (partial derivatives, volume integrals)
  2. Matrix algebra and eigenvectors
  3. Dynamical systems (equilibrium analysis, cycling and chaos)
  4. Basic probability theory and stochastic processes (especially Markov chains with brief coverage of branching processes and master equations)
  5. Optimization theory focusing on simple calculus based optimization and Lagrange multipliers (and numerical optimization) with brief coverage of dynamic programming and game theory

Now how should that be covered? I can see a lot of ways. I could see all of that material covered in a 3 semester sequence (#1/#2, #3, #4/#5) if you want to teach it as a formal set of math courses. And here is an interesting question. We ecologists often refuse to let the stats department teach stats to our students (undergrad or grad) because we consider it an important enough topic that we want our spin on it. Why don’t we have the same feelings about math? Yet as my two lists show, math departments are clearly focused on somebody other than ecologists (mostly, I think, on other mathematicians in upper-level courses). So should ecology departments start listing a few semesters of ecology-oriented math among their courses?

But I could see less rigorous, more integrative ways to teach the material as well. For example, I think in a year-long community ecology class you could slip in all the concepts. Dynamical systems (and partial derivatives) with logistic/Ricker models and then Lotka-Volterra. Eigenvectors and Markov chains with Horn’s succession models or with age/stage structure, then eigenvectors returning as a Jacobian on predator-prey. Master equations on neutral theory. Optimization on optimal foraging and game theory. Yes, the coverage would be much less deep than a 3 semester sequence of math-only courses, but it would, I think, be highly successful.
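To make the Horn’s-succession piece concrete, here’s a short sketch (Python/NumPy, with made-up transition probabilities) of the eigenvector calculation involved: the long-run community composition is the left eigenvector of the transition matrix associated with eigenvalue 1:

```python
import numpy as np

# Toy Horn-style succession matrix (invented numbers): entry [i, j] is the
# probability that a site now occupied by species i is next occupied by j.
P = np.array([
    [0.50, 0.30, 0.20],   # pioneer
    [0.10, 0.60, 0.30],   # mid-successional
    [0.05, 0.15, 0.80],   # late-successional
])

# Stationary composition = left eigenvector of P for eigenvalue 1
# (rows of P sum to 1, so that eigenvalue always exists).
vals, vecs = np.linalg.eig(P.T)
stationary = np.real(vecs[:, np.argmax(np.real(vals))])
stationary = stationary / stationary.sum()   # normalize to proportions
```

Here the late-successional species dominates the stationary mix, and the same eigenvector machinery reappears for stage-structured population matrices and for Jacobians in stability analysis.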

I say “I think” because I don’t know anywhere that teaches the math this way. I teach a one-semester community ecology grad class and try to get a subset of the concepts across, but certainly don’t come anywhere close to covering everything that I wish were covered (i.e. my list above). And I know a lot of places have a one-semester modelling course for grad students. But departments teaching their own math courses, or teaching a math-intensive ecology sequence, I haven’t come across.

What do you think? Have I listed too much math, or left your favorite topic out? How should this be taught? And to how many of our students (undergrads, all grads, or only a subset of interested grads) should this be taught?

Friday links: new p-hacking data, grant lotteries, things ecologists will never experience, and more

Also this week: a blogging anniversary, betting on replication, Shakespeare vs. dead animals, Brian and Jeremy have a link fight, and more. Also terrible philosophy puns.

From Brian (!):

Does which countries your coauthors come from affect the impact factor of the journal your paper ends up in? Apparently yes, according to this piece from Emilio Bruna.

In the always entertaining and provoking Ecological Rants blog, there is a quote from Thomas Piketty’s book (which is setting the economic world on fire on the topic of income inequality for its careful empirical compilation of historical data). The quote is pretty harsh about economists’ obsession with little toy mathematical models that don’t inform about the real world. Krebs argues this critique applies to ecology as well (and cites no less than Joel Cohen, one of the great theoretical ecologists, who regularly chides ecologists for their physics envy). While I am an advocate for more math education in biology, I have to confess a certain sympathy with the quote. We’re so busy obsessing over equilibrium math models and small-scale manipulative experiments that we’re missing a lot of the story that is sitting in front of us in the massive amounts of data that have been and could be assembled. (There’s a controversial statement to make you sit up on a Friday.)

Following up on my post about NSF’s declining acceptance rates, there is a well-argued blog post by coastalpathogens suggesting we should just move to a lottery system (one of my suggestions, but not one that received a lot of votes in the poll).

From Meg:

Things ecologists are unlikely to learn firsthand: it’s hard to fly with a Nobel Prize. (Jeremy adds: is it hard to fly with the Crafoord Prize?)

The Chronicle of Higher Education had an article on increasing scrutiny of some NSF grants by Congressional Republicans (subscription required).

From Jeremy:

Link war! Brian, I’ll see your Thomas Piketty quote, and raise you Paul Krugman. Krugman’s long advocated the value of deliberately simplified toy models as essential for explaining important real-world data, making predictions, and guiding policy. See this wonderful essay on “accidental theorists” (and why it’s better to be a non-accidental theorist), this equally-wonderful essay on how badly both economists and evolutionary biologists go wrong when they ignore “simple” mathematical models, and this one in which Krugman explains his favorite toy model and how it let him make several non-obvious and very successful predictions about the Great Recession. Oh, and as important as Piketty’s empirical work is, it’s worth noting that even very smart and sympathetic readers have had a hard time figuring out what his implicit model is. If your model’s not explicit (and if you don’t care much for doing experiments), then your big data might as well be pig data. While I’m at it, I’ll raise you R. A. Fisher too.*

Statistician Andrew Gelman has been blogging for 10 years. I was interested to read his comments that there used to be more back-and-forth among blogs 10 years ago, and that these days that only happens in economics. I share the impression that economics is the only field that has a blogosphere. I also share Andrew’s view that Twitter is no substitute for blogs. Twitter has its uses. But “in depth conversation and open-ended exploration of ideas” is not one of them.

Speaking of Andrew Gelman, he passes on a link to a new preprint on the distribution of 50,000 published p-values in three top economics journals from 2005-2011. I’ve skimmed it, and it seems like a pretty careful study, one which avoids at least some of the problems of similar studies I’ve linked to in the past. The distribution has an obvious trough for marginally non-significant p-values, and an obvious bump for just-barely-significant p-values. The authors argue that’s evidence not just of publication bias, but of p-hacking (e.g., choosing whichever of a set of alternative plausible model specifications gives you a significant result). They estimate that 10-20% of marginally non-significant tests are p-hacked into significance. The shape of the distribution is invariant to all sorts of factors: the average age of the authors, whether any of the authors were very senior, whether a research assistant was involved in the research, whether the result was a “main” result, whether the authors were testing a theoretical model, whether the data and/or code were publicly available, whether the data came from lab experiments, and more.
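The mechanism the authors have in mind is easy to simulate. In this toy sketch (mine, not the preprint’s analysis), every “study” is pure noise, but the analyst tries several overlapping specifications and reports the smallest p-value, which pushes the share of significant results above the nominal 5%:

```python
import math
import numpy as np

def p_value(x, y):
    """Two-sided z-test for a difference in means (normal approximation)."""
    se = math.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    z = (x.mean() - y.mean()) / se
    return math.erfc(abs(z) / math.sqrt(2.0))

def hacked_p(rng, n=50, n_specs=5):
    """One null 'study': the analyst tries n_specs overlapping data subsets
    (stand-ins for alternative model specifications) and keeps the best p."""
    x, y = rng.normal(size=n), rng.normal(size=n)
    ps = []
    for _ in range(n_specs):
        keep = rng.random(n) < 0.8        # each spec drops ~20% of the points
        ps.append(p_value(x[keep], y[keep]))
    return min(ps)

rng = np.random.default_rng(42)
ps = np.array([hacked_p(rng) for _ in range(2000)])
false_pos_rate = (ps < 0.05).mean()
# Everything is null, so honest testing would flag ~5% of studies as
# significant; picking the best of several correlated specs inflates that.
```

The realized rate lands comfortably above 5% under these made-up settings; the correlated specifications keep the inflation modest, which is part of why this kind of p-hacking is hard to spot study by study and shows up only in the aggregate distribution.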

One more from Gelman: You can now bet real money on whether a bunch of replication attempts in psychology will pan out. I think it would be really fun, and very useful, to have something like this in ecology.

Most tenure-track jobs do not have 300+ applicants (and even the few that do tend to have an unusually-high proportion of obviously-uncompetitive applicants).

Speaking of tenure-track job searches: soil ecologist Thea Whitman with a long post on what it was like to interview (successfully!) for a tenure-track job. Go read it, it’s full of win.

Shakespearean insult or animal on display at Harvard’s Museum of Natural History?

Philosophy student karaoke songs.

*I’m guessing that Brian saw this response from me coming from 10 miles away, but I figure he (and y’all) would have been disappointed if I didn’t actually follow through and provide it. My clockwork reliability is one of my most endearing features. That, and my refusal to take second place to any ecologist when it comes to making half-baked analogies with economics. [looks over at Meg, sees her rolling her eyes, coughs awkwardly] :-) In seriousness, I actually do see what Brian means and probably don’t disagree with him that much here. And for what it’s worth, I think current trends in ecology are mostly running in the direction Brian would like to see them run (e.g., away from MacArthur-style toy models of a single process).


How to get a postdoc position (guest post)

Note from Jeremy: This is a guest post by Margaret Kosmala, a postdoc in Organismal and Evolutionary Biology at Harvard. It’s the first in a planned series on life as a postdoc.

**************************************

I did not start thinking about getting a postdoc position until it was almost too late. I was focused on my dissertation research and finishing up before I ran out of money. About six months from defending, I suddenly realized that I would be unemployed once I did defend. I knew that I had to start trying to find a postdoc position right away. And then I realized I had no idea how to go about doing so. This was at the beginning of last summer and so I spent the next months talking to as many people as possible. Here is what I learned.

There are essentially two ways of obtaining a postdoc. The first is to write your own. The second is to apply for a job with someone who already has a project.

To write your own postdoc may be the best option if your objective is a future research career. However, you need to start early. Assuming you already know what sort of research you want to do, you have three potential methods of obtaining the funding to support yourself. You can co-write a proposal with your future postdoc mentor, you can look for fellowship opportunities, or you can look for a postdoc advisor with deep pockets.

If you know who you want to work with and what you want to do, co-writing a successful major grant proposal can be great experience and look stellar on your CV or in a letter of recommendation. If you want to try this route, you should start contacting prospective postdoc advisors a couple years before you expect to defend.

Yes, I said a couple years.

Why a couple years? Most organizations have just one or two funding cycles per year. For example, if you expect to defend in May 2016, and you would like to be funded on an NSF DEB grant, you would need to have that grant funded by January 2016. In order to do that, you would need to submit your pre-proposal in January 2015. And in order to submit in January, you would need to start working on the proposal this fall (2014). Which means that you should probably have established a rapport with your future postdoc advisor by now.

Defending before May 2016? Fellowships are your thing? You can look for postdoctoral fellowships offered by funding organizations such as NSF, by research centers like SESYNC and NIMBioS, and by private entities like the McDonnell Foundation. Generally speaking, you will need to have a postdoc advisor in mind.

A less well-known source of fellowship funding is universities themselves. Some universities offer institution-wide fellowships on a competitive basis. At other universities there are research centers focused on environmental issues that also offer fellowship opportunities. Finding out which universities provide these opportunities can be tedious, however, so it’s often best to ask potential postdoc advisors what, if any, opportunities are offered at their institutions.

If you’re looking for postdoc fellowships offered through large agencies or foundations, they often have just one or two deadlines per year, which means that you may need to write a competitive proposal about a year in advance. When I started thinking about a postdoc position six months ahead of defending, I was too late for almost all postdoc fellowships.

Which brings me to the third method for writing your own postdoc. Some professors have, at times, a pot of money they can use to hire a postdoc. It may be in the form of an endowed professorship, start-up funds, prize money, etc. If you’ve only got six months or so before defending, you might start asking around to see if anyone you know – or anyone those people know – expects to have money to fund a postdoc in the next year or so. Sometimes researchers get money they weren’t expecting and need to use it relatively quickly, so keep your ears open. You’ll want to be able to pitch an exciting idea to your prospective postdoc advisor and have a handful of references (friends of the prospective advisor are ideal) who are willing to attest to your awesomeness.

Finally, the remaining way of obtaining a postdoc: applying for advertised positions. I won’t say too much about this method, since it’s pretty straightforward and there are other websites that give guidance as to where to look for job ads and how to best position yourself. In a nutshell: you find a position that looks like it would fit you, send in an application, perhaps get an interview (often by phone or Skype), and sign a contract if you’re offered the position and accept it. In applying, you should do smart things like read the webpage(s) and some recent publications of the job offerer. If you’re offered the position, interview other postdocs and grad students in the lab before accepting; you should like your work environment as much as the research itself. And you might glance at the benefits package to make sure it’s sufficient.

Hurray! You’ve got a postdoc position. Now tell everyone you know, save up a couple thousand dollars or raise the limit on your credit card in preparation for your move, and say goodbye to your friends. Check out ESA’s new Early Career Ecologist Section. Oh, and definitely finish that dissertation.

Friday links: dump the Canadian CCV, unreclaiming zoology, billion dollar grant, and more

Also this week: new videos for teaching ecology, social media as professional development, the pluses and minuses of minority-focused conferences, the best ecology blog you’ve (well, I’d) never heard of, and more.

From Meg:

I added two fun, deep sea-related new videos to my collection of videos for teaching ecology: 1) a massive deep sea mussel bed; in the video, they have the robotic arm play with the solid methane hydrate that has formed near the mussels (ht: Deep Sea News), and 2) a video of a whale fall community, complete with footage of a shark tearing into the whale (ht: Joshua Drew). Fun! And, while we’re talking about whale falls, this is a neat article about them; among other things, it talks about snotworms (yes, there really are things called “snotworms”).

Conservation Biology is the latest journal to go to double blind peer review. I love the opening line of this announcement: “To have biases is human, to fight them, while not divine, is at least worth attempting.” (ht: David Shiffman)

SciWo had a Tenure, She Wrote post on social media as professional development. I really enjoyed it. As she summarizes near the end

Being on-line does take some time, but so does everything worth doing in life. I’ve never seen any convincing data to show that strategic use of social media is any worse investment of my time and energy than any other thing I could do with those random moments of brain weariness or distraction when I find myself refreshing my Twitter feed or reading a blog post. Instead, the benefits I’ve listed above seem to make a compelling case for engaging with your academic peers on-line – just as you would if you encourage networking in person at conferences.

From Jeremy:

Here’s a petition I can get behind! Tell NSERC to dump the Canadian Common CV. For you non-Canadians: last year NSERC and other Canadian funding agencies started requiring researchers applying for grants to provide their CVs using a ridiculous online form. The petition is not exaggerating–it literally is two weeks of work to enter all the information (I know because I just did it for my grant renewal application). Which the software then prints in a horrendously organized and butt-ugly format that makes it very difficult for the people who are evaluating your application to find the information they want. But hey, at least we get…um, actually there’s no upside. Unfortunately, researchers have already been protesting the CCV since it was introduced, and all we’ve gotten in response is minor software updates, so I doubt this petition will go anywhere. The next time an institution admits a mistake and drops enterprise software it had previously adopted will be the first time.

Caroline Tucker asks a good question: What would you do with a billion dollar grant? The inspiration is a billion dollar EU-funded project to recreate the human brain with supercomputers. As that example and others illustrate, the way to attract a big slug of money to a field these days often is with some very ambitious project. Click through for Caroline’s nice discussion of what a billion dollar ecology project might look like (a more expensive version of NEON isn’t the only possibility). Semi-related discussions of the trade-offs between centrally-coordinated science and individual investigator science, and between expensive science and cheap science, here, here, and here, and see here for a relevant historical discussion of the IBP.

Ray Hilborn on “faith-based fisheries”. An entertaining and provocative polemic from 2006. I’m not qualified to evaluate it, but thought it worth passing on. (ht a correspondent, via email)

Un-reclaiming the name “zoologist”. Just one of many interesting posts from the EcoEvo group blog from the ecology & evolution students (and faculty?) at Trinity College Dublin. It is long-running, but I just stumbled across it last week. Apparently I should’ve noticed them much earlier, as they’ve just been named “Best Science and Technology Blog in Ireland“. I added them to our blogroll.

For instance, here’s Natalie Cooper from EcoEvo on her experience of having to work very hard to organize a gender-balanced plenary session for a specialized conference. Includes lots of practical suggestions for overcoming the usual excuses for lack of gender balance (which as she notes aren’t merely excuses–they’re often real problems). Kudos to her for putting in all that effort and I’m glad it was rewarded in the end. Though had it not been rewarded, I hope she wouldn’t have beaten herself up. See this old post of Meg’s for related discussion.

Last year the NSF DEB and IOS surveyed the community about their views of the new preproposal system. The results are in. The headline result is that people like the preproposal system but don’t like being able to apply only once/year. Note that fears that the new system would disproportionately affect certain groups have not been borne out. Group representation among awardees is the same under the new system as under the old system. (ht Sociobiology)

Terry McGlynn is torn over whether it’s useful for minority students to attend a minority-focused conference if that conference doesn’t include many people working in their field. Somewhat related to my old post arguing that students mostly shouldn’t bother attending student-focused conferences.

Zen Faulkes wonders whatever happened to the annual Open Lab anthology of the best online science writing, and what its apparent demise says about the changing face of online science writing.

Amy Parachnowitsch on the many benefits of writing a review paper (or three at once, in her case!)

The Chronicle has picked up ecologist Stephen Heard’s piece (noted in a previous linkfest) on the value of humor in scientific writing.

In 2006 Germany started following the lead of many other countries and began charging tuition at its public universities. They’ve now reversed that decision.

And finally: Happy Canadian Thanksgiving! :-)

Poll results: what should ecologists learn less of?

Here, for what they’re worth*, are the results so far from yesterday’s poll asking readers to name the most important thing for ecologists to learn more of, and the thing they should learn less of in order to free up time. We’ve gotten 165 responses so far, and based on past experience the results won’t change much if we wait any longer.

Results first, then some comments:

[Chart: votes for what ecologists should learn more of]

[Chart: votes for what ecologists should learn less of]

  • No consensus for either question. And not only that, every topic got at least one vote as the most important thing for ecologists to learn more of, and at least one vote as the most important thing for ecologists to learn less of! I think this is a useful reminder of just how diverse ecologists are in terms of their background knowledge, motivations, interests, and expertise (which I think is a good thing, by the way).
  • Most popular answer to both questions was “it depends”, which I interpret as a vote in favor of flexible curricula that let different people specialize on different things according to their own needs and interests. That’s what many (not all) graduate curricula are like, of course.
  • Probably no surprise that the next two most popular choices for what ecologists should know more of were “programming” and “statistical techniques”. For obvious and very good reasons, there’s a long-term trend for all fields of science to become more quantitative, and to make heavier use of computers. Then natural history, math, and evolution.
  • Next most popular choices for what ecologists should know less of were “chemistry” and “physics”. I interpret that as a vote against the common North American practice of requiring all science majors (not just ecologists) to take introductory physics and introductory chemistry. Curious to hear discussion of this. After that was “economics” and “mathematical foundations of statistics”. Ecologists aren’t ordinarily taught anything about either of those, so I suspect that votes for these were just people’s way of identifying the least-important subjects on the list for ecologists to know, whether or not those subjects are actually part of current ecology curricula. Next was “genetics and molecular biology”, followed by “natural history” and then “philosophy of science”.
  • There were no obvious associations between what folks thought ecologists should learn more of, and what they thought ecologists should learn less of, except that most (but not all) people who said “it depends” for one question also said “it depends” for the other.
  • If for each topic you take the difference between the number of votes for more of it and less of it, you get a crude index of respondents’ net desire to see ecologists learn more of it. By this measure, the topics rank as follows: programming +20, statistical techniques +15, math +11, evolution +9, natural history +7…[skipping some]…economics -16, physics -19, chemistry -21. And the topics in the middle were those that received few votes for either question. There weren’t any hugely controversial topics that lots of people really want ecologists to learn more of and lots of other people really want ecologists to learn less of.
  • Full disclosure: I’d have answered both questions “it depends”.
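For the curious, the “net desire” index in the bullet above is trivial to compute. Here’s a minimal Python sketch; the vote counts are made-up placeholders chosen to reproduce a few of the differences quoted above, not the actual poll tallies.

```python
# Crude "net desire" index: for each topic, votes for learning MORE of it
# minus votes for learning LESS of it. These counts are illustrative
# placeholders, not the real poll data.
votes_more = {"programming": 30, "statistical techniques": 25, "chemistry": 3}
votes_less = {"programming": 10, "statistical techniques": 10, "chemistry": 24}

topics = set(votes_more) | set(votes_less)
net_desire = {t: votes_more.get(t, 0) - votes_less.get(t, 0) for t in topics}

# Rank topics from most to least net desire and print signed scores.
for topic, score in sorted(net_desire.items(), key=lambda kv: -kv[1]):
    print(f"{topic}: {score:+d}")
```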

*Probably not much

What should ecologists learn LESS of?

There are lots of things that it would be nice for ecologists to know more of. Natural history. Math. Programming. Statistical techniques. The mathematical foundations of statistics. Philosophy of science. Genetics. Evolution. Other things.

If you’re like me, you probably think ecologists should know more about at least one of those things, and don’t think ecologists should know less of any of them. After all, you often hear people say “Ecologists should know more about X”. But you never hear anyone say “Ecologists should know less about X”. Which is a problem. If you want ecologists to be trained in more of some things than they currently are, without being trained in less of anything else they are currently trained in, then you want the impossible. Well, unless you also think that undergraduate and graduate programs in ecology should last significantly longer than they do!

Don’t misunderstand, it’s fine for people to say what they think ecologists should know more of. That’s an essential part of revising curricula. But the other half–the less fun, but equally necessary, half–is deciding what to drop in order to free up time for the stuff you want to do more of. Anyone who’s taught a class has had the experience of agonizing over not being able to cover lots of fascinating and tremendously important material, because there’s just not enough time. But I think we sometimes forget that time constraints also operate at the level of entire curricula. So it’s fine to say that ecologists should know more of X. But if that’s all you say, well, that’s the curriculum design equivalent of wishing for a pony.*

Of course, when people say “Ecologists should know more of X”, they aren’t necessarily commenting on the design of ecology curricula. In my admittedly anecdotal experience, sometimes it seems like they’re really saying, “I know a lot about X, and so it really bugs me when people who know less about X make mistakes that could’ve been prevented had they known more about X.” Of course, nobody ever continues, “On the other hand, I know nothing of Y, and so am totally unaware of all the mistakes people make due to their lack of knowledge of Y, and so can’t really judge the relative importance of knowing X vs. knowing Y.” And sometimes what they’re really saying is “I know more about X than the average ecologist, which is good because the optimal amount to know about X is whatever amount I personally happen to know.” And sometimes they’re really saying something else. But for purposes of this post, I want to take statements like “Ecologists should know more about X” at face value, and think about the hard choices of curriculum design that follow from such statements.

After all, the world is changing, technology is changing, etc., so maybe ecology curricula do need to change to keep up (they’ve certainly changed in the past). Maybe we really do all need to know more about X, in which case we need to make some hard choices and figure out how to free up the time for everybody to learn more about X.

So let’s talk about those hard choices. As a conversation starter and mind-focuser, below is a little poll. It asks you to name the one thing you think it’s most important for ecologists to learn more of, and the one thing you think ecologists should learn less of, in order to free up time for them to learn more of whatever it is you think they should learn more of. Both questions are required, so you can’t complete the poll by just wishing for a pony and saying what you think ecologists should learn more of. If you don’t think ecologists need to learn more of anything, there’s an option for that (in which case you’re allowed to say they don’t need to learn less of anything either). And if you think different ecologists need to learn more of different things, or less of different things, you have that option. That’s the option you’d pick if you think ecology should involve lots of collaboration among differently-trained specialists. But reasonable as that last option might well be, I’m hoping you don’t all chicken out and take it. :-)

Note that you can think of the poll as encompassing undergraduate and graduate training collectively (which is how I think of it), or as focusing on one or the other (e.g., because you think undergraduate curricula are fine but graduate curricula need revamping).

p.s. Before anyone complains about the way the poll is structured: yes, I obviously could’ve structured it differently. But no structure would’ve pleased everyone. I went with this poll because it seemed like a fun conversation starter, which is all it’s meant to be. It’s not a scientific sample from any well-defined population. Also, this poll was easy to write; you get the polls you pay for on this blog. If you don’t like the poll, no worries, just ignore it. You can still comment on what changes you’d like to see to ecology curricula–but no wishing for ponies! :-)

*Of course, you can also argue that ecologists should learn the same things, but better or differently than they currently do. See for instance Fred Barraquand’s comment on a recent post. That’s an important point, but it’s orthogonal to this post.

Should journal editors be anonymous?

Should journal handling editors be anonymous?

Editor anonymity used to be rare or nonexistent at ecology journals. But it seems to be more common now, at least for certain decisions and at certain journals. In particular, it now seems to be fairly common for rejections without review to be anonymous.

I can understand the reasons for this. The stakes are higher these days, or at least they’re perceived to be higher, which might amount to the same thing. Many authors probably feel like they have a lot riding on every ms, and editors don’t want authors to get upset with them over rejections. Both because it’s no fun to have to deal with irate authors, and because of the fear that an author might hold a grudge against you and give you a bad review on your next grant or something. I have friends and colleagues whom I hugely respect who serve as editors and are glad to have, or wish they had, the option to remain anonymous.

But while I can understand the reasons, I think they’re outweighed by other considerations. I personally don’t like editor anonymity. I served as an editor myself at Oikos for several years, starting before I had tenure. As far as I can recall, our names went on all our decisions, including rejections without review, and I wouldn’t have had it any other way. As an editor, I felt that since I was the one with decision-making power, I needed to take responsibility for my decisions. Which for me meant being willing to sign my name to them. This is unlike being a referee, whose job is merely to provide advice to the editor. And while the final decision officially rested with the Editor-in-Chief, in practice the EiC ordinarily just rubber-stamped the decisions of the editorial board members (that’s the way it is at most ecology journals). And if that led to a senior ecologist getting upset with me (as happened to me once at Oikos), well, if you can’t take the heat, stay out of the kitchen.* Once in a while, a professional decision you make might upset someone. That’s unfortunate, but that’s life.

I worry that editor anonymity undermines trust in the peer review system. Authors are more likely to respect a decision if they know who it’s coming from. Editor anonymity feeds the perception that peer review is a crapshoot at best and a rigged game at worst. Journals and their editors should fight that perception, not encourage it.

The Committee on Publication Ethics (COPE) has criticized editor anonymity. Now, in fairness their criticism focuses on the practice of editors writing anonymous reviews of the mss they handle.** But COPE’s reasons for criticizing that practice apply to editor anonymity more broadly, I think. As COPE notes, editors are the overseers–there’s nobody to oversee and evaluate them. Overseers shouldn’t be anonymous.

But I bet this is an issue on which some folks (probably including some of my friends) will disagree with me, so let’s talk about it. As an author, do you mind editor anonymity, or not? As an editor, are some or all of your decisions anonymous, and if not do you wish they were? Why? Looking forward to your comments.

*Plus, is it really that common for scientists to hold serious long-term grudges against one another, and be in a position to act on them in a way that would materially affect someone else’s career? Or is the increased competition for jobs, grants, and space in leading journals just causing people to worry more about that unlikely possibility? For instance, in an old post on a related topic, Brian notes that his very first paper as a grad student was a very high profile paper that seems to have upset a very prominent ecologist. But Brian’s career has gone just fine. As I said in a different context, I think it’s pretty rare for one little thing–like say, one editorial decision you make–to materially affect your career one way or the other. But of course, I have nothing more than anecdotes to back this view.

**When I read that, I was stunned. There are editors who do that? I’d never heard of such a ridiculous editorial practice. But that’s not what this post is about.