Friday links: on live tweeting talks, measurement vs. theory, #myworstgrade, and more

Also this week: the ethnography of Wikipedia, why statistically significant parameter estimates are biased estimates, and more. Plus a hilarious prank instructors can play on their TAs! For some value of “hilarious”.

From Jeremy:

Lots of sensible discussion in the paleo blogosphere this week about the need for clear policies on live tweeting of conference talks, after a speaker asked the audience not to live tweet her talk and a late-arriving audience member did so. See here, here, here, and here. I don’t have much to add, except that this seems to me to be one more example of how we’re living through culture clashes. (full disclosure: I personally am fine with people live-tweeting my talks, taking the view that it’s no different than people talking about my talks or journalists writing about them, and that it’s vanishingly unlikely that anyone would try to scoop me on the basis of tweets, or be able to if they tried. But I’m an ecologist, if I were in some other field I might feel differently.) I do think it’s interesting to see people who like live tweeting nevertheless calling for conferences to impose some rules for everyone to abide by. I haven’t often seen this sort of call in other areas in which new online tools are unsettling established expectations and practices. In my admittedly-anecdotal experience, it’s more common for advocates of new online tools to downplay the importance of agreed rules for the appropriate use of those tools. Not sure why.

Interesting piece on the need for more theory in neuroscience, along with suggestions for how to promote theory and theory-data linkages. I always like reading about how folks in other fields see issues that also crop up in ecology. (ht Not Exactly Rocket Science)

Speaking of the need for theory, here’s a really nice post on the dangers of “measurement before theory”. If you don’t know exactly what you’re trying to measure, so you just go with some plausible-seeming index, there are going to be tears before bedtime. I once tried to get at this in an old post, but didn’t say it as well. Also provides a nice cautionary tale, suitable for undergraduate introductory stats courses, on the limitations of using covariates to try to control for extraneous sources of variation. Note that the linked post is about economics, but it’s totally accessible and you’ll be able to think of the ecological analogues very easily. For instance, think of the fruitless debate over different indices of the “importance” of competition in community ecology. (ht Economist’s View)
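To make the covariate point concrete, here’s a toy simulation (my own illustration, not the example from the linked post) in which “controlling for” a plausible-seeming covariate manufactures a spurious effect, because the covariate sits downstream of both variables (a “collider”):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# No true effect of x on y, but both influence a downstream covariate z
x = rng.normal(size=n)
y = rng.normal(size=n)          # independent of x by construction
z = x + y + rng.normal(size=n)  # a "plausible-seeming" covariate (a collider)

def ols_slope(response, covariates):
    """Return the coefficient on the first covariate (intercept included)."""
    design = np.column_stack([np.ones(n)] + covariates)
    beta, *_ = np.linalg.lstsq(design, response, rcond=None)
    return beta[1]

naive = ols_slope(y, [x])        # ~0: correctly finds no effect
adjusted = ols_slope(y, [x, z])  # strongly negative: "controlling" created bias

print(f"slope without covariate: {naive:.3f}")
print(f"slope 'controlling for' z: {adjusted:.3f}")
```

The naive regression correctly finds no effect; adding the covariate conjures a strongly negative slope out of thin air. Adjusting for a variable only helps if it sits in the right place in the causal structure, which is exactly why theory has to come before measurement.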

Here’s a really nice figure from Andrew Gelman, illustrating the expected distribution of estimated effect sizes for a low-powered study in which the true effect size is positive but only slightly different from zero. Statistically significant estimates are those that are much larger in absolute magnitude than the true effect, and often have the wrong sign. I’ve read some of Gelman’s writings about “type M” (magnitude) errors and “type S” (sign) errors before, but this really clarified his point for me. Still mulling it over.
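If you want to see the logic for yourself, here’s a minimal simulation in the spirit of Gelman’s figure (my own sketch, not his code; the true effect and standard error are made-up numbers chosen to produce a badly underpowered study):

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.1   # small positive true effect (made-up number)
se = 1.0            # standard error of each estimate; power is very low
n_sims = 100_000

# Sampling distribution of the estimated effect across many hypothetical studies
estimates = rng.normal(true_effect, se, n_sims)

# Keep only the "statistically significant" estimates (|z| > 1.96)
significant = estimates[np.abs(estimates) > 1.96 * se]

exaggeration = np.mean(np.abs(significant)) / true_effect  # type M error
wrong_sign = np.mean(significant < 0)                      # type S error

print(f"significant estimates exaggerate the true effect by ~{exaggeration:.0f}x")
print(f"fraction of significant estimates with the wrong sign: {wrong_sign:.2f}")
```

With these numbers, only a few percent of the simulated studies reach significance, and the ones that do overstate the true effect by an order of magnitude or more (a “type M” error), while a substantial fraction get the sign backwards (a “type S” error).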

Apparently we all need to check our Google Scholar profiles for fake papers. Yes, really. Call me old fashioned, but this is an illustration of why I prefer to rely on Web of Science.

The ethnography of Wikipedia. Confirms my impression from previous discussions we’ve had. (ht Marginal Revolution)

And finally, a teaching prank: start an analogy, and then leave the TA to finish it. :-)

From Meg:

I love this post from SciCurious in response to my post on keeping perspective and the #myworstgrade hashtag. I wish all the students struggling in my class right now would read it!

Keeping Perspective

We are at that point in the semester where many students are incredibly anxious about their performance in courses. This is especially true for first year undergraduate students. One aspect of teaching Intro Bio in the fall semester is trying to help students manage the stress of transitioning to college. For many of these students, this semester has marked the first time they have ever received a C on a major assignment (such as an exam or paper), and it can be very, very hard. I get it. I remember very well what it was like to struggle as a freshman.

I went to a small high school that I loved, but that had pretty poor science instruction overall. It quickly became clear in my science classes that I was woefully underprepared. Like many students, I hadn’t needed to develop good study skills and habits in high school, because the work was easy. And then there was the cultural shift – my graduating class had 36 students. My sister’s had just 9! So, going from a high school of 100 students total to an Intro Chem lecture hall with 300 students (and that was just one of three sections!) was really overwhelming. I was pretty clueless.

In my first year, I got a C+ in inorganic chemistry. And, frankly, that wasn’t such a bad grade, considering how far behind I was coming in, and that I was pretty sick that semester. At that time, I didn’t panic about its effects on med school or grad school, because I knew I didn’t want to go to med school, and grad school wasn’t on my radar at all. I also, though, had the perspective provided by my older sister. She had gone to the same high school and college I did, and had gotten a C in her first semester Intro Bio course. When she got that grade, she was sure she wasn’t going to get into med school. In the end, she had no trouble getting in, and she’s now a successful family medicine physician who loves her work. I often tell students about my sister and myself at this point in the semester, because many of them really, truly believe that a single bad grade will cut off career options. I can also tell them, based on my experience on my department’s grad admissions committee, that it is absolutely not true that one bad grade in a STEM course will prevent you from getting into grad school in the sciences.

And, based on responses to my tweets about this, I am far from alone in having done poorly in a science class but then gone on to a successful science career. I’ll put several of them at the end of this post. They are great for putting things in perspective – there are lots of us who had bumps (sometimes big ones!) along our path. This series of posts from SciCurious is particularly worth reading, in my opinion:

On a related note, I also struggle to keep my own perspective during these times. It can be so easy for me to take on the students’ stress and anxiety and become anxious myself. Plus, as I will cover in a future post on flipping the classroom, this semester has felt like trying to sprint a marathon, since we’ve done a major overhaul to the class. So, I am already a bit frazzled when interacting with my students. The frazzlement (pretty sure I just made up that word!) comes because, inevitably, there will be some slides with typos, or one question on a given quiz that was confusing. I absolutely HATE when these things happen. Like many (most?) academics, I have high standards for myself, and hate making mistakes. But, rationally, I know that, if I’m writing 100 quiz questions a week (and, yes, that number is correct), there will be some mistakes. So, I need to have some better perspective for myself: I very much want to get an “A” in teaching, so to speak. That is, I want to be a good, engaging, effective teacher. But that does not mean that I’m not allowed to be human and make mistakes. So, just as a single bad grade (or even a few!) doesn’t mean for my students that their dreams of a career in science or medicine are dashed, for me, a mistake in a lecture or on a quiz doesn’t mean I’m not a good instructor.

Perspective. It’s useful.


Related Post:
Hat tip to Tanya Noel for pointing me to this post on a related topic.


Here are some of the tweets. Tweet your own using the #myworstgrade hashtag:

Please complete the Dynamic Ecology reader survey! (UPDATEDx2)

UPDATE #2: Responses have slowed to a trickle, so the survey is now closed. Thanks to everyone who completed it, results next week!

Two years ago, when this blog was only a few months old, we surveyed y’all to learn something about who you are, and get feedback on how we can improve Dynamic Ecology. That was really valuable to us. You might be surprised to hear this, but we don’t have that much information (and hardly any non-anecdotal information) about who reads this blog or what they think of it. Having a bit of information helps us justify our blogging to our employers and funding agencies, and helps us get better.

We’ve changed in the last two years, and our audience has grown a lot. So we’re doing a new reader survey. Please take about 2-4 minutes (which is all the time it should take) to complete the anonymous survey below. Please fill it out even if you’re not a regular reader; it’s much less helpful if only our biggest fans complete the survey. We’ll summarize the results in a future post. Thanks in advance for your help!

UPDATE: Thanks to a correspondent who pointed me to this resource, I’ve realized that the question on gender could’ve been structured better. I wanted to ask about this because the last survey indicated that our readership skewed heavily male. And I wanted to be inclusive rather than just limiting the options to “male” and “female”. But just including a “transgender” option wasn’t the best way to be inclusive. In retrospect, instead of “transgender” I probably should’ve gone with an “other” option letting respondents fill in their preferred gender identification. That’s what I’ll do on future surveys.

Friday links: female ESA award winners, #overlyhonestcitations, academic karma, and more (UPDATED)

Also this week: George Scialabba vs. depression, Andrew Gelman’s thoughts are worth the wait, baseball player vs. evolution, and more…

From Meg:

Frontiers in Ecology and the Environment had a piece by Chris Beck et al. on women and underrepresented minorities in the Ecological Society of America. Having posted about a lack of women award winners before, I found WebTable 2.0 particularly interesting:


The Eminent Ecologist, MacArthur, and Mercer awards have skewed male, while the Buell, Braun, and Distinguished Service awards have skewed female in recent years. Their analysis doesn’t look at the newer ESA Fellows, but last year only one of the 12 fellows was a woman. I’ve been working with others (including Gina Baucom and Pleuni Pennings) to make sure that more women and underrepresented minorities are nominated for awards this year. (ht: Cat Searle)

I enjoyed Terry McGlynn’s response to the shirt worn by the scientific head of operations for the European Space Agency (the other ESA!)’s Rosetta Project. I also thought this tweet was worth thinking about more:


Like him, I would like to think I’d have said something. But it’s a good reminder that we need to speak up in these situations, even if it might make us uncomfortable. (UPDATE: Jeremy Yoder passes on the news that the guy in question, Matt Taylor, has issued a heartfelt apology.)

From Jeremy:

Essayist George Scialabba tells the story of his four-plus decade battle with depression through the notes of his doctors and psychiatrists. A sobering read for someone like me, who’s been fortunate not to have had to deal with depression. Crooked Timber comments on the piece.

A few years ago Owen Petchey and I suggested that authors should have to “pay” for reviews of their papers by performing reviews themselves, using a notional “currency” called PubCreds. Subsequently, others have hit on the same basic idea and are starting to turn it into a reality. I just stumbled across another such effort: Academic Karma. From a glance, it looks to be more or less exactly like PubCreds, except that any authors, reviewers, and editors who want to participate do so voluntarily. Very early days, but worth watching. Relatedly: here’s some data on whether ecologists currently review in appropriate proportion to how much they submit.

The research productivity of newly-minted economics PhDs is highly skewed, with a small fraction of people producing a large fraction of the high-profile papers. This seems like one more bit of evidence for William Shockley’s “hurdle model” of scientific productivity. (ht Economist’s View).

Andrew Gelman’s belated comments on that experiment manipulating the emotional content of people’s Facebook feeds are better than any comments I saw at the time.

#overlyhonestcitations: Ethology just published a paper containing the following phrase where a citation should have been:

should we cite that crappy Gabor paper here?

Well, this is awkward. Especially since Caitlin Gabor knows and has published with some of the authors. And in a sign of the times, this story has now gone viral and has been splashed on popular general news sites like Vox. Which seems like kind of a big penalty for an embarrassing but minor mistake. Because let’s be honest–everyone has negative opinions about some papers, and that’s perfectly fine (indeed, it’d be very worrisome if it were otherwise). So there’s a part of me that’s happy to have a chuckle by linking to this–and a part of me that’s a little scared because “there but for the grace of God go I.” (ht Jeff Ollerton)

I’m only linking to this for the benefit of longtime reader and commenter Jim Bouldin: Curt Schilling vs. evolution. On Twitter. Apparently, you don’t need to click through because it was exactly like what you’d imagine.

@ResearchMark (as in Mark Wahlberg) is only one joke. And more or less the same joke as the now-defunct biostatistics pickup lines Tumblr. But it’s still funny. In a similar vein: @AcademicBatgirl. :-) (ht Simply Statistics)

Jeremy Fox seeking applicants for the Killam Postdoctoral Fellowship

I am seeking applicants for the University of Calgary Killam Postdoctoral Fellowship. This is a competitive award, funded by the Killam Foundation. The purpose is to allow the Fellow to develop his or her own research program, under the guidance of a faculty sponsor. The Fellow also is expected to contribute to the intellectual life of the university by giving a research seminar and possibly doing a bit of guest lecturing.

An ideal applicant for my lab would be someone with strong mathematical, computational, and/or programming skills, whose interests overlap somewhat with mine: question-driven work in population, community, and eco-evolutionary dynamics. My hope would be that you would pursue your own research as well as collaborating with me on some project of mutual interest (which need not be a project currently ongoing in my lab). That’s how my former postdoc Dave Vasseur and I worked, and it was great for both of us. My lab certainly would be a good fit for someone looking to learn, or continue with, lab-based model systems like protist microcosms or bean beetles. It’s a two year award (see below), so the ideal candidate will be someone who can complete a project within that timeframe. But I’m absolutely open to applications from anyone who broadly shares my interests. For more on current research in my lab, see my homepage.

Here’s a list of the ecologists and evolutionary biologists in my department. It’s a good intellectual environment. Calgary is a city of just over 1 million people, close to the Canadian Rockies with all the opportunity for recreation that that implies.

The Killam postdoc is a university-wide award, open to scholars in all disciplines, and only one is given out each year, so competition is keen. In the past, I’ve had one candidate receive (and decline) the award, and another make it to the final stage of the multi-stage evaluation process. So I have a good sense of what’s required to be competitive. The application process is not onerous, so if you’re even slightly interested I encourage you to contact me.

Under the rules, I can only support one candidate for the award. In order to allow sufficient time for me to choose among potential applicants and decide who to support, please contact me ASAP, and in any case by mid-Dec. to ensure full consideration. When you contact me, please tell me something about your research interests and what sort of work you’d see yourself pursuing in my lab. Please include a cv and contact details for three references.

Key details are below; click the above link for full details.


Eligibility

You need to have earned your Ph.D. after Sept. 1, 2012, or earn it before Sept. 1, 2015. You do NOT need to be Canadian or a resident of Canada, and there is no preference for Canadians or Canadian residents when applications are evaluated.

Term, salary, benefits

It’s a two year award, which pays $45,000 CAD/year. There’s also $6000 for research and/or moving expenses. Also Alberta Health Care and extended health benefits. There’s some possibility a small top-up to the salary could be negotiated.

Start date

You have to start between May 1 and Sept. 1, 2015.

Application deadline

The application, which includes a research proposal of a couple of pages or so, is due on Jan. 15, 2015.

Postdoc leave policies (guest post)

Note from Jeremy: This is a guest post by Margaret Kosmala, a postdoc in Organismal and Evolutionary Biology at Harvard.

Note from Margaret: This is the first post in a mini-series examining the enormous variation in U.S. postdoc leave benefits. While most postdocs do not consider benefits packages when choosing a position, the benefits available can greatly affect quality of life, and sometimes mean the difference between staying in academia and leaving it — especially for caregivers and those with chronic health conditions. I surveyed 21 U.S. universities with highly ranked ecology programs (according to The Chronicle of Higher Education and U.S. News and World Report) and the U.S. federal government by looking up postdoc benefit information on their webpages, and present the data (with commentary) here. (Note that this information is up-to-date as of July 2014. Please provide updates and corrections in the comments. I also welcome data about other universities and will add them to the charts if full info is provided.)

When you become a postdoc, you jump from being a student to being a contract employee or self-employed fellowship holder. The implications for taxes and benefits are quite important, but rarely discussed. In this post, I will talk about one small aspect of being an employee postdoc: paid leave.

Surprisingly, leave policies for postdocs vary quite a bit from university to university, from minimal to quite generous. Employee postdocs are expected to be working eight hours each day Monday to Friday, just like any other full-time employee. Some universities (and the U.S. government) put postdocs on the clock like other staff and require them to track and report hours on timesheets. At other places, postdocs are treated more like professors and are expected to be working full time, but do not have to fill out timesheets. In these latter cases, sometimes postdocs are supposed to record their time off even if they don’t do hourly timesheets.

Leave can be granted in lump amounts to be used throughout the year, or else can be accrued in small amounts each pay period. (Postdocs are frequently paid monthly, but some are paid bi-weekly.) Sometimes leave can be carried from one year to the next, and sometimes there’s a cap on the amount of leave that can be accrued. Sometimes leave can be used immediately, and sometimes there’s a minimum length of employment before leave can be used.

Most universities (and the federal government) provide sick leave and vacation leave (which goes by many names, such as “annual leave”, “personal leave”, “paid time off”, etc.). Six of the 21 universities I surveyed also provide two to three “personal” days. As far as I can tell, these personal days differ from vacation in that they can be used without getting permission; in most cases, vacation days technically have to be approved in advance by the supervisor.

Sick leave (table of postdoc sick leave by university)

Sick leave is the most standard type of leave, and typically allows postdocs to take leave with pay when they or immediate family members are sick, injured, or have medical appointments. Many universities also allow the use of sick leave for pregnancy, post-pregnancy recovery, adoption, and for bereavement. A couple universities I surveyed (University of Chicago, Washington University in St. Louis) have severe sick leave policies* that allow sick leave to be taken only for personal illnesses and no other purpose, including caring for sick dependents.

Universities (and the federal government) generally grant postdocs 8 to 15 sick days per year. A few universities don’t offer standard sick leave, but instead have alternative schemes. Indiana University lumps together sick leave and disability leave and allows postdocs to take up to six weeks of sick/disability leave per year. Cornell doesn’t have sick leave at all and just asks postdocs to take a reasonable number of unrecorded brief absences as needed for illness and injury. The University of Florida lumps sick leave and vacation leave together as “postdoc leave,” of which postdocs receive just 16 days per year total.

Vacation (table of postdoc vacation leave by university)

Vacation leave is more variable across universities than sick leave. But it tends to be more generous than in industry, with 3 or more weeks of leave per year the norm (in addition to holidays). At some institutions, the amount of vacation leave increases the longer the employee is there. For my survey, I only considered vacation leave in the first year because most postdoc appointments are short and most policies only increase leave amounts in the third year of employment or after. Vacation leave can typically be taken whenever the postdoc wants – subject to the approval of the supervisor. However, at the University of Chicago, employees (including postdocs) are expected to take vacation during the four weeks between quarters; there is no provision to take leave outside of these periods. And Colorado State does not give postdocs any vacation at all! The most generous universities (University of California and Princeton) offer 24 days of vacation per year – that’s almost five weeks! But I wonder: how many postdocs actually take that much time off?!

* Real, actual sick leave policies (I am not making this up):

University of Chicago: “Sick leave shall be used in keeping with normally approved purposes, including personal illness; medical appointments; and, childbearing. It may not be used to care for others who are ill.”  Let me paraphrase: the University of Chicago’s sick leave policy forbids you from leaving work to take care of your ill children – unless they’re so ill that they die, in which case you can take sick leave to attend their funeral (since sick leave CAN be used for bereavement).

Washington University in St. Louis: “Sick leave may only be used for the illness of the postdoctoral appointee only [sic]. Time away to care for an immediate family member’s illness is considered vacation time.” Let me enact a scene. Place: St. Louis. Time: Winter.

Postdoc1: “I had a great time on vacation, swimming in the blue Caribbean waters… how was your vacation?”

Postdoc2: “It was swell; I spent the first half comforting my young son who was vomiting every three hours, and the second half worrying that my baby was going to stop breathing because her croupy cough was so terrible. I can’t wait for next year’s vacation!”

Friday links: everything old is new again, and more

Also this week: how to become really highly cited, robot statistician, universities vs. brands, and more.

From Jeremy:

We’re a bit late to this: the 100 most cited scientific papers ever. See how many you can guess before clicking through. Hint: They’re mostly methods papers. Hint #2: There are no ecology papers (not even close!). A few papers on statistical methods and software packages for phylogenetic estimation are the only evolution papers that make the cut. Of course, really influential work is rarely cited, as it’s just part of the background knowledge every scientist is supposed to have. If that weren’t the case, R. A. Fisher would probably be the most-cited scientist ever.

Speaking of citations: according to this preprint, a (modestly) greater fraction of citations now go to papers >10 (or 15, or 20) years old than was the case in 1990. Most areas of scholarship show the trend, albeit to varying degrees. The study’s based on Google Scholar data; I’m not sure if that creates any artifacts. (ht Marginal Revolution)

How to spot the holes in a data-based news story. Very good reading for undergrad stats classes. Based on compelling real world examples. Particularly good on driving home the points that correlation is not causation, and the reasons why statistically controlling for confounding variables often is ineffective.

Universities are not brands.

Good tips for giving a good talk. Includes some advice I haven’t seen elsewhere.

Jeremy Yoder has an interview with the founders of Haldane’s Sieve, a website that promotes and discusses preprints in population and evolutionary genetics. Always interesting to hear from folks who are experimenting with new ways of doing things. Glad to see that they recognize a key virtue of the current pre-publication peer review system: it ensures that at least some close attention is paid to every paper. I’m pessimistic that there’s any way to prevent serious attention concentration post-publication (see also here). Indeed, isn’t Haldane’s Sieve itself a mechanism for concentrating post-publication attention?  I was also interested in their perception that the scientific publication system is mostly an overly-critical, “down-voting” system. That might be true of pre-publication review, but if anything I think the opposite is the case post-publication. Post-publication, bandwagons and zombie ideas far outnumber Buddy Holly ideas, and even clear-cut mistakes continue to attract attention and citations much longer than they should. So if you want a scientific publication system that achieves some ideal balance of “up voting” enthusiasm and “down voting” criticism, well, maybe our current system isn’t too far off the mark? Our current pre-publication review system also has the advantage that it is a system, with agreed, enforced rules and norms that at least in principle (and I think for the most part in practice) apply equally to everybody, and that everybody knows they’re signing up for when they start doing science (see here for discussion). The next person who figures out what the rules and norms of post-publication commenting should be (in particular, what the rules and norms for critical comments should be), gets everyone to agree to them, and figures out how to enforce them, will be the first.

A while back I discussed the suggestion for a “deterministic statistical machine”–basically, statistical software that would automatically choose an appropriate analysis and then do it for you. It would be aimed at users who don’t know statistics, much as premade meals are aimed at people who can’t (or won’t) cook. Now someone’s invented such a machine.

What to do if you’ve been denied tenure, or are about to be. Related: Meg’s old post on how to navigate the tenure track and maximize the odds that you never need to click that link (although once you have a tenure-track faculty position, the odds are very much in your favor).

Literary Starbucks. :-)

And finally: a bear misunderstands the Wildlife Photographer of the Year competition. :-)

Guess the famous ecologist from the wordle! (UPDATED)

Guess the famous ecologist from wordles made from abstracts of a bunch of their (fairly recent, first authored) papers!

(UPDATE: Wow, that was fast! Took less than half an hour for commenters to combine to identify all four! :-) I won’t put the answers in the post, in case anyone else wants to have a go.)




[wordle #1]

[wordle #2]

[wordle #3]

[wordle #4]
100 Internet Points for the first correct guess of any of these (which shouldn’t be hard; #3 in particular should be easy). One year’s free subscription to Dynamic Ecology for the first person to guess all four. :-)

Anybody out there teaching a successful intro biostats course? Tell us about it!

This is a bleg.* A while back I asked your help in choosing a textbook for an introductory biostats course I co-teach. We settled on Whitlock & Schluter, which fits our needs quite well. The course covers a pretty traditional set of topics–basically, most (not all) of what’s in chapters 1-17 of the textbook.

Now I need to ask your advice again, to help my co-instructor and me improve the rest of the course. There are a couple of big things about the course that I would like to improve:

  • Student grasp of the material. I think we do ok on this front, but I’d like to do better–to get more students pushed higher up Bloom’s taxonomy, if you want to think of it that way. Get them beyond just memorizing stuff.
  • Student satisfaction and engagement. Not that these are ends in themselves–ultimately what I care about is that students learn the material, even if they don’t enjoy it. But we have various lines of evidence that many students just aren’t “into” the course, as compared to, say, how much they’re into their biology courses. The worry is that, if students aren’t sufficiently engaged with the course, at some point it starts affecting their performance. Further, even if student satisfaction and engagement aren’t ends in themselves, it sure would be nice if they all came out of the course feeling glad that they took it, excited about statistics, eager to learn more statistics, etc.

I’m not sure either of these can be improved substantially by tweaking either content or pedagogy. We’ve already done numerous tweaks to both over the past couple of years (and have ideas for more tweaks). But on the other hand, I’m reluctant to put in all the effort required to start completely from scratch (say, by flipping the classroom, and/or radically cutting back on the breadth of material for the sake of improved depth of understanding of core concepts) unless I’m confident that the result will be a big improvement.

My dream is that somebody out there is teaching a successful, popular intro biostats course, hopefully in a context similar to ours**, so that we can just shamelessly copy it! :-) But failing that, any success stories you have would be welcome. Tell me what you do in your intro biostats course that really works. And if you’re struggling with the same issues we’re struggling with, please tell us about that too.

*Blogging beg.

**A summary of that context: It’s a large class–130 students who meet together for lectures and are divided among 6 lab sections. The students are mostly in their second year. Most are majoring in biology or some subfield thereof. Many take the course because it’s required for their major, but many others take it for other reasons. The labs are computer labs, which have the dual function of teaching students the basics of R, and teaching them to apply (and thus, better understand) the lecture material. We’re a large public research university, and so there’s a fair bit of among-student variance in any attribute you care to name.

A hypothesis about why some ecologists don’t like “pure theory”

As we’ve discussed several times (e.g., this comment thread), ecologists as a whole may be increasingly skeptical of the value of “pure theory”, meaning theory that is at best only loosely connected to “reality” or “nature”. The evidence for that is anecdotal, but for the sake of discussion let’s assume it’s a real trend. What’s driving it?

In the past, I’ve diagnosed it as an empirical/theoretical divide, arising because empiricists and theoreticians have different motivations and backgrounds (see here, here, here and here, for instance). Or perhaps it’s because technical advances in statistics and software have made it easier to link models and data, so maybe data-linked modeling is crowding out pure theory. But lately I’m wondering if there’s something else to it as well. After all, Brian’s hardly a math phobe, and would never insist on doing science one way rather than another, and yet even he writes:

But as a prescription, models ultimately do need some smash against reality (even the “toy” or strategic models like May advocated)…If they never smash against reality, then I would have to agree they’re not advancing science.

And while Brian has a broad understanding of the phrase “some smash against reality” (taking it to mean much more than just, e.g., having parameters that can be estimated from data), I still think his view contrasts with that of Hal Caswell (1988):

Perhaps the greatest obstacle to understanding the role of theory is the failure to recognize that theoretical studies attempt to solve theoretical problems, and that these problems are a legitimate part of ecology.

Theoretical problems are those arising from a body of theory (or sometimes from the lack of one). One important theoretical problem is: “Is this theory really true in nature?” (Are more complex ecosystems really more stable?) This problem cannot be solved by theory alone; it requires experimental or observational tests of the predictions of the theory, and the answer is always fallible. However, other, equally important theoretical problems arise any time a theory begins to develop. Some of these problems ask questions about the theory itself; they cannot be answered by empirical investigation.

Caswell’s examples of “theoretical problems” (what I call “pure theory”) include exploring the consequences of alternative assumptions, demonstrating connections between apparently-unrelated theories, and identifying the simplest possible assumptions capable of producing specified results. He was responding to an earlier paper by Dan Simberloff in which Simberloff criticized theory “as remote from biology as faith-healing.” But Caswell could equally well have been responding to others. For instance, back in 1934 Nikola Tesla wrote,

Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality.

Even theoretical ecologists sometimes express the same worry. Here’s Simon Levin writing in 2012:

The legacy of Volterra and Lotka has not been universally positive, although this is certainly not their fault. The attractive simplicity of the model equations proved irresistible to mathematicians eager to add bells and whistles, with little concern for biological relevance, and to explore their tortured implications in painful detail. This has produced a large literature, harmless except for its effect on perceptions of the field of mathematical biology, and its obfuscation of the cryptic nuggets that sometimes lie within.

I have a hypothesis as to the worry behind comments like Brian’s, Simberloff’s, Tesla’s, or Levin’s. The worry is that, to the extent that theory is unconnected to nature, we have no externally-imposed criteria by which to judge its merits. So doing theory–deciding which theoretical problems are most important or interesting, which approaches are most fruitful, etc.–just becomes a matter of following conventions. And those conventions are arbitrary, intrinsically no better or worse than any other conventions we might have chosen instead. Or worse, maybe there aren’t even any conventions, maybe it’s just anything goes–assume whatever you want (doesn’t matter what, or why), and see what follows. Pure theory on this view is a sort of pointless, free-floating activity, valuable only to its practitioners, and only because they happen to enjoy it. It’s not (just) that it’s remote from biology. It’s remote from anything besides itself.

This is an understandable worry about “pure” theory. It certainly is possible for a collective activity to devolve into pointless navel-gazing, or at least mere games-playing, if it doesn’t have to obey any rules and goals except those that its participants arbitrarily decide to impose. Think of Calvinball, or more broadly any hobby, game, or sport.*

But this is a worry about any human activity, not just theoretical ecology, isn’t it? Conventions and criteria for pretty much anything humans do ultimately are human constructs, at least in large part, aren’t they? For instance, even if you’re doing “purely” empirical research the identity of the interesting and important questions isn’t God-given. Heck, it’s not even something we all agree on. We always have to make judgment calls about what questions to ask and how to answer them, on grounds that others can appreciate if not necessarily agree with.

For instance, probably many birders and ornithologists would say that Mallards are boring birds. They’re everywhere, and they look the same everywhere. But as Andrew Hendry points out, doesn’t that actually make them rather unusual and interesting? That is, the common judgment that Mallards are boring is just that–a judgment. It’s not totally arbitrary–there are understandable reasons for thinking Mallards are boring. But nor is the boringness of Mallards some purely objective external fact that we discovered. Or think of the infamous difficulty of justifying any fundamental research, and distinguishing good fundamental research from mere self-indulgence by smart people with obscure interests. To my eyes, debates within fundamental empirical ecology about what questions or approaches are most worth pursuing don’t look all that much different than, say, debates within pure mathematics as to what branches of mathematics are most worth pursuing.

Yes, it’s possible for pure theory to devolve into pointless study of equations nobody has any good reason to care about. But I think any human activity runs that same risk. So I don’t know that “pure theory” should be singled out for concern here.

What do you think? Looking forward to your comments.**

*Warning: long footnote in which I make superficial analogies to lots of stuff that is not ecology. It’s quite possibly the most “Jeremy” footnote ever:

It’s perhaps worth noting that analogous worries crop up in all sorts of areas. Many human activities have been argued to become pointlessly self-referential if they’re unmoored from any external criteria of merit. Philosopher Dan Dennett once advised philosophy grad students to avoid studying “artificial puzzles” of no true significance just because they’d been studied by other philosophers. Modern art has been thought pointless because it’s hard to identify agreed-upon external aesthetic criteria by which to evaluate it. A lack of external criteria of merit seems to imply that merit is purely subjective (think of Duke Ellington’s famous remark about how to identify good music: “If it sounds good, it is good”). Closer to home, when you worry about bandwagons in science, you’re worrying that scientists are deciding to pursue research program X just because everyone else is too. Rather than scientists choosing what to work on based on putatively “external” criteria like what’s interesting or important, they’re choosing based on an “internal” or self-referential criterion, namely what other scientists are working on. It’s similar to worries about what happens when people start substituting an index or symptom of something for the thing itself. For instance, judging a movie by how much money it makes, which can lead to studios trying to make movies that will make money rather than making good movies. Or think of picking stocks by just copying the choices of other investors, which leads to market bubbles and crashes if enough people do it, and which would render the stock market non-functional if everyone did it. It’s often argued that politics goes off the rails when politicians start seeking power as an end in itself rather than as a means to the end of achieving some substantive policy goal.
But on the other hand, just because an activity operates (or appears to operate) according to “internal” or “self-referential” conventions and criteria doesn’t necessarily mean it’s pointless. Mathematics has sometimes been criticized (or praised) as a useless human invention. We just specify arbitrary axioms, and then derive their consequences. Think of Kronecker’s claim that “God made the integers, all else is the work of man.” But I don’t think those criticisms of mathematics hold much water (if only because even the most seemingly-pointless bits of math keep turning out to be useful; think of how number theory turned out to be essential to cryptography). Or think of the common law, where the law is defined recursively, i.e. via precedent rather than by legislation. So I don’t think you can show that pure ecological theory is pointless merely by pointing to its focus on theoretical problems.

**Because I might be totally off base here. Indeed, I predict that the first comment will be “Sorry Jeremy, but what the hell are you talking about?” :-) Which is fair enough. I spent an entire afternoon struggling to say what I wanted to say, and I’m still not sure I said it very well. Which means it’s not clear in my own head. So consider this an invitation to help me think more clearly.