Meg’s recent post on #365papers inspired lots of questions and comments (and other blog posts) about what kinds of papers to read, how to read them (skim vs. in detail), how to choose them, etc. But it led me to wonder whether there is a consensus opinion on the even more basic question of how much time we should be spending on reading papers (and scholarly books such as monographs or other works aimed at graduate students and above).
I’ve been thinking a lot lately about the term biodiversity. Not so much its scientific definition as its usage in public discussions. No doubt this is because I am increasingly using the word biodiversity to describe my own work as I move in more applied directions. And a few weeks ago I got to spend over an hour with a reporter talking about the history and implications of using the term biodiversity. She asked good questions and forced me to get clear about what I really think. So I’ve got a lot of thoughts rattling around in my brain on the usefulness of the term “biodiversity” that I would like to discuss with the community.
Biodiversity is a really important term that is being woven into the international regulatory framework at the moment. But biodiversity is also an emotion laden term in ecology these days. So … I’m going to adopt the philosopher’s trick and talk about something completely different for a bit (pizza!) and then circle back and tell you I was really talking about biodiversity all along.
There is a great deal of discussion on the internet these days about impact factors of journals (e.g. Stephen Heard’s take, or the tongue-in-cheek response to fluctuating impact factors at MEE in various years). Most people are quick to point out (very correctly) that impact factors were designed to measure journals, not papers or scientists. But what about when you are choosing which journal to submit your own hard-won manuscript to? Then surely journal metrics are relevant. But if you could only know one thing about a journal in deciding where to submit your paper, what would it be? I would argue that you should think most about fit, and the rest (including impact factor) will take care of itself.
A meme running through many of the comments on Jeremy’s recent post on salesmanship in science was that you could be a wonderful scientist but a terrible communicator of your science, that you would suffer for this career-wise, and that this would be unfair. This came as a surprise to me. I have a hard time thinking of anyone I would call a great scientist but a terrible communicator. Now, they may have stage fright and give a bad talk but write great papers (or vice versa). And they may be bad networkers or bad self-promoters. But the stereotypical genius with groundbreaking ideas who drools and can’t put two words together, let alone coherently communicate what they’ve done and why it is important? No. Which leads to the deeper, more philosophical question: if there is “good science” inside somebody’s head and it can’t get out, is it science? Hence the allusion in the title to the zen koan about a tree falling in the forest. Or if somebody shipwrecked on a desert island does research for 10 years and then dies, and their notes decay before they are found, have they done science?
Teaching a graduate statistics class, I end up as a statistical consultant a lot. One of the questions I get most often is should I treat this as a fixed or a random effect? This topic seems to be shrouded in mystery. Indeed when I came of age statistically in the dark ages (=20 years ago), the main distinction given between a fixed and a random effect was philosophically based: are you measuring a few specific instances of interest in themselves (=fixed) or a few randomly chosen instances interesting only as representatives of a population (=random). This is not a bad approach, and seems clear to me, although I have to confess I have not had great luck teaching this distinction.
The modern distinction seems to be: am I interested in this variable (=fixed) or is it just a blocking factor/control variable* (=random)? To my mind, this is not a great heuristic. Among other things, I regularly see people who have a continuous variable they are not interested in, i.e. a control variable (e.g. age), and so they think it has to be a random effect and cannot figure out the syntax to force R to treat a continuous variable as random (hint: you can’t).
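A rough illustration of that last point, sketched in Python with statsmodels rather than R (the data and variable names here are invented for the example): a continuous control variable like age simply goes in the fixed-effects formula alongside the variable of interest, while only a categorical grouping factor (here, site) can be supplied as the random-effect grouping.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_sites, n_per = 10, 20
site = np.repeat(np.arange(n_sites), n_per)

# Simulated data: a random intercept per site, plus two continuous predictors
site_effect = rng.normal(0.0, 1.0, n_sites)[site]
age = rng.uniform(1, 10, n_sites * n_per)        # control variable (continuous)
treatment = rng.normal(0, 1, n_sites * n_per)    # variable of interest
y = 2.0 * treatment + 0.5 * age + site_effect + rng.normal(0, 1, n_sites * n_per)

df = pd.DataFrame({"y": y, "treatment": treatment, "age": age, "site": site})

# Both the variable of interest AND the continuous control are fixed effects;
# only the categorical grouping factor (site) enters as a random effect.
model = smf.mixedlm("y ~ treatment + age", data=df, groups=df["site"])
result = model.fit()
print(result.fe_params)
```

The point of the sketch is that there is no slot for a continuous variable on the random side: "random" here means the grouping structure, not "a variable I don't care about".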
I have been thinking a lot lately about how the modern pedagogy movement has been focused almost solely on what have traditionally been indoor lecture courses. I won’t claim to be an expert on the modern pedagogy literature, but I haven’t seen anything, and couldn’t find anything with some quick Googling, that applies modern university pedagogy theory to field courses (recommendations welcome in the comments).
I have commented in the past that, while I am annoyed by the unthinking group-think around the modern pedagogy movement of active/flipped/inquiry-based/peer-instructed/just-in-time/etc. classrooms, and wonder if we aren’t missing some elephants in the room with our recent focus on this area, it seems pretty clear that it is a good thing we are now having conversations about classroom pedagogy and that a good toolkit is emerging.
So what exactly would the modern pedagogy movement think about undergraduate field courses in ecology? Are they already doing the right thing? Or are they as much in need of innovation as lectures?
I think the answer depends. As both a student and a teacher, I have been involved in two very different types of field courses.
The first type is often attached to an ecology class or is a stand-alone field course (OTS being a canonical example). It involves going outside and conducting experiments or observations. I think such courses have already naturally tapped into the best of modern pedagogy. They are inquiry-based – they have students working on their own to answer a question. With students working in small teams, a fair amount of peer instruction happens. Since students do their own measurements, analysis and write-up, it is active learning. They’re certainly hands-on and real-world relevant. If the students do reading at home to prepare for the field trip, you could argue they are flipped, although I don’t think too many ecology labs have this structure, and I don’t think it’s essential to tick off every possible box of modern pedagogy.
The second type of field class I have been part of is usually attached to an -ology class (ornithology, entomology, mammalogy or plantology (aka botany :-) ). These classes often focus on memorizing specialized vocabulary for describing the anatomy and physiology of the group of interest (often coming from lab dissections) and then applying these terms to memorize and spot-identify dozens of taxonomic groups. These are really memorization-focused classes, which means they fall really low on the Bloom hierarchy (recapped in my earlier post). And they have a lot in common with lectures, at least as traditionally taught – the expert stands and pontificates while the students look on – really just a lecture with a live specimen in hand and no walls around the classroom. Now, that last part – a live specimen and no walls around the classroom – is pretty darn cool. In fact it’s great. It’s why I became an ecologist. But does it really change whether it’s a lecture? And does it really change how much students learn? And how much they retain?
I personally am in the process of thinking through what this second type of class would look like in a modern pedagogy style. How high up the Bloom taxonomy would you aim? What general principles/concepts would you even try to teach in, say, a botany or entomology class? Would peer-instruction work when none of the students have seen the organism before?
So far the things I have thought are:
- How would you replace those outdoor lectures? The best alternative I’ve thought of is turning students loose in small groups with a key or field guide, trying to figure out on their own what they’ve found. You will get through far fewer species (maybe only 1/4 as many), but I think the material will be retained better, and students will learn the process of identification instead of memorization.
- In a weird sort of flipping, I think you may need to spend a good chunk of lecture preparing students for the time outside. E.g., put up a picture of leaves from five different maple species and have the students spend time in small groups developing a key to distinguish these species.
- Beyond species ID, emphasize things students can observe with their own eyes (e.g. growing in wet vs dry soils, associations with other species, elevational and successional gradients) rather than more abstract but cool things like allelopathy, nitrogen fixation.
- Deemphasize terminology. You’re not going to be able to ID trees without knowing alternate vs opposite, simple vs compound leaves. But do you need to know cordate vs lanceolate leaves or is a field guide (or key) with pictures good enough? And can’t we just say heart-shaped and grass-leaf-shaped?
Some of this probably differs drastically between undergraduates and graduates as well. A graduate student doing vegetation analysis probably does need to know what cordate means to be able to use keys. And they probably do need to know all the species, not just learn the process of keying out.
Or maybe we should just abandon -ology classes. Perhaps everything should be an OTS-like class centered around inquiry and experiments, with students learning some species along the way? This seems extreme, even antithetical, to me, but if I am honest, it might be the full logical application of modern pedagogy to field courses – i.e. that -ology courses as currently structured should go away.
Those are my thoughts at the moment. What do you think? What do you think a modern pedagogy -ology class would look like? Or should such classes continue to exist? Would you try to move up Bloom’s hierarchy? What are the higher level principles you would try to teach? Which techniques out of active/inquiry-based/peer/just-in-time/flipped would you apply? What would the class look like? Would you do a graduate course differently?
As briefly mentioned previously on this blog, I have accepted the position of Editor in Chief (EiC) at Global Ecology and Biogeography. I, of course, think it is a fantastic journal (objectively it ranks top in my field and top 10 in all of ecology) thanks to the great work of outgoing EiC David Currie. As you might imagine, taking on this new role and my ensuing contract negotiations with the journal owner (Wiley) have caused me to think a lot about exactly what the job of EiC should entail. This is a question of current relevance not just to me but to all of ecology and science; the world of academic publishing is changing so quickly that everything in it is being rethought these days, including the role of the EiC. The recently announced move of the ESA journals to Wiley is a case in point. While this will not result in significant changes to the editorial staff or processes, anytime there is such major institutional change, roles and expectations will be revisited. I expect many of you have thought little if at all about the EiC at journals, but I intend to provoke you to think about it and am curious to hear your thoughts.
First a quick review for those less familiar with publishing (skip to the poll if you know all of this). Journals typically have an EiC and a panel of associate or handling editors (hereafter AE). The typical flow is:
- A paper is submitted electronically
- The EiC evaluates the paper for quality and goodness of fit and either issues an editorial reject without review or assigns it to a handling editor. These days the EiC editorially rejects 30-90% of all submitted manuscripts, with 50% being a quite typical number (publishing hint: cover letters didn’t use to matter much, but they are now critical to making it past this first screen)
- If the EiC decides to send it to review, s/he assigns it to a specific AE (publishing hint: recommending AEs who are expert in the topic of the paper is helpful, but the EiC knows the AEs quite well, so this is not particularly subject to gaming – further hint: your cover letter had better be snappier than your abstract, not just a rehash of it, because the abstract is the one other thing they will read).
- The AE may also choose to recommend an editorial reject without review, although typically this is much rarer than the EiC doing it (maybe 5-10% of all submissions).
- The AE provides a list of 5 or so potential reviewers (publishing hint: this is critical to the ultimate decision, but I have no clue on how to game this aspect of who gets picked as reviewers – I don’t think it can be gamed).
- An editorial assistant, increasingly often based at the publisher’s office, will contact the prospective reviewers until (usually) 2 people say yes. Sometimes it may take asking as many as 10-15 people (especially in the middle of the summer). In my experience, how hard it is to get reviewers to say yes says nothing about the quality of the paper – so don’t take it as a bad sign if you get a note saying there have been delays in finding reviewers.
- Once the reviews are back, the admin will contact the AE to submit a recommendation.
- The AE will read the reviews and should read the paper in full, then make a recommendation (the dramatic accept/major revision/minor revision/reject that everybody pays attention to, but also a summary of the reviews and a focused list of the most important, must-have changes that you should pay a lot of attention to).
- The recommendation then goes to the EiC, who makes the final decision. Most EiCs follow the AE's recommendation unless there are serious red flags, but a few insert their own evaluations into the process.
Some journals also have Deputy EiC – and at some journals these DEiC effectively act like fancy AEs while at other journals they are effectively co-EiC. Journals also have a managing editor who is responsible for the business side. In most society journals the managing editor reports to the society, but in journals owned by the publishing company the managing editor is part of the publishing company.
So, everybody who has ever submitted a paper is likely pretty clear on the roles of the reviewers and the AE. What exactly does the EiC do, or what should they do? I have my own opinions, which I will share in a few days in the comments, but I am curious: as a reader and author, what is it most important to you that the EiC devote her/his energies to? (Everything in the poll below is a job of the EiC, but obviously some are more important than others.) To put it another way, which features would make you more likely to submit to a journal if you knew the EiC was prioritizing time on them?
Please take the poll below. (Note: mss=manuscripts)
In what seems to be becoming my annual think-out-loud post about my fall teaching assignment (see last year’s post on community ecology classes), I am thinking about a field-oriented natural history course I’ll be teaching this fall and what assignments/evaluation tools I should use. More broadly, you hear a lot about best pedagogical approaches to classroom learning (including many great posts from Meg), but less about outdoor pedagogy. I think we all assume that since we’re ecologists this is obvious. Or maybe outdoor learning is so obviously the active, project-based, real-world learning we’re trying to bring into classrooms that we don’t have to worry about it. But really, outdoor pedagogy is pretty much “teach as we were taught” every bit as much as classroom teaching has been. I’ve increasingly been appreciating how much deep thinking is required to really get pedagogy right, and since I’m taking over a field course, I’ve been thinking a lot about my goals and how to align them with teaching and evaluation tools outdoors. I’d be really curious to hear your thoughts.
To make this concrete while still keeping it fairly generic: I am looking at a natural history course, the centerpiece of which is multiple half-day field trips to a variety of ecosystems, and I am looking for an integrative project that spans the semester. I am considering three different projects:
- Do your own research/experiment – this is fairly typical in the OTS-type model where you are spending a few weeks at a field station (I’ve taught such a course myself). Here you mentor students through the process of designing, executing and writing up a discrete piece of novel research. Pros – this teaches the scientific method, is fairly open-ended, and clearly requires stretching their critical thinking skills and at least one form of writing. Cons – many students aren’t really ready to do independent research as undergrads (especially lower level) and so often find this assignment more frustrating/intimidating than inspiring, and in some cases do such low-level work I’m not sure they learn much (or, worse, learn a very simplistic view of science); and it’s not particularly integrative (i.e. good at teaching the scientific method, bad at helping students make connections and insights in natural history)
- Do a digital specimen collection – this is also a fairly typical assignment in “ology” classes (I did one myself in my graduate days in an entomology class). Since my class cuts across many taxa (requiring many types of collection equipment) I would probably have this be a digital collection instead of a physical collection where students take photos, put them into a document and annotate each photo with species ID, location, and notes about the species. Pros – this reinforces the goal of learning to ID species, paying attention while outdoors and seems likely to be retained as a tool useful to students after they graduate. Cons – less integrative than the other two choices, although this comes down a lot to what and how much I make them write in addition to the photos.
- Write a natural history journal – I haven’t encountered this one as much but a colleague suggested it. The assignment basics would be: 1) pick a small piece of land, 2) study it in depth from the soil to the sky, 3) make repeated visits, 4) write 5 pages about this location and its dynamics and interconnections in the spirit of Thoreau or McPhee. Pros – very integrative, very open-ended, a lot of emphasis on writing which is good (although like most biology courses we’re not really set up to do extensive mentoring on writing). Cons – pretty risky to expect students to observe and write like Thoreau.
There are some course-specific constraints in my own mind for my personal situation (although I think they’re not untypical of many teaching situations): this is a lower-division undergraduate (200-level), largish (44 students vs 2 instructors) course, so there are more limited opportunities for mentorship than would be ideal. It is not a 2-week at-a-field-station type of course, so students will be doing this assignment very independently, on their own time, in the busyness of the semester (or, in some cases, not until the last minute). The course is also literally focused on natural history, not principles of ecology or such (we have a separate ecology class). You can, of course, share your thoughts in the context of these constraints, or I would be equally interested to hear your thoughts about the three options in your own context.
I personally have two main goals for this assignment: 1) to be integrative. By integrative I mean they will already have lab exams on species ID and lecture exams on the stages of old-field succession etc. I really want something different that makes them think big picture, have ah-ha moments of connection, and develop critical thinking and writing skills in addition to memorization. 2) just to have fun and inspire. It is shocking how little time the average ecology student spends outdoors in their 4 years (forestry and wildlife programs do a little better than biology departments, but still not great). For many students, this is likely to be their primary in-the-field exposure until their 4th year. I want them to feel inspired by the awesomeness of nature that made us all go into the subject (while still being able to evaluate learning and give grades).
How important do you think these goals are? Do you think these assignments meet these goals? Any tips or gotchas you’ve learned the hard way on any of these projects? Other goals are imaginable and I’d be curious to hear them. Of course I’d be curious to hear other suggestions for assignments to meet these goals too. What do you see as the relative merits of these three projects? What do you think should be the primary pedagogical goals in a course that represents many students’ first exposure to the wonders of nature in a hands-on fashion? More broadly, is pedagogy for outdoor teaching easy, or do we need to rethink this too?
For better and for worse, big data has reached and is thoroughly permeating ecology. Unlike many, I actually think this is a good thing (not to the degree it replaces other things, but to the degree to which it becomes another tool in our kit).
But I fear there is a persistent myth that will cripple this new tool – that more data is always better. This myth may exist because “big” is in the name of the technique. Or it may be an innate human trait (especially in America) to value a bigger house, car, etc. Or maybe in science we are always trying to find simple metrics to know where we rank in the pecking order (e.g. impact factors), and the only real metric we have to rank an experiment is its sample size.
And there is a certain logic to this. You will often hear that the point of big data is that “all the errors cancel each other out”. This goes back to statistics 101. The standard error (a description of the variance in our estimate of the mean of a population) is SE = σ/√n. Since n (sample size) is in the denominator, the “error” just gets smaller and smaller as n gets bigger. And p-values get correspondingly closer to zero, which is the real goal. Right?
Well, not really. First, σ (the standard deviation of the noise) is in the numerator. If all data were created equal, σ shouldn’t change too much as we add data. But in reality there is a lognormal-like aspect to data quality: a few very high quality datasets and many low quality datasets (I just made this law up, but I expect most of you will agree with it). And even if we’re not going from better to worse datasets, we are almost certainly going from more comparable (e.g. same organisms, nearby locations) to less comparable data. The fact that noise in ecology is reddened (variance goes up without limit as temporal and spatial extent increase) is a law (and it almost certainly carries over to increasingly divergent taxa, although I don’t know of a study of this). So as we add data we’re actually adding lower quality and/or more divergent datasets with larger and larger σ. So the standard error can easily go up as we add data.
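A quick numeric sketch of this point (the sample sizes and standard deviations are made up for illustration, and for simplicity the two datasets are assumed to share a mean, so the pooled variance is just the average of the two variances): pooling a second, noisier dataset doubles n but still increases the standard error of the mean.

```python
import math

def standard_error(sd, n):
    """Standard error of the mean: sigma / sqrt(n)."""
    return sd / math.sqrt(n)

# One high-quality dataset: n = 100 observations with sd = 1
se_clean = standard_error(1.0, 100)           # 0.1

# Pool in a second dataset of equal size but sd = 5.
# With equal sizes and equal means, pooled variance = average of variances.
pooled_sd = math.sqrt((1.0**2 + 5.0**2) / 2)  # ~3.61
se_pooled = standard_error(pooled_sd, 200)    # ~0.26

print(se_clean, se_pooled)  # doubling n made the SE *worse*, not better
```

The growth in σ from the added low-quality data more than cancels the √2 gain from doubling the sample size.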
But that is the least of the problems. Estimating effect size (a difference in means or slopes) is often only one task. What if we care about r2 or RMSE (my favorite measures of prediction)? These have σ in the denominator and numerator, respectively, so those metrics only get worse as the variance increases.
And then there is the hardest problem of all to fix – what if adding bad datasets adds bias? It’s not too hard to imagine how this occurs. Observer effects are a big one.
So more data definitely does NOT mean a better analysis. It can mean including datasets that are lower quality and more divergent, and hence noisier and probably more biased.
And this is all just within the framework of statistical sampling theory. There are plenty of other problems too. Denser data (in space or time) often means worse autocorrelation. At a minimum, less observation effort produces smaller counts (of species, individuals, or whatever). Most people know to correct for this crudely by dividing by effort (e.g. CPUE is catch per unit effort). But what if the observation is a non-linear (e.g. increasing but decelerating) function of effort, as it often is? Then dividing observed counts by effort will inappropriately downweight all of the high-effort datasets. Another closely related issue tied to non-linearity is scale. It is extremely common in meta-analyses and macroecological analyses to lump together studies at very different scales. Is this really wise, given that we know patterns and processes often change with scale? Isn’t this likely to be a massive introduction of noise?
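To make the effort problem concrete, here is a toy sketch (the saturating curve and its parameters are invented for illustration): if observed richness rises with effort but decelerates, then dividing by effort systematically penalizes high-effort surveys even when the underlying community is identical.

```python
def observed_richness(effort, s_max=100.0, half_sat=50.0):
    """Toy saturating (Michaelis-Menten-like) observation curve:
    observed richness increases with effort but decelerates."""
    return s_max * effort / (half_sat + effort)

low_effort, high_effort = 10.0, 100.0

# Same underlying community, surveyed with different effort:
obs_low = observed_richness(low_effort)     # ~16.7 species observed
obs_high = observed_richness(high_effort)   # ~66.7 species observed

# Naive "divide by effort" correction:
per_effort_low = obs_low / low_effort       # ~1.67
per_effort_high = obs_high / high_effort    # ~0.67

# The high-effort (better!) survey now looks far "poorer" per unit effort.
print(per_effort_low, per_effort_high)
```

A linear correction is only valid when the observation process is linear in effort; under saturation it inverts the ranking of survey quality.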
And it goes beyond statistical framing to inferential framing – to what I think of as the depth of the data. What if we want to know about the distribution of a species? It seems pretty obvious that measuring the abundance of that species at many points across its range would be the most informative approach (since we know abundance varies by orders of magnitude across a range within a species). But that’s a lot of work. Instead, we have lots of datasets that only measure occupancy. But even that is quite a bit of work. We can just query museum records and download, often, 100s of presence records in 15 minutes. But now we’re letting data quantity drive the question. If we really want to know where a species is and is not found, measuring both sides of what we’re interested in is a far superior approach (and no amount of magic statistics will fix that). The same issues occur with species richness. If we’re really serious about comparing species richness (a good example of the aforementioned case where the response to effort is non-linear), we need abundances to rarefy. But boatloads of papers don’t report abundances, just richness. Should we really throw them all away in our analyses?
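Since rarefaction illustrates the depth-of-data point well: the classic hypergeometric rarefaction formula needs per-species abundances as input, which is exactly why richness-only papers cannot be put on a common footing after the fact. A minimal sketch (the example community is made up):

```python
from math import comb

def rarefied_richness(abundances, n):
    """Expected number of species in a random subsample of n individuals
    (classic hypergeometric rarefaction; requires per-species abundances)."""
    total = sum(abundances)
    # Each species is counted unless the subsample misses all its individuals.
    return sum(1 - comb(total - a, n) / comb(total, n) for a in abundances)

community = [50, 30, 15, 4, 1]  # hypothetical per-species abundances (N = 100)

print(rarefied_richness(community, 10))   # expected richness at n = 10
print(rarefied_richness(community, 100))  # recovers all 5 species at n = N
```

A dataset reporting only "5 species observed" carries none of the abundance structure this calculation depends on, so it cannot be rarefied down to match a smaller survey.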
As a side note, a recurring theme in this post and many previous ones is that complex, magic statistical methods will NOT fix all the shortcomings of the data. They cannot. Nothing can extract information that isn’t there or reduce noise that is built in.
So, returning to the question of two paragraphs ago: should I knowingly leave data on the table and out of the analysis? The trend has been to never say no to a dataset. To paraphrase Will Rogers, “I never met a dataset I didn’t like”. But is this the right trend? I am of course suggesting it is not. I think we would be better off if we only used high quality datasets that are directly relevant to, and support the necessary analytical techniques for, our question. Which datasets should we be omitting? I cannot tell you, of course. You have to think it through in the particulars. But things like sampling quality (e.g. amount of noise, quality control of observation protocols), getting data that make apples-to-apples comparisons, and the depth of the data (e.g. abundance vs occupancy vs presence/absence) may well place you in a realm where less is more!
What do you think? Have you had a situation where you turned away data?
Although the notion “bandwagon” technically only means something that is rapidly growing in popularity, calling a scientific research program a bandwagon carries several more connotations. These include the idea that it will crash (people will abandon it) and that people are piling in because they perceive the research program as a way to do something that is “easy” (or even formulaic) but still get in a good journal (i.e. the proverbial something for nothing). Popular and easy are of course two of the worst reasons to choose a research project, but that seems not to matter in the bandwagon phenomenon.
There is little doubt that functional traits are a bandwagon research program right now:
The use of the phrase “functional trait*” (per Web of Science) is rising exponentially with a doubling time of less than 4 years. In less than two decades, there are almost 3000 total publications cited 56000 times, 14000 times last year alone (with an astonishing average citation rate of 19 times/article and an h-index for the field of over 100).
For better and worse, I am probably one of a fairly large group of people responsible for this bandwagon due to this paper, which came out simultaneously with a couple of other papers arguing for a trait-based approach, although (as is likely true of all bandwagons) the idea has been around much longer and builds on the research of many people.
By calling functional trait research a bandwagon, I am implying (and now making explicit) two things: 1) The popularity of the functional trait program is in part due to the fact that people see it as a simple way to do something trendy. I think there is no doubt of this – there are a lot of papers being published right now that just measure a bunch of functional traits on a community or guild and don’t do much more. 2) That this party is about to come to an end. I predict we will see multiple papers in the next two years talking about how functional trait research is problematic and has not delivered on its promise and many people bailing out on the program.
You might think I am worried about the impending crash, but I am not. I actually relish it. It’s after the bandwagon crashes that we lose all the people just looking for a quick paper, and the people who are really serious about the research field stay, take the lessons learned (and identify what they are), and build a less simple, more complex, but more realistic and productive world view. In my own career I have seen this with phylogenetic systematics, the neutral theory of biodiversity, and – if we go back to my undergraduate days – the neutral theory of genetics and island biogeography.
In an attempt to shorten the painful period and hasten the renewal, what follows are my ideas/opinions about what is being ignored right now on the functional trait bandwagon (although by no means ignored by the researchers I expect will still hang around after the crash and I have tried to give citations where possible), which I predict will become part of the new, more complex view of functional traits version 2.0 in 5-10 years down the road.
(As an aside, I wanted to briefly note, as a meta-comment on how I think science proceeds, that: a) probably many other people are thinking these thoughts right now – they’re in the air – but as far as I know nobody has put them down as a group in ink (or electrons) yet; b) my own thinking on this has been deeply influenced by at least a dozen people, especially Julie Messier as well as Brian Enquist & Marty Lechowicz – fuller acknowledgements are at the bottom; and c) it’s not as easy to assign authorship on these thought pieces as it is on a concrete piece of experiment or analysis – if this were a paper I could easily argue for just myself as author, or 1 more, or 3 more, or 10 more.)
So without further ado, here are 9 things I think we need to change to steer the bandwagon:
- What is a trait? – There are a lot of definitions (both of the papers linked to above have them). But the two key aspects are: 1) it is measured on a single individual and 2) it is conceivably linked to function or performance (e.g. fitness or a component of fitness such as growth rate). The 2nd is not a high bar to clear. But a lot of people right now are ignoring #1 by taking values that can only be tied to a species or population (such as population growth rate, geographic range size, or mortality rate) and calling them functional traits. They’re not. They’re important and interesting, and maybe science will someday decide they’re more important than things you can measure on individuals. But they’re not functional traits if you can’t measure them on one individual. The functional trait program is about going from function (behavior and physiology) to communities or ecosystem properties. That’s where a lot of the excitement and power of the idea comes from. It is actually, in a subtle way, a rejection of the population approach that dominated ecology for decades.
- Where’s the variance? – I believe that the first step in any domain of science is to know at what scales and levels of measurement variation occurs. Only then can you know what needs to be explained. There has been an implicit assumption for a long time that most of the variance in functional traits is between species and/or along environmental gradients. There is indeed variation at these two levels. But there is also an enormous amount of variation between individuals of the same species (even within a population). And there is far more variation between members of a community than between communities along a gradient. Finally, although the previous statements are reasonably general, the exact structure of this variance partitioning depends heavily on the trait measured. Functional traits won’t deliver as a field until we all get our heads around these last three facts, and learn a lot more than we already know about where the variance is. A good intro to this topic is Messier et al 2010 and Violle et al 2012 (warning: I’m a coauthor on both).
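The variance-partitioning point above can be made concrete with a toy calculation (all numbers are invented for illustration, not taken from any of the cited papers): if you simulate a trait where within-species spread is larger than the spread of species means, a simple two-level decomposition shows most of the variance sitting below the species level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 10 species, 20 individuals each. Numbers are made up;
# here most trait variance is deliberately placed WITHIN species.
n_species, n_ind = 10, 20
species_means = rng.normal(0.0, 1.0, n_species)  # between-species spread (sd = 1)
trait = species_means[:, None] + rng.normal(0.0, 2.0, (n_species, n_ind))  # within sd = 2

# Simple two-level variance decomposition:
between = np.var(trait.mean(axis=1))   # variance of species means
within = np.mean(np.var(trait, axis=1))  # average within-species variance
total = between + within

print(f"between-species share: {between / total:.2f}")
print(f"within-species share:  {within / total:.2f}")
```

A species-mean analysis of this dataset would throw away the majority of the variation – which is exactly the worry about assuming the action is all between species.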
- Traits are hierarchical (they can be placed on a scale from low level to high level) – We tend to lump all traits together, but traits are hierarchical. Some are very low level (e.g. chlorophyll concentration per leaf volume), some one level up (e.g. light absorption), and going on up the ladder from this one trait we have Amax (maximum photosynthetic rate), leaf CO2 fixation per unit time, CUE (carbon use efficiency, or assimilation over assimilation+respiration), plant growth rate, and fitness. Note that each trait directly depends on the trait listed before it, but also on many other traits not listed in this sequence. Thus traits are really organized in an inverted tree: traits can be identified at any tip or node, and performance sits at the top of the tree. We move from very physiological to very fitness-oriented as we move up the tree. One level is not more important than another, but the idea of different levels – being closer to physiology or closer to fitness/performance – is very real and needs to be accounted for. And we need to pick the right level for the question. All traits are not equivalent in how we should think about them! And learning how to link these levels together is vital. A depressing fact in phenotypic evolution is that the higher up the hierarchy a phenotypic character is, the less heritable it is (with fitness being barely heritable at all), but so far we seem to be having the opposite luck with functional traits – higher-level traits covary more with environment than low-level traits do (there are a lot of good reasons for this). A good intro paper to this topic is Marks 2007.
- Traits aren’t univariate and they’re not just reflections of 1-D trade-offs – How many papers have you seen where trait #1 is correlated with environment, then trait #2 is correlated with environment, and so on? This is WRONG! Traits are part of a complex set of interactions. If you’re a geneticist you call this epistasis and pleiotropy. If you’re a physiologist you call this allocation decisions (of resources). If you’re a phenotypic evolution person you call this the phenotypic covariance matrix. Of course we are finding that one trait low in the hierarchy is neither predictive of overall performance nor strongly correlated with environment. It is part of an intricate web – you have to know more about the web. The main response to this has been to identify trade-off axes. The most famous is the leaf economic spectrum (LES), which is basically an r-K-like trade-off between leaf life span and rate of photosynthesis. Any number of traits are correlated with this trade-off (e.g. high nitrogen concentrations are correlated with the fast-photosynthesis, short-life end). And several of the smartest thinkers in traits (e.g. Westoby and Laughlin) have suggested that we will find a handful of clear trade-off axes. I hate to contradict these bright people, but I am increasingly thinking that even the idea of multiple trade-off axes is flawed. First, the correlations of traits with the LES are surprisingly weak (typically 0.2-0.4). Second, I increasingly suspect the LES is not general across all scales. And the search for other spectra has gone poorly. For example, despite efforts, there has not yet emerged a clear wood economic spectrum that I can understand and explain. So to truly deal with traits we need to throw away univariate thinking and even trade-off axes, and start dealing with the full complexity of covariance matrices. This is complex and unfortunate, but it has profound implications.
Even the question of the maintenance of variation simplifies when we adopt this full-blown multivariate view of phenotype (see two nice papers, by Walsh and Blows and by Blows and Walsh). For a good review of the issue in traits, see Laughlin & Messier, newly out just this week in TREE.
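A quick numerical illustration of why those weak pairwise correlations (0.2-0.4) undercut the single-axis picture (the 0.3 used here is a hypothetical stand-in for a typical LES-strength correlation, not a fitted value): if five traits all correlate at ~0.3, the leading eigenvector of their correlation matrix – the would-be "trade-off axis" – captures well under half of the total variance.

```python
import numpy as np

# Hypothetical 5-trait correlation matrix: all pairwise
# correlations ~0.3, roughly LES-strength.
r = 0.3
corr = np.full((5, 5), r)
np.fill_diagonal(corr, 1.0)

# Eigen-decomposition: the leading eigenvector is the would-be
# trade-off axis; its eigenvalue is the variance it explains.
eigvals = np.linalg.eigvalsh(corr)          # sorted ascending
leading_share = eigvals[-1] / eigvals.sum()
print(f"variance on the leading axis: {leading_share:.2f}")  # prints 0.44
```

Most of the variance (here 56%) lies off the axis – which is the argument for working with the full covariance matrix rather than one or two spectra.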
- Any hope of predicting the performance consequences of traits requires dealing with the T×E (trait × environment) interaction – Does high SLA (specific leaf area, basically the thinness of a leaf, a trait strongly correlated with the rapid-photosynthesis end of the LES) lead to high or low performance? The answer blatantly depends on the environment (e.g. it causes lower performance in dry environments or environments with lots of herbivory). Too many studies just look at trait-performance correlations when they really need to look at this in a 3-way fashion, with performance as a 3-D surface over the 2-D space of trait and environment. Presumably this surface will usually be peaked and non-linear as well (again see Laughlin & Messier 2015).
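The T×E point can be sketched with a made-up performance surface (the functional form and numbers here are hypothetical, chosen only to show the sign flip): if the optimal SLA shifts with site wetness, the very same trait-performance correlation comes out negative at dry sites and positive at wet ones, so a pooled correlation is close to meaningless.

```python
import numpy as np

# Hypothetical peaked performance surface over (trait, environment):
# the optimal SLA shifts upward with site wetness.
def performance(sla, wetness):
    optimum = 5.0 + 10.0 * wetness  # wetter sites favour higher SLA (assumed)
    return np.exp(-(sla - optimum) ** 2 / 20.0)

sla = np.linspace(0.0, 20.0, 200)
for wetness, label in [(0.0, "dry"), (1.0, "wet")]:
    r = np.corrcoef(sla, performance(sla, wetness))[0, 1]
    print(f"{label}: trait-performance correlation = {r:+.2f}")
```

The sign of the correlation flips with the environment, which is exactly why performance has to be treated as a surface over trait × environment rather than a single trait-performance line.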
- Theory – The field of functional traits is astonishingly lacking in motivating theory. When people tell me that natural history or descriptive science is dead, I tell them it has just been renamed functional traits. I personally see descriptive science as essential, but I also see theory – and the interaction between theory and description – as essential. Key areas where we need to develop theory include:
- How exactly filtering on traits works – One of the appealing concepts of traits is that we can move from simply saying a community is a filtered subset of the species pool to talking about what is being filtered on. But we aren’t thinking much about the theory of filtering. Papers by Shipley et al 2006 and Laughlin et al 2012 are good starts, but they are not referenced by most workers in the field. And nowhere do we have a theory that balances the environmental filter that decreases variance against the biotic competition filter that increases variance within a community (and yes Jeremy, other outcomes are certainly theoretically possible per Mayfield & Levine 2010, but for good empirical reasons I believe this is the main phenomenon happening in traits).
- What is the multivariate structure of trait covariance – This is partly an empirical question, but there are many opportunities for theory to inform it too. In part by thinking about …
- Causes of variation – We know variation in traits is due to a combination of genetic variation and adaptive plasticity, and that these respond to environments at many scales. But can we say something quantitative?
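The two opposing filters in the filtering question above can be sketched as a toy simulation (every number here is invented, and the two rules are caricatures, not a calibrated model): an environmental filter that favours individuals near a local trait optimum shrinks community trait variance, while a limiting-similarity step that repeatedly removes one member of the most similar pair pushes the variance back up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Regional pool: individual trait values (hypothetical units).
pool = rng.normal(0.0, 3.0, 2000)

# 1) Environmental filter: survival probability falls off with
#    distance from the local optimum (here 0) -> variance shrinks.
keep = rng.random(pool.size) < np.exp(-pool ** 2 / 4.0)
filtered = pool[keep]

# 2) Limiting similarity (biotic filter): repeatedly drop one member
#    of the most similar pair -> survivors spread out, variance grows.
community = np.sort(filtered)
while community.size > 30:
    closest = np.argmin(np.diff(community))   # index of tightest pair
    community = np.delete(community, closest)

print(f"pool variance:      {np.var(pool):.2f}")
print(f"after env filter:   {np.var(filtered):.2f}")
print(f"after both filters: {np.var(community):.2f}")
```

Even this cartoon makes the theoretical tension visible: the observed community variance is the net result of one process pulling it down and another pushing it up, and we have no theory for where the balance lands.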
- Individuals – We are very caught up in using traits as proxies for species, but I increasingly think that filtering happens at the individual level and that we need to shift away from thinking about traits at the species level. A given trait value (say the optimal value in some environment) can be provided by any of several species, each of which shows considerable variability in traits, so that trait distributions overlap significantly between species. This idea can be found in Clark 2010 and Messier et al 2010, among many others. It might seem subtle, but moving from populations to individuals to understand community structure is a pretty radical idea.
- Interaction traits, reproduction traits and other kinds of traits – Most of the traits studied are physiological/structural in nature. This is probably because one of the major roots of functional traits has been the effort to predict the ecosystem function of plants (e.g. CO2 fixation, water flux). But if we are going to develop a fully trait-based theory of ecology, we need to address all aspects of an organism, including traits related to species interactions (e.g. root depth for competition, chemical defenses for herbivory, floral traits for pollination and reproduction, and even behavioral traits like risk aversion).
- Traits beyond plants – the trait literature is dominated by botanists. There is a ton of work in the animal world that deals with morphology and behavior. And some of it is starting to be called “functional traits.” The hegemony of one term is not important, but the animal and plant people thinking about these things (whatever they’re called) need to spend more time communicating and learning from each other.
So there you have it. If you want to predict outcomes (e.g. invasion, abundance, being found at location X or in environment Y, etc.) based on traits, it’s easy. You just have to recognize that prediction happens in interaction with the environment and many other traits (many of which we haven’t even started studying), and figure out the appropriate level of traits to study for the scale of the question. Sounds easy, right? No, of course not. When is good science ever easy? That’s the problem with bandwagons. Anybody want off the trait bandwagon before we get to that destination? Anybody want on now that they know the destination?
What do you think? Are traits a bandwagon? Is it about to crash? What will be the story uncovered by those picking up the pieces? Anything I forgot? Anything I should have omitted?
PS – I don’t usually do acknowledgements on informal blog posts, but it is necessary for this one. My thinking on traits has been profoundly influenced by many people. First among them would be Julie Messier, who is technically my student but from whom I am sure I have learned more than vice versa. She has shared with me several draft manuscripts that make important progress on #2, #4 and #5. I also have to highlight my frequent collaborators, Marty Lechowicz and Brian Enquist. Also influencing me greatly at key points were Cyrille Violle, Marc Westoby, and Evan Weiher. And this field is being advanced by the work of many other great researchers (some of whom I’ve mentioned above) who were there before the bandwagon started (and many before I got on) and will still be there after it crashes, but whom I won’t try to name for fear of leaving somebody out. Despite it being a bandwagon right now, there is no lack of smart people trying hard to steer constructively!