Friday links: side projects > main projects, tuning your scientific bogosity detector, and more (UPDATED)

Also this week: Canadian research funding can’t go home again, what should I put in my NSF annual report?, Axios Review explained, impressions of #ESA100, new data on the prevalence of p-hacking, and more.

From Meg:

This post from the NSF DEBrief blog has useful information on what should go into an annual and final report. This will definitely help me figure out what they are looking for in those reports.

scitrigrrl had a post at Tenure, She Wrote on how academia helped her with triathlons. Reason #1: “Mental toughness matters as much as anything else [in academia and sports].” Indeed!

Here’s an old post from Terry McGlynn that people on the job market this year might be interested in, where he asks: are teaching universities the farm league for R1 universities? In it, he says he doesn’t think he has a single ideal job. I’d say the same is true for me. I love doing research, but I also love teaching.

I was going to link to Stephen Heard’s side projects post and the ESA poll about #ESA100, but see Jeremy has those already. So, just keep on reading for those. (Jeremy adds: “Those are my links! Mine, I say!” [grabs links, runs away])

From Jeremy:

Arjun Raj argues that you’ll only confuse yourself by reading too much of the literature, because a lot of it is wrong. I don’t know that I’d say that, exactly, but I would say something similar: that you’ll get confused if you don’t read critically. So I really like how Raj follows up with suggestions on how to tell if a paper or subfield is bogus. Some of these suggestions are specific to his own field of molecular biology, but others are more broadly applicable or have analogues in other fields. Here are condensed/paraphrased versions of some of the more broadly-applicable ones:

  • If some obvious next-step observation or experiment is missing, be suspicious.
  • New methods should be validated on data from systems in which the correct answer is already well-known, and/or on simulated data generated by known processes (see the sketch after this list).
  • Dig carefully into the supplementary material. That’s often where the bodies are buried.
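
To make that second heuristic concrete, here’s a minimal sketch in Python of the kind of sanity check you’d hope to see in a methods paper. Everything here is hypothetical: ordinary least squares stands in for whatever new estimator a paper might propose.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from a process where the right answer is known in advance.
true_slope = 2.0
x = rng.uniform(0, 10, size=200)
y = true_slope * x + rng.normal(0, 1, size=200)

# The "new method" under test; here, plain least squares as a stand-in.
estimated_slope = np.polyfit(x, y, deg=1)[0]

# A method that can't recover a known truth shouldn't be trusted on real data.
assert abs(estimated_slope - true_slope) < 0.1
print(f"recovered slope {estimated_slope:.2f} (truth: {true_slope})")
```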

Perhaps at some point I should post on my own bogosity detection heuristics. I also agree 110% with this passage near the end (emphasis added):

Making decisions based on the literature means deciding what avenues not to follow up on, and I think that most good molecular biologists learn this early on. Even more importantly, they develop the social networks to get the insider’s perspective on what to trust and what to ignore. As a beginning trainee, though, you typically will have neither the experience nor the network to make these decisions. My advice would be to pick a PI who asks these same sorts of questions. Then keep asking yourself these questions during your training. Seek out critical people and bounce your ideas off of them. At the same time, don’t become one of those people who just rips every paper to shreds in journal club. The point is to learn to exhibit sound judgement and find a way forward, and that also means sifting out the good stuff and threading it together across multiple papers.

Stephen Heard’s side projects seem to have as much impact as his main projects, or even more. He muses on his mixed feelings about this.

Sticking with Stephen: here he explains why you might want to be an associate editor at a journal. As a former associate editor at Oikos, I’d add: it’s a real feather in your cap in the eyes of department heads, deans, and colleagues. It’s a chance to have some influence on the direction of the field (especially at leading selective journals). And it gives you an early look at a broad range of the latest work in your field, so helps keep you in touch with what others are thinking and doing. Being an associate editor is a win-win for the field as a whole and for you personally. If you’re invited to do it (at a journal you care about), you should probably do it.

An interview with Tim Vines, founder of the Axios Review peer review service. I’m an editor for Axios Review; here and here are posts explaining why I support it.

Alex Usher of Higher Education Strategy Consultants argues that the new federal government Canadians might well have in the fall probably won’t just return to the status quo ante on science funding policy. A new government won’t simply unwind the current government’s obsession with subsidizing short-term industrial R&D; new governments want new ideas and new policies. But I confess I have no idea what those ought to be, since I liked the status quo ante. (ht Worthwhile Canadian Initiative)

Sticking with Alex Usher, here’s why it’s hard for governments to steer universities by using money as an incentive:

Outside of terrorist cells, universities are about the most loosely-coupled organizations on earth.

Jacquelyn Gill’s impressions of #ESA100. I too have noticed that the meeting attendees have become more diverse over the years. As for her wish that some meeting locations not be seen as much more desirable than others: hey, I want the meeting to go back to Spokane just so I can go back to Rock City Pizza (I think that’s what it was called). It ain’t happenin’. Some cities are always going to be more popular than others; that’s life. (UPDATE: I slightly misunderstood Jacquelyn’s comments on the location–she just wishes more people would give seemingly-undesirable locations like Milwaukee and Minneapolis a chance, not that people would stop caring about location entirely or see every location as equally desirable.)

Speaking of #ESA100, here’s ESA’s poll asking attendees about their experiences at the meeting and how it could be improved. There are also a couple of questions about whether you’re an ESA-certified ecologist, which have nothing to do with the meeting and so seemed kind of odd, but whatever. Go fill it out; I did. Related: my old post on why the ESA meeting ends with a half day on Friday (short answer: it’s complicated), along with a little poll on what to do about it. Unfortunately, I don’t foresee much traction for the option I and the plurality of poll respondents favored. (ht Margaret Kosmala, via the comments)

Some striking evidence on the prevalence and importance of p-hacking: NIH started requiring preregistration of all randomized controlled clinical trials in 2000. The rules oblige preregistration of statistical analyses, which should cut down greatly on intentional or unintentional p-hacking. Result: 17/30 cardiovascular disease trials conducted before 2000 found significant benefits of the tested treatment. Only 2/25 have done so since 2000. And none of the most obvious confounding variables explain the difference. I could definitely see using this example in intro stats courses. (ht Retraction Watch)
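
If you do want to turn this into an intro stats exercise, the before/after counts make a tidy 2×2 table. A minimal sketch in Python (the choice of Fisher’s exact test is mine for illustration, not necessarily the original study’s analysis):

```python
from scipy.stats import fisher_exact

# Trials finding a significant benefit vs. not, before and after NIH's
# 2000 preregistration requirement (counts as summarized above).
table = [[17, 13],  # before 2000: 17 of 30 trials found a benefit
         [2, 23]]   # after 2000:   2 of 25 trials found a benefit

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.4g}")
```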

Speaking of p-hacking: Data Colada argues that, if you’re looking for statistical evidence of p-hacking in a large sample of papers, you should not look at the distribution of all p-values (which is what several analyses that I’ve linked to in the past have done). Instead, you want to focus only on those p-values that might be expected to be p-hacked. I usually agree with Data Colada, but I think I disagree on this one.
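
As a toy illustration of why it matters which p-values you examine (my own sketch, not Data Colada’s actual analysis): simulate some “focal” null tests that get p-hacked via optional stopping, alongside honest incidental tests that nobody bothers to hack, then ask how many significant p-values sit just below 0.05. Conditioning on the tests likely to be hacked sharpens the signal; pooling everything dilutes it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def optional_stopping_p(max_batches=5, batch=20):
    """One null 'focal' test with optional stopping: add a batch of
    data, re-test, and stop as soon as p < 0.05 (a form of p-hacking)."""
    x = np.empty(0)
    for _ in range(max_batches):
        x = np.append(x, rng.normal(size=batch))
        p = stats.ttest_1samp(x, popmean=0).pvalue
        if p < 0.05:
            break
    return p

hacked = np.array([optional_stopping_p() for _ in range(5_000)])
honest = rng.uniform(size=5_000)  # incidental tests nobody hacks

def share_just_significant(p):
    """Fraction of significant p-values in (0.04, 0.05); about 20%
    if significant p-values are uniform on (0, 0.05)."""
    sig = p[p < 0.05]
    return np.mean(sig > 0.04)

pooled = np.concatenate([hacked, honest])
print(f"focal tests only: {share_just_significant(hacked):.0%}")
print(f"all tests pooled: {share_just_significant(pooled):.0%}")
```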

Relatedly: this week’s Science includes a paper from the Open Science Collaboration, which conducted pre-registered replications of 100 cheap-to-replicate experiments published in leading psychology journals. The replications had high power to detect the originally-estimated effect sizes. It was all very carefully done, in part by involving the authors of the original studies to make sure that the original protocols were followed as closely as possible. The headline results are sobering, though not surprising, and there are some interesting nuances:

  • Only 36% of the replications were statistically significant at the 0.05 level, vs. 97% of the original studies. That’s compared to the 89% significant replications that would’ve been expected if every original study had accurately estimated the true effect.
  • The replication p-values weren’t uniformly distributed, but they were very widely scattered on the (0,1) interval.
  • The mean effect size of the replications was less than half that of the original studies, with 83% of the replications finding effect sizes smaller than those originally reported.
  • Only 41% of the replication 95% confidence intervals contained the original effect size, and only 39% of the replications were subjectively rated as having replicated the original result.
  • Some of the replications found statistically-significant effects in the opposite direction from the original studies.
  • Original and replication effect sizes were significantly positively rank-correlated (Spearman’s r=0.51)
  • The lower the p-value of the original, the more likely it was to replicate. Cognitive psychology experiments replicated much more often than social psychology experiments, though this may be due at least in part to among-field differences in typical study design. Effects rated as more “surprising” replicated less often.

Overall, the results suggest at least some psychology experiments are studying real effects, but a combination of low-powered experiments (relative to the true effect sizes), possible p-hacking (unintentional or otherwise), and publication bias results in a published literature giving a very distorted picture of the world. That this study happened, that it’s making such a splash, and that so many psychologists–including most of the original authors–are supportive rather than defensive is terrific. It’s a sign of a culture change in psychology, one that I suspect is proceeding more rapidly than it otherwise would’ve thanks in part to blogs and other online discussions. News articles about the results from Science and FiveThirtyEight. This would make a great case study for an intro stats course.
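
The mechanism behind those inflated original effect sizes is easy to demonstrate in a few lines. Here’s a minimal simulation (my own sketch; the parameter values are invented, not taken from the paper): run many underpowered studies of a small true effect, let only the significant ones get published, and the published effect sizes overshoot the truth.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_d, n, n_studies = 0.2, 30, 10_000  # small true effect, 30 per group

# Each row is one two-group study with a true standardized effect of 0.2.
control = rng.normal(0.0, 1.0, size=(n_studies, n))
treatment = rng.normal(true_d, 1.0, size=(n_studies, n))
_, p = stats.ttest_ind(treatment, control, axis=1)
observed_d = treatment.mean(axis=1) - control.mean(axis=1)

published = p < 0.05  # publication bias: only significant results appear
print(f"power: {published.mean():.0%}")
print(f"true effect: {true_d}")
print(f"mean published effect: {np.abs(observed_d[published]).mean():.2f}")
# Replications powered to detect the published effects rather than the
# true one will then tend to come up short.
```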

A while back I wrote that we’re currently living through a “culture clash” when it comes to so-called “post-publication review”. I argued that we need, but don’t currently have, agreed norms on what it is and how to do it. There’s ongoing discussion of this issue in bioinformatics, a field in which some prominent bloggers take the view that public attacks on the scientific competence and professional integrity of others are an essential part of scientific discourse. This got me thinking about how just a few prominent online voices can set the tone and define what’s acceptable in entire subfields. I hope that we set the right tone here at Dynamic Ecology.

Hoisted from the comments: Mathew Leibold and I on variance partitioning in metacommunity ecology

As part of my #ESA100 reflections, I commented that variance partitioning was dead as a way to infer the drivers of metacommunity structure. Mathew Leibold didn’t get what I was on about at all. Understandably–my remarks were brief and in retrospect not as clear as they should’ve been.* He was kind enough to take the time to comment at length on what he sees as the main problems with variance partitioning and how it’s currently applied, which gave me the chance to clarify my own views. I agree with Mathew on many points.

I wanted to highlight this exchange because I think it addresses an important issue in ecology. Understanding metacommunities is a really important job for community ecology, and right now variance partitioning is probably the single most popular tool for the job. It’s vitally important that we use that tool in effective ways, improve it if we can, and have a good understanding of what it can and can’t do. Mathew and I agree that:

  • There’s been excessive enthusiasm for using variance partitioning as a diagnostic test for metacommunity theories. There are too many possible “kinds” of metacommunities–far more than the small number of “paradigm” special cases on which existing theory focuses–for variance partitioning to be used as a diagnostic tool.
  • Insufficient attention has been given to how to interpret variance partitioning even if various statistical issues with it are addressed. (I’d add–don’t know if Mathew would agree–that this is a common problem in ecology. When a new statistical tool is developed, subsequent work tends to focus on identifying and resolving technical statistical issues with that tool. Which is fine, but tends to have the unfortunate side effect that equally or even more important non-statistical issues of how to interpret the tool tend to get neglected. Or worse, get mistaken for technical statistical issues.)
  • Variance partitioning remains a potentially useful statistical tool, and future work needs to focus on how to interpret and use that tool most effectively. (Not as a standalone diagnostic tool, but as one line of evidence among others, I’d say.)

I also think this exchange of comments was a nice example of how blogging can contribute to scientific discussion. So I wanted to highlight it for that reason as well.

*Mathew’s a friend and knows how my brain works. So if he can’t tell what the hell I’m on about, probably lots of other people couldn’t tell either. Which is my bad.

Please don’t use Wiley’s ArticleShare service to spam strangers

I just became aware of a new (?) service from Wiley called ArticleShare. Most Wiley journals now let an author choose up to 10 individuals who will have free access to the article. Those individuals get an auto-generated email from Wiley, offering them free access to the article, courtesy of the author. The email also includes some unrelated marketing verbiage from Wiley about their author-pays open access options.

I can see where this service could be useful, particularly to authors whose friends and close colleagues otherwise wouldn’t have free access to the article. But like most things that can be useful, it can also be abused. Wiley markets this service as a way to increase your impact. I suggest that you not think of it that way. In particular, I suggest that you be hesitant about using this service to send your article to people you don’t know.

Most academics are time- and attention-limited, and so are very careful about how they allocate those scarce resources. That’s especially true for senior academics. Most academics also already have their own ways of identifying articles they want to read. And academics at most developed country colleges and universities already have free access to many journals via institutional subscriptions. Finally, “self promotion” of one’s own work is a topic on which there’s a wide range of views, both about exactly what constitutes “self promotion” and whether “self promotion” is a good thing. And how the “self promotion” is done matters a lot.

So here’s my suggestion: don’t think of sending your paper to people you don’t know–or asking Wiley to do so for you–as a way to publicize your work. Focus on what the person you’re thinking of sending the paper to is likely to want, not on what you want. Before sending your paper to someone you don’t know, ask yourself the following questions:

  • Is this person likely to be very interested in my paper?
  • Would this person be likely to miss my paper unless I sent it to them?

If the answer to both questions is “yes”, I suggest that you not use Wiley’s ArticleShare service (or any similar service that any other publisher offers). That kind of impersonal approach risks coming off as spam. Instead, I suggest emailing a pdf along with a personal message explaining why you’re sending it (i.e., why you think he/she will be interested in your paper, and why you thought he/she might miss it). Even if you do think of this primarily as a way to publicize your work, you’re more likely to get someone you don’t know to pay attention if you take the time to send a personal explanation.

Just my two cents; I’m guessing at least some and possibly many of you will disagree (which is fine; there’s often scope for reasonable disagreement on professional etiquette). And before you point it out, yes, it’s possible that I’m just getting old. :-)

Here are the best posts you missed while you were in the field this summer

Welcome back from the field, or that great conference, or vacation, or wherever you’ve been for the past three months! Time to get back in the groove–including the blog reading groove, we hope. To help you out, here are some of our best posts from earlier this summer. They’re still timely, and many haven’t even been much commented on yet. Grab a cup of your beverage of choice and get caught up!

Here’s the best post idea I’ve had in months: I polled readers on whether various big ideas in ecology (island biogeography theory, MaxEnt, optimal foraging theory, metabolic theory…) were successful or not. The results were absolutely fascinating. If you’re only going to catch up on one post from me, that’s the one to read.

I identified the five (later updated to six) roads to generality in ecology.

Meg crowdsourced some difficult lecture planning by asking what influences the realized niche.

Brian argued that “functional traits” are a bandwagon and tried to steer the bandwagon to keep it from crashing.

Finally, we celebrated our blogging birthday by reflecting on how our approach to blogging has changed over time.

How do you define omnivory? And what do you teach about trophic levels?

As I wrote about last week, sometimes teaching forces me to think harder about concepts than I would otherwise. Last week’s example was the niche. This week’s is omnivory.

First, there is the question of how to define omnivory. What do you use as your working definition?

[poll]

And what do you teach your students?

My understanding is that the generally accepted definitions are that an omnivore (without any qualifiers) is an organism that feeds on plants and animals, whereas a “trophic omnivore” is an organism that feeds at multiple trophic levels. The textbook we use (Morris et al.’s How Life Works) gives the definition for omnivore (“eating both plants and animals”), but doesn’t cover trophic omnivory. But, in my opinion, trophic omnivory is a more useful concept to discuss. I’m a little conflicted on whether to introduce both definitions, or just go with one or the other. Right now, I’m planning on giving students both definitions.

Let’s assume for now that we will work with the trophic omnivory definition in class, which is what I did last year. As students grappled with the concept in their discussion sections, they came up with some interesting questions: what if an animal eats two omnivores? Is that also an omnivore? In one discussion group, they tried working this through with an example of barnacles and mussels, who are both omnivores, since they eat phytoplankton and zooplankton. Assuming both barnacles and mussels eat 50% phytoplankton and 50% zoops (and, yes, we’re lumping lots of things together there, but work with me here), they’re both feeding at a trophic level of 1.5. The students were wrestling with whether a sea star feeding on barnacles and mussels is then an omnivore (since it fed on two omnivores) or something else (since it’s feeding on one trophic level). Clearly the students were thinking well about the concepts! So, even if they couldn’t reach a resolution, I was happy to hear about this discussion. They also spent time trying to figure out what trophic level a Venus flytrap is at. I told their discussion instructor that she could blow their minds by teaching them about the pitcher plant-mammal mutualism, where shrews provide a substantial amount of nitrogen to the plant by defecating in the pitcher. I have no idea what trophic level that plant is at! (David Attenborough has a video on the “toilet” pitcher plant. Clearly I need to work that into lecture somehow!)
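
If it helps to see the bookkeeping, here’s a minimal sketch of the standard diet-weighted trophic position calculation (the 50/50 diets are the toy numbers from the discussion section; note that an animal “feeding at” level 1.5 occupies position 2.5 under this convention):

```python
def trophic_position(diet):
    """Trophic position = 1 + diet-weighted mean of the prey's positions.
    `diet` is a list of (prey_trophic_position, diet_fraction) pairs."""
    return 1 + sum(frac * tp for tp, frac in diet)

phyto, zoop = 1.0, 2.0  # producers at level 1, strict herbivores at 2

# Mussels and barnacles eat 50% phytoplankton, 50% zooplankton: they feed
# at a mean level of 1.5 and occupy position 2.5 themselves.
mussel = trophic_position([(phyto, 0.5), (zoop, 0.5)])
barnacle = trophic_position([(phyto, 0.5), (zoop, 0.5)])

# The sea star eats two trophic omnivores, but all of its food sits at a
# single position (2.5), so by the feeds-at-multiple-levels definition it
# isn't a trophic omnivore itself; it just sits one full level up.
sea_star = trophic_position([(mussel, 0.5), (barnacle, 0.5)])
print(mussel, barnacle, sea_star)  # 2.5 2.5 3.5
```

On that accounting the sea star comes out as a single-level predator at position 3.5, though that’s just one way to cash out the definition; the students’ puzzlement is understandable.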

All of this really comes down to the problems that arise when we try to simplify concepts and apply discrete labels to things that are really pretty messy out in nature. Things like intraguild predation are really common, and lots of organisms do things like feed on other (live) organisms and also on detritus. Then again, while real food webs are messy, there is evidence that most trophic positions do fall around integer values. So, I don’t plan to just throw up my hands and say things are complicated so we can’t use any terms. At the Intro Bio level, I think there’s value in setting the foundation of having students consider trophic levels. For now, I think I will continue teaching trophic levels and then point out that the reality is more complicated. But I would be interested in hearing what others do!

Friday links: drunk ecologists vs. lampposts, 2015 job wiki, and more

Also this week: overwork, why Karl Popper never answers your email, should journals ban controversial research approaches, great science + jerk = ???, and more.

From Meg:

There’s a new job wiki for tenure track jobs in ecology & evolution for this year.

Anne Curzan (a professor at Michigan) had a piece in the Chronicle on the importance of making time for non-work activities. In it, she mentions something I realized as an assistant professor: my work “to do” list will never be empty, so I might as well go for that run. Or, as Anne says,

The list will still be there tomorrow, and we need to be comfortable with the idea that we will never cross everything off and say with relief, “I have everything finished!” It is not the way academe works.

It’s not that we don’t need to work hard. Of course we need to work hard. But we don’t need to work all the time.

This links with my post on how you do not need to work 80 hours a week to succeed in academia (which is my most popular post by far). And it also relates to this new piece from the Harvard Business Review, which asks,

Is overwork actually doing what we assume it does — resulting in more and better output? Are we actually getting more done?

The conclusion:

In sum, the story of overwork is literally a story of diminishing returns: keep overworking, and you’ll progressively work more stupidly on tasks that are increasingly meaningless.

10 things this instructor loves: a positive spin on what professors like to see in students in their class. One of them is for students to ask questions – yes!

Writing every day is a really good way to be productive. Maren Friesen had a blog post on this, which is based on Robert Boice’s research. I found his book a really interesting read when I was starting my first faculty position.

While at the BEACON Congress in East Lansing earlier this week, I participated in a social media session where people tweeted links to blog posts. I was particularly interested in this one by Kyle Card, who wrote about his experience as a first year graduate student with disabilities. In it, he asks some important questions including:

I realize that many institutions have diversity groups made up of students and faculty who support and encourage minority students, while simultaneously acting as liaisons between these students and the institution itself. However, how many of these groups seek to encourage disabled individuals who may be apprehensive about joining in the first place? Moreover, how many of these groups are actively trying to foster a culture within the institution that is disarming and welcoming to the hesitant, yet qualified, aspiring disabled researcher?

This made me think of Elita Baldridge’s efforts to create an Inclusive Ecology section for ESA, and her poster from the recent Baltimore meeting on facilitating access for chronically ill and disabled ecologists (which can be viewed here).

A bit outdated now, but still of interest: This Science News piece describes some major changes to NEON (the National Ecological Observatory Network funded by NSF).

From Jeremy:

Should a field’s journals ever adopt a policy banning a popular-but-highly-questionable approach in the field? Or at least adopt a policy obliging authors to explicitly address objections to the approach? Andrew Gelman and Dan Kahan discuss the issue in the context of psychology, but the issue is much broader. I have an old post on this topic in an ecological context, which I think stands up in general although I’ve since learned that one of the specific examples I used is a little unfair. Short version: I generally don’t like it when authors gloss over or ignore criticisms of their approach. But this is one of those grey areas in which reasonable people are likely to disagree in any particular case. Which is why I think obliging authors to discuss criticisms of their approach is better than banning certain approaches. There’s too much risk that a journal will ban something that actually shouldn’t be banned (like, um, all statistical inferences from sample to population).

Gordon Fox and Simoneta Negrete-Yankelevich say that ecologists use statistics as a drunk uses a lamppost: for support rather than illumination. Gives a shout-out to Brian’s notion of “statistical machismo”, which at this point has probably passed “zombie ideas” as our highest-penetrance meme.

Terry McGlynn with a typically-thoughtful post on the pros and cons of self-centered scientists. I have a lengthy comment, musing on whether people, including scientists, are perhaps more compartmented than we generally recognize. So that the things that make Dr. Famous a great scientist have little or nothing to do with the things that make Dr. Famous not so great in other areas of life. And even if people mostly aren’t compartmented, maybe we shouldn’t try to somehow sum up the good and bad things they’ve done. Even if those good and bad things did have a common root in the same personality trait (self-centeredness or whatever). Can’t someone be both a great scientist and a huge jerk without the former somehow making up for the latter or the latter somehow diminishing or canceling out the former?

It’s often said that extraordinary claims demand extraordinary evidence. Tim Poisot’s inner contrarian says that extraordinary claims are no big deal. Very good post. The slogan “extraordinary claims demand extraordinary evidence” is one of those things that just seems totally obvious–until you stop to think about it.

How to game the false discovery rate.

Frequent guest blogger and commenter Margaret Kosmala just launched her latest citizen science project, Season Spotter.

And finally, this week in Links of Interest Only to Me: Existential Comics. Although the third and fourth ones in this little series will give a chuckle to any scientist who knows a little philosophy of science. :-)

What influences the realized niche?

The niche is a basic concept in ecology, and seems like something that everyone should be able to easily define – and to define consistently. Yet, recent conversations have made me realize that there’s a lot more variation than I would have expected in how the niche – especially the realized niche – is defined.

The crux of the debate seems to come down to whether people say the realized niche is limited by just competition, or whether they say it’s influenced by all interspecific interactions. There’s also a debate on whether dispersal plays a role. Once I realized this and started talking to others about it, there were two main kinds of reactions:
1) “OMG, yes, isn’t it weird that some people define it that other way? Clearly they are wrong.”
2) “Wait, what? Not everyone agrees that X is what determines the realized niche?”

To back up: The original Hutchinson “Concluding remarks” paper focuses on competition as the factor that limits organisms’ distribution; that is, in that paper, the realized niche is smaller than the fundamental niche due to competition. Since then, though, there has been a lot of work updating the niche concept. This includes relatively recent work, and work that adds in mutualism, which can make it so that the realized niche is larger than the fundamental niche.

When I teach about niches, I teach that the fundamental niche is where an organism can occur based on abiotic conditions, and the realized niche is where it actually does occur, given the presence of interacting species. I tend to gloss over the effect of mutualists on the niche in my Intro Bio course, but I used to focus on that more when I taught Ecology. This mostly matches what the textbook we use (Morris et al’s How Life Works) says:

The fundamental niche comprises the full range of climate conditions and food resources that permits the individuals in a species to live. In nature, however, many species do not occupy all the habitats permitted by their anatomy and physiology. That is because other species compete for available resources, prey on the organisms in question, or influence their growth and reproduction, reducing the range actually occupied. This actual range of habitats occupied by a species is called its realized niche (Fig. 47.2).

The main thing I think is left out of that definition is the possibility for facilitation to make the realized niche larger than the fundamental niche, but, again, that is not a topic I tend to emphasize at the freshman level. The way I teach about the niche also matches what is in Begon et al., which is my go-to reference for checking on this sort of thing:

“Usually, a species has a larger ecological niche in the absence of competitors and predators than it has in their presence. In other words, there are certain combinations of conditions and resources that can allow a species to maintain a viable population. This led Hutchinson to distinguish between the fundamental and realized niche. The former describes the overall potentialities of a species; the latter describes the more limited spectrum of conditions and resources that allow it to persist, even in the presence of competitors and predators.” (page 31 of the 4th edition)

So, given all that, I was surprised to learn that others (including some of my colleagues here at Michigan) hold the view that the realized niche is only influenced by competition. In recent conversations I’ve had with others about this, several people at other institutions said they shared my view but also have colleagues who hold the view that the realized niche is only influenced by competition. This has me wondering what the split is in terms of how many people hold the different views. Hence this post.

[poll]

I can see that people might think one thing but teach another in the interest of trying to keep things simpler. So, I’m also wondering:

[poll]

At the risk of biasing the poll, I will say that I find the view that only competition restricts the realized niche to be surprising. What, then, explains why really large-bodied Daphnia don’t co-occur with fish? It’s not that they’re poor competitors.

Finally, another issue that has come up during these discussions is whether the realized niche is influenced by dispersal. Let’s consider this clicker question that I’ve used in class:

60 common starlings were released in Central Park in NYC in 1890 by someone trying to introduce all the birds mentioned in Shakespeare’s plays to the New World. There are now millions of starlings in the US, and they are common in Ann Arbor. Prior to 1890, Ann Arbor was:
A) part of the realized niche of starlings but not their fundamental niche.
B) part of the fundamental niche of starlings but not their realized niche.
C) part of the fundamental and realized niches of starlings.
D) not part of either the fundamental or realized niche of starlings.

I give the correct answer as B. But some colleagues of mine argue that dispersal does not influence the realized niche. Everyone is in agreement that Ann Arbor was within the fundamental niche of starlings prior to 1890. The question is whether it’s okay to say that the realized niche of starlings was extended by them being introduced into Central Park. I say yes, but others say no. What do you think?

[poll]

I can understand how this is more of a gray area – if we are saying that the realized niche is influenced by interspecific interactions, then it’s not clear what to do about the influence of dispersal. But, then again, how else would you characterize the change in distribution of starlings? I’d love to hear people’s perspectives in the comments.

All of this is reminding me again of one of the things that I really like about teaching – it forces me to think harder about things that I thought I knew. I will be interested in hearing more about what others think about the realized niche, and how they teach about it!

What can a journal Editor-in-Chief do to attract you to submit to the journal? a poll

As briefly mentioned previously on this blog, I have accepted the position of Editor in Chief (EiC) at Global Ecology and Biogeography. I, of course, think it is a fantastic journal (objectively it ranks top in my field and top 10 in all of ecology), thanks to the great work of outgoing EiC David Currie. As you might imagine, taking on this new role and my ensuing contract negotiations with the journal owner (Wiley) have caused me to think a lot about exactly what the job of EiC should entail. This is a question of current relevance not just to me but to all of ecology and science; the world of academic publishing is changing so quickly that everything in it is being rethought these days, including the role of EiC. The recently announced move of the ESA journals to Wiley is a case in point. While this will not result in significant changes to the editorial staff or processes, anytime there is such major institutional change, roles and expectations will be revisited. I expect many of you have thought little if at all about the EiC at journals, but I intend to provoke you to think about it, and I am curious to hear your thoughts.

First a quick review for those less familiar with publishing (skip to the poll if you know all of this). Journals typically have an EiC and a panel of associate or handling editors (hereafter AE). The typical flow is:

  1. A paper is submitted electronically
  2. EiC evaluates the paper for quality and goodness of fit and either issues an editorial reject without review or assigns it to a handling editor. These days the EiC editorially rejects 30-90% of all submitted manuscripts, with 50% being a quite typical number (publishing hint: cover letters didn’t use to matter much, but they are now critical in making it past this first screen)
  3. If the EiC decides to send it to review, s/he assigns it to a specific AE (publishing hint: recommending AEs who are expert in the topic of the paper is helpful, but the EiC knows the AEs quite well so this is not particularly subject to gaming – further hint: your cover letter had better be snappier than your abstract, not just a rehash, because that is the one other thing they will read).
  4. The AE may choose to recommend editorial reject without review as well, although typically this is much rarer than the EiC doing it (but maybe 5-10% of all submissions).
  5. The AE provides a list of 5 or so potential reviewers (publishing hint: this list is critical to the ultimate decision, but I have no clue how to game this aspect of who gets picked as reviewers – I don’t think it can be gamed).
  6. An editorial assistant, increasingly often based at the publisher’s office, will contact the prospective reviewers until (usually) 2 people say yes. Sometimes it may take asking as many as 10-15 people (especially in the middle of the summer). In my experience, difficulty in getting reviewers to say yes says nothing about the quality of the paper – so don’t take it as a bad sign if you get a note saying there have been delays in finding reviewers.
  7. Once the reviews are back, the editorial assistant will ask the AE to submit a recommendation.
  8. The AE will read the recommendations and should read the paper in full and then make a recommendation (the dramatic accept/major/minor revision/reject that everybody pays attention to, but also a summary of the reviews and a focused list of the most important, must have changes that you should pay a lot of attention to).
  9. The recommendation then goes to the EiC, who makes a final decision. Many EiCs follow the AE’s recommendation unless there are serious red flags, but a few insert their own evaluations into the process.

Some journals also have Deputy EiC – and at some journals these DEiC effectively act like fancy AEs while at other journals they are effectively co-EiC. Journals also have a managing editor who is responsible for the business side. In most society journals the managing editor reports to the society, but in journals owned by the publishing company the managing editor is part of the publishing company.

So, everybody who has ever submitted a paper is likely pretty clear on the roles of the reviewers and the AE. What exactly does the EiC do, or what should they do? I have my own opinions, which I will share in a few days in the comments, but as a reader and author I am curious: what is it most important to you that the EiC devote her/his energies to? (Everything in the poll below is a job of the EiC, but obviously some duties are more important than others.) To put it another way: which features would make you more likely to submit to a journal if you knew the EiC was prioritizing time on them?

Please take the poll below. (Note: mss=manuscripts)

#ESA100 impressions (UPDATED)

My take-home impressions from #ESA100:

Science-related:

  • I’ve been expecting and hoping for this for a couple of years, and this is the year it finally happened: modern coexistence theory, as developed by Peter Chesson and collaborators, is going mainstream. I’ll even go out on a limb and predict that it’s the next big thing in community ecology. Deborah Goldberg stood up in front of a huge Ignite session crowd and named it as one of the two most important ideas in community ecology right now. A number of people besides the usual suspects gave talks on it, including talks about how to apply it to new problems. Steve Ellner has invented a new statistical approach that should make estimates of the temporal storage effect (a particularly important component of modern coexistence theory) both easier to do and more accurate. And Peter Chesson presented what may be a major extension of the theory. I’m planning to do my part to get this bandwagon rolling–and help steer it clear of pitfalls–by writing a series of posts explaining modern coexistence theory with minimal (but not zero) math. The emphasis will be on giving you the gist, but in a more precise way than is possible if you just avoid math entirely or rely entirely on illustrative examples. I did the first few a while back, so while you wait for me to write the rest, now would be a good time to review the old ones (or read them for the first time).
  • Variance partitioning as a way to infer the processes driving metacommunity structure is dead. At least it should be, in my view. It’s now failed three major attempts to validate it using simulated data generated by known processes–Gilbert & Bennett 2010, Smith & Lundholm 2010, and now Eric Sokol’s very good talk at this meeting. And the reasons it fails probably aren’t fixable. Others would disagree, of course. And Eric himself thinks it might be possible to use other statistical approaches to infer process from pattern here, but personally I’ll believe it when I see it. Variance partitioning as a way to infer the processes driving metacommunity structure was a creative idea worth trying out. But we’ve tried it out, and it doesn’t work, not well enough to be useful at any rate. We should stop doing it. And before you say it, no, the fact that we’ve got lots of data sitting around that it would be really nice to make use of is not a good reason to keep on keepin’ on. If an approach doesn’t work, it doesn’t work, no matter how great it would be if the approach actually did work. And no, the purported lack of alternative approaches to accomplish the same goal isn’t a good reason to keep on keepin’ on either. If an approach doesn’t work, it doesn’t work, even if there aren’t any other approaches that would work. Plus, there actually are plenty of alternative ways to study the processes generating metacommunity structure–you can do all sorts of different experiments, you can collect all sorts of other data, you can do all sorts of other analyses, you can do theoretical modeling… (UPDATE: my comments on variance partitioning aren’t as clear as they should’ve been. What’s dead, in my view, is one popular use of variance partitioning–as a diagnostic tool for metacommunity structure. See the comments and this post for more on this.)
  • A few other talks I really enjoyed: Michael Cortez has a wonderfully simple, elegant idea for how to partition the stability of eco-evolutionary systems. His approach lets us address questions like whether evolution stabilizes or destabilizes the ecological dynamics (and vice-versa). Brett Melbourne showed tightly-linked models and experiments on how well a species in a changing environment will track the shifting environmental conditions to which it is best-adapted. Always cool to see someone develop a simple model that totally nails what’s going on. The alarming upshot is that standard “niche modeling” approaches for predicting species’ range shifts in response to climate change are likely to fail especially badly for precisely those species we’re most concerned about. Even in the absence of more familiar complications like interspecific interactions and barriers to dispersal. Lauren Shoemaker’s talk on how demographic and environmental stochasticity can alter the strength of spatial coexistence mechanisms was very good too. (Note: I saw lots of other very good talks, and I’m sure I missed many as well. Please don’t read anything into it if I didn’t list your talk here, even if you saw me in the audience.)

Meeting-related:

  • Biggest ESA meeting ever, or very close to it, from what I hear. More sessions than Portland a few years ago, which would seem to imply at least as many attendees.
  • The quality of Ignite talks is more variable than that of regular talks, I think for various reasons.
  • Thanks again to Ulli Hain and Emma Young for the guest posts on where to eat and drink. Those posts got a lot of views, and I heard from a lot of people who followed their advice and were glad they did. I followed several of their suggestions and can confirm that, yeah, the crab cakes at Faidley’s are amazing, and Pitango’s gelato is so good it should be illegal. :-)
  • My one quibble with the organization this year: I didn’t like having big plenary lectures–including Mercedes Pascual’s MacArthur Award lecture–scheduled at noon. I don’t like forcing attendees to choose between lunch and the MacArthur Award lecture (or between a late lunch and the first half of the afternoon sessions). A big reason people come to ESA is to see their colleagues and friends, which they do over meals.
  • I think the over/under on attendance in Ft. Lauderdale next year is 2500. With a big meeting this year, and a popular location (Portland) coming up in 2017, I suspect attendance in Ft. Lauderdale is going to be limited to folks who never miss an ESA meeting. That’s not a criticism of the choice of location–there are good reasons why the meeting needs to move around the country, and why it’s usually held in hot places. It’s just the reality–the meeting isn’t going to be equally huge every year.

Finally, a big thank you to the organizers, who have a big difficult job and who do it very well. I love the ESA meeting, and this year was no exception!

p.s. I’m on holiday until Aug. 21. Posting will remain light and comment moderation may be slow.

Ethan White wins #ESA100

So. Much. Win.

p.s. If you have no idea what the hell this is about, see the end of this post.