About Brian McGill

I am a macroecologist at the University of Maine. I study how human-caused global change (especially global warming and land cover change) affects communities, biodiversity and our global ecology.

The state of academic publishing in 3 graphs, 6 trends, and 4 thoughts

Eleven years ago I shared a fairly heavily researched summary of the state of academic publishing. I mostly argued that OA (aka author pays) was a red herring and that we should really pay attention to the profit motives (or not) of the publisher. I would argue that analysis mostly still holds, but a lot has changed. Here are three new data graphs and six major trends I see since then, followed by some of my reflections on what this all means and, briefly, what we should do.

Figure 1 – the growth of the number of papers published in total and broken out by publisher, 2011-2021. Green are for-profit publishers. Yellow/gold are newer OA-only publishers (for-profit except PLOS). Blue are not-for-profit publishers. The colored region indicates papers in ISI; the white block indicates additional papers in Scopus. There are probably somewhat different proportions among publishers for in-Scopus-not-in-ISI papers, but I don't think the fundamental story changes much. Sources: https://www.stm-assoc.org/oa-dashboard/uptake-of-open-access/ provides total publications in Scopus; Table 1 of https://link.springer.com/article/10.1007/s11192-022-04586-1/tables/1 gives ISI journals by publisher.

Figure 2 – increasing fraction of papers published Gold OA (pay to publish). Source: https://www.stm-assoc.org/oa-dashboard/uptake-of-open-access/

Figure 3 – brand extension of Nature journals, from the one core Nature journal in 2009 to 34 in 2024. I did not track the years 2019-2022. Note that Nature was bought by Springer in 2015, which appears to have induced a step change in behavior. This counts only Nature-branded peer-reviewed journals – it doesn't include npg journals, Nature-branded newsletters, or many others. Source: Nature web pages (https://www.nature.com/nature/history-of-nature and https://partnerships.nature.com/blog/celebrating-a-decade-of-growth-nature-new-journal-launches/)

Six trends

  1. Publishing is growing exponentially – While the number of scientists is also growing exponentially, it is at a slower rate than papers. We are producing more papers per scientist every year. This is a profoundly important fact. Every ecologist knows the power and unsustainability of exponential growth. This also makes it abundantly clear that the publishers only deserve half the blame. Scientists have created a Red Queen situation in which we're aggressively chasing opportunities to publish. Do we really need 1,000,000 (about +40%) more publications than 10 years ago? (Figure 1).
  2. Publishing is highly concentrated with monopoly profits and still concentrating – Five for-profit publishers (Elsevier, Springer Nature, Wiley, Taylor & Francis and MDPI) publish over 50% of all publications. In economics this is called an oligopoly: a small number of companies control the market. Worse, because scientists “have” to publish in top journals, publishers have even more power than the oligopoly structure alone suggests. One can measure the degree of concentration and power by the profit margins. The 20-30% profit margins are among the highest in any industry. Pharmaceuticals, computers, and tobacco are really the only other industries that come close (10-25%), and most other industries fall at 10% or lower. From Figure 1, you can see that the for-profit publishers – specifically the big two traditional publishers, Elsevier and Springer Nature, and the new for-profit OA publishers, especially MDPI but also Hindawi, Frontiers, BMJ, etc. – are still gaining share. Nearly everyone else (Wiley, society, university, PLOS) is holding share but not gaining meaningfully (note that not gaining ground in an exponentially growing marketplace is tantamount to losing share).
  3. Growth of Gold OA – the movement towards Open Access (more accurately called “pay to publish”, I think) is clearly the future (Figure 2). Its share of published papers is on track to surpass traditional subscription-funded publications soon. Green OA (the author shares their paper on a website) has flatlined. Gold OA remains a mix of hybrid OA journals (the author pays extra for OA on top of subscriber revenues) and pure 100% author-pays OA journals (new OA publishers like MDPI, but also, increasingly, “flipped” previously subscription journals like Oikos and Ecography).
  4. Society journals are outsourcing – although it looks in Figure 1 like society journals are holding steady, that category is dominated by some very old, very big societies like the American Chemical Society, the American Geophysical Union and IEEE. The AAAS (publisher of Science) and the British Royal Society are the only societies still self-publishing with any relevance to ecology and evolution that I can think of. Nearly every other society journal has outsourced publishing to a company. In most cases in ecology this is Wiley (ESA, BES and the Nordic Society). Evolution recently left Wiley for Oxford University Press, and The American Naturalist remains with the University of Chicago Press, but they are the notable exceptions that have remained free of the for-profit world. This should probably be taken as a hint by people who think publishing is easy or cheap. It's not. And there are economies of scale.
  5. Capture and brand extension – Such corporate words show the degree of corporatization we live with in publishing. Capture is my term for the fact that big publishers increasingly make it very easy to slide from one of their journals to another by not requiring resubmission and often by sharing reviews between journals. It's really just a less extreme version of gyms that make it hard to quit. Of course these transfers are never to another company. This is all about increasing their market share by making it easy for the author to stay with them. Brand extension (Figure 3) is the related phenomenon of having more and more journals that interlink. Nature is the extreme of this (Figure 3). The core journal Nature has extended to include Nature Communications, Nature Ecology and Evolution, Nature Sustainability, and Nature Climate Change (and those are just the journals relevant to ecology – Nature reports 34, and that doesn't include the npg series or a bunch of others!). Brand extension and capture go hand-in-hand. And when a top brand extends, it can have devastating ripple effects. Notably, the creation of Nature Ecology and Evolution (NEE), coupled with the willingness of authors to stroke their egos by getting into a Nature journal, is probably single-handedly responsible for the declining impact factors of most of the upper-tier ecology journals such as Ecology Letters and Ecology. Note that one can now argue for pursuing NEE based on impact factor (to the extent that is a credible argument), but initially that wasn't even valid. As a new journal NEE at first had no impact factor, and then a low one. It was literally the brand name that drew people in at first.
  6. Read and publish (aka transformative agreements) are the future – As I have discussed previously, it is theoretically possible that if you could take all the money in academic publishing that is spent on “pay to read” (i.e. journal subscriptions) and instantly and globally flip it to author pays (aka Open Access), it should in theory work. The hard part is how a system with many players makes that transition. I think the answer is now clear. Libraries are signing “read and publish” agreements (previously called transformative agreements), which are just what they sound like: the library pays a sum of money, and its academics and students can both read journals/papers from that publisher and publish with that publisher without additional charge (essentially no per-paper Article Processing Charge, or APC). These agreements have taken off. Much of Europe now has them with many publishers. And some good-actor publishers have entered into these agreements with many, many institutions (e.g. Cambridge University Press even has one with my mid-level American university, the University of Maine). Many bad actors are being much more calculating. They start with big players (whole countries like Germany, large systems like the University of California system in the US), negotiate very hard, and include non-disclosure agreements so that nobody knows how much is being paid. With such publishers (e.g. Elsevier), progress is slow, and few institutions have agreements. So my low-priority, literally poor university may not get one for a decade at the rate things are going.
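The Red Queen arithmetic in trend 1 is easy to sanity-check. A minimal sketch (assuming, per Figure 1, roughly 40% total growth over the 2011-2021 decade – the exact multiplier is an assumption, not a figure from the sources) of the implied compound annual growth rate and doubling time:

```python
import math

# Assumed 10-year multiplier: ~40% more papers in 2021 than in 2011 (Figure 1).
total_growth = 1.40
years = 10

# Implied compound annual growth rate of publication volume.
annual_rate = total_growth ** (1 / years) - 1

# Doubling time at that constant rate.
doubling_time = math.log(2) / math.log(1 + annual_rate)

print(f"annual growth: {annual_rate:.1%}")        # about 3.4% per year
print(f"doubling time: {doubling_time:.0f} years")  # about 20 years
```

At roughly 3.4% per year the literature doubles about every two decades, which is what makes "holding share but not gaining" in trend 2 tantamount to losing ground.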

Four thoughts

Here are a few of my main thoughts:

  1. Things are continuing to get worse and scientists are a lot to blame. We're still growing publications exponentially. The industry is still consolidating even after 30+ years of consolidation. And it is consolidating into the hands of giant for-profit companies. Academics are seriously complicit in this. China pays cash to scientists for publishing in big journals. In Europe and North America the incentives are much more indirect, but clearly potent. I'm willing to bet the fraction of scientists who walk away from Nature Ecology and Evolution to go to a society journal is under 20-30%. Early career people may have little choice, but that's a lot of senior academics who arguably have a lot of choice making arguably bad choices.
  2. Open access has turned into a great way for companies to extract more money – This is more speculative. For sure, in the early part of the transition, hybrid journals charged large APCs (often $3,000 and up) on top of subscription prices that were also going up. That's a lot of extra cash! It is harder to tell as we transition to Read and Publish, since those deals, at least with the large for-profits, are all secret. But given the stories about how hard those negotiations are, I'd bet a lot of money that we are not in a world where a fixed amount of money flips from pay-to-read to pay-to-publish. Corporations are exploiting the change in rules to claw out even more earnings. Academics are not united or savvy enough to use the same change in rules to extract anything.
  3. Open access has been a disaster. Scientists never really wanted it. We have ended up here for two reasons. The first is pipe-dreaming academics who believed in the mirage of “Diamond OA” (nobody pays and it is free to publish). Guess what – publishing a paper costs money, $500-$2,000 depending on how much it is subsidized by volunteer scientists. We don't really want Bill Gates etc. to pay for Diamond OA. And universities and especially libraries are already overextended. There is no free publishing. The second, and in my opinion the most to blame, is the European science grant funders who banded together and came up with Plan S and other schemes to force their scientists to publish only OA. At least in Europe the funding agencies mostly held scientists harmless by paying, and, because of the captive audience, publishers went to European countries first for Read and Publish agreements. So European scientists haven't been hurt too badly. But North America has so far refused to go down the same path, leaving North American scientists without grants (a majority of them) with an ever-shrinking pool of subscription-based journals to publish in. And scientists from less rich countries are hurt even worse. Let's get honest. How long before every university in Africa is covered by a Read and Publish agreement from the for-profit companies? Decades? Never? And the odds an African scientist can come up with $2,000 in the meantime? Zero.
  4. Paying to publish creates bad incentives. The rapid growth of the for-profit OA publishers (MDPI, Hindawi, Frontiers, etc.) is maybe the biggest signal of this (notwithstanding that there are some entirely reputable journals at those companies, there is also a healthy dose of so-called predatory journals in there). The correlation between the price you pay to publish OA and the prestige of the journal is another bad sign (second figure in this). Nature charges around $12,000 for OA. Given how badly for-profit companies have acted in this field, just think of all the incentives created by paying per paper published: publish everything, because each paper brings in money; lower costs, which basically boils down to lowering the extent of the review process; prioritize appealing to authors over delivering quality science. Note that the non-OA-only publishers are clearly pushing in this direction too (there was a lot of discussion about increasing the number of papers published when I was an EiC at a Wiley journal).

The bottom line is that for-profit companies are eating our lunch with our active participation. They're experts at extracting more and more money. And unfortunately academics are highly complicit. Nature Ecology and Evolution didn't go from zero to the highest-rated journal in ecology in five years with no participation from academics.

What do we do going forward?

  1. Make peace with read and publish deals. The push for OA was a really bad, naïve idea. But it is a done deal now. The best thing we can do is push for more transparency around the Read and Publish deals and more speed for them to spread universally. And we need to make sure for profit companies offer reasonable read and publish deals in poorer countries.
  2. Societies have to start owning their power. For sure they need to take care of themselves and their own profitability. But they also have to be good actors, and ultimately their primary function is to serve science and scientists. What could BES and ESA extract from Wiley if they bargained jointly? How about APC waivers (or better, automatic Read and Publish deals) for scientists from the poorest 75% of countries in the world? Or APC waivers for PhD students? Or what if they left for Oxford/Cambridge/Chicago? Sadly, I hear no conversation of this around societies. Indeed, I'm not in the innermost circles, but I hear disappointingly little receptiveness to acting for change. I think they're too focused on their own survival. Plaudits to the Society for the Study of Evolution (and the American Society of Naturalists) for setting a good example.
  3. Scientists have to wake up and change their behavior. We need to submit our papers, read papers, and volunteer our time to journals owned by good actors and not bad actors. This largely overlaps with for-profit vs not-for-profit, but not entirely. And Wiley, with a large number of society journals, remains a conundrum in this classification. This is a classic altruism type of problem though, as it costs the individual to benefit science collectively. So it's not easy to predict that scientists will change soon. Just exactly how bad will the fever have to get before it breaks? In the meantime, if you have tenure, how much are you doing to change? Publish with society journals. Publish with good-actor companies. Don't let yourself be captured or fall into brand extensions. Don't fall for the impact factor ruse (IF is a journal metric, not a paper metric – highly cited papers appear in low-IF journals and the vast majority of papers in high-IF journals are barely cited).
  4. Support disruptive efforts – don't just stay away from some journals. Support ones that are trying to disrupt the publishing ecosystem for the benefit of academics: Public Library of Science (PLoS), PeerJ, the unfortunately defunct Axios, and specific journals like Evolutionary Ecology Research and Frontiers of Biogeography (UPDATE: as pointed out in the comments, EER seems to be winding down and PeerJ was bought by the for-profit Taylor & Francis – this shows how hard it is to break out in a new direction and why authors need to support such efforts).
  5. We've got to stop publishing so much. Quite aside from chasing high-impact, bad-actor journals, we also have to stop Red-Queening ourselves into unsustainable publication rates. We've got to start embracing slow science, where we value quality over quantity. Ever-growing output is bad for science: it becomes more and more impossible to keep up with the literature, and the average quality of a paper necessarily goes down as quantity goes up (scientist time is constant). This frenetic need to publish is part of what gives power to bad actors. And it is hardly good for work-life balance and mental health either.

What do you think? Is it as bad as I describe? Do you have a different opinion about OA? How do you think we change things?

Are there big ideas or new big questions in ecology any more?

Jeremy made a compelling case that the typical scientist produces modest contributions to the field but that this is enough (it still leaves the world better than we found it). But several commenters, while acknowledging that in a field with thousands of scientists most of us aren't going to do more than Bill Murray in Groundhog Day, still felt that a vision of science that doesn't also include some big advances was unsatisfying (me among them). So the question emerges: are there big advances happening in ecology right now? Or will there be in the immediate future?

Obviously this is a somewhat subjective question, but I think it is not entirely subjective. While you wouldn't get 100% consensus, I bet there are ideas from recent decades that could get at least 70% consensus as big ideas that changed the field. Metapopulations/metacommunities (technically emerged in 1969, but with major ripple effects for decades after, including the follow-on concept of metacommunities in the 2000s). Macroecology in the 1990s. Species distribution modelling. Functional traits. Global change ecology, or biodiversity in the Anthropocene. Biodiversity and ecosystem function (so big it has an acronym, BEF). Now some people are fans of those ideas, and some aren't. But I think most of us could agree they had a big impact on the literature and directions of research. And they all started in the late 1990s and bloomed in the decade of the 2000s (and are all still going today). So what are the new big ideas of the 2010s and 2020s? Or have we stopped generating new ideas?

First, I think it is important to distinguish big questions from big ideas. Big questions may be hot and drive the field. But they are at heart a specific question: how does x work? What is the nature of y? I think global change ecology, or biodiversity in the Anthropocene, is an example of a fairly emergent new big question – “how are humans changing biodiversity?” (definitely going back to the '00s but still going strong). Mike mentioned this as a candidate in the Groundhog post. And we are and will keep advancing on this question. But it is a question, not a new idea or framework or combination of processes. So it is not changing how people think about the ecological world and its processes outside of that question (at least yet). There are also classic big questions (e.g. why are there more species in the tropics?). These big questions certainly drive research and are important. And it is good to work on classic big questions. On some level you have to work on a big question: it's hard to get your PhD or get hired as a faculty member if you cannot tie your work to one. And don't get me wrong – big questions are exciting and motivating. But if they've been around for 100 years (the latitudinal gradient, community phylogenetics, traits, etc. all arguably go back to Darwin), then that says something (not necessarily good nor bad) about our field, and it would be good to own that ecology is a field that just keeps grinding away at a handful of major questions. So in this post, I'm really only interested in NEW big questions (without in any way devaluing longstanding big questions).

But a big idea is different. It is a new conceptual framework that lets us reorient our view of the world. The theory of island biogeography (again with its own acronym: TIBG) is one. The question was fairly mundane – why do some islands have more species than others? But the idea of TIBG (a dynamic community of species immigrating and going extinct, with various factors controlling those rates) was colossal. It spawned fields ranging from reserve design to renewed interest in species-area relationships to neutral theory. It changed the whole set of cupboards and hooks on which we slot and organize our understanding of nature. Metapopulations was a big idea too. It didn't initially answer any question, but it has also reorganized our thinking about the world (and interestingly has some real connections with TIBG). Big ideas reach far outside of where they start.

Sometimes it's hard to distinguish a big idea from a big question initially. Was community phylogenetics a big idea (species interacting in communities have a phylogenetic/macroevolutionary history that is measurable) or a big question (what is the phylogenetic structure of communities)? But as time went on, I think it emerged that it was a question, and not one with super clear answers in many cases.

I can identify plenty of candidates for big questions. Here are just a few examples (admittedly very tilted toward my own research interests):

  1. What determines the number of species that can coexist? This includes the classic why-are-there-more-species-in-the-tropics question, but also local versions. What is the role of productivity? Of dispersal? Of macroevolution? This has been around in some form since the 1800s, and if I had to put a number on it, we're about 50% of the way to answering it.
  2. What are humans doing to biodiversity? Which human impacts matter the most? What aspects of diversity are influenced the most (it's mostly not species richness)? Can we predict future outcomes of biodiversity change? Yes, you can argue this is just a more specific, applied version of #1. But there are reasons it stands alone too.
  3. What determines a species' geographic range? The range is a fundamental unit we have talked about for more than a century, yet we have next to no theory linked to tests on this topic. Also around since the 1800s, but I would say we're only 10-20% of the way to answering it.

But only one of those big questions (#2) is NEW instead of classic. And I already identified some big ideas that emerged in the '00s (functional traits, neutral theory, BEF, community phylogenetics). But I can't identify any big ideas from the 2010s or 2020s (again, process-based frameworks that change how we perceive the world). And other than biodiversity in the Anthropocene, and maybe some questions related to mutualism and disease, which were woefully understudied until recently, I'm not sure I can identify emergent new big questions either.

There are a number of theories about why this might be:

  1. Ecology has picked the low-hanging fruit, and new big questions and big ideas are going to become increasingly rare as time goes forward. Just look at how the scope of questions in physics has narrowed over time. This is good – science should have an endpoint.
  2. There are big ideas and new big questions that I'm missing. Maybe I'm missing them because I'm a single individual with a narrow perspective and they're right in front of me. Or maybe it takes the whole field some time, and the benefit of the rear-view mirror, to identify big ideas and new big questions. I doubt anybody who read Levins' 1969 paper on metapopulations the year it came out thought it was going to change the world. I think of this as an optimistic point of view – there are still big questions and ideas, you just can't see them when you're in the middle.
  3. Ecology still has big ideas and new big questions to come, but we are in a local suboptimum where incentive structures, obsession with statistics or big data, 3-year cycles of grants and 5-year cycles of PhDs, the exponential growth of scientists and papers, or other systemic factors are making us less efficient at identifying new big ideas and big questions; if we fix that, we can return to the earlier days. (Obviously, if this is the case, it is really important to identify and fix the systemic constraints.)
  4. Ecology is producing new big ideas and big questions at the same rate as the 1950s-1970s so there is no need for angst. Just chill. Because big ideas last for decades, they look bigger in the rear view mirror than when they’re still emerging. But arguing against this, there is some quantitative evidence that research as measured at the unit of a journal paper is less disruptive.
  5. Nothing is new under the sun. Ecology just circles over the same territory again and again (hopefully in a spiral, not a circular rut!). Metacommunities were on the cover of Andrewartha and Birch's textbook in the 1950s. BEF was a concern of Elton's. TIBG was mostly just species-area relationships (Arrhenius in the 1920s) and dynamic community structure driven by immigration and local extinction (paleontology since the 1800s, island biogeography since the early 20th century, naturalists forever). SDMs are just a fancier stats version of Grinnell's 1917 paper on the niche of the California Thrasher. The frontier of the study of plant competition in 2000 was 13/14ths, or 93%, already tackled with similar general conclusions by Clements' 1929 book Plant Competition (see middle of this post). Newness exists only in the minds of new up-and-coming researchers who didn't live through it last time. To be really blunt, newness is just ignorance of the past. So get used to classic questions and classic big ideas periodically reemerging. This is a version of Jeremy's Groundhog vision on steroids!

So. Does ecology have NEW big ideas or big questions (where I define NEW as having fully gained steam in the 2010s or 2020s, with initial papers perhaps a decade older)? What are your candidates? I'm really curious to see your suggestions in the comments. Or, if we don't have new big ideas and big questions, why not? Which of theories 1-5 above do you favor?

The 13 emotional stages of being an academic

Many readers will be familiar with Kubler-Ross's five-stages-of-grief model. The research supporting it is mixed, and grief is way too complex to be so nicely linear. Yet many people find comfort in it, I think for two reasons. First, it implies that there is a process or sequence and that you will move forward to something different (even if also hard) and won't stay stuck forever in your current state – which it often feels like you will when you are in it. Second, it normalizes that these are very common, highly shared, even universal experiences; if a stranger can come so close to describing my experience, I feel less alone.

Now obviously I'm no Kubler-Ross, and a career is probably not as important as some of the things people grieve over, but I am going to attempt to do the same thing for the stages of an academic career. Let me know how close you think I came (or how badly I missed).

  1. Grad school years 1 & 2 – excitement and do I belong – This is so cool! But everybody else is so brilliant and I’m just trying to fake it.
  2. Grad school years 3-4 – this is endless – OK, I kinda belong. But everybody else has research results and my project is taking soooo long. I’m going to be the first graduate student in history to take 20 years to finish. And oh by the way what the heck am I doing with my life?
  3. Grad school years 5-6 – too busy to care – Sure I’m going to graduate. I’m doing good work, have gotten positive feedback, and I know the profs don’t want to flunk me. But I don’t really even have time to worry. Sooo busy finishing up and applying for postdocs. Who knew I had a 17th gear and could work this efficiently? Just an occasional flash of angst about getting a postdoc.
  4. Postdoc – is this new or the same? – On the one hand: wow, time to think, breathe and focus on new research. On the other hand: the clock is ticking on finding a permanent job. But I've been there before. I've got this.
  5. Faculty job years 1-2 – this is definitely NEW – OMG. How come nobody told me teaching a new class can take 20 hours a week? How come nobody taught me how to teach? Why is everybody from students to admin assistants looking at me like I know what I’m doing? I have to fill out how many forms to hire an undergrad or purchase a $5 notebook? Really?! Does every faculty member work 60 hours a week for the rest of their lives? If not how do they do it? (hint: they don’t)
  6. Faculty job year 3 – phew – OK, things are slowing down and becoming manageable. I know what I'm doing. I'm repeat-teaching classes. I'm going to make it. This might be the most relaxed I've been since I started grad school. Being a professor is awesome!
  7. Faculty job years 4-5 – tenure is coming! – I'm doing alright, but the time to submit my tenure packet is now measured in months! I'm not starting up anymore. I'm supposed to be producing. Gulp! Time to write, write, write those papers/grants. Stress!
  8. First years with tenure – now what? – I feel like the greyhound on the race track chasing the mechanical bunny around and around and now the bunny is gone. What am I supposed to chase now? And I’m also just as exhausted as if I’ve been running forever around the track – I need a break. What do I want to be when I grow up? What parts of my job do I enjoy and what parts do I hate? How can I come up with my own definition of success?
  9. Around year 10 tenure track – mid-career, really?! – Wait am I still early career or am I now mid-career? Where did the time go? Am I spending my time wisely/the way I want? And am I an up and coming charger, or am I part of the establishment?
  10. Full professor – now what x2!? – literally the last hoop in front of me is gone. My only instructions from above are “be a good professor serving the tripartite mission (teaching, research, service)”. What the heck does that even mean? And I now know the better question is internally driven: What do I want it to mean? An existential crisis on a broader deeper scale than tenure (but slightly less angsty – time is on your side).
  11. Around 15 years tenure track – stop calling me a senior ecologist! – yeah, that whole mid-career thing (#9) was fooling yourself. It doesn't exist in other people's minds. You've been senior faculty for a while now and you're only catching up to that fact. The higher administration on campus knows you and your strengths. Your students will perceive you as old and very much part of the establishment. You will get asked if you want to go into administration. You will have to think through whether you want to go into administration. Producing papers starts to become rote. You start to wonder whether the world is really improved very much by yet another paper from you or another graduate student trained by you. Is it really just a numbers game? Keep producing more widgets? Maybe there should be numerical limits on a career so we can focus on quality instead of quantity?
  12. Around 25 years tenure track – I'm old – there are very few people still working who have been around longer than I have. I'm super efficient at doing things, but I feel out of touch with the energy and sense of possibility of early career people. Every administrative drama that used to seem so important now feels like “seen that before; I'll just wait it out until it goes away”. I've seen intellectual ideas become hot, fade away, and even return again. What is truly new and novel? Many of my friends/colleagues who were just a few years ahead of me are talking about or taking retirement. I've already reinvented myself a couple of times. Do I have another reinvention in me? Or do I want to keep doing more of the same? If the latter, how do I avoid making it feel like playing out the clock?
  13. Retirement – that went fast! – it’s a bit of a different experience in N America where they can’t force you to retire at 65 and some parts of Europe where they can. But to my observation, academics fortunately have an enormous range of ways to define retirement. a) stop teaching and admin and service and just keep doing research full steam, b) full stop – wrap up a couple of papers and start hiking/fishing/grandchildren-ing/etc, c) somewhere in between – keep an office, go in a few hours a day and read a lot, write a few papers you always wanted to write, but take a lot of breaks and travel. None of those sound bad!

OK. Just like Kubler-Ross, I have on some levels done violence to individual stories by boiling things down so linearly and simply. The exact timing and even ordering of events/moods in the later stages (through #12) become a bit fuzzy. But hopefully you identify with a lot of this. And hopefully it gives you a bit of a sense of not being stuck, of normalizing what you’re feeling at your stage, and maybe even a bit of a roadmap to get a head start on thinking about the future. And I imagine almost every one of these stages could be blown out into a full post. One of the reasons I wrote this post is as an opening frame for some posts on the post-tenure stages that I am planning to write. But let me know if there are other stages that you think need more addressing.

What do you think I got right? What do you think I got wrong? I’m well aware this is focused on an academic career. For those of you who went into government or NGO or admin how does it differ?

It’s time to stop making postdocs move for just two years

It’s pretty widely recognized that although academia can be a great job, one of the biggest structural downsides is how often you have to move to get yourself established. Some of those moves (to go to grad school, to get a tenure track job) are fairly hard to fix due to the sparsity of universities across the landscape (compared to, say, elementary schools, hospitals, corporations and even government jobs). I am not saying we shouldn’t try to tackle those issues, too, but I am not addressing them here.

The one requirement to move that I think we can, and now should, change is moving for a postdoc. This typically involves picking up and leaving the place where you’ve spent around six years in graduate school and had time to put down roots and build a community, then picking up and moving again in just two (sometimes three or four) years, ideally to a tenure track job where you can again put down roots and build a community. And if not for a tenure track job, then most likely for another two-year postdoc. Being new to a community and knowing you’re moving on in just a couple of years makes it really tempting not to engage with the community at all, which is really unhealthy.

I remember well contemplating my postdoc move. My son had been born two years before, and we had an incredibly tight community of families with children of the same age. And we knew a postdoc was just for two years. I remember the conversations. Maybe my wife and son should stay in Tucson and I should airplane commute? (But that meant missing early years of my son’s life, and it was financially unaffordable and generally miserable.) Maybe I should take a job two hours up the road at the next university over, and we could come back and visit a lot (but would we really, and would that be better or worse than settling in where we were?). And worst was knowing that we’d have to move again in two years, and that I would be unable to get a job at whatever institution I did a postdoc at. We had lots of conversations about finding a place that would be nice to live, but not so nice that it was on our top 10 list of places we wanted to end up (as an aside, we were extremely happy in East Lansing, and I later ended up applying for a job to return there, so we estimated incorrectly). If all of these thought processes strike you as screwed up and on a deep level wrong, I agree. That is the system we have today. And although the story might be slightly different for, say, somebody who was single, the same issues of roots and community and having to move again all apply.

However, I would submit that increasingly for years, and for sure post-COVID, there is no real need to make somebody pick up and move from a place they have roots just to be physically in your lab for two years. The ability to interact via zoom (skype in the old days) has improved exponentially, and, if we’re honest, being physically present on campus is not what it once was. A lot of those people you might bump into and have those treasured intangible “random productive conversations” aren’t on campus the same time you are now anyway.

So let me put out what I’m sure will be a controversial claim: we should now default to not requiring people to move to a new location for a postdoc of two or three years. Are there certain special circumstances (e.g. heavily lab driven work) that might require it? – sure. And will some postdocs desire to move and will it be better for everyone if they do move? – sure. But when I run an ad for a typical ecology postdoc (2 years, work on computers and/or at field sites not that close to the office), should I be planning to force that person to uproot their lives and move to where I am? I would suggest no.

I want to be clear that I recognize a remote postdoc has costs in terms of lost interactions. But so does moving and losing community. It’s not a question of “perfect in every way”; it’s the real world of balancing pros and cons. And I know that for many PIs this is hard to swallow – “it’s my money, I should hire somebody who is going to get me the best return”. Again, there will be some costs to the PI too. Probably most importantly, a bit of a loss of mentorship of grad students by postdocs. But again, it is a tradeoff of costs and benefits. There are ways to minimize the costs. And the PI gets to hire the best available person instead of the one who is willing and able to move. The PI also gets to change a system that probably caused a hard few years in their own life too.

Maybe important to note that I am not just speaking hypothetically. My last three postdocs (one funded on their award, two funded on “my” grants) were remote postdocs. And all have worked out excellently. So I want to offer my thoughts on what makes a remote postdoc work well.

When I hire a postdoc, and they ask to work remotely, I have 3 basic requirements for me to say yes:

  1. They have to come spend six weeks to three months on campus early in their postdoc just to put faces to the names of some of the other faculty and the administrative assistants who will make their life easier, to learn the layout of the campus, and to perform those primate social bonding rituals of having meals together, cracking jokes together, etc. I recognize this has some costs, but again we’re in a world of tradeoffs. Six weeks, spread out over two visits, is a world different from two whole years.
  2. They have to find a lab near where they live that they can have a desk and participate in. Sometimes this can be their PhD lab. If not, I often can help facilitate this (and every colleague I’ve reached out to requesting help with this has been more than happy to accommodate – it’s win-win). Planning only to work from a coffee shop or home 100% of the time is only going to work out for a small subset of people. Having a professional office and people you see face to face is important. And it only adds to the intellectual cross-pollination. I would say my postdocs have used this to varying degrees, but I’ve never regretted the effort to get it arranged.
  3. We will meet every week on zoom. For reference I normally meet with all my grad students and postdocs once a week. But when they’re remote there is a higher premium on it being every week. If they’re local we can say “let’s cancel this week” and know that we’ll still have random conversations about life outside of work and that they can come down the hall and knock on my door if they get stuck. Sticking to the weekly meetings (while not perfectly achievable) becomes more important if somebody is remote. In addition to weekly one-on-one meetings, the postdoc will also attend lab meetings, departmental seminars, etc remotely so they continue to have a shared culture and meet new people coming in to the lab, etc. The postdoc also needs to attend conferences that I and my lab attend.

If you’re seriously thinking about how to make a remote postdoc work, also be sure to check out this “10 steps” paper by Burgio et al 2020.

I have found that with these three rules/shared understandings/compromises in place, remote postdocs work almost as seamlessly as in person postdocs. We have a similar level of comfort and collegiality. Probably the acid test of that is the ability to say no to each other. And interactions can and do spill over into working with graduate students and other faculty on campus. I would say all my postdocs have ended up doing significant mentorship of graduate students despite being remote. And in general with the careful forethought described above, it ends up only being a small cost on the interaction front, but the personal benefits for the postdoc are huge (and I’ve been able to hire my first choice every time).

So what do you think? Is it time? Are there too many problems? Is this an unfair expectation for somebody who is paying for the postdoc?

Prediction Ecology Conference

I have spoken frequently on this blog about the need for ecology to become a more predictive science (e.g. Post #1 Post # 2 Post #3 Post #4 Post #5). For those of you who are interested in this topic, there is a great Gordon Conference on this topic coming up June 4-9 (https://www.grc.org/predictive-ecology-conference/2023/). The speaker lineup looks fantastic. Register soon (applications due May 7). The registration fee may seem high, but recognize that it includes room and board – all Gordon Conferences are held on a college campus, with attendees sleeping in the dorms and eating in the dining hall to ensure a high degree of interaction among the attendees. They’re also usually small (100-150 people). They’re one of my favorite formats. If you’ve never experienced a Gordon Conference, this is a great chance.

The death knell for the “one right way” approach to statistics? and the need for Sturdy Statistics

Last week Jeremy linked to yet another study where expert researchers (social scientists in this case) were asked to analyze the same dataset. The key findings were: a) that the experts had broad disagreements about the best way to analyze the data, and b) that these differences were consequential in leading to totally different outcomes (positive, negative or no statistically significant effect). This should hardly be news; earlier studies have found this in another social science dataset and in fMRI scans in neurobiology.

You really have to pause and let that sink in. Established researchers each think a different method of analysis is best, and these different methods give completely different, even opposite, answers for the same data on the same question. Even controlling for subtly different questions or experimental designs, the answer you get depends entirely on which person you give the data to for analysis!

This should be the death of the “one right way” approach to statistics

It is hard to read these studies any other way than as a major blow to the view/approach that there is one right way to do statistics. Or at least it should be. I assume some will continue to think that there really is “one right way” (theirs) and that the variance occurs because most of the researchers (everybody else) are just plain wrong. But that is a bit too egocentric and lacking in perspective to my mind. I know I’ve offended people over my blogging history on this point (which has not been my intent), but I just find it really impossible to accept that there is only one right way to analyze any statistical problem. Statistics are (probabilistic) models of reality, and it is impossible to have a perfect model; thus all models involve tradeoffs. And these new studies just feel like a sledgehammer of evidence against the view that there is one right way (even as they make us aware that it is really inconvenient that there is not just one right way).

When you look at the differences among researchers’ approaches reported in the studies, they’re not matters where one could dismiss an approach as being grossly wrong. They’re the kinds of things people debate all the time. Log transform or square root transform or no transform (yes, sometimes log or no-log is clearly better, but there are a lot of data sets out there where neither is great and it is a matter of judgment which is better – I’ll give an example below). Or OLS vs. logistic vs. other GLM. Multivariate regression vs. principal component regression vs. regression tree. AIC vs. automated vs. researcher variable selection. Include a covariate or leave it out. And so on. There is no such thing as the “one true right way” to navigate these. And as these meta-studies show, they’re not so trivial that we can ignore these differences of opinion either – conclusions can change drastically. So, again, these results really should give us pause. Ninety percent of our published articles might have come to an opposite conclusion if somebody else did the stats, even with the same data! (And “one person was smart and the other dumb” is not really a viable explanation.)
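
To make the forking concrete, here is a minimal R sketch (illustrative only – the mtcars data and the hp-predicts-mpg question are my stand-ins, not taken from any of the meta-studies) that runs one question down four defensible analysis paths of the kinds listed above:

```r
# One question ("does hp predict mpg, controlling for weight?") run down
# four defensible analysis paths. Whether the paths agree is an empirical
# question that differs dataset by dataset.
data(mtcars)

m1 <- lm(mpg ~ hp + wt, data = mtcars)            # no transform
m2 <- lm(log(mpg) ~ hp + wt, data = mtcars)       # log transform
m3 <- glm(mpg ~ hp + wt, data = mtcars,
          family = Gamma(link = "log"))           # GLM alternative
m4 <- lm(mpg ~ hp, data = mtcars)                 # drop the covariate

# Compare the p-value of the focal coefficient (hp) across paths
ps <- sapply(list(m1, m2, m3, m4),
             function(m) summary(m)$coefficients["hp", 4])
print(round(ps, 4))
```

Here all four paths happen to tell a similar qualitative story; the point of the meta-studies is that on weak-effect datasets they often would not.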

Is ecology and evolution different?

Or maybe the ecology literature is safe? For those of us in ecology and evolution, our time to find out is coming. A similar study is underway right now. I suppose logically there could be two outcomes. 1) Same results as previous studies – total researcher-dependent chaos. 2) Different results – the chosen question and dataset have a strong answer, and lots of different methods recover the same answer (qualitatively – small changes in effect sizes and p-values are OK). A lot of people in response to Jeremy’s question of what to do about these studies seemed to be really thinking (hoping?) that ecology was different and would come out with outcome 2.

Personally, I doubt it. I don’t think fields are that different. Different questions within a field are the important difference. All fields sometimes chase large effect sizes, which will give outcome 2 (when you can see the pattern visually in the data, methods aren’t going to change the story), and sometimes fields chase small effects, which will give outcome 1 (when the effect sizes are small and you have six control variables, it matters a lot how you analyze the data). But here’s the key: we don’t know, after we’ve completed our study with a single analysis path, whether our question and results are in outcome 1 (different paths would give different answers) or 2 (different paths would give similar answers). If we knew that, we wouldn’t have done the study! Sometimes studies of weak effects come up with an estimate of a strong effect, and sometimes studies of a strong effect come up with an estimated weak effect. So trying to use statistics to tell us whether we are in 1 or 2 is circular. This is a really key point – it might seem that the only way to tell if we are in 1 or 2 is to do some giant metastudy where we get a couple of dozen researchers to analyze the same question on the same dataset. That hardly seems practical! And the study being done on evolutionary ecology and conservation ecology questions could end up either in 1 or 2 (in my hypothesis, depending on whether they are giving researchers weak-effect or strong-effect datasets/problems), so that is not a comprehensive guide for all of ecology and evolution. What we really need is a meta-meta-study that does several dozen of these meta-studies and then analyzes how often 1 vs. 2 comes up (Jeremy has these same thoughts). I’m willing to bet pretty heavily that ecology and evolution have publications both that are safe (scenario 2) and completely dependent on how the analysis was done (scenario 1). In my own research in macroecology, I have been in scenarios where 1 is true and in scenarios where 2 is true.

Couldn’t individual authors just explore alternative analysis paths?

If we can’t afford to conduct a meta-analysis with a few dozen researchers independently analyzing each data set for each unique question (and we surely can’t!), then what alternative is there? There is an obvious alternative. An individual researcher can explore these alternatives themselves. A lot of researchers already do this. I bet every reader of this post has at one time tried with and without a log-transform or OLS vs. GLM assumptions on residuals. And, nominally, a strong majority of ecologists think such robustness checks are a good thing according to Jeremy’s poll. So it’s hardly a novel concept. In short, yes, it is clearly possible for a single author to replicate the main benefits of these meta-analyses by individually performing a bunch of alternative analysis paths.

But there is a deep aversion to doing this in practice. It is labelled with terrible names like “p-hacking” and the “garden of forking paths” (a name borrowed from Borges’s short story). I know in my own experience as a reviewer, I must have had a dozen cases where I thought the outcome reported was dependent on the analysis method and asked for a re-analysis using an alternative method to prove me wrong. Sometimes the editor backs that request up (or the authors do it voluntarily). But a majority of the time they don’t. Indeed, I would say editors are much more receptive to “the stats are wrong, do them differently” than to “the stats are potentially not very sturdy, try it a 2nd way and report both”.

Thus even though it seems we kind of think such single author exploration of alternative analysis approaches is a good idea in a private poll, it’s pretty murky and cloaked in secrecy and disapproval from others in the published literature and peer review process.

And there are of course some strong reasons for this (some valid, some definitely not):

  1. The valid reason is that if an author tries 10 methods and then picks the one with the most preferred results and only reports that, then it is really unethical (and misleading and bad for science), although in private most scientists admit it is pretty common.
  2. The invalid reason is that doing multiple analyses could take a seemingly strong result (p<0.05 is all that matters right?) and turn it into a murky result. It might be significant in some analyses and not significant in others. What happens if the author does the requested robustness check by analyzing the data a 2nd way and loses statistical significance? This is a really bad, but really human, reason to avoid multiple analyses. Ignorance is NOT bliss in science!

So how do we stay away from the bad scenario (reason 1 above) while acknowledging that motive 2 is bad for science in the long run (even if it is optimal for the individual scientist in the short run)?

Well, I think the solution is the same as for exploratory statistics: take it out of the closet, celebrate it as an approach, and brag about using it! If we’re supporting and rewarding researchers using this approach, they’re going to report it. And scenario 1 goes away. Unlike exploratory statistics, which at least had a name, this statistical approach has been so closeted it doesn’t even have a name.

Sturdy statistics – a better approach to statistics than “one true way”

So I propose the name/banner that I have always used in my head: “sturdy statistics”. (Robust statistics might be a better name, but that has already been taken over for a completely different context of using rank statistics and other methods to analyze non-normal data.) The goal of sturdy statistics is to produce an analysis that is, well, sturdy! It stands up against challenges. It weathers the elements. Like the folk tale of the Three Little Pigs – it is not a house/model made of straw that blows over at the first big puff of challenge (different assumptions and methods). I seek to be like the pig whose statistics are made of brick and don’t wobble every time a slightly different approach is used, and – important point – not only am I not afraid to have that claim tested, I WANT it tested.

A commitment to sturdy statistics involves:

  1. Running an analysis multiple different ways (an experienced researcher knows what alternative ways will be suggested to them, and we can help graduate students learn these).
  2. If the results are all qualitatively similar (and quantitatively close), then, great! Report that the analyses all converged, so the results are really, truly sturdy.
  3. If the results are different then this is the hard part where commitment to ethics come in. I think there are two options:
    1. Report the contrasting results (this may make it harder to get published, but I’m not sure it should – it would be more honest than making results appear sturdy by only publishing one analysis path thanks to shutting off reviewers who request alternative analysis paths)
    2. A more fruitful path is likely digging in to understand why the different results happened. This may not pay off, essentially leaving you at 3a. But in my experience it very often leads to deeper scientific understanding, which can then lead to a better article (although the forking paths should be reported, they don’t have to take center stage if you really figure out what is going on). For example, it may turn out the result really depends on the skew in the data and that there is interesting biology out in that tail of the distribution.
  4. As a reviewer or editor, make or support requests for alternative analyses. If they come back the same, then you know you have a really solid result to publish. If the authors come back saying your suggestion gave a different answer and we now understand why, then it is an open scenario to be judged for advancement of science. And if they come back different, well, you’ve done your job as a reviewer and improved science.
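
A minimal sketch of what that commitment might look like in R (the helper name `sturdy_check` and the mtcars example are my own invention, not an existing package – just one way to make qualitative agreement across paths explicit):

```r
# Hypothetical helper: given a named list of fitted models, report the
# estimate and p-value of one focal term, plus whether the sign of the
# estimate agrees across all the analysis paths.
sturdy_check <- function(models, term) {
  out <- t(sapply(models, function(m) {
    cf <- summary(m)$coefficients
    c(estimate = cf[term, 1], p = cf[term, 4])
  }))
  data.frame(out, same_sign = length(unique(sign(out[, "estimate"]))) == 1)
}

# Example on built-in data: two reasonable paths for the same question
data(mtcars)
fits <- list(raw = lm(mpg ~ wt + hp, data = mtcars),
             log = lm(log(mpg) ~ wt + hp, data = mtcars))
print(sturdy_check(fits, "wt"))
```

The same one-line summary could then go straight into a paper’s supplement as the honest record of the paths tried.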

Sturdy statistics – An example

I’m going to use a very contrived example on a well-known dataset. Start with the Edgar Anderson (aka Fisher) iris dataset. It has measurements of sepal length and width and petal length and width (4 continuous variables) as well as species ID for 50 individuals in each of 3 species (N=150). It is so famous it has its own Wikipedia page and peer-reviewed publications on the best way to analyze it. It is most classically used as a way to explore multivariate techniques and to compare/contrast e.g. principal component analysis vs. unsupervised clustering vs. discriminant analysis, etc. However, I’m going to keep it really simple and parallel to the more common linear model form of analysis.

So let’s say I want to model Sepal.Length as a function of Sepal.Width (another measure of sepal size), Petal.Length (another measure of overall flower size, and specifically length), and species name (in R: Sepal.Length~Sepal.Width+Petal.Length+Species). As you will see, this is a pretty reasonable thing to do (r2>0.8). But there are some questions. If I plot a histogram of Sepal.Length it looks pretty good, but clearly not quite normal (a bit right-skewed and platykurtic). On the other hand, if I log-transform it, I get something else that is not terrible but platykurtic, bimodal and a bit left-skewed (by the way, Box-Cox doesn’t help a lot – any exponent from -2 to 2 is almost equally good). One might also think including species is a cheat or not, so there is a question about whether that should be a covariate. And of course we have fixed vs. random effects (for species). I can very easily come up with 6 different models to run (see R code at the bottom): simple OLS as in the formula already presented; the same but with Sepal.Length log-transformed; the same as OLS but with species removed; species treated as random instead of fixed (one would hope that is not too different); or a GLM with a gamma distribution, which spans normal and lognormal shapes (the default link function for gamma is log, but maybe it should be identity, giving two more models). And tellingly, you can find the iris data analyzed most if not all of these ways by people who consider themselves expert enough to write stats tutorials. Below are the results (the coefficients for the two explanatory variables – I left out species intercepts for simplicity – and r2 and p-values where available, i.e. not for the GLMs).

Results of a sturdy analysis of the iris data predicting Sepal Length

What can we make of this? Well, Sepal Width and Petal Length both covary pretty strongly and positively with Sepal Length and combine to make a pretty predictive (r2>0.8) and highly statistically significant model. That’s true in any of the 6 analyses. Log transforming doesn’t change that story (although the coefficients are a bit different and remain so even with back-transforming, but that’s not surprising). Using Gamma-distributed residuals doesn’t really change the story either. This is a sturdy result! Really the biggest instability we observe is that the relative strength of Petal Length and Sepal Width changes when species is or isn’t included (Petal Length appears more important with species, but Sepal Width is relatively more important without species*). So the relative importance of the two variables is conditional on whether species is included or not – a rather classic result in multivariate regression. If we dig into this deeper, we can see that in this dataset two species (virginica and versicolor) are largely overlapping (shared slope and intercept, at least), while setosa has a higher intercept but similar slope vs. Sepal Width; vs. Petal Length, though, the slope for setosa also differs substantially from the other two, so slope estimates vary depending on whether you control species out (and maybe a variable-slope-and-intercept model should be used). So that one weak instability (non-sturdiness) is actually pointing a bright red sign at an interesting piece of biology that I might have ignored if I had only run one analysis (and at additional statistical directions I am not going to pursue in an already too long blog post). This is simultaneously a sturdy result and a case where a sturdiness analysis caused me to dig a bit deeper into the data and learn something biologically interesting. Win all around!
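
The dig-deeper step can be sketched in R with a standard interaction model (this is one way to do it; I’m only testing species-specific slopes for Petal.Length here):

```r
# Dig into the instability: allow slope as well as intercept to vary by
# species, and test whether the species-specific slopes improve the fit.
data(iris)
madd <- lm(Sepal.Length ~ Petal.Length + Species, data = iris)   # common slope
mint <- lm(Sepal.Length ~ Petal.Length * Species, data = iris)   # per-species slopes

print(anova(madd, mint))  # F-test: are species-specific slopes warranted?
print(coef(mint))         # setosa is the reference level; the interaction
                          # terms show how the other two slopes differ from it
```

If the interaction terms matter, that is exactly the kind of biologically interesting non-sturdiness described above.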

And in a peer review context, that exercise hopefully saves time (reviewers not needing to request additional analyses), is fully transparent on the analyses done (no buried p-hacking), and draws convincing conclusions that leave science in a better place than if I had just chosen one analysis and doubled down on insisting it was right.

Conclusions

TL;DR: Sometimes the answer to a question on a dataset is sturdy against various analysis approaches. Sometimes it’s not. We can’t know a priori which scenario we are in. The logical solution to this is to actually try different analyses and prove our result is “sturdy” – hence an inference approach I call “sturdy statistics”. To avoid this turning into p-hacking it is important that we embrace sturdy statistics and encourage honest reporting of our explorations. But even if you don’t like sturdy statistics, we have to get over the notion of “one right way” to analyze the data and come up with a solution to finding out if multiple, reasonable analysis paths lead to different results or not, and what to do if they do.

What do you think? Do you like sturdy statistics? Do you already practice sturdy statistics (secretly or in the open)? Do you think the risk of sturdy statistics leading to p-hacking is too great? Or is the risk of p-hacking already high and sturdy statistics is a way to reduce its frequency? What needs to change in peer review to support sturdy statistics? Is there an alternative to sturdy statistics to address the many, many reasonable paths through analysis of one data set?

*NB: to really do this just by looking at coefficients I would need standardized independent variables, but the mean and standard deviation of the two variables are close enough and the pattern is strong enough and I am only making relative claims, so I’m going to keep it simple here.

R Code

data(iris)
str(iris)

#simplest model
mols<-lm(Sepal.Length~Sepal.Width+Petal.Length+Species,data=iris)
# log transform sepal length?
mlogols<-lm(log10(Sepal.Length)~Sepal.Width+Petal.Length+Species,data=iris)
#role of species as a covariate?
mnosp<-lm(Sepal.Length~Sepal.Width+Petal.Length,data=iris)
#species as random instead of fixed (shouldn't really differ except d.f.)
library(lme4)
mrndsp<-lmer(Sepal.Length~Sepal.Width+Petal.Length+(1|Species),data=iris)
#Gamma residuals (a good proxy for lognormal)
# with the default log link
mgamlog<-glm(Sepal.Length~Sepal.Width+Petal.Length+Species,
             data=iris, family=Gamma(link="log"))
#Gamma residuals with an identity link (no log)
mgamident<-glm(Sepal.Length~Sepal.Width+Petal.Length+Species,
               data=iris, family=Gamma(link="identity"))
# Is Sepal.Length better log transformed or raw?
hist(iris$Sepal.Length)
hist(log(iris$Sepal.Length))
#hmm not so obvious either way
#do these choices matter?

#pick out relevant pieces of result from OLS, GLM, or lmer objects
#return a row in a data frame
report1asdf <- function(mobj) {
  co=coef(mobj)
  if (!is.numeric(co)) {co=as.numeric(co$Species[1,]); co[1]=NA} #lmer: coef() returns per-species rows; take one and blank the varying intercept
  s=summary(mobj)
  #handle GLM with no p/r2
  if (is.null(s$r.squared)) s$r.squared=NA
  if (is.null(s$fstatistic)) s$fstatistic=c(NA,NA,NA)
  data.frame(
    #CoefInt=co[1],
    CoefSepW=co[2],
    CoefPetL=co[3],
    r2=s$r.squared,
    p=pf(s$fstatistic[1],s$fstatistic[2],s$fstatistic[3],lower.tail=FALSE)
  )
}

#assemble a table as a dataframe then print it out
res<-data.frame(CoefSepW=numeric(),CoefPetL=numeric(), r2=numeric(),p=numeric())
res<-rbind(res,report1asdf(mols))
res<-rbind(res,report1asdf(mlogols))
res<-rbind(res,report1asdf(mnosp))
res<-rbind(res,report1asdf(mrndsp))
res<-rbind(res,report1asdf(mgamlog))
res<-rbind(res,report1asdf(mgamident))
row.names(res)=c("OLS","OLS Log","OLS No Sp.","Rand Sp","GamLog","GamIdent")
print(res)

PhD and postdoc openings in McGill lab at UMaine and postdoc opening in Niles and Gotelli labs at UVM

First the ad for a PhD and Postdoc at UMaine

Two positions are open to work in Brian McGill’s lab at the University of Maine as part of a larger group of eight faculty in Maine and Vermont on a large grant: Barraccuda (Biodiversity and RuRal communities Adapting to Climate Change Using Data Analysis) (OK the acronym is a stretch and misspelled, but it gets the main ideas across!). We are a team of ecologists, social scientists and spatiotemporal data scientists. Goals include: 1) modelling the response to climate change in birds, trees, certain crops, and zoonotic hosts of certain diseases, 2) understanding how rural communities will adapt to the changing environment, 3) improving the toolset for ecologists and social scientists working with spatiotemporal data, and 4) learning how to better communicate these complex results to stakeholders (especially in agriculture). Our subteams are organized around these four themes.

  • The PhD position will have as its primary focus working on theme 1, modelling responses of organisms to climate change. Funding is for one year with extensions possible for two more years depending on satisfactory performance and funds availability. There would also be teaching assistantships and the possibility for other grants to fill in up to 5 years of funding total. Requirements include a bachelor’s in ecology or a related field and either existing data science skills or a strong desire to learn them. Opportunities to work and learn in other areas of the project also exist. An MSc is beneficial but not required. Stipend is $26K/year with annual cost of living increases plus coverage of tuition and health care. Start date is summer 2021.
  • The postdoc position will be more integrative and would work across all four themes of the project (ecological modelling, social sciences, stakeholder engagement and data science). This position would also work closely with the Waring lab at UMaine. Again funding would be for one year with extensions possible for two more years depending on performance and funds availability. Requirements include a PhD in a relevant field (e.g. ecology or related, social sciences related to rural communities, or data science or related) and a strong desire to learn and work in the other two areas. The ability to work both collaboratively and independently is also essential. Salary is $48-55K/year commensurate with experience and with annual cost of living increases plus a strong package of benefits according to the UMPSA agreement including healthcare and retirement plan contributions. Desired start date is summer 2021 (although earlier or slightly later is negotiable).

The University of Maine is located in Orono, ME which has a low cost of living, supports a walkable/bikable lifestyle, and has exceptional access to the outdoors ranging from a river, a lake and a trail network in town to national parks and wilderness not far away. Being part of the Bangor Metropolitan area and a university also results in good access to cultural events and services like health care, restaurants and shopping. We have an airport with direct connections to most East Coast cities and are a four hour car or bus ride away from Boston. We also have great K-12 schools if you are at a life stage where that matters. If you’re looking for clubbing until 2AM and eating in a different restaurant every night of the week, it might not be a fit, but most everybody else finds the quality of life excellent here (it’s pretty cute what they consider to be a “traffic jam” here).

The University of Maine is an equal opportunity employer and members of underrepresented minorities are encouraged to apply. To apply, please submit a cover letter explaining your fit with and interest in the project, along with a CV, as a single PDF to mail@brianmcgill.org. Graduate student applicants should also include a transcript (GRE scores are optional but may be submitted if the student wishes, and the same for TOEFL scores for non-native English speakers). Please note that if selected, the graduate student applicant will also need to apply to either the School of Biology and Ecology or the Ecology and Environmental Studies PhD program, but this can be done later. Review of applications will start February 19th and continue until the positions are filled. Please contact Brian McGill at mail@brianmcgill.org with questions.

And the ad for a postdoc at UVM (University of Vermont)

Post-Doctoral Position- Species Distribution Modeling of Biodiversity and Adaptation of Farmers and Rural Communities To Climate Change

The University of Vermont is seeking qualified applicants for a two-year postdoctoral position, with potential for renewal for another two years, to use species distribution modeling to understand how biodiversity, farmers and rural communities adapt to the challenges of climate change. The project includes the aggregation and development of large-scale datasets of biodiversity, farmer behavior and perceptions across US states, construction of mechanistic, spatially explicit models of range shifts with climate adaptation, and application of these models to farmer and rural community responses to climate change.

Background

Funded through a National Science Foundation grant, the research project with collaborators at University of Vermont (Dr. Meredith Niles, Dr. Nicholas Gotelli, Dr. Laurent Hébert-Dufresne) and University of Maine (Dr. Tim Waring, Dr. Brian McGill, Dr. Kati Corlew, Dr. Matthew Dube), seeks to understand how both rural human communities and species populations will respond to challenges posed by climate change [1]. The project will synthesize large amounts of data and develop new species distribution models to predict climate-driven shifts in species ranges as well as the responses and cultural adaptations of human communities. The project will also work with farmers and rural communities to understand their perspectives of the projected outcomes and responses. A successful applicant will work with a multidisciplinary team of biologists, social scientists and complexity researchers in Maine and Vermont.

Aims

The two main aims of this position are 1) to develop mechanistic, spatially explicit models of species range shifts, and 2) to develop a better understanding of the interaction of humans with biodiversity change and the ability of farmers and rural communities to adapt to climate change. This requires the assembly and analysis of species occurrence data (birds, trees, crops, and diseases) and datasets related to land use and farmer behavior. Tasks include the identification of existing public datasets; the curation, aggregation, and synthesis of multiple data types; and the generation of novel species distribution models and indicators of climate adaptation and associated behaviors. In addition, the postdoc will help to integrate these data with evolutionary models of cultural adaptation to climate change and engage with agricultural and rural communities, including in the presentation of results to diverse stakeholders and policy makers.

Position

The position is one of five new hires that form the core of the four-year research project funded by the National Science Foundation. The postdoc will be co-advised at the University of Vermont by Dr. Meredith Niles (www.meredithtniles.com) in the Food Systems Program of the Department of Nutrition and Food Sciences, and Dr. Nicholas J. Gotelli (http://www.uvm.edu/~ngotelli/homepage.html) in the Department of Biology. The Niles and Gotelli labs have a strong commitment to interdisciplinary research, biodiversity modeling, food systems science, and open access principles. Salary range will be $48,000-$52,000, depending on experience. There are a number of generous benefits associated with the position, which can be found at: https://www.uvm.edu/hrs/postdoctoral-associates-benefits-overview. The postdoc will also have opportunities for professional development and travel associated with the project, as relevant, as well as engagement with other professors on the project, especially Dr. Tim Waring, Dr. Laurent Hébert-Dufresne, and Dr. Kati Corlew.

Requirements

Essential

  • Successful completion of a PhD in a relevant field of biology, social science, or data science
  • Demonstrated research and academic excellence evidenced by existing publications in relevant topics
  • Experience constructing, fitting, testing, and comparing species distribution models with species occurrence data
  • Excellent data science and social science quantitative skills
  • Experience with data aggregation and curation, especially across diverse types of datasets
  • Significant experience with Python and familiarity with other languages such as R, SQL, Stata, etc.
  • Excellent communication skills and ability to work with an interdisciplinary team across multiple institutions
  • Self-directed and ability to lead projects and learn new skills
  • Mature, organized, professional and courteous

Desired

  • Experience in interdisciplinary approaches to human behavior, especially in social-ecological systems
  • Experience working with farmers or rural communities
  • Strong interest and experience in data visualizations
  • Understanding of, or interest in, stakeholder engagement
  • Understanding of, or interest in, qualitative methods, including focus groups
  • Enthusiasm for open data and science practices

Application:

Please address questions and completed applications electronically to Dr. Meredith Niles (mtniles@uvm.edu) and Dr. Nicholas Gotelli (ngotelli@uvm.edu). Applications should include:

  1. A cover letter detailing your interest in the position, how you meet the essential and desired requirements, and details of past research projects
  2. A CV or resume, including three references (with name, phone, email).

Review of materials will begin February 15th 2021 and continue until the position is filled.

Here we go again – the planet is practically dead

So the 2020 version of the Living Planet Report has been released to massive headlines blaring catastrophe. The central claim is that vertebrate (i.e. fish, amphibian, reptile, bird, mammal) local populations declined, on average, by 68% from 1970 to 2016 (the report is released 4 years after the end of the data). The authors of the report have done a much better job of getting out the notion that this is an average decline. I.e., they're not claiming that there are 68% fewer vertebrate individuals on the planet, but that the average decline across populations is 68% (but see footnote)*.

To invert their claim, the average vertebrate population in 2016 is 32% (100%-68%) of the size it was in 1970. If we look at the 2018 report, it says that the average vertebrate population in 2014 was 40% of what it was in 1970. And the average vertebrate population in 2010 was 48% of what it was in 1970. So if a population in 1970 was of size N, then 2010=0.48N, 2014=0.40N, and 2016=0.32N. Wow! That is a 52% decline in the 40 years from 1970 to 2010, a 16.7% decline in the four years from 2010 to 2014, and a remarkable 20% decline in the two years from 2014 to 2016. The math is a little complex because the decline is exponential, not linear, but that works out to a 1.82% decline per year from 1970 to 2010, a 4.46% annual decline from 2010 to 2014, and a 10.6% per-year decline from 2014 to 2016. So not only are there huge declines, but the declines appear to be accelerating (admittedly with small samples for recent years). If we are conservative in the face of this accelerating trend and simply hold declines constant for the next 10 years (from 2016, so to 2026) at 10.6%/year, starting in 2016 at 32% of 1970 numbers, then we are down to 10% of the 1970 numbers by 2026. Do you believe that? Six years from now the average population will be just 10% of what it was in 1970. (To be clear, the LPI authors did not make this claim – I did – but it is just a 10-year extrapolation from their numbers.) You would think such a decline would be more obvious to the casual observer. I'm old enough to remember 1970 and have spent a lot of time in the woods in my life. If there were a 20% decline (or increase) I'm not sure my fallible memory would reliably detect the change (in fact I'm pretty sure it wouldn't). But if there were 90% fewer birds on average than in my childhood, I would have thought I would have noticed. You would also think the world would be absolutely exploding with the things vertebrates eat (e.g. insects and plants).
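The arithmetic above can be sketched in a few lines, assuming simple constant-rate exponential decline between the reported index values (the year/index pairs are the LPI figures quoted above):

```python
# LPI-style index values: average population size as a fraction of 1970 levels.
index = {1970: 1.00, 2010: 0.48, 2014: 0.40, 2016: 0.32}

def annual_rate(frac_start, frac_end, years):
    """Annualized decline rate implied by exponential decline between two index values."""
    return 1 - (frac_end / frac_start) ** (1 / years)

r_1970_2010 = annual_rate(index[1970], index[2010], 40)  # ~1.8% per year
r_2010_2014 = annual_rate(index[2010], index[2014], 4)   # ~4.5% per year
r_2014_2016 = annual_rate(index[2014], index[2016], 2)   # ~10.6% per year

# Extrapolate 10 years past 2016 at the most recent rate:
frac_2026 = index[2016] * (1 - r_2014_2016) ** 10        # ~0.10 of 1970 levels
```

This is only the blog post's back-of-envelope extrapolation made explicit, not anything the LPI itself computes.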

If this isn’t happening, then what is going on? Well, for starters, it is pretty dicey to take short-term rates and extrapolate them when things grow or decline exponentially. If you do that you are liable to find out everything is extinct or at infinity pretty quickly. So let’s go back to the core claim straight from the report – there has been a 68% decline in the average vertebrate population since 1970. Not quite as extreme, but you would still think I (and a lot of other people) would have noticed declines in vertebrates of this extent, not to mention the boom of insects and plants as they’re freed from predation.

If you don’t trust my fond recollections of my childhood nor my extrapolation of what should have happened to insects and plants (as you definitely shouldn’t!), then how about this. The LPI core result is completely different from those of other studies (not cited in the Living Planet Report, for what it is worth). Several, like the LPI, track thousands of populations over decades. All (like the LPI) suffer from some observer bias – scientists have more data in temperate regions, near cities, and for bigger animals – but there has been no evidence to date that this fact is biasing the results of any of these studies. First, here is a plot very similar to the LPI plot but for invertebrates in the UK by Outhwaite and colleagues in Nature Ecology and Evolution:

Now this is invertebrates, not vertebrates, but what we see is that 3 broad groups have abundances higher than they did in 1970 (freshwater species showing a spectacular recovery, possibly due to clean water laws), and one broad group is down just a smidge. The overall balance across all 4 groups is a 10% INCREASE.

Here is a paper by Dornelas and colleagues in Ecology Letters (disclosure I am a co-author):

They (we) used a slightly different method – we calculated the slope of each timeseries and then plotted histograms of the slopes. Note that there is a lot of variability, with some real declines and real increases, but the overall trend across populations is strongly centered on (i.e. averages to) about zero (neither up nor down). In fact the title of that paper is “A balance of winners and losers in the Anthropocene”, and it finds that 85% of the populations didn’t show a trend significantly different from zero, 8% significantly increased, and 7% significantly decreased. A lot of churn in which species are up or down, but NOT an across-the-board catastrophic decline. Maybe this is because Outhwaite and Dornelas didn’t study vertebrates? Unlikely. Dornelas et al did pull out different taxa and found that reptiles, amphibians and mammals skewed to more increases than decreases, with no real difference from zero in birds and fish (their Figure 4). Or check out Leung et al, who analyzed a subset of the LPI data (hence all vertebrates) focusing on the well-sampled North American and European regions using a different methodology, and got more groups increasing than declining. Or check out Daskalova et al, who also found winners and losers were balanced (and most species were neutral). Even the most extreme result among the studies that exclusively use longer-term data to look at this question that I am aware of (van Klink et al) shows a 35% decline over 45 years for terrestrial insects and a 60% increase over the same period in aquatic insects. I think it is an interesting and challenging question why these studies received little press (despite also being published in high profile journals), while the LPI gets enormous coverage every time it comes out.

These 5 other studies more closely match my childhood memories. There could be weaker trends (+ or – 10 or 20%). And for sure I could be seeing different species (winners replacing losers). But these 5 studies completely contradict the LPI result (all 5 find a robust mix of increases and decreases, and most find something like a balance between them). So what is going on?

For one thing, I think the LPI bites off too much – it tries to reduce the state of vertebrates across continents and species to a single number (aka index). That has to sweep a lot of complexity under the rug! There is underlying variability in the LPI too – they just don’t emphasize it as that is not their point. And to a large extent these other papers are just unpacking that complexity by exposing the underlying high variability in trends.

But those other papers find a more neutral balance while the LPI most definitely does not. Something more has to be going on. It could be their data (but some of the aforementioned papers used the same or a subset of the data). Or it could be their methodology (but some of the aforementioned papers used similar methodologies). Personally, I think it is a complex interaction between the data they are putting in and the weaknesses of the methodology (in the sense that every methodology has weaknesses, not that their methodology is fundamentally flawed or wrong). There may be more to say about this in the future. But for now, I hope we can at least pause and think and do a sanity check.

I want to leave no doubt that I am convinced humans are hammering the planet and the vertebrates (and invertebrates and plants) that live on it. We’re removing >50% of the [terrestrial] primary production each year, have removed more than 50% of the tree biomass, modified >50% of the land, use more than 50% of the freshwater, have doubled the amount of nitrogen entering the biosphere each year and nearly doubled the amount of CO2 in the atmosphere since pre-industrial times. But I also don’t think it is possible for there to be a 68% decline in 46 years leading to a projection of a 90% decline over 56 years (10 years from now) nor does a 20% decline in the last two years seem possible. The consequences of 68-90% gone is just too large not to be observed anecdotally and through indirect effects. And the 68-90% decline story just doesn’t align with other major, comprehensive, 1000s of datasets analyses of this question.

What I believe the data show is that we’re creating winners and losers – some really big winners and some really big losers and a lot in between – and that’s bad. Humans ARE massively modifying the planet in ways that all but the most biodiversity-hating people care about, and the extinctions we are causing are irreversible, so please don’t cite this blog as evidence that “everything is OK”. It’s not. Is there room for an “in between” (bad but not catastrophe) message?

But either way, please think twice before reporting that vertebrates are disappearing from the planet at these incredible rates. Because the logical conclusion is that nothing will be left in a very short time (a decade or two), and that doesn’t pass the common sense test. This is not an “all scientists agree” scenario. I personally think the balance of evidence (such as that cited above) points pretty strongly against the LPI conclusion. I worry how many more years scientists (and reporters) can report catastrophic trendlines that predict little to no life of any sort on the planet within our lifetimes and not have people notice that this isn’t actually happening.

 

Note: I am indebted to many colleagues who have talked about this topic with me over the years, some of them co-authors on the paper cited here, some of them co-authors on forthcoming papers, some of them not co-authors, but I want to stress that the opinions here are controversial and my own so I am not listing them here.

 

* The report averaged rates of decline in populations, not total decline in number of individuals (unlike this catastrophic headline). But shouldn’t they be the same thing? Well, yes, if there were the same number of individuals in each population and each species: a 68% decline of 100 here (to 32) and a 68% decline of 100 there (to 32) would still result in a 68% decline overall (from 200 to 64). But we know in fact the number of individuals varies wildly (100x-1000x) across populations and species. Even so, a 68% decline of 1000 (to 320) and a 68% decline of 10 (to 3.2) takes 1010 to 323.2, which is STILL 68%. But now the fact that 68% is an average comes in. What if the 1000 declined by 60% to 400 and the 10 declined by 76% to 2.4, taking 1010 to 402.4? That’s not a 68% decline but a 60.2% decline, even though averaging the rates 60% and 76% still gives 68%. We don’t know for sure whether large populations or small populations are more likely to decline, but we do know that at least in birds abundant species are declining while rare species are increasing, so if that pattern holds generally it would mean the decline in total number of vertebrate individuals is actually even worse than the 68% average, but we don’t know for sure. But I don’t think this is the central reason why the LPI numbers don’t match my childhood memories, nor other studies. With such large data and no truly strong correlations between abundance and decline, most of this comes out in the wash. So theoretically there could be a mathematical reason why the total number of individuals has decreased by less than 68% even when the average decline across all populations is 68%. But I don’t think it likely. In fact I think, in a weird way, arguing this is a way of distancing the LPI from what it is really claiming/implying.
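A quick numerical sketch of the footnote’s point, using its toy numbers (populations of 1000 and 10 individuals declining by 60% and 76% respectively):

```python
# Two populations with very different sizes and different decline rates.
pops_1970 = [1000, 10]
declines = [0.60, 0.76]

# Average of the per-population decline rates (what an LPI-style index averages):
avg_rate = sum(declines) / len(declines)  # 0.68, i.e. a "68% average decline"

# Decline in the TOTAL number of individuals (what the headline implies):
pops_now = [n * (1 - d) for n, d in zip(pops_1970, declines)]  # [400.0, 2.4]
total_decline = 1 - sum(pops_now) / sum(pops_1970)             # ~0.602, not 0.68
```

The gap between the two numbers is the whole point: averaging rates weights a population of 10 the same as a population of 1000.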

Ecologists discussing science of coronavirus pandemic – open thread

I don’t know about you, but as an ecologist I am not an expert in disease dynamics nor part of the inner community rapidly exchanging ideas and data. But as an ecologist I have a better handle on notions of population growth, species interactions, individual encounter rates, etc. than the average person (and probably the average scientist), and I have felt I am in a frustrating vacuum of information.

To address this, we’re trying something new here at Dynamic Ecology – an open thread, the main purpose of which is to have a place for the community to have a conversation. Our comments sections have long been the most interesting part of the blog, so now we’re creating a direct path to comments without your having to read 1000s of words of bloviation from me!

First, a few thoughts to give some common terminology/framing to the questions. I think ecologists all know about the power of exponential growth (although this is new and still poorly grasped by most of the world). R0 is the basic reproduction number – effectively the discrete growth rate of an epidemic – in a population with no immunity (a naive population) and no efforts at social distancing. Best estimates I have seen for Covid-19 are about R0=2.5, which is a good bit higher than flu (and a good bit lower than measles). It seems to be becoming clearer that R0 is as high as it is because people can be infectious before they show symptoms (or even if they never show symptoms, like children). Once immunity starts to build up or quarantine/social distancing measures are put in place, a lower growth rate Re (effective growth rate) is observed. So as far as I can tell there are three strategies.

  1. Squeeze it – extreme social distancing to reduce Re<1. This seems to be what China as well as Japan and South Korea are doing (probably not coincidentally all Asian countries that got hit most by SARS and MERS).
  2. Let it burn – do nothing to lower Re=2.5. Sadly many (all?) countries started down this road – with exponential growth the speed of reaction required seems to be faster than governments can handle.
  3. Stretch it – social distancing to get Re~1.2 (nb 1.2 is an example, not a carefully calculated number, just a wild guess proxy as it is about what influenza does) so that the case load does not exceed hospital capacity. This is what everybody is talking about as “flattening the curve”.

With the stretch it and let it burn strategies, the number of people who get sick and then have immunity rises to about 1-1/R0, or about 60% of the population (assuming getting sick once confers immunity – assumed right now, but a few counterexamples are out there). Then the effective growth rate Re drops below 1 and “herd immunity kicks in”. Individuals can still get sick but it can’t become a self-sustaining epidemic. The primary difference between let it burn and stretch it is the rate at which people get sick, which is inversely related to how long the epidemic lasts.
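The herd immunity threshold 1-1/R0 from the paragraph above is a one-liner; here is a minimal sketch (R0=2.5 is the post’s Covid-19 estimate, and 1.2 is the post’s illustrative guess for a “stretch it” Re, not a calculated value):

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune before the
    effective growth rate Re drops below 1 (self-sustaining spread stops)."""
    return 1 - 1 / r0

covid_threshold = herd_immunity_threshold(2.5)    # 0.6 -> ~60% of the population
stretch_threshold = herd_immunity_threshold(1.2)  # ~0.17 under sustained distancing
```

Note the threshold under a "stretch it" Re is only the point where spread stops being self-sustaining; infections accumulated on the way there can overshoot it.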

I’ve posed several questions below to get this started. I’m not an expert. So the answers to some of these may be obvious in which case, I’d love to know the answer. But I have not seen the answers to any of these despite voracious reading. If they’re not so obvious I expect we could all learn from discussing them.

If you want to respond to a question stay in the same thread (even if the nesting stops at 3 levels). If you want to pose a new question, start a new thread. This is NOT a place for politics, so anything stronger than “many governments have been incompetent at X” (e.g. naming specific individuals, blaming one party or another, or getting distracted off science) will be deleted.

Nominate somebody for International Biogeography Society Awards!

As has been pointed out on this blog before, it does matter who we recognize for society awards. And one of the strongest filters on that is who is nominated. Award committees can’t give an award to somebody who isn’t nominated. It does take a little time and effort to nominate somebody, but not a lot (comparable to writing a letter of reference).

The International Biogeography Society gives out awards at its biennial conference. There is the Alfred Russel Wallace award for lifetime achievement and the MacArthur & Wilson award, which targets “relatively early career” researchers (<12 years from PhD).

You can find details on the awards and how to nominate somebody at: https://www.biogeography.org/news/news/2019-call-for-awards/

The deadline is November 29th to nominate somebody for the awards given at the next IBS meeting in Vancouver January 2021 (put it on your calendar to attend too!).

So what are you waiting for? Nominate a deserving biogeographer.