Service: How much and what kinds?

Something that I need to continually evaluate is how much service I’m doing and what specific kinds of service I want to do or should be doing. The questions I almost always have in mind while doing so are “Am I doing too much?” and “Should I take on this new task?”

In terms of deciding how much service to do, I think this is always going to be hard, and this is where it’s key to ask department mentors and/or chairs for guidance.* But this, on its own, doesn’t always work. While some chairs are really great about protecting their junior faculty from lots of service, others are less so. (I suspect that, in many cases, it’s just that the chair is torn between a desire to protect junior faculty and a real need to get things done in the department.) Some strategies that I have found useful when trying to decide whether to take on a particular service role:

  • Never say “yes” right away. I think I first read this in Boice’s Advice for New Faculty, but I’ve lost that book and can’t check it now. But, for me, this is really key. My initial inclination is almost always to say “yes”, but that is not what is right for me most of the time.
  • Even if you’ve said “no” to 100 things in a row, it doesn’t mean request #101 is the one to say “yes” to. One of the reasons The Seven Year Postdoc resonated with me is that I felt that she had a good perspective on this sort of thing. By knowing she would only allow herself X trips a year, she was better able to say “no” to requests. I feel like it would be very helpful to have a set amount of service in mind, since it would help to set limits like this. The problem is that service tasks are so variable (and it can be so hard to predict at the outset how much time they’ll take) that I have no idea how to actually do this.
  • If you feel like you are already doing at least your fair share of department service but have been asked to do another thing, write out everything you are currently doing and bring that to your chair and mentors. In one case, I felt like I was already doing a lot of service, but was asked to take on another pretty time-intensive service task. I wrote up all the department service I’d done the previous year and all the service I had already agreed to do in the following year. Once I listed it all, my chair and mentors found someone else for the new task. (Pat Schloss recently had similar advice on Twitter, where he said, “Unsolicited advice to Asst. Profs: learn to say NO to all requests to do service until you have a chance to ask your dept chair/mentors”)
  • Related to the above, if it’s something you’d like to do, it’s always an option to say that you would be happy to take that on in exchange for them finding someone else to do one of your other service tasks.
  • Think of what other things you won’t be able to do if you take on this service. Is it worth it? Again, this is hard to answer concretely, but it helps me to think things through. In my experience, taking on additional service eats into non-work time and research time. Sometimes that’s worth it, but other times it’s not.

Lest that give you the impression that I have this all down and am great at turning down service requests, let me be clear: I am sure I am doing “too much” service. I think part of this is because I haven’t sufficiently specialized in terms of the service I’m doing. I do all the things I “should” do (e.g., department service, reviewing proposals and manuscripts, serving on dissertation committees). I enjoy many of those things (especially being on dissertation committees), but they take up a lot of time. Other things I do because of a combination of feeling like I “should” do them and thinking they’re really important (e.g., serving as an Associate Editor, reviewing tenure dossiers**, society-level service). And then there are the other things I do because I really care about them and feel like they’re an important way I can make a difference. Blogging and activities related to diversifying STEM both definitely fall in this category. These both take up a fair amount of time, and, if I didn’t do them, I think I would still be doing “enough” service. But I feel like these activities have the potential to have a bigger impact than anything else I do.

I have been struggling with this question again recently as I consider two service-y tasks. One task relates to teaching, which is something I care a lot about. It also would have the potential benefit of having me meet more people from across disciplines, and would be service within the university but outside my department, which I’m currently lacking. But, ultimately, I decided that my participation wouldn’t have that much of an effect, and turned that opportunity down. The second one is an opportunity to get more involved in a group that I think is fantastic and doing really important work. Thinking it through, I’ve been torn between recognizing that I don’t have enough time or energy to take on all the interesting service roles and feeling like this could be a really valuable thing to be part of. In the end, after thinking through the things that I couldn’t do if I took on this one (and considering how stressed I would feel if I added more to my plate), I decided to turn it down (albeit reluctantly).

I think one issue for me is that I haven’t sufficiently specialized in my service. Or, more accurately, I think that I’m trying to be both a generalist and a specialist. I am doing the generalist things that I “should” do, and the specialist things that I care a lot about. Probably one or the other would be sufficient. But should I really start turning down more review requests or step down from an editorial board so that I can do more work related to diversifying STEM? I truly don’t know the answer to that right now.

I would love to hear from others how they decide on how much service to do and what kinds of service to do, and how those decisions have changed over time. Do you try to specialize? Are you a generalist? Has that changed over time?

 

* Of course, there are fewer and fewer relevant mentors the more senior you get.

** I originally thought these requests wouldn’t appear until I was a Full Professor. I was wrong.

How not to start your next ecology or evolution talk (UPDATED)

The beginning of a scientific talk should grab the audience’s attention. Or rather, hold the audience’s attention, since ordinarily you have the audience’s attention when you start talking. How do you do that?

Here are some common pitfalls to avoid, if you’re talking about your ecological or evolutionary research, although some of my advice applies more broadly.

  • Don’t start with an outline of your talk. There are exceptions to this rule, but they are exceptions. Unless your talk has some unusual structure, such that the audience will get confused unless they’re told the structure in advance (e.g., you’ll actually be giving several mini-talks and then tying them together at the end), you probably don’t need an outline slide.
  • Don’t start with a definition. Definitions are dry and technical. Dry, technical material tends to lose the audience even if you’ve convinced them that the material is essential–and if you’re starting with a definition, you haven’t yet convinced the audience of that. Plus, starting with a definition means you’re starting with text rather than a visual, which is deadly unless the text is really engaging. And no, you shouldn’t start with a definition even if the term you’re defining is central to your talk, or is defined in different ways by different people. Just because it’s crucial to define a term at some point doesn’t mean it’s crucial to start by defining the term. And even if you do need to start by defining a term, you don’t have to start by literally putting up a definition and reading it. For instance, when I talk about spatial synchrony of cyclic population dynamics, I start with the broad concept of synchrony. Which I introduce with the story of Huygens’ clocks, before moving on to a little collage of other examples of synchrony (e.g., synchronized swimming, synchronous neuronal firing leading to brain seizures). In other words, not only don’t I start with an unnecessary definition slide, I don’t even start by talking about ecology.*
  • Don’t start with a collage of pretty pictures illustrating “biodiversity”. This has become a visual cliche through overuse.** Seriously, if somebody invented a drinking game for ecology talks, the first rule of the game would be “Speaker begins with a collage of pictures illustrating ‘biodiversity’–Drink.” As noted by a commenter, the evolutionary version of this cliche is a collage of pictures illustrating the phenotypic diversity of the clade you’re studying. Note that it’s great to use pictures to illustrate differences among species in whatever specific bit of biology you’re talking about. By all means use pictures to show the difference between heterostylous and non-heterostylous flowers or whatever. That’s using pictures to convey information. As opposed to using pictures in a cliched way to just say “Hey, there are lots of different-looking species out there, that sure is interesting.”
  • Don’t start with a quote from Darwin about your topic. Another overused cliche.
  • Don’t start with a famous textbook example. Speaking of avoiding cliches…If you’re talking about niche partitioning, don’t just unthinkingly start your talk with that famous diagram of MacArthur’s showing where in the trees different species of warblers forage. If you’re talking about character displacement, don’t start with John Gould’s famous drawing of the bills of different species of Darwin’s finches, or with a picture of benthic and limnetic sticklebacks.** If you’re talking about keystone predation, don’t start with Bob Paine’s famous Pisaster experiment. Etc. Or if you do feel the need to start with a classic example, don’t belabor it. “Classic” isn’t far removed from “cliche”.

So instead of following the example of every other talk you’ve ever seen, get creative! A bit of showmanship in scientific presentations is a good thing. Maybe start with a non-ecological example, as I often do. Maybe start with something interactive (I’ve seen Tony Ives do this). Maybe start with classical Greek poetry (I’ve seen Peter and Rosemary Grant do this). Maybe start with a video (Meg has a bunch of suggestions for you). Maybe start with a good but non-famous example rather than a famous one. Whatever. Just make sure you avoid (i) the same old thing, (ii) pointless things, and (iii) boring things.

UPDATE: See the comments for more suggestions of ecology talk cliches. Like “Graph of Web of Science search, showing increasing number of papers on the topic over time, not corrected for increasing number of papers on all topics”.

p.s. to the easily worried: If you’ve ever given a talk that started in one of the ways I just listed, please don’t think “OMG, I made a horrible blunder, I’m so embarrassed!” It’s not a big deal. As I said, I’ve done these things myself, several times, and so have lots of other ecologists, so you’re in good company. You didn’t make an embarrassing mistake, you just missed an opportunity to make your introductory remarks a bit more engaging.

*This is a specific example of a general method for coming up with a creative way to start your next ecology talk: start with a non-ecological example of whatever concept you’re talking about. For instance, when I talk about my work applying the Price equation to ecology, I start with a pithy quote from physicist Leon Lederman, expressing the hope that one day all the fundamental forces of physics will be unified in a single equation suitable for printing on a t-shirt. Then I note that physicists have yet to achieve Lederman’s dream–but evolutionary biologists have, in the form of the Price equation. I then hold up a t-shirt with the Price equation printed on it (aside: props are another good way to grab and hold the audience’s attention). This analogy helps clarify what the Price equation is and why it’s important or useful. And it holds the audience’s interest, because it’s not the sort of thing one expects to hear at the start of an ecology talk. That’s also why I start my synchrony talk by talking about Huygens’ clocks and synchronized swimming: it’s a way of conveying essential information while holding the audience’s attention with something unexpected.
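In case you’re wondering what actually fits on the t-shirt: a standard textbook way of writing the Price equation (shown here purely for illustration, in the usual notation rather than anything specific to my talk) is

\[
\bar{w}\,\Delta\bar{z} \;=\; \operatorname{Cov}(w_i, z_i) \;+\; \operatorname{E}\!\left(w_i\,\Delta z_i\right),
\]

where \(z_i\) and \(w_i\) are the trait value and fitness of type \(i\), \(\bar{w}\) is mean fitness, and \(\Delta\bar{z}\) is the change in the mean trait from one generation to the next. The first term captures change due to selection (the covariance between fitness and trait), and the second captures change due to transmission (offspring differing from their parents). Compact enough for a shirt, general enough to do an awful lot of evolutionary (and, it turns out, ecological) bookkeeping.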

**I’ve done this. We’re all sinners.

Friday links: women and STEM awards, grant review is not a crapshoot, and more

Also this week: underwater thesis defense (yes, really), database-defeating data (yes, really), why scientific papers should be longer (yes, arguably), how penguins ruined nature documentaries, and more. Including this week’s musical guest, They Might Be Giants!

From Meg:

There are just three wolves left on Isle Royale*, meaning that the predator part of the longest-running predator-prey study is likely to end soon.

(* If you want to pronounce this like a native, you should pronounce it the way you’d say Isle Royal. Ah, Michigan pronunciations.)

Maclean’s had a piece on why there are still far too few women in STEM, which featured work by Alex Bond. One of the points the piece makes is that women are “consistently passed over for recognition”. The piece focuses on women in Canada, but this applies in the US, too. Related to that, I’m glad that ProfLikeSubstance is also calling attention to the poor gender ratio of NSF Waterman Awardees.

I’m really glad to hear that the terHorst Lab at Cal State-Northridge organized an event to create Wikipedia pages for women in ecology and evolution! This old post of mine has a list in the comments of women who people have proposed as needing Wikipedia pages (or improvements to existing pages).

Seminars from most of the speakers at the UMich EEB Early Career Scientist Symposium (which focused on the microbiome) are now available on YouTube! Talks by Seth Bordenstein, Katherine Amato, Kevin Kohl, Kelly Weinersmith, Rachel Vannette, Justine Garcia, and Georgiana May are all up.

PhD comics on how to write an email to your instructor or TA. (ht: Holly Kindsvater)

From Jeremy:

A lot of people think that grant review is a crapshoot, because review panel ratings of funded grants often don’t correlate strongly with the subsequent impact of the work funded by those grants. But that’s a silly criticism, because the whole point of grant review panels is to make (relatively) coarse distinctions so as to decide what to fund, not to make (relatively) fine distinctions among funded proposals. A natural experiment at NIH provides an opportunity to test how good grant review panels are at deciding what to fund. Back in 2009 stimulus funding led NIH to fund a bunch of proposals that wouldn’t otherwise have been funded. Compared to regular funded proposals, those stimulus-funded proposals led to fewer publications on average, and fewer high-impact publications, and the gap is larger if you look at impact per dollar. The mean differences aren’t small, at least not to my eyes, though your mileage may vary, and of course there’s substantial variation around the means. Regular proposals also had higher variance in impact than stimulus-funded proposals, which means NIH can’t be said to be risk averse in its choice of proposals to fund. And if you think that NIH is biased towards experienced investigators, think again–stimulus-funded proposals were more likely to be led by experienced PIs than were regular funded proposals. I’d be very curious to see an analogous study for NSF. (ht Retraction Watch)

p.s. to previous: And just now–late Thursday night–I see that different authors have just published a Science paper looking at a different NIH dataset and reached broadly the same conclusion even though they restricted attention to funded grants. No doubt one could debate the analysis and its interpretation, probably by focusing on the substantial variation in impact that isn’t explained by review panel scores. But together, these two studies look to me like a strike against the view that grant review is such a crapshoot, and/or so biased towards big names, as to be useless. Related old post here.

Speaking of peer review, here’s a brief and interesting history of peer review at the world’s oldest scientific journal.

How March of the Penguins ruined US nature documentaries.

How long does a scientific paper need to be? Includes some thoughtful pushback against the view, expressed in the comments here, that short papers are more readable. Also hits on something we don’t talk about enough: how online supplements are changing how we write papers. I disagree with the author that online supplements are always a good thing on balance.

One oft-repeated criticism of conventional frequentist statistical tests is that their design encourages mindless, rote use. So I was interested to read about mindless, rote use of a Bayesian approach in psychology. An illustration of how the undoubted abuses of frequentist statistics are not caused by frequentist statistics, but rather are symptoms of other issues that wouldn’t be fixed by switching to other statistical approaches. Here, the issue is the need for agreed conventions in how we construct and interpret statistical hypothesis tests, and associated default settings in statistical software.

An MSc student at the University of Victoria will defend his thesis underwater. No, he’s not a marine ecologist. I wonder what happens if someone on his committee asks him to go to the board. :-) (ht Marginal Revolution)

This makes me want to change my last name to NA, just to troll programmers. :-) (ht Brad DeLong)

And finally, the fact that I’m excited about this dates me in multiple ways: They Might Be Giants have a new album out! Here’s a sample, which I justify linking to on the grounds that the video includes a couple of jokes our readers will particularly appreciate:

Important information for lab undergrads

When I first started at Georgia Tech, someone recommended that I do an orientation with all new members of the lab, where I went over basic information. This was a really good suggestion (even if I can no longer remember who suggested it!), and it got me started on a file containing the basic information that I want all undergrads who are joining my lab to know. I have expanded this file over the years, so that it is now substantially longer than that first sheet I used at Georgia Tech.

However, when I moved to Michigan, I got out of the habit of going through this information formally with all students when they joined the lab. That was a mistake, and I plan on resuming the practice this summer. When I was recently updating the file, it occurred to me that it might be useful to share it here, both so that other people who are interested in doing something similar can have a template, and so that people can suggest important things that might be missing. Please suggest changes!

 

Important Lab Information for Duffy Lab Undergraduates

General Lab Information:

  1. We want everyone in the lab to be excited about their research project and to understand what we do and why we do it. If you’re ever unsure about why something is being done (or why it’s being done in a particular way), PLEASE ASK! Ideally, you should ask right away. But, if you realize later that you are confused, asking later is better than not asking at all. We have a great lab group, and people are always willing to help each other out and to answer questions.
  2. Meghan’s cell is XXX-XXX-XXXX. If there is a true emergency (e.g., fire, serious injury, etc.), call 911, then call Meghan if possible. If there is a lab emergency (e.g., the lab is unusually hot, there’s a mysterious puddle on the floor, an environmental chamber is misbehaving), call Meghan. If it’s an emergency, a call at any time is fine. But if it’s not an emergency, please do not call or text between 9PM and 7AM!
  3. Safety: There are signs on the lab doors that tell you about safety equipment and regulations. The lab also contains the Material Safety Data Sheets (MSDSs) for all the chemicals in the lab. These are in a blue binder on the shelf above Katie’s desk. If you are ever unsure about whether something is safe or have concerns about safety, please ask!
  4. Training: All students need to complete two online safety training modules. The two lab safety modules you need to take are:
    1. BLS025w
    2. BLS101w
      Please go to: http://www.oseh.umich.edu/training/mylinc.shtml to take those courses. You must do this by the end of your first week working in the lab. Email the certificates of completion (a screen cap or pdf) to Meghan when you have finished the courses.

General Lab Policies:

  1. Lab notebooks:
    1. All lab members must use lab notebooks; these will be provided by the lab, belong to the lab, and must stay in the lab at all times (including after you finish working in the lab). Lab notebooks should never leave the lab! If you need a copy of information (e.g., to enter data at home), this is a great opportunity to scan it or take a photo of the relevant pages.
    2. Write details for everything you do, and keep things organized. Write lots of details — you can never have too many details and you will remember much less 6 months from now than you think you will! This will help you a lot when you work on your end-of-semester writeup. It will also help everyone later if we need to go back and figure out a specific detail regarding what was done. You should write enough information that we can reproduce what you did without needing to send you any emails. Always write more information than you think you need to write! We’ve never looked back at an old lab notebook and thought, “Wow, I wish they’d written less.” We have definitely looked back at an old lab notebook and thought, “Wow, I wish they’d written more.”
    3. Never go back and change anything in your lab notebook at a later date.
    4. Don’t leave blank spaces – if you accidentally skip a page, draw a cross through it.
    5. Staple attachments into the lab notebook.
    6. If you make a mistake (and we all do at some point!), please write details in the lab notebook and notify your mentor. We have all made mistakes. The most important thing is that we acknowledge them, so that we can take that into account when continuing with the study and when looking at the data.
    7. Related to the above: we all build on each other’s data. That means that it is very important for you to collect data carefully and to record notes carefully, and to note when mistakes are made. If you have any concerns about data collection, procedures, or anything else, please tell Meg.
    8. Some of the most exciting results we’ve collected are the ones we never would have expected. Keep an open mind when collecting data. If you see something you didn’t expect, record the data and then tell someone else about it. We’ve had some really neat research avenues opened up by undergrad observations!
  2. Data: (Thou shalt not be careless with thine data!)
    1. All data must be backed up at least weekly; this means that one copy should be in the lab and one copy should be elsewhere (such as on a server). An easy way to do this is to have a file on the lab desktop computer, since this automatically gets backed up to the cloud every day.
    2. Any data that hasn’t been entered yet should be scanned and/or photographed with a cell phone as soon as possible. Ideally, you should snap a photo of the new lab notebook entries and data sheets at the end of the day.
    3. Data should be entered into a computer (and proofed) routinely (aim for daily).
    4. All computer files should be backed up regularly (at least weekly); again, if they are on the lab computer, they will automatically be backed up. Backups should be stored in a location different from where the computer is (the cloud is an easy solution to this!).
    5. Include metadata along with your data files. What is metadata? It is the data about the data. For example, it might be a text file explaining what data are contained in each of the csv files, and which R scripts go along with those data. (A minimal, made-up sketch of such a file appears after this list.)
  3. Field work:
    1. Always have a buddy when you go into the field! This buddy will almost always be Katie, a grad student, or a postdoc. Only people who can swim are allowed to go out in the boat. You should always have life jackets with you.
    2. Get off the lake at the first sign of thunder or lightning! Do not try to just finish up that one last thing — get off the lake right away.
    3. Be careful lifting boats and equipment. Lift with your legs, not your back!
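To make the metadata point concrete, here is a minimal, made-up sketch of what such a file might look like. Every file name, column name, and detail below is invented purely for illustration; ask your mentor what format they prefer for your project.

```text
METADATA for example_infection_experiment (illustrative only)

Data files:
  infection_counts.csv   One row per beaker per day. Columns: date, beaker_id,
                         treatment, n_infected, n_uninfected, recorder_initials.
  temperature_log.csv    Hourly readings exported from the environmental chamber.

Scripts:
  plot_infection.R       Reads infection_counts.csv and makes the
                         prevalence-over-time figures.

Notes:
  - "NA" means a missing observation, not a zero.
  - The full protocol is in the lab notebook (note which pages).
```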

End-of-semester information:

  1. All students should write up a summary of their semester’s work at the end of the semester. This should include a brief introduction to the project, a methods section describing what you did (please be detailed!), a results section, and a brief discussion/conclusions section. You must get a draft of this to your mentor at least two weeks before the end of the semester. If you would like examples, please ask Meghan.
  2. For UROP students: please make sure you communicate with your mentor well ahead of any deadlines. At a minimum, you must get a first draft of your research abstract to your mentor two weeks before it is due. You must also get a draft of your poster to your mentor two weeks before it is due. You must write your own first draft — this must be entirely your work! Your mentor will then help you with editing your abstract and poster. Expect to go back and forth several times — this is completely normal and an important part of developing scientific writing and presentation skills.
  3. For students completing an Honors Thesis: make sure you communicate with your mentor well ahead of any deadlines. Get all first drafts to your mentor at least two weeks before the final deadline. For the thesis itself, talk with your mentor at the beginning of the semester in which you will turn in the thesis to come up with a set of target dates for drafts of different sections of the manuscript. Ideally, you will spend one semester writing up an introduction and methods relating to what you are doing, and then a second semester writing up the results and discussion. You must write your own first draft of everything — this must be entirely your work! Your mentor will then help you with editing. Expect to go back and forth multiple times — this is completely normal and an important part of developing scientific writing and presentation skills.

Other information:

  1. Undergraduates are strongly encouraged to attend lab meetings. Attendance is not required, but we do hope you’ll join us!
  2. Related to the above: we routinely have lab meetings related to the process of science (how do we go from the data I’m collecting to a publication?), skills (e.g., working on an “elevator pitch” — that is, a succinct summary of your research), and ethics (e.g., what counts as plagiarism? Who is harmed when data are falsified?) If any of these topics are of interest to you, or if you have other ideas for a lab meeting, please suggest them!
  3. Undergrads new to the lab can be confused about what to call Professor Duffy/Dr. Duffy/Meghan/Meg/Daphnia Wrangler-In-Chief. Any of those are fine (though the last one has a nice ring to it, doesn’t it?) Most people in the lab go with Meg, but undergrads sometimes feel more comfortable sticking with Professor Duffy or Dr. Duffy. Go with whichever you feel most comfortable with, and, if that changes over time, that’s fine, too.
  4. Meg is happy to talk about your career goals, summer plans, letters of recommendation, etc. Just send an email to set up a time. (You can also stop by her office, but there’s a chance that she will have something else scheduled if you use this approach.) In cases where she doesn’t know the answer to questions you have, she will try very hard to put you in touch with people or resources that can help you.
  5. Please show up on time for meetings.

Other resources:

  1. These two blog posts are aimed at undergrads who are starting to do research in labs. They’re worth reading!
    1. http://www.scicurious.org/undergrad-herding/
    2. https://profsnarky.wordpress.com/2012/09/05/so-you-got-a-job-with-your-prof-advice-for-undergrads/ (written by Prof Snarky so it’s, well, snarky)

Ecologists disagree on whether co-authors should agree

Last week I asked what should be done if co-authors disagree on what their paper should say.* My own view is that all co-authors should agree with and stand behind everything in their paper, so that in the event of a serious, irresolvable disagreement, some co-authors would have to withdraw from the paper. An alternative view is that authorship just indicates that you contributed to the ms in some appropriately-substantial way, not that you agree with everything in it. And there might be other views on the matter as well.

I was curious to get a sense of the range of views out there, so I included a little poll in the post. Most of the votes likely are in at this point. As of this writing, about 24 h after the poll went up, we have 149 votes, distributed as follows:

  • 57% think all co-authors should agree on everything in the paper, and withdraw their names if they don’t
  • 33% think authorship doesn’t mean that you agree with everything in the ms, just that you contributed to it
  • 10% have some other view

I suspected there’d be a lot of disagreement on this–and I was right! Obviously, the poll respondents aren’t a random sample from any well-defined population. But the poll certainly suggests that there’s no consensus on this issue among ecologists.

One consequence of this disagreement is that it makes authorship a little hard to interpret. When I see someone’s name on a paper, I assume that they agree with everything in it and stand behind it. Apparently I shouldn’t assume that! Indeed, I can imagine a situation in which some of the co-authors on a paper assume that the other co-authors agree with everything in the paper, when in fact they don’t. I have no idea if that’s ever happened, but it seems possible.

I’m curious if the poll results would’ve been different if you could go back in time a couple of decades. I wonder if the view that co-authors should agree on everything in the paper is declining in prevalence. That is, perhaps as collaboration becomes more common (for both good and bad reasons), norms of appropriate author behavior are shifting in ways that facilitate collaboration. But that’s wild speculation; I really have no idea.

Formal statements of author contribution are becoming increasingly common, as an antidote to changing authorship practices that make ordered lists of authors less informative summaries of author contributions. Perhaps we also need formal statements of which bits of the paper each author agrees with? After all, back when most papers had just one author, you could safely assume that the author agreed with and stood behind everything in the ms. Nowadays, apparently that’s no longer the case. I’m still deciding if or how much I’m kidding about this…

*I actually have no idea how common it is for co-authors to seriously disagree, and how often those disagreements are resolved in one way vs. another. Maybe I should do a poll on this?

What’s the largest number of reviewers you’ve had on a single paper?

The comments on Jeremy’s post last week on manuscripts often getting the same reviewers at multiple journals got me wondering about something related: what’s the range on the number of reviewers a paper has in a single round of review at a single journal? And what about the number of reviewers for a paper across all rounds of review at a single journal? And, if it’s possible to figure out, what about the number of reviewers for a single paper across all rounds of review across all journals?

First, a poll for that first question (note: for purposes of this poll, don’t count the associate editor, even if she/he did a substantive review):

For me, the answer is 5. I’m guessing that what happened is that the associate editor invited 5 people, assuming 2-3 would accept, and hit some sort of reviewer jackpot where they all agreed to do the review. Perhaps because of this experience, I never ask more than 3 people at a time when I’m handling a paper as an AE. (This is only relevant for me at Ecology & Evolution. Fortunately for me, AmNat’s editorial office handles all the emails related to review requests, so I don’t need to make that decision with manuscripts for them.) The downside to not asking more people at once is that it can slow down the process of finding two reviewers. So, there’s a tradeoff.

Now, the second question (again, for purposes of this poll, don’t count the associate editor, even if she/he did a substantive review):

To be clear, I am not referring to cases where a manuscript bounces to another journal. Sometimes the number of referees increases during revisions at the same journal. For example, the revised paper may get sent to one of the original reviewers and to a new reviewer. I’ve heard one horror story in which a high-profile journal did this through many rounds of review.

The last poll question will probably be harder for some people to answer/estimate, but I’m still curious:

For this, I realize you might have to guess a bit more, but it will still be interesting to see.

I’ll be curious to see what others have experienced!

Don’t introduce your paper by saying that many people have long been interested in the topic (UPDATEDx2)

Scientific papers often start by noting that lots of people are interested in the topic: “Topic X is of wide interest in ecology”, or some similar phrase. Sometimes they also talk about changes over time in how many people are interested in the topic, for instance by writing “Topic X has long been of central interest in ecology” or “Much recent research in ecology considers topic Y”.

Please, please don’t do this.*

As a colleague of mine likes to say, your paper should tell the reader about biology, not biologists. That is, your paper should introduce the biological topic and explain why it’s interesting and important, not say that other people think the topic is interesting and important. No, not even if everyone since the dawn of time has thought the topic interesting and important. Science is not a popularity contest. If the topic really is interesting and important, then you should be able to explain why, in which case the fact that other people also think the topic is interesting and important is at best superfluous. And if the topic is not interesting and important, pointing out that lots of other people think it’s interesting and important just shows that lots of people care about boring and unimportant things. Or at best, that your topic is a bandwagon.

For instance, one line of research in my lab concerns spatial synchrony in population ecology. Populations of the same species separated by hundreds or even thousands of km often exhibit positively-correlated fluctuations in abundance. Which is frickin’ amazing when you think about it. (UPDATE: Judging from the comments, that last sentence is confusing readers. My bad. The important thing about synchrony is not that I personally think it’s amazing, or that many others do too. The important thing is that it’s a real phenomenon (not just noise), and that it’s unexplained.) It’s like “action at a distance” in physics–how can such widely-separated systems behave as if they’re somehow connected? Such mysterious behavior cries out for an explanation. That’s why spatial synchrony is worth studying.** Not because spatial synchrony has long been of interest in ecology, or because much recent research in ecology addresses spatial synchrony, or etc.

The difference here can be subtle. For instance, there’s ongoing disagreement over whether short-distance dispersal leading to phase-locking is a plausible explanation for the observed long-distance synchrony of population cycles in nature (as opposed to in theoretical models or tightly-controlled microcosms). Alternatively, though not mutually-exclusively, long-distance synchrony of population cycles might be due to the long-distance synchrony of weather fluctuations, known as the Moran effect. If I was writing a paper on spatial synchrony, I might refer to this ongoing disagreement and use it as motivation for my own work. But it’s important to be precise here, and cite the disagreement for the right reasons. The motivation for further work is that there’s an interesting biological question–the causes of long-distance synchrony of population cycles–that hasn’t yet been answered. Resolving disagreement among the people working on this question is not a good motivation for further work. The goal of science is to figure out how the world works, not to produce agreement among scientists as to how the world works. Those are two different things, although it can sometimes be difficult to tell the difference between them in practice (e.g., it’s hard to recognize if a question hasn’t been answered, if everyone working in the field thinks it’s been answered). So here, it would be better to say something like “There are two alternative, though not mutually exclusive, explanations for long-distance synchrony of population cycles…” Rather than “Ecologists disagree about the causes of long-distance synchrony of population cycles…” The former phrasing is better because it keeps the focus on biology, rather than on what biologists think about biology.

From my own experience, I can tell you that it’s hard to avoid slipping into talking about biologists rather than biology. You have to constantly guard against it, or at least I do. This is a good mental habit to get into. It makes you alert to bandwagons and zombie ideas, and so keeps you from jumping on them or falling for them.*** It also helps you develop the courage of your own convictions and the ability to articulate them. Writing about biologists rather than biology is a crutch. It’s something you do when you don’t really know–and I mean really know–why your topic is worth studying.

p.s. This advice applies to talks and posters too.

UPDATEx2: As noted in the comments, I’m not saying that you shouldn’t talk about the history of research on your topic. The whole comment thread is great, actually, you should read it. :-)

*Note that I’m sure I’ve done it myself, though I haven’t gone back and checked. We are all sinners.

**Well, I could and sometimes do wave my arms about the applied importance of synchronized disease or crop pest outbreaks and argue that my work will improve our ability to predict/manage/prevent those things. Which doesn’t make such arm waving a good thing. Again, we are all sinners.

***In general, I think graduate students in particular tend to overrate the importance of working on “hot” topics. At the risk of overgeneralizing from my own example, I am living proof that you don’t have to work on “hot” topics, or use popular approaches or systems, to have a career in ecology. Spatial synchrony for instance has never been an especially “hot” topic, protist microcosms have never been a popular study system (just the opposite, in fact), and hardly anybody even understands the Price equation. What’s important is that you work on a topic for good reasons that you can articulate. One of the hardest things to do for graduate students who want to go on in academia is to become familiar with the current state and history of their field, while retaining/gaining the ability to think critically and independently. Also while gaining/retaining the confidence that thinking critically and independently, rather than following the crowd, is actually good for their academic careers rather than bad. (Note that “thinking independently” is not at all the same as “not knowing or willfully ignoring what everyone else thinks”, and that “thinking critically” is not at all the same as “thinking everybody else is wrong about everything”. The foundation of independent and critical thought is a broad and deep grasp of previous thinking.)

Friday links: surviving science, the ultimate citation, why everything is broken, and more

Also this week: depressing news on gender balance in major scientific awards, when trainees go bad, the history of the passive voice, and more. Oh, and identify any insect with this one handy picture. :-)

From Meg:

While I was glad to read that funding to support the Keeling curve measurements for three more years has been secured, I was surprised to read that it was in question in the first place.

12 (really 13) Guidelines for Surviving Science. These are great! #5 reminds me of a conversation I had with someone about choosing mentors and collaborators: Imagine a 2 x 2 grid where you have nice/not nice on one side and smart/not smart on the other. Aim for nice & smart. Avoid the quadrant of doom.

After learning that there were no women finalists for the second year in a row, two scientists resigned from the selection committee for the Canadian Science and Engineering Hall of Fame. A lack of women recipients of a prominent award is something I’ve written about before. And, just yesterday, NSF announced its newest Waterman Award winner. The streak is now at 12 consecutive male winners.

I enjoyed this post on steps towards cleaner, better-organized code. (ht: Nina Wale) Related to this, a suggestion a colleague recently gave me is to aim to go one step more elegant/refined than what you would have done on your own. That is, don’t have amazingly elegant code as your goal. But if, each time, you aim to go one step beyond where you can easily get, you’ll learn a lot and, over time, become pretty good at programming. I like that idea.

From Jeremy:

Emilio Bruna, EiC at Biotropica, seconds Brian’s view that honest mistakes happen in science, and that the important thing is to fix them rather than stigmatize anyone:

So please, if you find a mistake in one of your papers let us know. It’s ok, we can fix it.

Arjun Raj explains why everything–peer review, academia, software design, you name it–is “broken”.

Arthropod ecologist Chris Buddle is cited in the latest xkcd “What If”! There are even two jokes about him! Must…control…jealousy… :-) (In seriousness: congratulations Chris!)

Stephen Heard with the story behind his paper on whimsy, jokes, and beauty in scientific writing. Includes an interesting discussion of how the taboo on humor and beauty in scientific writing is maintained even though lots of people–maybe even most people!–disagree with the taboo. Oh, and see the comments, where Stephen answers the question, when did scientists stop writing in the first person (active voice) in favor of the third person (passive voice), and why?

Tenure, She Wrote on every PI’s nightmare (or one of them): when trainees go bad.

Simply Statistics agrees with my hypothesis on why your university has so many administrators and so much red tape: because you asked for it.

Journalist’s guide to insect identification. That’s pretty much how I do it. Definitely close enough for government work. In fact, I bet this is how entomologists do it too, because it’s not as if anyone’s ever going to look close enough to check them. :-) (ht Not Exactly Rocket Science)

Aww, penguins are so cute! Here, penguin, pengu–AAAAAHHHHH!!!11!1 :-) (ht Not Exactly Rocket Science)

What if coauthors disagree about what their ms should say? (UPDATED)

In a recent interview, Richard Lewontin talks about how he and the late Stephen Jay Gould came to write their famous polemic “The Spandrels of San Marco” (ht Small Pond Science). Basically, Lewontin says all the polemical bits were by Gould, and that he only wrote one non-polemical section. And he says Gould went too far in the polemical bits, taking unreasonably extreme positions. A few quotes from Lewontin, to give you the flavor:

Steve and I taught evolution together for years and in a sense we struggled in class constantly because Steve, in my view, was preoccupied with the desire to be considered a very original and great evolutionary theorist. So he would exaggerate and even caricature certain features, which are true but not the way you want to present them…He would fasten on a particular interesting aspect of the evolutionary process and then make it into a kind of rigid, almost vacuous rule…

Most of the Spandrels paper was written by Steve. There is a section in there, which one can easily pick out, where I discuss the various factors and forces of evolution…

This surprises me. Not for the gossip about Gould’s motivations–I’m not much interested in that–but because Lewontin is more or less admitting that he put his name on a paper that he didn’t entirely agree with. Which surprises me because my attitude is very different. I don’t let a paper go out with my name on it unless I agree with every word of it. I figure I’m an author of the whole paper, not just “my” bits of it.

To be clear, my concern here isn’t with the technical soundness of my coauthors’ work (which in some cases I couldn’t actually check even if I wanted to), or with different people writing different bits of an ms. It’s with whether my coauthors and I all agree on the interpretation and implications of our work, and what to do if we don’t.

I’ve been involved in collaborations in which we disagreed about interpretation, sometimes very seriously. But in the end every collaboration with which I’ve been involved has managed to write a paper everyone was happy with.

There are degrees of agreement and disagreement, of course. I’ve had collaborative papers that would’ve been slightly different if I’d been the sole author–there’d have been differences in emphasis, or some points would’ve been phrased differently. Perhaps that’s what’s going on in the case of “Spandrels”. Maybe Lewontin would’ve preferred different phrasing or more (i.e. any!) nuance, but he basically agreed with Gould’s main points so was happy to put his name on the paper.

One way to resolve disagreement among coauthors would be for them to lay out their disagreements in the ms. One occasionally sees papers like this, but only from “adversarial” collaborations between intellectual opponents. There’s no reason in principle why friendly collaborators who only partially disagree couldn’t do the same thing, but I’ve never seen it done. (UPDATE: They say the memory is the first thing to go. Andy Gonzalez comments to remind me that he and Andrew Hendry have a friendly disagreement about the prevalence of local adaptation. They wrote a dialectical paper about it. And see the comments for other examples of adversarial collaborations in which intellectual opponents wrote joint papers clarifying their areas of agreement and disagreement.)

The various meanings of “authorship”, and different standards for authorship, are relevant here (see this old post). If you think of an “author” just as “someone who made a substantial contribution to the work reported in the ms”, then maybe you don’t assume that every author necessarily agrees, or should agree, with everything in the ms. The author list is just a list of people who contributed in various ways to producing various bits of the ms. Not a list of people who agree with everything the ms says.

I’m guessing this is an issue on which folks have very different experiences and views. So here’s a little poll. Do you think coauthors should agree on everything their ms says?

Is it really that important to prevent and correct one-off honest errors in scientific papers?

Wanted to highlight what I think has been a very useful discussion in the comments, because I know many readers don’t read the comments.

Yesterday, Brian noted that mistakes are inevitable in science (it’s a great post, BTW–go read it if you haven’t yet). Which raises the question of how hard to work to prevent mistakes, and how hard to work to correct them when they occur. After all, there’s no free lunch; opportunity costs are ubiquitous. Time, money, and effort you spend checking for and correcting errors is time, money, and effort you could spend doing something else.* I asked this question in the comments, and Brian quite sensibly replied that the more serious the consequences of an error, the more important it is to prevent it:

Certainly in the software engineering world it is widely recognized that it is a lot of work to eliminate errors and that there are trade-offs. If it is the program running a pace-maker it is expected to do just about everything to eliminate errors. But for more mundane programs (e.g. OS X, Word) it is recognized that perfection is too costly.

Which raises the sobering thought that the vast majority of errors in scientific papers aren’t worth putting any effort into detecting or correcting. At least, not any more effort than we already put in. From another comment of mine:

Yes, the consequences of an error must be key here. Which raises the sobering thought that most errors in scientific papers aren’t worth checking for or eliminating! After all, a substantial fraction of papers are never cited, and only a tiny fraction have any appreciable influence even on their own subfield or contribute in any appreciable way to any policy decision or other application.

xkcd once made fun of people who are determined to correct others who are “wrong on the internet” (https://xkcd.com/386/). It’s funny not just because it’s mostly futile to correct the errors of people who are wrong on the internet, but because it’s mostly not worth the effort to do so. [Maybe] most (not all!) one-off errors in scientific papers are like people who are “wrong on the internet”…

What worries me much more are systematic errors afflicting science as a whole, that arise even when individual scientists do their jobs well–zombie ideas and all that.

Curious to hear what folks think of this. Carl Boettiger has already chimed in in the comments, suggesting that my point here is the real argument for sharing data and code. The real reason for sharing data and code is not so that we can detect and correct isolated, one-off errors.** Rather, we share data and code because:

Arguing that individual researchers do more error checking than they already do is both counter to existing incentives and can only slow science down; sharing speeds things up. I love Brian’s thesis here that we need to acknowledge that humans make mistakes. Because publishing code or data makes it easier for others to discover mistakes, it is often cited in anonymous surveys as a major reason researchers don’t share; myself included. Most of this will still be ignored, just as most open source software projects are; but it helps ensure that the really interesting and significant ideas get worked over and refined and debugged into robust pillars of our discipline, and makes it harder for an idea to be both systemic and wrong.

I’m not sure I agree that sharing data and code makes it harder for an idea to be both systemic and wrong. The zombie ideas of which I’m aware in ecology didn’t establish themselves because of lack of data and code sharing. But I like Carl’s general line of thought, I think he’s asking the right questions.

*A small example from my own lab: We count protists live in water samples under a binocular microscope. Summer students who are learning this procedure invariably are very slow at first. They spend a loooong time looking at every sample, terrified of missing any protists that might be there. Which results in them spending lots of wasted time staring at samples that are either empty, or in which they already counted all the protists. Eventually, they learn to speed up, trading off a very slightly increased possibility of missing the occasional protist (a minor error that wouldn’t substantially alter our results) for the sake of counting many more samples. This allows us to conduct experiments with many more treatments and replicates than would otherwise be possible. Which of course guards against other sorts of errors–the errors you make by overinterpreting an experiment that lacks all the treatments you’d ideally want, and the errors you make because you lack statistical power. I think people often forget this–going out of your way to guard against one sort of error often increases the likelihood of other errors. Unfortunately, the same thing is true in other contexts.

**I wonder if a lot of the current push to share your data and code so that others can catch errors in your data and code is a case of looking under the streetlight. It’s now much easier than it used to be to share data and code, so we do more of it and come to care more about what we can accomplish by doing it. Which isn’t a bad thing; it’s a good thing on balance. But like any good thing it has its downsides.