Is blogging with someone a conflict of interest?

Most journals and granting agencies have conflict of interest policies, though in ecology and evolution they’re often fairly non-specific, so that in practice much is left to the discretion of people with decision-making authority. For instance, the Journal of Ecology requires that authors disclose any interest or relationship, financial or otherwise, that might be perceived as influencing an author’s objectivity, but says that the existence of a conflict does not preclude publication. In its guidelines for reviewers, the ESA asks reviewers to decline to review if they don’t feel they can be objective, and to discuss with the editor any previous or present connection with the authors or their institution that might be perceived as creating a conflict.

Here’s my question: would you consider blogging with someone to be a conflict of interest?

The issue’s never come up for me. I haven’t been asked to review any of Brian or Meg’s papers or grant applications since they joined the blog. But it’s possible that it could come up, and I’m genuinely unsure what I’d do if it did. On the one hand, it’s quite common in science for people to review the work of people they know personally, even know quite well. In my experience, personal friendship isn’t ordinarily considered a conflict of interest.* On the other hand, in my experience you are ordinarily considered to have a conflict of interest (or at least a potential one) with anyone with whom you currently collaborate. So are Meg, Brian, and I just friends without any conflict of interest? Or are we “blogging collaborators” who have a conflict of interest that at least ought to be disclosed?

I guess I’d lean towards saying that blogging with someone isn’t a conflict of interest. I do feel like I could evaluate one of Brian or Meg’s papers or grants objectively. But I can also see where others might see that as inappropriate. So I dunno–what do you think?

One hypothetical but tricky wrinkle here is what to do about conflicts of interest if you blog with someone and one or both of you use pseudonyms. How do you disclose the conflict of interest without breaking pseudonymity (assuming, for the sake of argument, that you do consider it a conflict)? I guess you just disclose that you have a conflict of interest with the person without saying why?

More broadly, what other ambiguous cases of conflict of interest have you encountered? I feel like conflicts of interest fall on a continuum from clear-cut conflicts (like “my research on this drug is sponsored by the drug’s manufacturer, who are paying me a gazillion dollars”) to clear-cut non-conflicts (like “I met the author once for thirty seconds”), with a lot of ambiguity in between. Including ambiguity about whether to even bother disclosing the possibility of a conflict.

*And if you say it should be, you’ve just made it a lot harder to find reviewers. There are many subfields of science in which everybody knows everybody to at least some extent. For instance, good luck finding someone whom I haven’t met, and who works with protist microcosms, to review one of my protist microcosm papers.

Poll: What should a community ecology class cover?

This fall I will be teaching a graduate-level community ecology class for the first time. Most people would say that community ecology is one of the five or so main subdisciplines of ecology, along with physiological ecology, population ecology, ecosystem ecology, and maybe behavioral ecology.

In the 1970s community ecology was an “in” field. Then, in the 1980s and 1990s, my sense is that community ecology became passé. I started graduate school in 1997 and I well remember how all my graduate student peers would say things like “I study species interactions” rather than use the phrase “community ecology”. Now community ecology feels very much like a reinvigorated, “cool” field again, though in part because the lines have blurred with topics like macroecology and global change ecology.

So it has been an interesting exercise for me to think through what exactly should be covered in a community ecology class. It’s partly a definitional exercise: deciding what I think community ecology is today. There is definitely more than enough material to fill a semester these days, so choices must be made. There are two great textbooks on community ecology, by Mittelbach and by Morin (both reviewed by Jeremy), so I can look at their tables of contents, but there are some noticeable differences from the choices I will make.

So I thought it would be fun to take a reader survey to see what topics people think belong in an early graduate (e.g. first-year graduate student) community ecology class. There are 30+ topics. Each topic could easily take a week to cover (in fact, each could easily be an entire semester-long seminar), and here at Maine we typically have a 15 week semester, so assuming we’ll squeeze a few topics together, you can pick up to 20 topics (it would be no fun if you could check everything!). I’m sure there are other ways to organize/slice&dice these topics, but this is a reasonable approximation. What would you prioritize in a community ecology class? What are your top 20 priorities for an introductory graduate-level community ecology class? Take our poll (NB: I have NOT randomized the order presented, to keep related topics close to each other, but please make sure you read to the end and don’t just bias towards the first things you see):


Notes and impressions from the ESA meeting

I enjoyed the meeting and got a lot out of it. Thanks very much to the organizers for working so hard to make it happen. Some random thoughts and impressions:

I screwed up my Ignite talk, but it was fine. I stupidly planned for 15 slides at 20 seconds per slide, rather than the specified 20 slides at 15 seconds per slide. Oops. So I had to change it to 15 seconds per slide (them’s the rules!). I turned it into a joke about how 5 minute talks are for wimps and I was going to do my talk in 3:45, because I’m a blogger and I can be brief. It got a laugh, and the talk itself was fine. Between having been pretty well prepared and the “modular” way in which my talk was structured, it wasn’t that hard to cut it down on the fly. There’s a lesson here for students. At some point, something’s going to go wrong during one of your presentations (or one of your classes, if you’re a teacher). It could be an equipment failure, an audience member or student who keeps interrupting with questions…anything. It’s really useful to be able to think on your feet and deal with it.

A couple of thoughts on Ignite talks. They’re more work to prepare than conventional talks. And the 15 seconds per slide rule is a bug, not a feature. I assume the idea is to try to force people to minimize the use of text and figures, and so give a talk where all the visuals are pretty pictures. But even if you do that, the rule imposes a very awkward and distracting pace and rhythm on your talk. People are always either rushing to catch up with their slides, or (more rarely) waiting for their slides to advance, or (most rarely of all) reading their talks so as to stay in perfect sync with their slides. That is, unless you do what I did and cheat, by having 2-3 duplicate slides in a row. But if you’re allowed to do that (and how could anyone stop you?), then there’s no point to the 15 seconds per slide rule. Ignite talks remind me a bit of how authors and poets sometimes set themselves the challenge of writing under some very severe constraint, like not using any word with an “e” in it. It’s really difficult, and even when you pull it off the achievement is more in having produced something that’s decent despite the constraint, rather than excellent because of the constraint.

More thoughts on the topic of my Ignite session (theory vs. empiricism) in a separate post, hopefully.

It seemed like a small meeting this year. Maybe even under 3000? There were plenty of empty seats in most of the rooms. (Though not in the session I was in, which was standing room only. I wonder if part of the reason was that a lot of the Ignite sessions seemed to be aimed at fairly narrow audiences this year? So if you wanted to go to an Ignite session, ours was probably going to be your first choice? I dunno, I’m just speculating.)

Because of the small size of the meeting, and because there wasn’t one street where all the bars and restaurants were concentrated, it was pretty easy to go out to eat and drink. I tried several of the recommendations from our recommendations post; they were all excellent. And I thought the food prices were great.

The meeting was split between the convention center and two hotels across the street, which wasn’t ideal. I know there’s a lot that goes into choosing a meeting location, and I don’t necessarily think ESA should just rule out anyplace that can’t fit the whole meeting into its convention center. But personally I do much prefer meetings that aren’t so spread out.

Not many people came to our meetup, but that’s fine; we enjoyed meeting the folks who came by (thanks!). Meg, Brian, and I had dinner after the meetup. It was the first time all three of us got together face to face. I hope it becomes an annual tradition.

One interesting thing that came out of the meetup is that apparently readers often don’t realize who wrote which post? Meg and I have both had the funny experience of being complimented on posts the other wrote. Maybe Brian has too? Do we need to do more to make clear who wrote which post? Besides, you know, having the author’s name right below the title (in admittedly-tiny type)? :-)

I also saw Meg speak for the first time. Her talk was really good. Brian was scheduled almost opposite me so I couldn’t go, sadly.

Other really standout talks for me: Kathy Cottingham’s opening plenary, Greg Dwyer’s Ignite talk in my session (good points made very entertainingly), George Sugihara (wonderful animated videos from his son, explaining the key ideas), Monica Granados (very creative and thought provoking), Peter Adler (just excellent all around), and Rae Winfree (a collaborator who explained some ideas of mine more clearly and succinctly than I ever have).

I stayed for the whole meeting. Friday morning attendance was low as usual, although it seemed pretty good in the poster session and in the oral session I saw. I still think the ESA should swap the Monday morning and Friday morning activities (possibly with the awards ceremony moving to a late afternoon or early evening slot on Sunday or Monday). That way people are more likely to stay to the end of the meeting (because otherwise you’d be skipping an entire day), and the only people who need to stay for Friday morning are those attending workshops and other activities for which you have to sign up. Lots of other people like this idea too. Worst case scenario, you try it once and if it doesn’t work, you go back to the current schedule.

Finally, anyone know why the meeting was a week later this year, and is a week later again next year? I’m sure some people like having an extra week in the field, but other people have to start teaching in mid-August. And I have personal reasons for much preferring that the meeting start earlier in August.

What were your impressions of the meeting?

Book review: Experimental Evolution and the Nature of Biodiversity by Rees Kassen

Here’s something new for this blog: a timely book review. Rees Kassen’s Experimental Evolution and the Nature of Biodiversity has just been published. Here’s my review.

Full disclosure: Rees is a friend; I spent a semester visiting his lab back in 2010. He was kind enough to send me a free copy of his book. I tried not to let any of that affect my review one way or the other, and I hope I managed to do that.

The book reviews what we’ve learned about evolutionary adaptation and diversification from experimental evolution of microbes. Connecting adaptation and diversification is an old problem, one Darwin himself famously struggled with. Rees is one of the world leaders in experimental microbial evolution, so his lab’s own work figures prominently in the book (without dominating it; the book is very far from just being a compilation of Rees’ own work). The chapters cover:

  • an introduction to experimental evolution (starting with a really cool example dating back to shortly after Darwin’s death)
  • the genetics of adaptation to a single environment
  • divergent selection
  • selection in spatially and temporally variable environments
  • genomics of adaptation
  • phenotypic disparity
  • rate and extent of diversification
  • adaptive radiation
  • genetics and genomics of diversification
  • the nature of biodiversity

Most of the chapters start with a stage-setting vignette to introduce and motivate interest in the topic. For instance, the chapter on adaptation to a single environment starts with the story of how Londoners sheltering in the Underground tunnels in WW II were plagued by mosquitoes that may have adapted to underground habitats, and to feeding on humans, during the 80 years the Underground had existed at that time. I really enjoyed the vignettes and found most of them effective. It’s too bad the approach kind of runs out of steam near the end of the book (there’s no vignette for the chapter on adaptive radiation, and the vignette for the following chapter was interesting but didn’t seem to me to be closely tied to the chapter topic).

Bottom line: I liked the book. Not surprisingly, perhaps, because I’m very much on Rees’ wavelength. He believes that our hypotheses should come from mathematical theory whenever possible. He thinks it’s really important to complement observational and comparative data with direct experimental tests. He believes that good data from a model system are better than no good data at all (and as his book shows, that is often a real choice we’re faced with in science). He believes that one can make useful comparisons between microcosms and other systems by keeping in mind the ways in which microcosms are different from other systems (e.g., large population sizes, adaptation based on new mutations rather than standing variation). He believes that microbial microcosms are simple enough to be tractable, yet complex enough to be capable of surprising us, and so capable of inspiring new hypotheses as well as testing existing ones. I agree on all counts.

Indeed, I wish I’d written the book myself. And I mean that almost literally, because this is kind of the evolutionary equivalent of an ecology book I proposed to write a few years ago, pulling together everything ecologists have learned from microcosm experiments. But Rees’ book is better than mine would have been, I think. One reason is that Rees’ book is about a fairly well-developed and unified body of theory that’s been directly tested in enough sufficiently-similar experiments that one can do meta-analyses on the results. I don’t know that you could say the same for my proposed book.

Those meta-analyses are the core contribution of Rees’ book, to my mind. There are about 10 meta-analyses in the book, depending on precisely how you count, many of which could’ve been standalone papers. I can only imagine how much frickin’ work it must’ve been to compile the data! If you want to know how often fitness trade-offs evolve under divergent selection (invariably), whether adaptation to a fitness peak typically involves fixation of few or many mutations (few), what the typical rate of substitution is during an adaptive walk, and much more, this book has the numbers.

The other bit of the book that really stood out for me was the extension of Fisher’s geometric model to multiple phenotypic optima, thereby converting the model into a tool for studying the consequences of divergent selection (e.g., the contrasting selection pressures imposed by two different habitats). This is a lovely idea, credited to unpublished work by G. Martin. Simple, elegant, and powerful–I can’t wait to see it further developed.
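If you want a concrete (if cartoonish) sense of what that multiple-optima extension looks like, here’s a little toy simulation I threw together myself. To be clear, this is my own sketch of the basic idea, not code from the book or from Martin’s unpublished work, and the parameter choices are arbitrary: fitness is a Gaussian function of distance from an environment’s optimum, mutations are random vectors in phenotype space, and beneficial mutations fix with the classic weak-selection probability of roughly 2s.

```python
# Toy sketch (mine, not the book's) of Fisher's geometric model with two
# phenotypic optima, one per environment. Each line adapts to its own optimum
# via an adaptive walk, then we score each line's fitness in BOTH environments.
import numpy as np

rng = np.random.default_rng(1)
n_traits = 5                      # dimensionality of phenotype space
opt_1 = np.zeros(n_traits)        # optimum in environment 1
opt_2 = np.zeros(n_traits)
opt_2[0] = 2.0                    # environment 2's optimum differs along one axis

def fitness(z, optimum):
    """Gaussian fitness declining with distance from the optimum."""
    return np.exp(-0.5 * np.sum((z - optimum) ** 2))

def adaptive_walk(optimum, n_mutations=5000, mut_sd=0.1):
    """Start midway between the optima; beneficial mutations fix with prob ~2s."""
    z = 0.5 * (opt_1 + opt_2)
    for _ in range(n_mutations):
        mutant = z + rng.normal(0.0, mut_sd, n_traits)   # random mutation vector
        s = fitness(mutant, optimum) / fitness(z, optimum) - 1.0
        if s > 0 and rng.random() < 2.0 * s:             # weak-selection fixation prob
            z = mutant
    return z

line_1 = adaptive_walk(opt_1)
line_2 = adaptive_walk(opt_2)
print("line 1 fitness in env 1 vs env 2:", fitness(line_1, opt_1), fitness(line_1, opt_2))
print("line 2 fitness in env 1 vs env 2:", fitness(line_2, opt_1), fitness(line_2, opt_2))
```

The point of the toy is just that each line should end up fit near its own optimum and much less fit at the other one, so a fitness trade-off emerges from adaptation itself, which is the story the book tells far more rigorously (and with real data).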

The book is a satisfying story of an ongoing, successful research program. On topics on which we have well-developed theory, microbial evolution experiments usually behave more or less as theory predicts, at least on average (there’s often a lot of variation around the average, which is something I wish Rees had discussed a bit more). The book also points out the most interesting and needed directions for future research, as a good book of this sort should do. It’ll be a gold mine for grad students looking to get up to speed on the literature and on the lookout for project ideas.

There are some weak points, though they’re far outnumbered by the strong points. There’s perhaps a bit too much repetition, with some of the same concepts and examples reintroduced in two or three places. But then, a reader who was completely new to this material might appreciate the repetition. The chapters on diversification (the second half of the book) in general weren’t quite as strong as the chapters on adaptation, probably because Rees had less material to review. So there’s less meta-analysis and more qualitative discussion of isolated examples.

And I had several quibbles with the chapter on spatial and temporal variation in selection. Rees’ explanation of why geometric rather than arithmetic mean absolute fitness is of interest in temporally-varying environments isn’t as precise as I’d have liked (a toy numerical illustration of the point follows below, for anyone who hasn’t seen it). I wish this chapter had been clearer up front (rather than partway through) about the difference between selection that merely varies in direction in space or time, and selection that can actually stably maintain genetic variation. But I admit that’s a personal hangup of mine. I also would’ve liked to see this chapter compare spatially- or temporally-varying selection to selection in non-varying environments with the same average conditions as the varying environments, since otherwise you’re confounding the effects of environmental variance with the effects of average environmental conditions. Apparently most theory or experimentation on this topic doesn’t make that comparison (unless I misunderstood something?). But that’s another personal hangup of mine.

Finally, throughout the book I found myself wanting more comparison of the results of microbial evolution experiments with results from other systems. Rees’ comparative remarks often are quite brief, and a few more might have helped to “sell” skeptical readers on the value of the experimental evolution approach. But then again, I suspect that for many topics there’s just no comparable data from other systems, and so probably there’s not much that could be said by way of comparison.
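Here’s that toy illustration of the geometric-vs-arithmetic-mean point, in case it’s unfamiliar (my own standard example, not anything from the book): a genotype whose absolute fitness alternates between good and bad years can have an arithmetic mean fitness of 1.0 and still decline, because population growth compounds multiplicatively across generations.

```python
# Toy illustration (mine, not the book's) of why long-run growth in a
# temporally-varying environment tracks the geometric, not arithmetic,
# mean of absolute fitness.
import statistics

fitnesses = [1.5, 0.5] * 10      # 20 generations of alternating good and bad years
N = 1000.0                       # starting population size
for w in fitnesses:
    N *= w                       # growth compounds multiplicatively

arith_mean = statistics.fmean(fitnesses)          # = 1.0
geom_mean = statistics.geometric_mean(fitnesses)  # = sqrt(0.75), about 0.866
print(f"arithmetic mean fitness: {arith_mean:.3f}")
print(f"geometric mean fitness:  {geom_mean:.3f}")
print(f"population after 20 generations: {N:.1f}")  # about 56, i.e. declining
```

The arithmetic mean of 1.0 would suggest the lineage holds its own; the geometric mean of about 0.87 correctly predicts that it declines.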

The writing is solid–simple, straightforward, and clear. The writing in several of the vignettes is really nice. In his understated way, Rees is a fine storyteller. I found myself wishing (greedily, I know) that Rees had adopted the voice of the vignettes throughout the book. The book is not at all technical, and so is quite accessible. There are hardly any equations and jargon is kept to a minimum (including genomics and sequencing jargon, thank god–I freely admit I find that stuff impenetrable). The figures are greyscale, which is sometimes ok, but sometimes you have to squint to distinguish different shades of grey. The cover art is cool; it recalls the multiple optima extension of Fisher’s geometric model that’s one of the highlights of the book.

Anyone who does experimental evolution needs a copy of this book. Who else will want to read it? In particular, why should an ecologist want to read it? I can think of a few reasons:

  • You’re broad-minded and you want to know something about how evolutionary biologists who are interested in ecology think about biodiversity. That caveat “interested in ecology” is important. Rees doesn’t take selection coefficients and population sizes as god-given. Rather, much of the book is about how the ecology of the system (and ongoing evolution) sets those parameters, thereby affecting the future course of evolution. For instance, he talks about how predators change the selection pressures to which prey are subject, while also reducing prey population sizes and so reducing the expected supply rate of beneficial mutations. And if you’re the sort of ecologist who sees genetics and genomics as far removed from anything you could possibly be interested in, well, I think you might be pleasantly surprised if you read this book.
  • You just want to understand evolution better. In particular, I think many ecologists would be surprised by, and learn a lot from, Rees’ emphasis on fitness and its evolution. For instance the idea that fitness trade-offs (i.e. high relative fitness in one environment is associated with low relative fitness in a different environment) are a result of natural selection rather than a constraint on natural selection (see this old post). And how fitness trade-offs can emerge even if the same traits are favored in all environments (just to differing degrees in different environments).
  • You’re into eco-evolutionary dynamics. Rees doesn’t use that term, and doesn’t talk a lot about coevolution (though he does talk about it a bit), but if you’re serious about the “evolution” bit of “eco-evolutionary”, you’ll want this book.
  • You buy the suggestion of Mark Vellend and others that community ecology can learn a lot from (asexual) population and evolutionary genetics. Right now, community ecologists who believe this are focusing on neutral models and elaborations thereof. I think the first community ecologist who starts translating other sorts of evolutionary genetic models into community ecology terms could (deservedly) make a splash. For instance, I bet the multiple-optima version of Fisher’s geometric model could be used to make novel predictions about community assembly and structure in spatially heterogeneous environments.

The book is softcover and it’s not expensive, so if you’re interested you should definitely buy a copy.

Friday links

From Jeremy:

The Festival of Bad Ad Hoc Hypotheses (BAHfest) is coming to Cambridge, MA and San Francisco this October. If you didn’t know, the festival is dedicated to “well-argued and thoroughly researched but completely incorrect evolutionary theory.” I love, love, love this idea, and not just because making fun of bad evolutionary psychology is God’s work. I can’t go, but I’ll have to settle for watching the videos of the talks. Like this one, which is hilarious. (ht Nothing in Biology)

Methods in Ecology and Evolution has a video interview with Ben Bolker and Mark Brewer on statistical machismo. Always good to know #WWBBD? (ht BioDiverse Perspectives)

Why your university’s enterprise software sucks.

Meetup tonight will be in the Aquatic Ecology section booth in the poster hall

We just decided that the meetup tonight at 5:30 will be in the Aquatic Ecology section booth in the poster hall. It’s on the wall that’s all the way on your left as you walk in.

It’s possible the booth will be in use because it’s shared with the disease ecology section, in which case we’ll go with the original plan of just commandeering a table (hopefully near the booth).

Come on by and chat with us!

Semi-hiatus for the ESA meeting

Just FYI: Dynamic Ecology will be on semi-hiatus during the ESA meeting. We’re not planning any meeting preview posts this year. And we’re not planning to post during the meeting either, though I suppose we might change our minds if the spirit moves us.

Sorry if this disappoints any of you. But there just aren’t enough of you: very few people read our posts about the ESA meeting. And writing those posts takes time and mental energy that I can’t really spare during the meeting. I’m busy from the moment I wake up until I get back to my hotel room late in the evening, and just don’t have the energy to stay up for another hour writing a post that hardly anyone will read. I’m sure it’s the same for Brian and Meg. If you want to follow the meeting from afar, you should be on Twitter following the #ESA2014 hashtag; that’s where the action will be (though EEB and Flow has promised to liveblog the meeting).

We’ll probably do some sort of wrap-up post after the meeting ends.

p.s. If you’ll be at the meeting and want your Dynamic Ecology fix, come say hi to Meg, Brian, and me at the Dynamic Ecology meetup on Wed. 5:30-6:30 in the poster hall!

Friday links: highly significant increase in marginal significance, hidden female authors, the evolution of Groot, and more

Also this week: smartphone microscopes, ranking universities way back when, grade inflation and what to do about it, trading sex for co-authorship, why it feels like you work 80 hours per week even though you don’t, how to write fast, world’s oldest path diagram, academic urban legends, shark vs. shark-cam…[deep breath]…and more. The internet was on fire this week! Oh, and how to defend your thesis. Using a broadsword. :-)

From Jeremy:

Ranking US colleges and universities in 1911, and now. The more things change, the more they stay the same, at least for private universities. Some fascinating history here, of which I was totally unaware (click through to the post that the linked post discusses).

Here’s something that has changed about US universities since 1911 (well, ok, 1940), and in a big way: the grades they give. “A” is now the most common grade (!), and almost 80% of grades are A’s or B’s. The linked post discusses small studies of two policies to combat grade inflation: obliging profs to grade on a curve, and providing the average mark in the class on the student’s transcript. I was also interested to see data confirming the stereotype that grades run lower in the hard sciences than in other fields. (Interested because it contrasts with my own personal experience. When I was an undergrad at Williams, the average grades in sciences, social sciences, and humanities there were almost exactly the same.) And I was depressed but unsurprised to see that, when you force high-grading departments to lower their grades, students stop majoring in those fields and give those profs harsher teaching evaluations. Conversely, when you start publicizing the higher-grading departments on student transcripts, more students major in those departments.

Andrew Gelman on “the scientific surprise two-step”: researchers defending their results from statistical criticism often emphasize that their results were expected based on well-established scientific theory (as opposed to far-fetched results discoverable only via p-hacking). But when pitching their results to selective journals and their reviewers and readers, those same authors often emphasize how surprising and novel their results are. Discuss.

The percentage of papers reporting marginally significant p values in the abstract has increased dramatically since 1990. So has the percentage of papers reporting marginally non-significant p values in the abstract, though the increase is much smaller. The results are mostly due to changes in biology abstracts, rather than to physical science or social science abstracts. Argue amongst yourselves whether that says something good or bad about biology. (ht Retraction Watch)

How sloppy citation practices propagate academic urban legends. Citation practices aren’t the only problem, of course–there are plenty of academic myths that don’t arise from sloppy citations.

Are female authors of scientific papers more likely than male authors to hide their gender by using their first initial? This analysis suggests that the answer is yes, though the estimated gap in first initial usage probability is small. The analysis depends on a mixture model and I don’t have a good sense for how well that approach works here, but as far as I can tell it seems like it works fairly well.

The EEB and Flow’s guide to preparing for, and surviving, the ESA meeting.

The 2013 ISI impact factors for the top 40 ecology journals. I’m kind of morbidly curious if anyone will slam me merely for linking to this (Go right ahead! It’s been a long time since someone’s ripped me in the comments, I kind of miss it.) I find these data most interesting because of the changes over time, and what those changes suggest about our collective reading and publishing habits. I’m old enough to remember when Ecology Letters and Methods in Ecology and Evolution didn’t exist and the biogeography journals were well below journals like Ecology and Am Nat (weren’t they?) rather than well above them. Presumably the rise of the biogeography journals is at least in part because of the increased global change focus of ecology? Also worth noting that lumping all these journals together as “ecology” journals runs roughshod over a lot of variation in their audiences and goals. Some of these journals are in totally different fields than others.

An anonymous survey of 400 European economists finds that 94% have engaged in at least one dubious research practice. The more commonly-admitted practices include refraining from citing work that contradicts your own (admitted by 20%) and copying your own previous work without citing it (admitted by 25%). 7% admit to using tricks to tweak the outcomes of statistical tests. And 1-2% admit requesting or offering sex in exchange for co-authorship or promotion (! man I hope that’s an overestimate due to sampling error…) Perceived pressure to publish was positively associated with the admission of several dubious practices. (ht Marginal Revolution)

Arjun Raj asks a good question: since academics don’t actually work 80 hour weeks (or anything close), how come they feel like they do? This seems like a good example of how myths and urban legends arise–it’s people believing stuff that just feels right to them.

Another from Raj: how to write fast.

Here are the path diagrams from Sewall Wright’s original (1921) path analysis paper. (ht David Giles)

Myth-busting: the evidence that making your paper open access causes it to be cited more often is really weak. I’m with Phil Davis on this one: if this is a question we sincerely care about answering, someone should figure out how to do a proper randomized controlled trial (well, another one; the one that’s been done finds no effect of open access). Because just collecting more observational, correlational evidence is not going to shed any light on the matter.

From what plant (or fungus) did Groot evolve? Much as I love the suggestion that he evolved from kudzu, my money’s on some kind of human-plant hybridization event. Because, dude, species from different phyla totally can hybridize, thereby instantly creating whole new taxa without the need for any of that Darwinian evolution crap. :-)

We’re gonna need a bigger boat shark-cam. :-)

And finally, xkcd has good advice for your thesis defense. Presumably the snake fight portion. :-)

From Meg:

This article talks about the value of writing in 10 minute chunks of time. It relates to something I included in my post on navigating the tenure track. Doing a little bit of writing every day is advice I was given as a grad student, and advice I’ve passed on to people in my lab. Though, as Ellen Simms pointed out on Twitter, perhaps the biggest challenge is finding ways to fit data analysis in – that is very hard to get done in 10 minute chunks. (Related: my scramble to get an ESA talk together this week, which resulted in me contributing only a few Friday links!)

The Guardian has a piece on how an all-male panel is no way to honor Rachel Carson. I agree!

Scicurious tweeted about this microscope attachment for smartphones that has 30x magnification. Sounds pretty neat!

Self-promotion in science: poll results and commentary

At this point we’ve probably gotten about as many responses as we’re going to get to our poll on self-promotion in science. Thanks to everyone who took the poll! Here are the results, with some commentary.

First, as a commenter noted, the poll defined self-promotion as a bad thing. That was purely for the sake of keeping the poll simple. It just seemed complicated to try to ask which activities are self-promotion, and which activities are good or bad things. Also, I wanted to focus on whether people approved or disapproved of these activities as self promotion, as opposed to for some other reason. For instance, some people disapprove of submitting to Science, Nature, and PNAS for reasons that have nothing to do with self-promotion on the part of authors (e.g., some people disapprove of publishing in non-open access journals). But the commenters who didn’t like the framing of the poll have a point. Heck, I personally think that some activities are good things despite their self-promotional effects, or even because of them. For instance, as another commenter noted, having a high profile can be an effective means to the end of getting policy makers and the general public to take note of important scientific information. As I said in the original post, the poll was merely intended as a conversation starter. I think and hope it served that purpose, but admit that a better-framed poll might have served the same purpose more effectively.

Ok, on to the poll results…

The first thing that jumped out at me was the level of disagreement. There wasn’t unanimous agreement about any of the activities I listed. And for the large majority of activities, there were at least a few people who thought the activity was fine and not self-promotion, and at least a few people who disapproved of it as self-promotion. I don’t think that’s an artifact of the way the poll was framed, although it could be in part. But there was much more disagreement about some activities than others.

  • Publishing in Nature, Science, or PNAS rather than a discipline-specific journal: A large majority (88% as of this writing) think this is fine and not self-promotion, with a large majority of the remainder merely having reservations.
  • Blogging, but not about your own work: 95% think this is fine and not self-promotion, with the remainder merely having some reservations. Probably not surprising, given that this was a poll of blog readers. :-) But I suspect that even if you polled more widely, you wouldn’t find too many people who think that merely having a blog constitutes self-promotion.
  • Blogging about your own work: As you’d expect, fewer people are ok with this than are ok with blogging about other things: 63% approve of this as not self-promotional, with 34% having reservations and 4% disapproving of it as self-promotion. Personally, I have reservations. That’s why I don’t blog about my own work, unless I’m using my own work as an example to illustrate a larger point or something. I wouldn’t feel like I was “adding any value” if I blogged about my own work. That would feel to me like “pure” self promotion, without any other purpose. Anybody who might want to read my papers already has their own ways of filtering the literature and doesn’t need me to point them towards my papers. And if anybody just wants to read a summary of one of my papers, well, that’s what the abstract is there for (and anyone who lacks the technical expertise to understand my abstracts probably isn’t going to want to read even a summary of one of my papers). But I hasten to add that I don’t have any problem with other people blogging about their own work. Different strokes for different folks and all that. Plus, if unlike me you’re blogging as a form of outreach to non-scientists, or to influence policy makers, you’re definitely going to want to write non-technical summaries of your work. That adds a lot of value for your audience. It also raises your profile, which is useful because that encourages your audience to take notice of the important scientific points you’re making.
  • Tweeting, but not about your own work: Not surprisingly, the vast majority of you (87%) don’t see tweeting as self-promotional in and of itself, and the remainder merely have reservations. And I’d be surprised if a broader poll of non-blog readers gave dramatically different results on this. Although it’s interesting that apparently there are a few of you who are ok with blogging as not at all self-promotional, but who have reservations about tweeting. Which puzzles me; I’d be curious to hear comments on this.
  • Tweeting about your own work: As with blogging, about 63% of you are fine with tweeting about your own work, with 34% having reservations and 4% disapproving of this as self-promotion.
  • Sending a pdf of your work to other people in your field: I expected this one to be controversial, and it was: 63% approve as not self-promotion, 24% had reservations, 13% disapprove as self-promotion. Personally, I’m not comfortable doing this, for the same reason I’m not comfortable blogging about my own work (only more so, because emailing someone seems more intrusive to me than putting up a blog post). I’d be curious to hear any anecdotes or data on how common this practice is. I think people have only sent me pdfs of their work like two or three times in my whole life.
  • Doing interviews for popular media: I’m actually kind of surprised this wasn’t more controversial, since there’s a stereotype that lots of academic scientists disapprove of anyone who has a public profile. But 81% of respondents didn’t see doing interviews for popular media as self-promotion. Which I think makes sense. After all, people who want to interview you usually come to you, not the other way around, so it’s “promotional” but not really self-promotion. The vast majority of the remainder merely had reservations.
  • Allowing your employer to send out press releases about your work: Almost exactly the same responses as for doing interviews with popular media.
  • Inviting big names in your field to your talk or poster at a conference: Another one that I thought would be controversial. Indeed, this was one of the most controversial practices on the list: 47% approving as not self-promotional, 35% with reservations, 18% disapproving as self promotional. Personally, I do occasionally invite people to my talks, and more often to my students’ talks. But I do so only when I (or my students) really want to pick someone’s brain or really want feedback from someone whom I think would be a really good source of feedback. I don’t do this just for the sake of increasing attendance at my talk or helping my students meet random famous people or whatever (“networking” for the sake of networking doesn’t do anything for your career). But motivations are slippery and hard to pin down, even to ourselves, never mind to others.
  • As a reviewer, suggesting that authors cite your papers: The first practice on the list that respondents mostly disapprove of: 41% disapprove as self-promotional and 52% have reservations; only 7% think it’s fine. Personally, as for other items, I think motivations are key here. I have occasionally suggested that authors cite my work, but only when I thought that it was a serious oversight for them not to do so (i.e. the same reason I’d suggest that the authors cite someone else’s work).
  • As an author, citing your own paper when other citations might be equally or more appropriate: Another practice respondents mostly disapprove of: 55% have reservations and 35% disapprove as self-promotional. Personally, I have reservations. I don’t like irrelevant self-citations. But on the other hand, it’s rarely possible or desirable to cite every relevant reference, so you have to pick and choose somehow.
  • Commenting on blogs, where the comments do not primarily comprise references to your own work: As you’d expect, people are mostly ok with this: 88% think it’s fine and not self-promotion, with the vast bulk of the remainder merely having reservations. And it may reassure the very small number of you who see this as self-promotion to know that it’s a very ineffective form of self-promotion, because many blog readers don’t read the comments. :-)
  • Commenting on blogs, where the comment primarily comprises references and links to your own work: One of the most controversial practices, which kind of surprises me, though perhaps it shouldn’t have. 38% see this as fine, 46% have reservations, 16% disapprove of it as self-promotion. I sometimes do this myself. Basically, if I find myself wanting to comment, but the comment is something I’ve already said elsewhere, I’ll often just link to what I’ve said previously. It just seems faster than typing it out again. And I don’t see why I should refrain from commenting at all just because I happen to have said something relevant previously. Although I would never comment on someone else’s blog just to say “hey, come check out my blog too!” or anything like that.
  • Tweeting about your own work to people who don’t follow you on Twitter: I expected people to be pretty negative about this, and they were. Only 17% approve, with 39% having reservations and 44% disapproving of it as self-promotion. This is somewhat like emailing a stranger a pdf they didn’t ask for. But I’d be curious to hear from, say, science journalists what they think of this practice. For what it’s worth, I think this practice is rare, but I wouldn’t really know as I’m not on Twitter (Aside: very occasionally, somebody will tweet something to @DynamicEcology asking us to blog about it or retweet it. I just ignore such requests. Our Twitter account is mostly just a robot we use to announce new posts.)
  • Nominating yourself for awards: This one and the next one were especially interesting to me because they were the only ones on which my own views are in a small minority. Only 18% of people (including me) think this is fine; 25% have reservations and 57% disapprove of this as self-promotion. I can certainly see why this would seem like self-promotion of the worst sort. On the other hand, I know from personal experience that awards committees often are short on nominations from any source and so would love it if people would nominate themselves (e.g., the ESA Buell and Braun awards often are surprisingly short on nominees). Far from being overwhelmed with frivolous nominations, they’re desperate for candidates–any candidates–to consider and so are more than happy for people to put forward their own names. Plus, I guess I don’t really see much difference between nominating yourself for an award and applying for a competitive grant or submitting a paper to a selective journal. In all those cases, you think your work might meet some high standard (e.g. it’s in the top X% of the pool of candidates), and so you ask others to judge whether it actually does. In other words, while I’d ordinarily hesitate to do something that serves no purpose other than to promote my own work, the existence of an award means that somebody’s saying “Hey, we want to honor and promote somebody’s work!” So given that other people have decided that they want to honor and promote somebody, that somebody might as well be you! Well, as long as you’re remotely competitive–I think it would be silly to waste an award committee’s time by applying for an award for which you’re obviously uncompetitive. On the other hand, people probably underestimate their chances at least as often as they overestimate their chances. Having said that, I’ve never actually nominated myself for an award, or asked anyone to nominate me.
  • Asking others to nominate you for awards: People are only slightly more ok with this than they are with the previous one: 23% think it’s fine and not self-promotional, 44% have reservations, 33% disapprove.

As I say, don’t take these numbers as gospel. And it’s certainly not as if anything that most people approve of is thereby ok, or that anything most people disapprove of is thereby not ok. The numbers are just to give you some rough sense of people’s views, as a starting point for discussion.