Guest post: Strategies for helping your research reach a wider audience

Note from Meghan:  This is a guest post from Richard B. Primack and Caitlin McDonough MacKenzie; Richard has written guest posts for us before, including one on using a professional editor. This guest post is on a topic that I get asked about regularly when I travel for seminar trips, so I suspect it will be of interest to readers. I’ve added some thoughts of my own throughout the post below.

 

As scientists, we love our research and want to share our findings far and wide. As ecologists and conservation biologists, we especially hope that our findings affect policy, management, or everyday stewardship. And funding agencies remind us that we must ensure our research has broader impacts that benefit society, beyond just publishing scientific papers. But how do we effectively communicate our research? Here, we share some tips about how researchers can communicate research to the media, and reach audiences beyond peer-reviewed journal readers. We use examples from a recent paper of ours published with co-authors.

Make your research exciting—identify your hook. In our recent paper, "Phenological mismatch with trees reduces wildflower carbon budgets," published in Ecology Letters, we emphasized that we were building on the observations of Henry David Thoreau; Thoreau was the "hook" that we used to attract much of the interest in our research.

Make the message easy to understand—tell a story. We wrote a press release that told a story about our research and highlighted key points in non-technical language and without jargon. Even though Richard’s academic home of Boston University does not generally issue press releases about scientific papers, our summary helped reporters quickly understand our work, its significance, and potential angles that could interest readers or listeners.

(From Meghan: if you’re having a hard time finding your hook or story, there are some great resources. Randy Olson’s And, But, Therefore structure is great, and laid out in detail in his book, Houston, We Have a Narrative. The Aurbach et al. “half life” activity (described here) is also a helpful way to find your message.)

Provide informative, high-quality photos. We take many photos to illustrate our research and the key results. Sometimes these photos are carefully staged to illustrate the research process or results. Reporters are more likely to write a story if excellent photos are available.

[Photo: a man wearing a baseball cap crouches in a field, holding a field notebook in one hand and reaching toward a plant with yellow flowers with the other.]

Having good photos, such as this carefully arranged shot of Primack working in the field, helps to create media interest.

(From Meghan: these are so important, and often people forget to take them! I agree that carefully staged photos are valuable. Getting videos is very helpful, too, including for reporters to use as “B roll”. I recently shared various short snippets with a reporter—I was glad to have them, but also wished I had more! Another example of how videos can be helpful comes from this recent story by some of my colleagues at Michigan, which went viral because a student on the trip, Maggie Grundler, thought to pull out her phone and capture a quick video of a very cool interaction.)

Reach out to the media and be responsive. We emailed our press release and eye-catching photos to contacts in the media. One of them liked the story and wrote an article about our work for the Boston Globe. He was writing the article on a tight deadline, so we promptly answered his numerous questions.

(From Meghan: A couple of things related to this: first, reporters are often working on much, much tighter deadlines than we are used to—they might need to file the story by the end of the day they contact you. So, you need to be quick about responding to them, but it also helps to give them as much lead time as possible. Second, reporters generally will not share their story with you ahead of time for you to review. It’s very different than working with a university press officer!)

One thing can lead to another. The Boston Globe writer pitched the story to National Public Radio, and he will interview us for a radio program in April.

(From Meghan: One thing can lead to another….or not, or maybe it does but with a big delay. One of the things I didn’t really appreciate when I first started doing more science communication is that you can spend a lot of time talking to a reporter and it can end up going nowhere. [example 1, example 2] It can be really frustrating! If anyone has advice on how to make this less likely, I’d love to hear it!)

Get with social media. Caitlin tweeted about the article, creating buzz in the twittersphere. We wrote a short summary of our paper for our lab blog—essentially a shorter, more conversational version of the press release—with links to a pdf of our article. Our lab blog has been viewed around 100,000 times over 6 years, so we estimate this post will bring around 500 views of the story, a nice complement to the Twitter buzz.

Publish online. To generate publicity within our Boston University community, we wrote an article for BU Research, using the press release as a starting point. This article further widened the audience who will hear about the research, with relatively little additional effort on our part.

Leverage institutional networks.  The other co-authors of our paper reached out to their universities and media contacts, sharing our press release. The paper received added coverage in institutional publications and websites of the University of Maine and the Carnegie Museum of Natural History.

(From Meghan: another reason this can be useful: one press officer might not be interested or might not have the time, but someone else’s might.)

Send out pdfs.  We emailed a pdf of our paper to 100 colleagues in our field, along with a very short email summarizing the key points of the article, again pulling from the same basic story in the press release and blog and Twitter posts.

Each paper and project is different, but hopefully this post has given you some ideas of things to try.

Other resources:

Compass – https://www.compassscicomm.org

The Op Ed Project – https://www.theopedproject.org/pitching

Cahill Jr., J. F., Lyons, D., & Karst, J. (2011). Finding the “pitch” in ecological writing. The Bulletin of the Ecological Society of America, 92(2), 196–205.

Merkle, B. G. (2018). Tips for Communicating Your Science with the Press: Approaching Journalists. Bulletin of the Ecological Society of America, 99(4), 1–4.

First cut results of poll on manuscript rejections: we deal with a lot of rejection

I recently did a poll asking readers about their experiences with manuscript rejections. This was based on thinking about different submission strategies, including wondering about what the “right” amount of rejection is. In this post, I lay out the big picture results, and then end by asking about what further analyses you’re interested in.

There are lots of figures below, but here’s my summary of the key results:

  • respondents to this poll reported a lower acceptance rate at the first journal to which they submitted a manuscript (48.4%) than in the recent Paine & Fox survey (64.8%). Paine & Fox had vastly more respondents (over 12,000!!!), so I trust their number more; other potential factors that might also contribute are discussed below.
  • it’s not uncommon for people to need to submit a paper to 3 or more journals before it’s accepted.
  • it’s surprisingly common (at least to me) for people to take the “aim high, then drop if rejected” strategy
  • people are submitting to stretch journals pretty often—and sometimes it pays off
  • there’s a decent amount of uncertainty in terms of how well a manuscript fits a particular journal (on the part of authors, reviewers, and/or editors). This suggests that the concluding advice of Paine & Fox (“We therefore recommend that authors reduce publication delays by choosing journals appropriate to the significance of their research.”) is sometimes easier said than done.
  • people aren’t totally giving up on manuscripts as often as I might have thought they might (but this might be explained by the demographics of the poll respondents)


Poll on manuscript rejections

My recent post on building confidence, building resilience, and building CVs got me thinking a lot about rejection, including what is the “right” amount of rejection. There’s no clear answer to that question, but I think there are extremes that would not feel right for me. If every manuscript got accepted at the first journal to which I submitted it, I’d suspect I was playing it too safe in my journal choices. But I also definitely would not want every manuscript rejected from multiple journals before it was accepted!

I originally was going to do a poll asking what percentage of your manuscripts you think should get rejected, versus what percentage actually are rejected. But I suspect that would be easy to guess at and hard to estimate well, and I realized it’s probably more interesting to get some sense of what is actually going on. So, instead, I am going to ask about the three most recent papers on your CV. (Three is an attempt to balance not having one weird paper dominate a response against not wanting the number so high that only senior folks could answer the poll.) This will take a little time to answer, I think – I personally would have to think a bit about each of my three most recent papers and recall their submission histories. If you’re used to plowing through quizzes, this one might take longer.


Building confidence, building resilience, and building CVs

When I was at the biology19 meetings recently, someone said something to me that I can’t stop thinking about: a student’s first manuscript should get sent to a journal where it will be accepted without much of a struggle; the second submission should be more of a struggle, but should get accepted at the first journal to which it was submitted; the third should go somewhere where it gets rejected. The person who said this, Hanna Kokko, acknowledged this was somewhat tongue-in-cheek, and that many factors will end up influencing where someone submits a given manuscript; her real approach is to respect the first author’s own wishes, after a discussion of the pros and cons of different options.

But her tongue-in-cheek recommendation is motivated by the recognition that rejections can be a huge hit to one’s confidence, especially when someone is just starting out. I’ve seen (and personally experienced) the enormous confidence hit that can come from serial rejections of a manuscript, again, especially when one is just starting out. So, trying to figure out a strategy to reduce the potential for a big ego blow (while learning to deal with rejection too—but not before one has succeeded twice) makes a lot of sense to me.


Manuscraps: on partially killing one’s manuscript darlings

If you here require a practical rule of me, I will present you with this: Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it—whole-heartedly—and delete it before sending your manuscript to press. Murder your darlings.

– Arthur Quiller-Couch, “On Style”, 1914

“Murder your darlings” and its variants are common writing advice.* But what do you do if you’re not quite sure you’re ready to part with those darlings? My strategy is the same as Ethan White’s:

I suspect this is a common strategy (certainly the twitter responses suggest it is), though I don’t think it’s one that gets discussed much.


Comparison of ways of visualizing individual-level Likert data: line plots and heat maps and mosaic plots, oh my!

Last week, I wrote a post where I talked about how my training in evolutionary ecology led me to try reaction norms (that is, paired line plots) for plotting paired Likert data. I had already tried a few other options, but didn’t include them in that post, and I got some feedback on that post that gave me more ideas. There was also a request for code on how to actually generate those plots. So, this post shows four different ways of visualizing individual-level responses to paired Likert-scale questions (paired line plots, dot plots, mosaic plots, and heat maps). It does that for two different comparisons, leading me to the conclusion that the type of plot that works best will depend on your data. I’d love to hear which ones you think work best — there are polls where you can vote for your favorite! And, if you’re working on similar data and want to see code, there’s an associated Github repo, but it comes with the disclaimer that my code is good enough, but definitely not elegant.
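For readers who want a head start before digging into the repo, here is a minimal sketch of the paired line plot ("reaction norm") idea, using matplotlib and made-up Likert responses. The data, colors, and output file name are all invented for illustration; this is not the code from the post's Github repo:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Made-up paired Likert responses (1-5) from the same eight respondents,
# e.g. answers to the same question asked at two time points.
before = [2, 3, 3, 4, 1, 5, 2, 3]
after = [4, 3, 5, 4, 2, 5, 3, 4]

fig, ax = plt.subplots(figsize=(4, 4))
for b, a in zip(before, after):
    # One semi-transparent line per respondent, so overlapping
    # trajectories darken where several people answered the same way.
    ax.plot([0, 1], [b, a], color="steelblue", alpha=0.3, linewidth=2)

ax.set_xticks([0, 1])
ax.set_xticklabels(["Before", "After"])
ax.set_yticks(range(1, 6))
ax.set_ylabel("Likert response (1-5)")
ax.set_title("Paired line plot (reaction-norm style)")
fig.savefig("paired_likert.png", dpi=150)
```

The transparency trick matters with Likert data: because responses take only a few discrete values, many respondents share identical trajectories, and without alpha-blending those lines would collapse into one.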


Did the other reviewer notice things you didn’t? That doesn’t mean you did a bad job.

Reviewing is something that brings out my imposter syndrome, and I know I’m not alone. Being asked to review implies that someone views us as having expertise in a given area, which means that, if you screw up the review, you will reveal yourself as an imposter (or so our brains tell us). And, for journals that copy reviewers on the decision letter, one way to tell if you’ve messed up and are an imposter is by comparing your review to that of the other reviewer(s). Rarely, I’ve been unable to figure out which was my review, because the reviews were so similar. (Phew, not an imposter!) But what about when the other reviewer notes things I missed? Clearly that means I’m an imposter!

Not necessarily.

For a long time, I viewed it as a failure on my part if the other reviewer caught something I missed. I felt like it indicated that I hadn’t been careful or critical enough. If we aren’t super critical, we aren’t good scientists, right? (I’m being facetious. I don’t actually believe that being harsh = being a good scientist. And it is definitely not the case that the harshest review is the best review!) But what about cases where the other reviewer raises concerns or criticisms that seem important, insightful, and constructive? If I missed those, I failed as a reviewer, right?

Again, not necessarily. The reason relates to something covered in a recent blog post by Stephen Heard, where he talks about finding reviewers. In it, he says he only uses one of the reviewers suggested by the authors, and explains that is because:


My goal as a reviewer: pass the Poulin test

As a graduate student, I attended my first infectious disease-themed meeting shortly after receiving the reviews on my first thesis chapter. I was excited about the work, and had sent it to Ecology Letters, which reviewed it but rejected it. I talked about the same study at that meeting. It was a small meeting, and one of the great things about it was getting to interact with senior people in the field. This included Robert Poulin, someone whose work I really admired. I was really excited to get to talk to him! During our conversation, he asked about the status of the work I’d presented at the meeting. I said that it had just been rejected by Ecology Letters, and was about to launch into a vent about the reviewers. As soon as I said (in what I’m sure was an exasperated tone), “One of the reviewers”, he stopped me and said “I was one of the reviewers.” I will be eternally grateful for that.

That moment has stayed with me throughout my career. In addition to preventing me from embarrassing myself (more!) in front of him, it taught me a really important lesson about peer review. We complain about Reviewer 2 and shake our fist at that mythical beast, but there’s a decent chance that Reviewer 2 is someone who carefully reviewed the manuscript and thought something was problematic. Or maybe it’s that, with a bit of distance from the work, Reviewer 2 thought the work wasn’t as novel as I did as an author, making rejection from a journal like Ecology Letters completely reasonable.

This interaction taught me an important lesson about how easy it is to think of an anonymous reviewer as an adversary, when there’s a good chance they’re a scientist whose work I admire and whose feedback I would value.

There’s an idea that anonymity leads to animosity. I think that’s more often discussed in terms of the person making the comments – for example, as a reason for the toxic nature of the comments on websites. But it also applies in the other direction – in an anonymous interaction, it can be easy to assume the person writing the comment is unreasonable (unless they think our work is brilliant – then clearly they are totally reasonable!). I think the way the scientific community discusses reviews (including on twitter) probably doesn’t help.

Personally, when I receive reviews, I have to work to put myself in the mindset that these reviews can help my paper, even if they’re negative. There are still occasions where my first reaction is something like “How is it possible for reviewers to be so clueless?!?!” but then, after coming back to the reviews a few weeks later, I realize that the reviewers were pointing out something that we didn’t explain very well or a part of the literature we really should have discussed more or an alternate explanation we hadn’t fully considered.

As I’ve blogged about before, I don’t sign most of my reviews. But I still write them with that interaction I had with Poulin in mind. My goal is to write reviews where, if I ended up in that same situation at a meeting, I would be okay with identifying myself as the reviewer, even in cases where my review was a critical one. In other words, I want to pass what I’ve come to think of as the Poulin test.


Social aspects of writing

Intro: this is the second of a series of posts exploring some common themes in three books: Anne Lamott’s Bird by Bird, Helen Sword’s Air & Light & Time & Space: How Successful Academics Write, and Tad Hills’ Rocket Writes a Story. The first post focused on getting started with a new writing project, rough drafts, and the pleasures of writing. This post focuses on social aspects of writing.

 

Writing is inherently social – at a minimum, your article is read by reviewers and, of course, we write hoping that colleagues will read and understand (and maybe even like!) our article once it comes out. But the process of actually doing the writing can sometimes feel very isolated. Certainly my general approach is to hole up in my office and try to crank out some text. I get feedback from coauthors, but that’s done at a distance and with little interaction outside Word.

So, I was interested to see that Helen Sword has social habits as one of the four components of a strong writing practice. She devotes a chapter specifically to writing among others, talking about writing groups, write-on-site boot camps and retreats, and online writing forums. Each chapter of Sword’s book ends with a “Things to Try” section; for the chapter on writing among others, it includes “start a writing group” and “retreat in the company of others” as two of the four suggestions.

Right after reading that section of Sword’s book, I read a Monday Motivator email from NCFDD (written by Kerry Ann Rockquemore) that also emphasized the social aspects of writing, including traditional writing groups, writing accountability groups, write-on-site groups, and boot camps.

Reading those back-to-back made me realize that I severely lacked social components in my writing. I have gotten very used to setting my own goals and not sharing them with anyone else, and to holing up in my office to write. But I also don’t feel like writing is generally a problem for me, so I wasn’t sure if I really needed to address the lack of social habits. If there isn’t a problem, why try to fix it?

But then, on a solo morning run, I thought about how much further and faster and more enjoyably I can run on the days where I go with a friend. And I thought about how, when I first got into distance running, I would tell some friends and family members about my race plans, which made me more committed to sticking with my training runs. And I’m much less likely to skip a run if I am meeting a running buddy, which explains why I ended up running in a downpour recently. Could these same social habits help with writing?


Journals have a responsibility to ensure ethical oversight of mental health research (and we do not currently have evidence that grad students are 6x as likely as the general population to have depression and anxiety)

I care deeply about mental health in academia (and have blogged about it in the past, including here and here and here). Given that, I was really interested when a recent paper by Evans et al. came out on graduate student mental health. However, when I read it, two things stood out to me: it didn’t mention IRB approval, and the most striking conclusion – that graduate students experience anxiety and depression at 6x the rate of the general population – is not supported by the study. The key messages of this blog post are:

  1. the authors did have IRB approval to do this work, but Nature Biotechnology did not know that when they published the study. The editor of Nature Biotechnology claims that, since they published this in their Career & Recruitment section, it is not a research article and therefore didn’t require peer review or questions about IRB approval. This is problematic, as the study is clearly written and presented as new research findings, and journals have a responsibility to ensure ethical oversight of work they publish.
  2. While the Evans et al. paper claims “Our results show that graduate students are more than six times as likely to experience depression and anxiety as compared to the general population,” that claim is not supported by their study. Their survey was not a representative sample of the graduate student body (it was a voluntary survey, distributed via social media and email), but they compare it to a representative survey of the general population to get the 6x statistic.
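To make point 2 concrete, here is a toy simulation of how self-selection into a voluntary survey can inflate an observed prevalence. All of the numbers are invented for illustration; they are not taken from Evans et al. or from the comparison survey:

```python
import random

random.seed(42)

# All parameters invented for illustration; not from any real study.
TRUE_PREVALENCE = 0.10        # hypothetical true rate in the population
P_RESPOND_IF_AFFECTED = 0.60  # affected people opt in to the survey more often
P_RESPOND_IF_NOT = 0.10

population = 100_000
respondents = []  # True/False "affected" status of each person who opted in
for _ in range(population):
    affected = random.random() < TRUE_PREVALENCE
    p_respond = P_RESPOND_IF_AFFECTED if affected else P_RESPOND_IF_NOT
    if random.random() < p_respond:
        respondents.append(affected)

observed = sum(respondents) / len(respondents)
print(f"true prevalence:     {TRUE_PREVALENCE:.2f}")
print(f"observed in survey:  {observed:.2f}")  # inflated by self-selection
```

With these made-up numbers, the observed prevalence among respondents works out to roughly four times the true rate, even though every individual answer is accurate. Comparing that inflated number to a representative survey of the general population would manufacture a large "difference" from sampling alone.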

Again, I want to be clear: the authors did have IRB approval for the work, but I only know that because I wrote the authors directly (after being dissatisfied with the responsiveness at Nature Biotechnology), and Nature Biotechnology did not know they had IRB approval when they published the study. In addition, this study does not provide evidence that grad students are six times as likely as the general population to experience depression and anxiety.

Graduate student mental health is really important, so we need as accurate a picture as we can get of the current situation. As discussed below, a study (by Levecque et al.) with a more carefully controlled comparison group found a 2.4-fold increase in risk in graduate students compared to the highly educated general population. That is still a serious problem that needs to be addressed, but it is not a 6-fold greater risk.

To expand on these points more:
