Recommitting to email boundaries

In November 2016, I did a poll and wrote a post about how overwhelming email can be. About a quarter of respondents to the poll said they rarely or never feel overwhelmed by email. I am not one of them; I’m in the majority who are overwhelmed by email at least some of the time. Other notable poll findings were:

  • people with more emails in their inbox were more likely to feel overwhelmed by email, and
  • faculty were more likely than grad students and postdocs to have a lot of work-related emails in their inbox.

At the time I wrote up the results of that poll, one of the main strategies I settled on for trying to be less overwhelmed by email was to batch my inbox, so that my emails only arrived once or twice a day. The idea is to treat email like regular mail – a thing that arrives at a given time and that you deal with in a batch (or, um, toss on the table and leave there for a while).

After that poll, I switched to using Batched Inbox to batch my mail. (It was free when I signed up, but I don’t think it is now.) It was amazing how much less overwhelming email was! I wasn’t getting distracted by emails as they arrived in my inbox, I found I actually got fewer emails than I thought, and dealing with them in batches really reduced the amount of time and energy I spent on email. (I’m not alone. Arjun Raj has a post about how much email filtering helped his peace of mind.)

So, I was a fan. But then I started “cheating” and checking the folder where the batched emails hang out until they get dumped into the inbox. And, in the years since then, I have gone through cycles where I recommit to batching, think “OMG, why did I ever stop doing this?!?! Dealing with emails in bulk is so much better!!!”, then start sliding and going back to more of a system of dealing with emails as they come in (why? why do I do this?!? I know it’s counterproductive!), then get completely overwhelmed by emails, then at some point remember that batching is supposed to help with that, at which point I recommit to it and once again think “OMG, why did I ever stop doing this?!?!”

Continue reading

Guest post: Strategies for helping your research reach a wider audience

Note from Meghan: This is a guest post from Richard B. Primack and Caitlin McDonough MacKenzie; Richard has written guest posts for us before, including one on using a professional editor. This guest post is on a topic that I get asked about regularly when I travel for seminar trips, so I suspect it will be of interest to readers. I’ve added some thoughts of my own throughout the post below.


As scientists, we love our research and want to share our findings far and wide. As ecologists and conservation biologists, we especially hope that our findings affect policy, management, or everyday stewardship. And funding agencies remind us that we must ensure our research has broader impacts that benefit society, beyond just publishing scientific papers. But how do we effectively communicate our research? Here, we share some tips about how researchers can communicate research to the media and reach audiences beyond peer-reviewed journal readers. We use examples from a recent paper we published with co-authors.

Make your research exciting—identify your hook. In our recent paper, “Phenological mismatch with trees reduces wildflower carbon budgets,” published in Ecology Letters, we emphasized that we are building on the observations of Henry David Thoreau; Thoreau was the “hook” that we used to attract much of the interest in our research.

Make the message easy to understand—tell a story. We wrote a press release that told a story about our research and highlighted key points in non-technical language and without jargon. Even though Richard’s academic home of Boston University does not generally issue press releases about scientific papers, our summary helped reporters quickly understand our work, its significance, and potential angles that could interest readers or listeners.

(From Meghan: if you’re having a hard time finding your hook or story, there are some great resources. Randy Olson’s And, But, Therefore structure is great, and laid out in detail in his book, Houston, We Have a Narrative. The Aurbach et al. “half-life” activity (described here) is also a helpful way to find your message.)

Provide informative, high-quality photos. We take many photos to illustrate our research and the key results. Sometimes these photos are carefully staged to illustrate the research process or results. Reporters are more likely to write a story if excellent photos are available.

[Image: a man in a baseball cap crouches in a field, holding a field notebook in one hand and reaching toward a plant with yellow flowers with the other.]

Having good photos, such as this carefully arranged shot of Primack working in the field, helps to create media interest.

(From Meghan: these are so important, and often people forget to take them! I agree that carefully staged photos are valuable. Getting videos is very helpful, too, including for reporters to use as “B roll”. I recently shared various short snippets with a reporter—I was glad to have them, but also wished I had more! Another example of how videos can be helpful comes from this recent story by some of my colleagues at Michigan, which went viral because a student on the trip, Maggie Grundler, thought to pull out her phone and capture a quick video of a very cool interaction.)

Reach out to the media and be responsive. We emailed our press release and eye-catching photos to contacts in the media. One of them liked the story and wrote an article about our work for the Boston Globe. He was writing the article on a tight deadline, so we promptly answered his numerous questions.

(From Meghan: A couple of things related to this: first, reporters are often working on much, much tighter deadlines than we are used to—they might need to file the story by the end of the day they contact you. So, you need to be quick about responding to them, but it also helps to give them as much lead time as possible. Second, reporters generally will not share their story with you ahead of time for you to review. It’s very different than working with a university press officer!)

One thing can lead to another. The Boston Globe writer pitched the story to National Public Radio, and he will interview us for a radio program in April.

(From Meghan: One thing can lead to another… or not, or maybe it does but with a big delay. One of the things I didn’t really appreciate when I first started doing more science communication is that you can spend a lot of time talking to a reporter and it can end up going nowhere. [example 1, example 2] It can be really frustrating! If anyone has advice on how to make this less likely, I’d love to hear it!)

Get with social media. Caitlin tweeted about the article, creating buzz in the twittersphere. We wrote a short summary of our paper for our lab blog—essentially a shorter, more conversational version of the press release—with links to a pdf of our article. Our lab blog has been viewed around 100,000 times in 6 years, so we estimate that this post will eventually draw around 500 views, a nice complement to the Twitter buzz.

Publish online. To generate publicity within our Boston University community, we wrote an article for BU Research, using the press release as a starting point. This article further widened the audience who will hear about the research, with relatively little additional effort on our part.

Leverage institutional networks. The other co-authors of our paper reached out to their universities and media contacts, sharing our press release. The paper received added coverage in institutional publications and websites of the University of Maine and the Carnegie Museum of Natural History.

(From Meghan: another reason this can be useful: one press officer might not be interested or might not have the time, but someone else’s might.)

Send out pdfs. We emailed a pdf of our paper to 100 colleagues in our field, along with a very short email summarizing the key points of the article, again pulling from the same basic story in the press release and blog and Twitter posts.

Each paper and project is different, but hopefully this post has given you some ideas for things to try.

Other resources:

Compass – https://www.compassscicomm.org

The Op Ed Project – https://www.theopedproject.org/pitching

Cahill, J. F., Jr., Lyons, D., & Karst, J. (2011). Finding the “pitch” in ecological writing. The Bulletin of the Ecological Society of America, 92(2), 196–205.

Merkle, B. G. (2018). Tips for communicating your science with the press: Approaching journalists. Bulletin of the Ecological Society of America, 99(4), 1–4.

If your field experiment has few replicates (and it probably does), intersperse your treatments rather than randomizing them

The last experiment I did as a graduate student was one where I wanted to experimentally test the effect of predation on parasitism. To do this, I set up large (5,000 L) whole water column enclosures (more commonly called “bags”) in a local lake. These are really labor intensive, meaning I could only have about 10 experimental units. I decided to use a replicated regression design, with two replicates of each of five predation levels. These were going to be arranged in two spatial blocks (linear “rafts” of bags), each with one replicate of each predation level treatment.

[Image: left, two experimental rafts in the distance in a lake, with fencing visible at the surface; right, a close-up of one raft, showing its five individual bag enclosures.]

Left: two experimental rafts; right: a close-up of one of the rafts, showing the five different bag enclosures.

As I got ready to set up the experiment, my advisor asked me how I was going to decide how to arrange the bags. I confidently replied that I was going to randomize them within each block. I mean, that’s obviously how you should assign treatments for an experiment, right? My advisor then asked what I would do if I ended up with the two lowest predation treatments at one end and the two highest predation treatments at the other end of the raft. I paused, and then said something like, “Um, I guess I’d re-randomize?”

This taught me an important experimental design lesson: interspersing treatments is more important than randomizing them. This is especially true when there are relatively small numbers of experimental units*, which is often the case for field experiments. With only a few units, random assignment is likely to lead to clustering of treatments in a way that could be problematic.
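
To make the problem concrete, here is a minimal R sketch (mine, not anything from the original experiment) that simulates how often randomizing five treatment levels along a linear raft leaves the two lowest or two highest levels together at one end; the definition of “clustered” below is an illustrative assumption, not the only reasonable one.

```r
# A minimal sketch (not from the original post): with five predation
# levels (1 = lowest, 5 = highest) arranged along one linear raft,
# how often does pure randomization cluster the extremes at one end?
set.seed(42)

is_clustered <- function(layout) {
  # "Clustered" (illustrative definition): both of the two lowest OR
  # both of the two highest levels occupy the two positions at one end.
  ends <- list(layout[1:2], layout[4:5])
  any(vapply(ends, function(e) {
    all(c(1, 2) %in% e) || all(c(4, 5) %in% e)
  }, logical(1)))
}

layouts <- replicate(10000, is_clustered(sample(1:5)))
mean(layouts)  # about 1/3 of random layouts are clustered this way

# A deliberately interspersed layout avoids the problem by construction:
interspersed <- c(1, 4, 2, 5, 3)
```

With only five units per block, an unlucky arrangement is not a rare fluke but something to expect roughly a third of the time, which is exactly why checking for (or directly constructing) interspersion matters.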

Continue reading

Building confidence, building resilience, and building CVs

When I was at the biology19 meetings recently, someone said something to me that I can’t stop thinking about: a student’s first manuscript should get sent to a journal where it will be accepted without much of a struggle; the second submission should be more of a struggle, but should get accepted at the first journal to which it was submitted; the third should go somewhere where it gets rejected. The person who said this, Hanna Kokko, acknowledged this was somewhat tongue-in-cheek, and that many factors will end up influencing where someone submits a given manuscript; her real approach is to respect the first author’s own wishes, after a discussion of the pros and cons of different options. But her tongue-in-cheek recommendation is motivated by the recognition that rejections can be a huge hit to one’s confidence, especially when someone is just starting out. I’ve seen (and personally experienced) the enormous confidence hit that can come from serial rejections of a manuscript, again, especially when one is just starting out. So, trying to figure out a strategy to reduce the potential for a big ego blow (while learning to deal with rejection too—but not before one has succeeded twice) makes a lot of sense to me.

Continue reading

In which my ink runs out and I realize there are lots of things that are interesting and important, and I cannot do them all

Last Monday, I faced a post-travel inbox filled with emails that needed replies. Some of them were invitations for things that would take up my time, but that seemed interesting or important or valuable or all three. And, then, of course, there were all the other things I needed to do as part of my job – editing manuscripts, writing letters of recommendation, sending emails to get people access to the lab, analyzing data, etc. And it was also the day my post on seeing a therapist appeared, which led to lots of interactions on social media, via text, and through email. All of that led me to revisit a question that I am constantly asking myself, and that I surely will never stop asking myself: how should I spend my work time?

I couldn’t get this out of my head, and, as I walked to daycare, I realized that there are three questions I should consider as I evaluate whether to do something:

  • Is it officially part of my job?
  • Am I particularly good at it?
  • Do I enjoy doing it?

I thought about how, ideally, I should try to prioritize things where the answer would be “yes” for all three. And I thought about how I spend a lot of time on things where the answer to all three of those questions is “no”.

When I got to daycare, I knew I wanted to think about this more, and was worried I would forget it. So, I pulled out my notebook in the daycare lobby, propped it on top of the stroller, and drew this:

Continue reading

Guest post: I am a scientist. Ask me what I do, not where I am from “originally”.

Note from Meghan: This is a guest post by Gergana Daskalova, a PhD student at the University of Edinburgh.

I recently attended the British Ecological Society Annual Meeting, one of the biggest scientific conferences of the year for an ecologist. Over the course of just one day, I got asked where I am from 18 times. I counted because, in just four years of attending conferences, meeting with seminar speakers, and engaging in similar activities, I have been asked where I am from far too many times. When the pattern repeated itself on day one of the BES conference, I decided to do an actual count on day two. I, like many of my fellow conference goers, get these questions at a very high frequency, probably because our looks or accents give away that “we are not from here”. Though it may seem like an innocent question – where are you from? – it leaves me feeling like my fellow ecologists are more interested in why I stand out than in why I belong.

To counter the question in a productive way and to get the focus back on my science, over the last year I have made a point of replying that I am from the academic institution where I am doing my PhD. People always follow up with “No, I meant where are you from originally?” The problem is not that I want to hide where I am from; the problem is that, in a professional scientific environment, where I am from shouldn’t matter. When people make general chat at conferences with a group of PhD students, most of them get asked what they do. When the conversation makes its way to me, I get asked where I am from. Followed by comments about my country of origin. Cool! Exciting! I’ve never been to that country. Why did you come here? What a poor country. Was it hard living there? The list goes on. Only just over half of the 18 people who asked me where I am from originally then went on to ask me about my work.

Continue reading

Did the other reviewer notice things you didn’t? That doesn’t mean you did a bad job.

Reviewing is something that brings out my imposter syndrome, and I know I’m not alone. Being asked to review implies that someone views us as having expertise in a given area, which means that, if you screw up the review, you will reveal yourself as an imposter (or so our brains tell us). And, for journals that copy reviewers on the decision letter, one way to tell if you’ve messed up and are an imposter is by comparing your review to that of the other reviewer(s). On rare occasions, the reviews have been so similar that I couldn’t figure out which one was mine. (Phew, not an imposter!) But what about when the other reviewer notes things I missed? Clearly that means I’m an imposter!

Not necessarily.

For a long time, I viewed it as a failure on my part if the other reviewer caught something I missed. I felt like it indicated that I hadn’t been careful or critical enough. If we aren’t super critical, we aren’t good scientists, right? (I’m being facetious. I don’t actually believe that being harsh = being a good scientist. And it is definitely not the case that the harshest review is the best review!) But what about cases where the other reviewer raises concerns or criticisms that seem important and insightful and constructive? If I missed those, I failed as a reviewer, right?

Again, not necessarily. The reason relates to something covered in a recent blog post by Stephen Heard about finding reviewers. In it, he says he only uses one of the reviewers suggested by the authors, and explains that this is because:

Continue reading

With public engagement, it’s also okay to start small

Yesterday, I had a post about how it’s okay to start small when it comes to learning R or any other new technical skill. Today’s post takes that same “it’s okay to start small” message and applies it to public engagement.

Occasionally, a colleague will ask about a recent public engagement activity my lab worked on. After I describe it, they sometimes say something like “I’d like to do more outreach work, but my lab isn’t as big as yours – I don’t have those people to help me!” Often, that is said with a sense of resignation that it won’t be possible for them to do outreach. Or perhaps the conversation centers around an upcoming NSF proposal, where a colleague is trying to figure out what they could propose for the broader impacts section, feeling like they want (or need) to propose something, but that there’s no way to do that if they are just starting out or haven’t done much public engagement in the past. In these conversations, my messages are:

  • it’s okay to start small, and
  • it helps to take advantage of existing opportunities.

Continue reading

When learning R (or any other new task), it’s okay to start small: aim for improvement, not perfection

When I first thought about switching to R and doing reproducible data analysis, the idea was daunting. As a grad student, I couldn’t figure out how to even get my data into R. How would I figure out that, plus mixed model analyses, plus how to make figures in ggplot, with version control and a beautiful GitHub repo for all of my work?! What I eventually accepted is: it’s okay to start small. Or, as a colleague of mine suggests: for any given project, aim to do one thing in R that you couldn’t do before.
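
(As a concrete example of what “one thing” might look like, here is a minimal sketch; the file name and column names are hypothetical placeholders, not anything from a real project.)

```r
# A "start small" first project: just get your data into R and look at it.
# "daphnia_counts.csv" and its columns are hypothetical placeholders.
dat <- read.csv("daphnia_counts.csv")

head(dat)  # check that the first few rows came in as expected
str(dat)   # check that each column has a sensible type

# A later project might add one more new thing, such as a first ggplot:
library(ggplot2)
ggplot(dat, aes(x = treatment, y = infected)) +
  geom_boxplot()
```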

I’m not sure why I set the bar so high for initially learning R. When I was first learning how to knit (actually knit, with yarn and needles, not the R version of knit), I knit a square washcloth, not a sweater. So when learning R, why was I expecting I’d be able to start out with the coding version of knitting a sweater with multiple colors, a fancy pattern, and buttons?

[Image: Fair Isle knitwear in the Shetland Museum.]

Julian Paren / Fair Isle knitwear in the Shetland Museum / CC BY-SA 2.0 via wikimedia.org

Continue reading

Guest post: Coding Club – trying to overcome the fear factor in teaching and learning quantitative skills

This is a guest blog post by ecologists Isla Myers-Smith and Gergana Daskalova from the University of Edinburgh. In case you missed it, they wrote a wonderful guest post this summer on iPads and digital data collection in the field.

Ecology is a fast-paced science, with possibly hundreds of relevant papers published every week and new techniques and quantitative skills being developed all the time. It is easy to feel very behind and overwhelmed. The quantitative skills taught in undergraduate and graduate programs in ecology often lag behind those used in the literature. As ecologists at different stages of our academic careers, who have felt (and still do sometimes) pretty behind the eight ball in terms of quantitative skills, we wanted to do something about that for our students and peers. And that is how we came up with the idea of Coding Club.

How did it all begin?

Just about two years ago we had an idea. What if we set up an informal group and a website to teach key quantitative skills that could be useful to undergrads, grad students, postdocs, profs and ecologists working outside of academia? What if that website was built in a way that anyone could contribute tutorials or help to make the existing tutorials better? What if we taught people how to learn in their own working environment and how to develop their workflow using best practices in open science, like version control, from the very beginning? What if this content was aimed at people who felt afraid, anxious, and behind in their own quantitative skills development? This was the beginning of Coding Club.

[Image: screenshot of the Coding Club homepage; the header reads “Coding Club: A positive peer-learning community”.]

The Coding Club website, where we host all of our tutorials on data manipulation, data visualisation, modelling and more!


What is Coding Club?

Coding Club combines online and in-person resources to help teach quantitative skills to ecologists at all career stages. We have focused on trying to overcome “code fear” and “statistics anxiety”. Statistics anxiety – the worry about a lack of quantitative skills – and code fear – the fear of programming – can prevent people from learning. By building a sense of community around the development of skills, we hope to overcome the fear factor of ecology involving more code and math than people sometimes expect.

[Image: left, six people posed and smiling at the camera; upper right, a computer lab with people at work and someone presenting at the front; lower right, three women talking and smiling.]

Part of the Coding Club team and snapshots of some of our workshops. Check out our team page for the full list of undergraduates, postgraduates and profs who have contributed to Coding Club! Photo credit for image on left: Sam Sills


Peer-to-peer teaching helps to reduce the fear factor

In Coding Club, we focus on peer teaching and interaction rather than having “trained experts” leading workshops as we feel people engage more when they are less intimidated. All of our teaching materials are developed by people who are actively learning data science skills at the same time as teaching them. We avoid hierarchy (though we love content on hierarchical modelling!) and encourage participation across different career stages from undergrad students through to PhD students, postdocs and staff. Moving away from the professor-student model and allowing everyone to engage as teachers and learners can be a pretty powerful way to break down barriers.

Coding Club covers a growing number of different quantitative skills

The Coding Club website contains a growing list of tutorials aimed at all levels of quantitative skill, useful for ecologists and beyond. Topics range from introductory and advanced R, through version control and data visualization, to working with large datasets. We have a lot of R content, but we don’t just do R! We are currently working on developing more tutorials using Python for process-based modelling and Google Earth Engine for remote sensing analyses. We have been using the tutorials to teach in-person workshops at the British Ecological Society conference and at universities around the UK, but the tutorials are there online for everyone to use, provide feedback on, or suggest revisions to through GitHub. We are always looking for people to develop new content as well!

[Image: four tutorial badges: sharing quantitative skills; meta-analysis & Bayesian statistics; spatial and population data; and pandas.]

A sample of the Coding Club tutorials, including a tutorial on how to make tutorials on GitHub. Data visualisation, mixed effects models, Stan models and more over here.


Quantitative learning should be active and not passive

We believe that the best way to teach coding and quantitative skills is through a problem-based, question-driven approach. We try to avoid approaches like ‘live coding’, as we think it encourages learners to be passive with the subject matter, and we believe this results in lower retention of the new material. To effectively learn a new skill, it is vitally important to know why you might want to learn that skill in the first place, and to have a question that you want to answer to motivate your learning. We also recognize that people learn in different ways and at different paces. In our in-person sessions, we encourage people to take as much or as little time as they wish to complete the tutorials. We believe the casual, non-compulsory, non-assessed nature of Coding Club also helps to reduce the fear and anxiety associated with quantitative skills.



Coding our way towards finding out how population trends vary among different taxa, with cookies along the way. Not pictured: the standard error cookie. We forgot to make one, but of course we are all for reporting the uncertainty around effect sizes!


Quantitative skills are not hard – they just take some work to learn

We believe teaching quantitative skills is all about overcoming fear and building confidence. We try to avoid labeling skills as “hard” or “easy”, because we don’t want people labeling themselves as quantitative or not, or pre-judging the limits of their own capabilities. We aim to train people to be able to answer their own questions, resolve their own coding problems and seek out new skill sets independently. We are trying to teach people to train themselves beyond the timespan of a single workshop or course. Finally, we don’t think there is only one way to teach quantitative skills; promoting a diversity of approaches will reach the most people.


Coding Club has exceeded our expectations!

As of October 2018, the Coding Club website has received over 160,000 visits from over 73,000 unique IP addresses in over 180 countries. Tutorials have been contributed by people from multiple universities (University of Edinburgh, University of Aberdeen, McGill University, Ghent University, Aarhus University) and used for quantitative training at several institutions so far (University of Edinburgh, University of Aberdeen, University of St Andrews, Queen’s University Belfast, Dartmouth College, Hebrew University, Calvin College, Centre for Ecology and Hydrology and more), and we are hoping to reach out further! If we can set up a network of people at universities and research institutes around the world who can work together to develop quantitative training from the ground up, then maybe we will all feel just a little less overwhelmed by our fast-paced discipline.

[Image: world map of Coding Club visitors shown as blue dots; the dots are darkest and largest over the UK, but visitors come from around the world.]

The international audience of Coding Club – it’s been great to get feedback from people using our tutorials around the world!


The beginning of a new academic year feels like a fresh start: a chance to purchase some new office supplies, catch up on all the science missed over the summer, start a new work routine to enhance productivity, and set yourself some new challenges. Now that the term has started, maybe it is time for you to take the plunge and learn a new quantitative skill.


Are you a student or group of students wanting to increase your own quantitative skills? Are you someone who has a cool analytical technique that you want to share with your peers? Are you a prof who wants to encourage your students’ and mentees’ academic development? Are you someone who feels like the quantitative training you got years ago is not enough for ecological research today and wants to brush up on their skills? Do you have thoughts on how we can improve quantitative training in ecology? If you answered yes to any of these questions, please comment below, check out the Coding Club website, and get in touch if you are keen to join the team!