This is a guest post by Jonathan Barros, Briana Martin-Villa, Lexi Golden, Jonathan Hernandez, & Callie Chappell.
During this challenging time of COVID-19, our lives have been turned upside down. Jobs have been lost or radically altered, loved ones have fallen ill, and our daily routines have been upended. In light of these challenges, our research (especially if it is not COVID-related) may not seem that important. In this blog post, we would like to highlight why right now, undergraduate research experiences are especially important, and how good mentorship practices can help students through this challenging time. This post was written collaboratively by a team of undergraduate researchers at Stanford University and their mentor, a Ph.D. student. Based on our experiences working together over the summer, we would like to share some suggestions and best practices for mentors collaborating with undergraduate researchers working remotely.
Recently, a friend who was working on a grant proposal asked whether I start with specific experiments in mind and develop the framing from there, or start with the big-picture framing and develop the specific experiments from it. I was a little stumped at first, then realized that was because I don’t really use either of those approaches. Instead, my initial motivation is usually preliminary data that I’m excited about, where it’s clear more work needs to be done to figure out what is really going on.
Here’s an example: As a graduate student, I carried out a study on a population where I tracked a parasite outbreak and host population dynamics and, at the same time, assayed the susceptibility of the population to that parasite at three time points. The results of the susceptibility assays were not at all what I expected at the start of the experiment:
This past fall was quite busy for me, and I was worried at the start about whether I’d bitten off more than I could chew. The big things taking up time were teaching over 600 students in Intro Bio and chairing a university task force on graduate student mental health, but it was also important to me that people in my lab not have to go the whole semester without getting feedback on their manuscripts, and there were also a couple of grant deadlines that I really didn’t want to miss. I knew this would be a lot, so I did my best before the semester to set up a structure that would hopefully help me through my particularly busy semester. And it worked pretty well! Things weren’t perfect, but I did the things that needed to be done and think I did them reasonably well, and I came out of the semester with my mental health intact. I think a few things really helped with managing things, and I’m hoping that sharing them might be useful to other folks, hence this post.
I’ll expand on each of these below, but the short version of my strategy is:
I recently learned about an approach to mentoring that I think has a lot of potential. My initial conversations with others suggest they think it has promise, too. The goal of this post is both to share the idea and to (hopefully!) hear from people with experience with this approach.
Here’s the general idea: some larger graduate programs at Michigan use an approach where each cohort is assigned a mentor. So, there is one mentor for all of the first year students, a different one for all of the second year students, etc. That person is an additional resource for those students – someone who they can turn to for advice. They also host regular events (I think maybe ~monthly) for the cohort, which helps them develop skills, explore different topics, and crucially, helps build community.*
When we write, we hopefully have a point we want to make. Brian has called on us to view ourselves as storytellers when writing manuscripts, embracing
the art of story-telling that knows where it is going and does it crisply so that it sucks us in and carries us along with just the right amount of time spent on details of character and setting. Where the characters (questions), the plot (story arc), the setting, the theme (the one sentence take home message) all work together to make a cohesive whole that is greater than the sum of the parts
In doing so, Brian says:
Every word, every sentence, every paragraph, every section of the paper should be working together, like a well-synchronized team of rowers all pulling towards one common goal. The introduction should introduce the questions in a way that gives them emotional pull and leaves us desperate to know the answer. The methods and results should be a page-turning path towards the answer. And the discussion should be your chance to remind the reader of the story arc you have taken them on and draw sweeping conclusions from it. Any freeloading sentence or paragraph that pulls in a different direction should be mercilessly jettisoned (or at least pushed to supplemental material).
In this post, I am going to disagree with Brian’s last point (gasp! blogging drama!), but, in doing so, I am motivated by the same goal. When trying to make a convincing argument, it can help to address the most obvious concern or counterargument. As you are leading the reader towards your exciting, sweeping conclusion, you don’t want some part of their brain thinking “Well, I guess they are unaware of this thing that sure seems like a problem for their argument.” If it’s something that a reasonably well-informed reader might be wondering about or distracted by, you should consider directly addressing it in the discussion. (This is also important in terms of not over-selling your results.)
The last experiment I did as a graduate student was one where I wanted to experimentally test the effect of predation on parasitism. To do this, I set up large (5,000 L) whole water column enclosures (more commonly called “bags”) in a local lake. These are really labor intensive, meaning I could only have about 10 experimental units. I decided to use a replicated regression design, with two replicates of each of five predation levels. These were going to be arranged in two spatial blocks (linear “rafts” of bags), each with one replicate of each predation level treatment.
Left: two experimental rafts; right: a close up of one of the rafts, showing the five different bag enclosures
As I got ready to set up the experiment, my advisor asked me how I was going to decide how to arrange the bags. I confidently replied that I was going to randomize them within each block. I mean, that’s obviously how you should assign treatments for an experiment, right? My advisor then asked what I would do if I ended up with the two lowest predation treatments at one end and the two highest predation treatments at the other end of the raft. I paused, and then said something like, “Um, I guess I’d re-randomize?”
This taught me an important experimental design lesson: interspersing treatments is more important than randomizing them. This is especially true when there are relatively small numbers of experimental units*, which is often the case for field experiments. In this case, randomly assigning things is likely to lead to clustering of treatments in a way that could be problematic.
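The lesson above can be sketched in code. This is my own illustration, not the actual procedure used for the experiment: it randomizes the five predation levels within a block, then re-randomizes whenever the two lowest (or two highest) levels land side by side, a simple stand-in for an interspersion check. The function name and the specific rejection rule are hypothetical.

```python
import random

def interspersed_layout(levels, seed=None):
    """Shuffle treatment levels within one block, re-randomizing until
    the layout passes a simple interspersion check: the two lowest
    levels are never adjacent, and neither are the two highest."""
    rng = random.Random(seed)
    ordered = sorted(levels)
    lo = set(ordered[:2])   # e.g., the two lowest predation levels
    hi = set(ordered[-2:])  # e.g., the two highest predation levels
    layout = list(levels)
    while True:
        rng.shuffle(layout)
        # Reject any layout where an extreme pair sits side by side.
        clustered = any({a, b} == lo or {a, b} == hi
                        for a, b in zip(layout, layout[1:]))
        if not clustered:
            return layout

# One layout per spatial block (raft), five predation levels each.
block_a = interspersed_layout([0, 1, 2, 3, 4], seed=1)
block_b = interspersed_layout([0, 1, 2, 3, 4], seed=2)
```

With only five units per block, plain randomization clusters the extremes surprisingly often, so a rejection rule like this (or a systematic interspersed layout) is cheap insurance.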
Last week, I wrote a post where I talked about how my training in evolutionary ecology led me to try reaction norms (that is, paired line plots) for plotting paired Likert data. I had already tried a few other options, but didn’t include them in that post, and I got some feedback on that post that gave me more ideas. There was also a request for code on how to actually generate those plots. So, this post shows four different ways of visualizing individual-level responses to paired Likert-scale questions (paired line plots, dot plots, mosaic plots, and heat maps). It does that for two different comparisons, leading me to the conclusion that the type of plot that works best will depend on your data. I’d love to hear which ones you think work best — there are polls where you can vote for your favorite! And, if you’re working on similar data and want to see code, there’s an associated Github repo, but it comes with the disclaimer that my code is good enough, but definitely not elegant.
In the past year, I’ve been working on several projects that used Likert-scale data (e.g., 1 = strongly disagree, 5 = strongly agree). And, in several instances, there were questions that it made sense to pair. As one example (which I blogged about in more detail earlier this month), for Morgan Rondinelli’s undergraduate thesis project on student mental health, we asked students whether they would think less of someone who sought mental health care and also whether they thought others would think less of someone who sought mental health care. In that case, I was curious not just about the aggregate percentages in the different categories, but also how individual views compared. So, being a good evolutionary ecologist raised on reaction norms (where genotypes are plotted in different environments, with the points for each environment connected by a line), I made a paired line plot:
This figure shows me that no students viewed themselves as more judgmental than they thought the average student was: none of the lines go up. That’s not information that I could get from other ways of plotting the data (shown in my earlier post).
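A paired line plot like this is straightforward to build. Here is a minimal sketch using matplotlib and synthetic data; the responses below are made up for illustration (constructed so that no line goes up, echoing the pattern described above) and are not the study’s actual data.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical paired Likert responses (1-5), one row per student:
# own view vs. perceived view of others.
rng = np.random.default_rng(0)
self_view = rng.integers(1, 4, size=40)                # own view: 1-3
others_view = self_view + rng.integers(0, 3, size=40)  # others rated same or harsher
others_view = np.clip(others_view, 1, 5)

fig, ax = plt.subplots(figsize=(4, 5))
for s, o in zip(self_view, others_view):
    # One semi-transparent line per student; darker segments show overplotting.
    ax.plot([0, 1], [s, o], color="steelblue", alpha=0.2)
ax.set_xticks([0, 1])
ax.set_xticklabels(["Own view", "Perceived view\nof others"])
ax.set_yticks(range(1, 6))
ax.set_ylabel("Would think less of them\n(1 = strongly disagree, 5 = strongly agree)")
fig.tight_layout()
fig.savefig("paired_lines.png")
```

Because identical response pairs overplot exactly, the transparency (`alpha`) is doing real work here; jittering or scaling line width by pair frequency are common alternatives.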
A different example comes from a project studying student views on climate change, which I’m working on with Susan Cheng and JW Hammond. We asked students the same questions at the beginning and end of the semester. To focus on one question, we asked students “Do you think climate change is happening” at the beginning of the semester and again at the end of the semester. The overall results were promising:
Note from Meghan: This is a guest post by Gergana Daskalova, a PhD student at the University of Edinburgh.
I recently attended the British Ecological Society Annual Meeting, one of the biggest scientific conferences on an ecologist’s calendar. Over the course of just one day, I got asked where I am from 18 times. I counted because in just four years of attending conferences, meeting with seminar speakers and engaging in similar activities, I have been asked where I am from way too many times. When the pattern repeated itself on day one of the BES conference, I thought I could do the actual count on day two. I, like many of my fellow conference-goers, get these questions at a very high frequency, probably because our looks or accents give away that “we are not from here”. Though it may seem like an innocent question – where are you from? – it leaves me feeling like my fellow ecologists are more interested in why I stand out than why I belong.
To counter the question in a productive way and to get the focus back on my science, over the last year, I have made a point of replying that I am from the academic institution where I am doing my PhD. People always follow up with “No, I meant where are you from originally?” The problem is not that I want to hide where I am from, the problem is that in a professional scientific environment, where I am from shouldn’t matter. When people make general chat at conferences with a group of PhD students, most of them get asked what they do. When the conversation makes its way to me, I get asked where I am from. Followed by comments about my country of origin. Cool! Exciting! I’ve never been to that country. Why did you come here? What a poor country. Was it hard living there? The list goes on. Only just over half of the 18 people that asked me where I am from originally then went on to ask me about my work.
Reviewing is something that brings out my imposter syndrome, and I know I’m not alone. Being asked to review implies that someone views us as having expertise in a given area, which means that, if you screw up the review, you will reveal yourself as an imposter (or so our brains tell us). And, for journals that copy reviewers on the decision letter, one way to tell if you’ve messed up and are an imposter is by comparing your review to that of the other reviewer(s). Rarely, I’ve been unable to figure out which was my review, because the reviews were so similar. (Phew, not an imposter!) But what about when the other reviewer notes things I missed? Clearly that means I’m an imposter!
For a long time, I viewed it as a failure on my part if the other reviewer caught something I missed. I felt like it indicated that I hadn’t been careful or critical enough. If we aren’t super critical, we aren’t good scientists, right? (I’m being facetious. I don’t actually believe that being harsh = being a good scientist. And it is definitely not the case that the harshest review is the best review!) But what about cases where the other reviewer raises concerns or criticisms that seem important and insightful and constructive? If I missed those, I failed as a reviewer, right?
Again, not necessarily. The reason relates to something covered in a recent blog post by Stephen Heard, where he talks about finding reviewers. In it, he says he only uses one of the reviewers suggested by the authors, and explains that is because: