When I first arrived at Michigan and began teaching Intro Bio, the course had four exams. In that first semester, I added clicker questions. Since then, we have added frequent quizzing, so the students now have four exams, plus two quizzes a week (completed before coming to class), plus clicker questions in class. We have all of that because we know that frequent testing improves student learning. (Here’s one review, here’s another, and here’s a summary of the changes we made in Intro Bio and their impacts on student performance.) As a side bonus, when the testing is low stakes (as with the quizzes and clicker questions), students get those learning benefits without paying a cost in terms of increased anxiety. Given all that, I would never consider changing the format to one where we have just a single, pass/fail, high stakes assessment at the end of the semester.
Now, let’s consider graduate prelim/qualifying exams.
I recently learned about an approach to mentoring that I think has a lot of potential. My initial conversations with others suggest they think it has promise, too. The goal of this post is both to share the idea and to (hopefully!) hear from people with experience with this approach.
Here’s the general idea: some larger graduate programs at Michigan use an approach where each cohort is assigned a mentor. So, there is one mentor for all of the first year students, a different one for all of the second year students, etc. That person is an additional resource for those students – someone who they can turn to for advice. They also host regular events (I think maybe ~monthly) for the cohort, which helps them develop skills, explore different topics, and crucially, helps build community.*
Listen to other people’s advice, but that doesn’t mean you should follow it.
– Janet Currie, as quoted in Air & Light & Time & Space by Helen Sword
When I was thinking about coming up for promotion to full professor, I asked some senior colleagues whether they thought it would make sense. Two senior colleagues independently said that, while they thought I was definitely deserving of promotion, they were worried that I hadn’t done enough teaching at Michigan; they thought that might cause problems for promotion. I had actually taught somewhat more than I should have, but had had several leaves, including based on having two children at Michigan. These colleagues were concerned that those gaps in my teaching record might cause problems for promotion. I decided to come up for promotion anyway—I felt confident I could write a strong teaching statement. I was promoted…and got a teaching award as part of the process.
I truly think my colleagues had my best interests in mind when they gave the advice—they have been incredibly strong advocates for women in science. (Indeed, they have surely contributed to a climate and culture that has allowed me to be successful.) But, in my case, following their advice would have led to me postponing a promotion, which would have meant postponing the raise & other benefits that come with it. As one example of the latter—I don’t think I would have been able to do some of the things I’ve done this past year related to grad student mental health without being at the full professor rank.
In the past few months, I’ve shared this story a couple of times, using it as an anecdote about how some people mean well but end up giving advice that isn’t in the best interests of the advisee. Now, based on the results of the poll we did on listing parental & other leaves on CVs, I’m realizing that I have probably* been doing the same thing. I have been advising people not to list parental leave on CVs. I didn’t have direct evidence of listing leaves on a CV being used against anyone, but was focusing on the downsides (we know some people doubt whether moms will really be committed to their work) and not on potential upsides (that committee members might productively use that information).
Recently, we did a poll asking about parental or other family leave and CVs. It was prompted by both a blog post by Athene Donald, who argues that people should include leaves on their CVs and an email from Tess Grainger who asked:
Is there any evidence of bias related to parental leave, or is it a thing of the past? How many people have been on a search committee (recently) in which someone indicated any kind of negative bias associated with a parental leave (or leave for illness, eldercare etc.)? Is this something that still happens, or should I and others not hesitate to put these leaves in our records?
Poll results are below, but the brief answer to Tess’s questions seems to be that listing parental leave on a CV is unlikely to have a big impact but, if these poll responses are indicative of the field as a whole, listing leave seems more likely to help than to hurt. In many countries, applicants are already given specific guidance on when/where/how to list leaves on CVs. At the end of this post, I call on North American search committees (especially those in the US, where we are way behind on this front) to start routinely giving applicants the opportunity to list leaves, career interruptions, and major life events.
Last year, I wrote a blog post about a piece that had appeared in Nature Biotechnology related to graduate student mental health. There were two big problems: first, Nature Biotechnology had not checked whether there had been IRB oversight of the study before publishing it, which is a huge ethical problem. Second, the major result (that grad students experience anxiety and depression at more than 6x the rate of the general population) was not valid — they used an apples to oranges comparison to get that statistic. Unfortunately, that inaccurate statistic has dominated the discourse on graduate student mental health since it appeared.
In addition to writing a blog post, I worked with two behavioral scientists, Carly Thanhouser and Holly Derry, to write a formal response to the Evans et al. study. We submitted it on May 17, 2018. On April 5, 2019, we finally heard back about our submission. It had been peer reviewed (unlike the original Evans et al. submission) and accepted. On April 17, I uploaded the final version and the paperwork. Since then, the manuscript (which, remember, has already been accepted) is still listed in their manuscript system as “under consideration”. No one at the journal office will explain what is going on, despite multiple emails (including one to the Editor in Chief on May 15th).
Here, I am going to explain why I have devoted so much time and energy to this (frustrating!) process over the past year. I care a lot about graduate student mental health, so it might seem weird that I’ve spent so much time trying to point out that we don’t have evidence that grad students experience depression & anxiety at 6x the rate of the general population. To explain why, I need to briefly introduce the idea of anchoring. And, to do that, I’m going to tell you a story.
Recently, I’ve been involved in a few discussions related to office hours and how to make them more accessible. There are many instructors, myself included, who would love to have more students come to office hours—I think lots of students would benefit from coming, but most don’t come (and that’s even though we have a relatively good turnout at office hours for a class our size). There are many, complex reasons why students do not come to office hours, but probably some key things are:
- Not realizing what (or who!) they are for
- Not feeling safe showing up to them (e.g., out of fear of looking bad in front of the instructor)
- Not being able to make it to them (e.g., because of work or childcare)
The solution to the first one seemed so obvious once I saw this tweet:
From the twitter reactions, I know I am not alone in wondering how this never occurred to me—it’s a great idea! It, along with having some more information in the syllabus about what student hours are for, starts to address the second point, too. But that point and the following one can’t be fully addressed by a name change. When I was emailing about this with a colleague, she jokingly replied that maybe we should call them “FREE ADDITIONAL INSTRUCTION THAT SOMEONE ALREADY PAID FOR WHY DON’T YOU COME???”, then immediately added: “Just kidding – I never went either. I always had to work and was too shy to ask someone to adjust around my work schedule.”
So, I was really intrigued to learn recently that a colleague of mine at Michigan, John Montgomery, records his office hours (which he calls “Open Discussion”). Michigan has a lecture capture system set up in classrooms. I use this for my lectures, which are all recorded and made available to students via the course website. Recording my lectures helps students review material, plus makes it easier for students who need to miss lecture (e.g., because they are sick) to catch up. It had never occurred to me to record office hours/student hours, but, similar to the “student hours” solution, it seems obvious in retrospect.
When I started my first faculty position at Georgia Tech, I felt like I was juggling as fast as I could; every time it felt like I was starting to get a hang of things, a new ball would get tossed in. I mentioned this at some point to someone there who said: the key is to remember that some balls are glass and some are rubber.
I was thinking about that juggling metaphor again recently because I was involved in a discussion with other faculty about how we all have too much to do. There was some discussion of the root causes of this, including a major decline in administrative support and increased expectations. Obviously those are huge issues that are worthy of much more thought and systemic solutions. But there was also a discussion of what we can do individually in the short term as we all struggle with this. At some point, someone said something to the effect of, “you need to accept that you are never going to be able to do it all, and you have to accept that some things are just going to go off the edge of the cliff”.
In November 2016, I did a poll and wrote a post about how overwhelming email can be. About a quarter of respondents to the poll said they rarely or never feel overwhelmed by email. I am not one of them. I’m in the majority that are overwhelmed by email at least some of the time. Other notable poll findings were:
- people with more emails in their inbox were more likely to feel overwhelmed by email, and
- faculty were more likely than grad students and postdocs to have a lot of work-related emails in their inbox.
At the time I wrote up the results of that poll, one of the main strategies I settled on for trying to be less overwhelmed by email was to batch my inbox, so that my emails only arrived once or twice a day. The idea is to treat email like regular mail – a thing that arrives at a given time and that you deal with in a batch (or, um, toss on the table and leave there for a while).
After that poll, I switched to using batched inbox to batch my mail. (It was free when I signed up, but I don’t think it is now.) It was amazing how much less overwhelming email was! I wasn’t getting distracted by emails as they arrived in my inbox, I found I actually got less email than I thought, and dealing with them in batches really reduced the amount of time and energy I spent on email. (I’m not alone. Arjun Raj has a post about how much email filtering helped his peace of mind.)
So, I was a fan. But then I started “cheating” and checking the folder where the batched emails hang out until they get dumped into the inbox. And, in the years since then, I have gone through cycles where I recommit to batching, think “OMG, why did I ever stop doing this?!?! Dealing with emails in bulk is so much better!!!”, then start sliding and going back to more of a system of dealing with emails as they come in (why? why do I do this?!? I know it’s counterproductive!), then get completely overwhelmed by emails, then at some point remember that batching is supposed to help with that, at which point I recommit to it and once again think “OMG, why did I ever stop doing this?!?!”
I recently did a poll asking readers about their experiences with manuscript rejections. This was based on thinking about different submission strategies, including wondering about what the “right” amount of rejection is. In this post, I lay out the big picture results, and then end by asking about what further analyses you’re interested in.
There are lots of figures below, but here’s my summary of the key results:
- respondents to this poll reported a lower acceptance rate at the first journal to which they submitted a manuscript (48.4%) than in the recent Paine & Fox survey (64.8%). They had vastly more respondents (over 12,000!!!), so I trust their number more; other potential factors that might also contribute are discussed below.
- it’s not uncommon for people to need to submit a paper to 3 or more journals before it’s accepted.
- it’s surprisingly common (at least to me) for people to take the “aim high, then drop if rejected” strategy
- people are submitting to stretch journals pretty often—and sometimes it pays off
- there’s a decent amount of uncertainty in terms of how well a manuscript fits a particular journal (on the part of authors, reviewers, and/or editors). This suggests that the concluding advice of Paine & Fox (“We therefore recommend that authors reduce publication delays by choosing journals appropriate to the significance of their research.”) is sometimes easier said than done.
- people aren’t totally giving up on manuscripts as often as I might have thought (but this might be explained by the demographics of the poll respondents)
The last experiment I did as a graduate student was one where I wanted to experimentally test the effect of predation on parasitism. To do this, I set up large (5,000 L) whole water column enclosures (more commonly called “bags”) in a local lake. These are really labor intensive, meaning I could only have about 10 experimental units. I decided to use a replicated regression design, with two replicates of each of five predation levels. These were going to be arranged in two spatial blocks (linear “rafts” of bags), each with one replicate of each predation level treatment.
Left: two experimental rafts; right: a close up of one of the rafts, showing the five different bag enclosures
As I got ready to set up the experiment, my advisor asked me how I was going to decide how to arrange the bags. I confidently replied that I was going to randomize them within each block. I mean, that’s obviously how you should assign treatments for an experiment, right? My advisor then asked what I would do if I ended up with the two lowest predation treatments at one end and the two highest predation treatments at the other end of the raft. I paused, and then said something like, “Um, I guess I’d re-randomize?”
This taught me an important experimental design lesson: interspersing treatments is more important than randomizing them. This is especially true when there are relatively small numbers of experimental units*, which is often the case for field experiments. In this case, randomly assigning things is likely to lead to clustering of treatments in a way that could be problematic.
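The re-randomization idea my advisor nudged me toward can be made explicit. Here’s a minimal sketch (not from the original post; the function name and the 0.7 correlation cutoff are my own illustrative choices): shuffle the treatment levels along a raft, then reject any arrangement where the correlation between position and predation level is strong, i.e., where the low treatments pile up at one end and the high treatments at the other.

```python
import random

def interspersed_layout(levels, max_abs_corr=0.7, seed=None, max_tries=1000):
    """Shuffle treatment levels along a raft, re-randomizing until the
    correlation between position and level is weak enough that treatments
    are not clustered at one end. The 0.7 cutoff is an arbitrary example."""
    rng = random.Random(seed)
    n = len(levels)
    positions = list(range(n))
    mean_p = sum(positions) / n
    var_p = sum((p - mean_p) ** 2 for p in positions)
    for _ in range(max_tries):
        layout = levels[:]
        rng.shuffle(layout)
        mean_l = sum(layout) / n
        var_l = sum((l - mean_l) ** 2 for l in layout)
        cov = sum((p - mean_p) * (l - mean_l) for p, l in zip(positions, layout))
        corr = cov / (var_p * var_l) ** 0.5
        if abs(corr) <= max_abs_corr:
            return layout
    raise RuntimeError("no acceptable layout found")

# One raft (block) with the five predation levels:
print(interspersed_layout([1, 2, 3, 4, 5], seed=42))
```

A fully sorted raft (1, 2, 3, 4, 5) has a position–level correlation of 1.0, so it is always rejected; most other permutations pass. With only five units per block, pure randomization produces such extreme arrangements often enough that building an interspersion check into the assignment procedure is worth the small effort.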