Friday links: women-only faculty positions, chatbot TA, and more

Also this week from Meg: reviewers vs. Rich Lenski, a good ending to a bad week, how to value your time, Emily Dickinson vs. plants, flipped classroom failures, the Kermoji-McKendrick model, and more! Did Meg read All The Things? ¯\_(ツ)_/¯ And from Jeremy: the PhD jobs crisis that isn’t, against open peer review, Jane Lubchenco vs. Jim Estes, bald eagles vs. bison, Dr. Seuss vs. Nietzsche, and more! Will you be able to have a productive Friday with all these links tempting you? ¯\_(ツ)_/¯

From Meg:

The School of Mathematics and Statistics at the University of Melbourne is searching for three new senior faculty members. Normally that sort of search wouldn’t be remarkable; what makes this one newsworthy is that they are only accepting applications from women.

I’ve linked to this story before, but I just listened to it again and still find it completely amazing, so I’m linking to it again. (I double-checked the rule book and this is allowed.) Mary-Claire King tells the story of a Very Bad Week that culminated in her getting an NIH grant that was a key part of her work on inherited breast cancer and BRCA1. It has a wonderful guest star babysitter near the end. Listening to this is 12 minutes well spent.

Ambika Kamath had a blog post on tough love in science, motivated by a passage from Hope Jahren’s Lab Girl. The general idea is that while a tough love mentoring approach is common in academia (and talked about in the Jahren passage), many scientists — especially those from underrepresented groups — are likely to be turned away by such an approach. The post has good points, as do the comments. But one additional thing I wanted to note is this line in the Lab Girl excerpt in that post: “Any sign that the newbie regarded his or her time as of any value whatsoever was a bad omen”. I really disagree with that. I often find myself telling my students that their time is valuable — for the experiments we do, personnel costs are usually the biggest part of the budget by far. (I think that’s true for many labs.) This is important for them to realize because sometimes students, say, don’t want to buy a piece of equipment that would save time on the project because they don’t want to spend extra money. But, if that device saves time, it very well might pay for itself. And then there’s the issue that, if they are working very long hours, they are much more likely to make mistakes, which is obviously something we want to avoid. So, I’d be interested in hearing what others think: do you agree with that line from Lab Girl?
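To put some numbers on the “pay for itself” point, here’s the back-of-envelope arithmetic I have in mind. This is just a quick sketch; every figure below is made up for illustration:

```python
# Back-of-envelope: does a time-saving piece of equipment pay for itself?
# All of these numbers are hypothetical; plug in your own lab's figures.
hourly_cost = 25.0          # fully loaded cost of a student's time, $/hour
hours_saved_per_week = 3.0  # time the device saves
weeks = 30                  # length of the project
device_price = 900.0

labor_saved = hourly_cost * hours_saved_per_week * weeks
print(f"labor saved: ${labor_saved:,.0f}; device cost: ${device_price:,.0f}")
# With these made-up numbers the device saves $2,250 in personnel costs,
# well over twice its price.
```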

Did you notice the nice em-dash in that previous link? It was a proper em-dash (—) rather than the quicker one I usually go with (–). I made it using shift-alt-dash, thanks to a Twitter tip from Hadley Wickham. But that discussion spurred this blog post by Beth Haas on text shortcuts as academic time savers for people who use Macs. I love this idea and plan to add extra ones (e.g., for some Greek symbols). I also like the idea of using it to correct my own common typos. My only hesitation is that it will make it more annoying for me when I switch to a PC! (My office computer is a PC.) Haas’s post also recommends TextExpander, which I know a colleague of mine uses very effectively. I should look into that more. The timing of that discussion was funny because, later that same day, I did a search for the shrug emoji (which, I learned, is actually a kaomoji, since it uses a Japanese character). You know: ¯\_(ツ)_/¯ My search led to this article, which recommends the same text shortcut trick, this time so you can easily shrug without having to google, copy, and paste. Will you find all this useful? ¯\_(ツ)_/¯ (Update: once I got around to setting them up, this approach doesn’t seem to work in the programs I use most often — Word, Chrome, Firefox — but it does work in TextEdit. Beth Haas was very kind and helped me figure out part of the problem, but I think I might just go back to my old approaches. Will I spend the time to try to figure it out fully? ¯\_(ツ)_/¯ But I’ll leave this link since it seems to work for many (most?) folks.)
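(A further aside for the command-line inclined: macOS appears to store these text replacements under a global defaults key, so in principle you can add them in bulk with a script instead of clicking through System Preferences. The sketch below is exactly that, a sketch: the key name is real as far as I know, but I can’t promise that changes take effect without logging out again, or that newer versions of macOS still honor it.)

```python
import subprocess

# Sketch: add text replacements in bulk on macOS by appending to the
# global NSUserDictionaryReplacementItems defaults key (the same store
# that System Preferences > Keyboard > Text edits). Assumption: your
# macOS version still honors this key; you may need to log out and back
# in before new entries appear.
def add_text_replacement(shortcut, expansion):
    entry = f'{{ on = 1; replace = "{shortcut}"; with = "{expansion}"; }}'
    subprocess.run(
        ["defaults", "write", "-g", "NSUserDictionaryReplacementItems",
         "-array-add", entry],
        check=True,
    )

add_text_replacement(";;em", "—")  # type ";;em" to get a proper em-dash
add_text_replacement(";;a", "α")   # Greek letters too
```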

This is a great story about how ecologist Doug Larson found an old-growth forest about an hour from Toronto that no one had known about.

This is a great piece on flipped classrooms, course evaluations, and learning. The author, Kevin Werbach, says, “I’m writing this essay partly because we need more failure narratives about teaching.” It’s a really compelling essay. It relates to topics Jeremy and I have written about (e.g., this recent post from Jeremy) and to something I plan to blog about soon. (I hope!) ht: Jessica Hullman

Here’s a handy guide to colors for data visualization. (ht: Kara Woo, who tweeted about this a month ago. I’m behind!)
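If you want to act on that advice without hand-picking hex codes, most plotting libraries ship colorblind-friendly palettes. Here’s a minimal matplotlib sketch (Python rather than R; the style name used below is one of several built-in options):

```python
import numpy as np
import matplotlib.pyplot as plt

# Swap matplotlib's default color cycle for one designed to stay
# distinguishable under common forms of color vision deficiency.
plt.style.use("tableau-colorblind10")

x = np.linspace(0, 10, 200)
fig, ax = plt.subplots()
for i in range(4):
    ax.plot(x, np.sin(x + i), label=f"series {i}")
ax.legend()
ax.set_xlabel("x")
ax.set_ylabel("sin(x + offset)")
plt.show()
```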

Every plant and animal in Emily Dickinson’s poems, catalogued. I love the word cloud in this piece. And reading this made me realize that clearly Dickinson would have been in favor of local fieldwork, as I am.

Tweet of the week:

Following up on my link to Ed Yong’s David Attenborough tribute piece last week: Sir David Attenbingo. Play along while you watch! (ht: Trevor Branch) [Note from Jeremy: Meg apparently read All The Things, except for her own blog.] [Reply from Meg: Whoops! I completely missed that!]

How to make survival plots in R (or, as the article is entitled: survival plots in R have never been so informative) (ht: Paul Hurtado)
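(The linked article is about R’s tools. For anyone on the Python side of the fence, the lifelines package draws a comparable Kaplan-Meier curve; here’s a rough sketch with simulated data, not a substitute for the linked tutorial:)

```python
import numpy as np
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

# Simulated time-to-event data: durations in days, plus a flag for which
# individuals actually had the event (the rest are censored).
rng = np.random.default_rng(42)
durations = rng.exponential(scale=30, size=100)
observed = rng.random(100) < 0.8

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed, label="all individuals")
ax = kmf.plot_survival_function()  # Kaplan-Meier curve with CI band
ax.set_xlabel("days")
ax.set_ylabel("survival probability")
plt.show()
```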

Rich Lenski had a post reproducing the review his first LTEE manuscript received. It was … not positive. It’s worth reading the whole review, which starts with “This paper has merit and no errors, but I do not like it and do not think it appropriate for American Naturalist.” I find it amusing that the reviewer was “upset because continued reliance on statistics and untested models by population geneticists will only hasten the demise of the field.” Despite this scorching review, the revised paper was published in AmNat anyway and has been cited hundreds of times. It’s a good reminder that a negative review — even a really negative review — doesn’t mean one’s paper or career is doomed. (Rich also notes in his post that sometimes those negative reviews are right and, while painful, addressing those concerns can really improve a paper. That has happened to me. One of my thesis chapters was rejected from Ecology and the revision process made it a much better, award-winning paper.)

From Jeremy:

Tell me again what problem open peer review solves that’s worse than the problems it creates? Excellent reality-based critique of the idea from experienced managing editor Tim Vines. (ht Small Pond Science) Now seems to be a good time to re-link to this comprehensive review of the data on anonymity and openness in peer review. As the author herself–an advocate of openness–admits, the evidence is mixed at best. Certainly not the kind of evidence base that justifies calls for revolution.

Chatbot TA. (ht Small Pond Science) I look forward to this technology becoming cheap enough that everyone can use it, though few classes will have the critical mass of old questions and answers required for the technology to work. Alternatively, anyone else have a good hack for efficiently addressing repetitive questions asked by many students Every. Single. Semester? You can’t just put them all in a big FAQ, because the students mostly won’t refer to it when they need to, or at all.
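For what it’s worth, the retrieval half of such a system isn’t magic. Here’s a toy sketch of the general idea, matching an incoming question against an archive of old Q&A by TF-IDF similarity; I have no idea what the linked chatbot actually runs on, and everything named below is invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy archive of past questions and answers: the "critical mass" of old
# Q&A that a real system would accumulate over many semesters.
faq = [
    ("When is the midterm?", "The midterm is in week 7; see the syllabus."),
    ("Can I use R for the assignment?", "Yes, any language is fine."),
    ("Where do I hand in problem sets?", "Upload them to the course site."),
]

past_questions = [q for q, _ in faq]
vectorizer = TfidfVectorizer().fit(past_questions)
archive = vectorizer.transform(past_questions)

def answer(new_question, threshold=0.3):
    """Return the archived answer closest to new_question, or punt."""
    sims = cosine_similarity(vectorizer.transform([new_question]), archive)[0]
    best = sims.argmax()
    if sims[best] < threshold:
        return "No close match; escalate to a human TA."
    return faq[best][1]

print(answer("when's the midterm exam?"))  # matches the first entry
```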

Continuing to Share ALL The Small Pond Science Things*: The NSF DEBrief blog on NSF’s preliminary evaluation of the preproposal system. (ht Small Pond Science)

Sticking with the DEBrief blog, data on how likely a “good” NSF DEB proposal is to be funded. The short version is that the very highest-rated preproposals are almost sure to be invited to the full proposal stage, and the very highest-rated full proposals are very likely to be funded. The post also considers the difficult question of whether ratings of preproposals predict the ratings full proposals will receive (difficult because it’s mostly the highest-rated preproposals that get invited to the full proposal stage). As always with the DEBrief blog, it’s a very detailed and thorough analysis. I’m continually blown away by the willingness of NSF DEB staff to spend time and effort addressing questions from the community (the linked post was in response to a question from a panelist). It’s really unfortunate that frustration about the tight funding climate manifests itself as baseless rumors about NSF DEB and how it operates, because NSF DEB actually is really open.
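That “difficult question” is a textbook range-restriction problem: once you can only observe the full-proposal ratings of the highest-rated preproposals, any correlation you measure is attenuated. A toy simulation makes the point (invented numbers, nothing to do with DEB’s actual data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy model: preproposal and full-proposal ratings share an underlying
# "quality" signal plus independent panel noise.
quality = rng.normal(size=n)
pre = quality + rng.normal(size=n)
full = quality + rng.normal(size=n)

r_all = np.corrcoef(pre, full)[0, 1]

# Restrict to "invited" proposals: the top 20% of preproposal scores.
invited = pre > np.quantile(pre, 0.8)
r_invited = np.corrcoef(pre[invited], full[invited])[0, 1]

print(f"correlation, all preproposals: {r_all:.2f}")      # about 0.5
print(f"correlation, invited subset:   {r_invited:.2f}")  # much lower
```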

And if you’re wondering why NSF program officers sometimes make funding decisions that deviate from panel recommendations in the name of “portfolio balance”, here’s the answer.

Jane Lubchenco reviews Jim Estes’ memoir. Sounds like a must-read. (Meg adds: I agree! I put this on my wish list as soon as I read Lubchenco’s review.)

Thoughts on successful postdoc-ing from successful postdoc Caroline Tucker.

Margaret Kosmala speculates on the possible short- and long-term effects of the new US overtime rules on postdoc pay and working conditions.

Joan Strassmann on the value and challenges of doing exit interviews with graduating seniors from your department.

Be careful with what you mean when you talk about a “jobs crisis” for newly-minted PhDs. The unemployment rate for recent US PhDs in science and engineering is 2.1%, and it never exceeded 3.4% even at the height of the Great Recession in 2012. Obviously this doesn’t mean we’re producing the “right” number of PhDs or that there are no issues whatsoever in the job market for people with PhDs. It just means that “high risk of unemployment” is not among those issues.

On Beyond Zebra Zarathustra.

And finally, Fox News thinks that bald eagles are mammals. As much as I’d like to see how well bald eagles would do in the next Mammal March Madness, maybe somebody should tell Fox News what bald eagles really are:

[Image: portrait of a Velociraptor]

Bald eagle. More or less.

*Terry McGlynn was ON FIRE last week, linking-wise.

25 thoughts on “Friday links: women-only faculty positions, chatbot TA, and more”

  1. “do you agree with that line from Lab Girl?”

    If you haven’t read that passage or you can’t tell out of context, it is basically lab hazing. They have a new student spend hours labeling a ton of tubes, and then Hope and Bill say they’re all wrong and throw them in the trash (knowing ahead of time they will do this). Then they see how the student reacts. The entire event is an exercise in purposely wasting a student’s time. It’s not about seeing how a student deals with something that fails in their own research.

    • Thanks for the additional context! It wasn’t clear to me that they were planning to throw everything away right from the start. (I’m reading Lab Girl now but only just started the part where she moved to Georgia Tech, so I haven’t made it to this passage yet.)

      • If you’re just starting, you’re probably getting close to the point where the 80-hour work week is brought up. I thought of you when I read it!

      • 🙂 I haven’t gotten there yet! I was reading this morning about where she first brings Bill to the GT lab. Then I ended up with all three kids on top of me and had to put the book down. 🙂

    • I haven’t yet read Lab Girl except for the bit quoted in Ambika Kamath’s post, so may lack important context. But if Hope Jahren is giving students makework and then pretending they screwed it up to test whether they’ll stay late and do it again without getting discouraged…wow. I’ve *never* heard of such a thing. Hazing seems like exactly the right word. Except that, with most hazing, people at least know they’re being hazed or are going to be hazed! (which is not to say that any and all hazing is fine as long as you know you’re going to be hazed!)

      As a supervisor, I can certainly understand wanting to know early on whether a student will be up to dealing with the inevitable challenges of grad school. But there are a *lot* of other ways to find that out that are as or more effective than Hope Jahren’s hazing, and that don’t involve lying to the student or wasting hours of the student’s time. Such as, you know, talking to prospective students and their references about how they handle adversity. Which is something every competent supervisor does before taking on a grad student.

      Further, you can’t ever know for certain how a new student will handle the challenges of grad school. Every student who enters grad school, and every supervisor who takes on a new student, is taking a leap of faith to some extent. The only way to find out if you can do it–and can *grow into* being able to do it, with the aid of a mentor who both has high expectations and wants to help you meet those high expectations–is to do it. Nothing–certainly not hazing–can tell you whether a student will be able to get through, say, what Meg went through (https://dynamicecology.wordpress.com/2013/01/22/the-study-that-almost-made-me-quit-grad-school/). I highly doubt that Hope’s selecting very effectively for resilience or commitment in her grad students, and I don’t think it’s ok for her to try to do so just to give herself the illusion of certainty that her students will turn out as she wants them to. (Though on the other hand, one psychological effect of hazing on those who get through it is to breed loyalty and commitment to those who hazed them…)

      I’m curious what Hope would’ve thought if, when she was hired to her first faculty position, her head of dept. had done the same thing to her. Tricked her into thinking she’d badly screwed up a routine task, just to see if she’d react in the “right” way or “had what it takes”.

      I’ll also note that Hope has two responses to Ambika’s post (they’re at the bottom of Ambika’s post). The first one I don’t buy *at all*. Hope’s little hazing exercise is *nothing* like a theoretician telling her grad students to take difficult math courses! Now, it’s true that mathematicians teaching advanced courses do assign students problems more difficult than they think the students can handle–including unsolved problems. But that’s for the purpose of training students to do real math. Doing real math means figuring out how to prove things that neither you nor anybody else has ever figured out how to prove. Which is *totally* different than *tricking* a student into thinking they’ve screwed up a *routine labeling task*, just to see if they’ll suck it up and stay late to do it again! What Hope’s doing is analogous to a mathematician giving a grad student a tediously lengthy but trivially routine calculation to do (say, summing a thousand numbers), then tricking them into thinking they got it wrong to see if they’ll stay late to do it again.

      Hope’s second response reveals what’s really going on here: Hope Jahren thinks of her students as her employees, and she thinks that she has no choice but to do so because of the competitive funding climate. She believes that, for her lab to be productive enough to get a big grant every 3 years, she can only afford to take on students who will do as they’re told, who will work however long it takes to do it, and who are already capable of doing this rather than needing to learn how to do it “on the job”. If that means screening out students who would develop into excellent scientists with a supervisor who took some other approach, and screening out students who don’t like being lied to or having their time intentionally wasted, well, that’s the whole point. She’s not looking for students who want to be trained and mentored, at least not in the way that every ecologist and evolutionary biologist I know thinks of training and mentoring. She’s looking for students who will follow orders and work long hours at routine tasks in exchange for being given a credential. Ok, that’s not the most charitable reading of Hope’s response to Ambika’s post–but I don’t think it’s an unfair reading.

      As for the notion that the competitive funding climate, not Hope herself, is to blame for that supervisory approach, it’s falsified by the existence of many, many *very* successful supervisors who get *highly* competitive grants and sustain *very* successful long term research programs without treating their students that way. Including in Hope’s own field, I bet.

      Finally, as Meg’s already noted, if you think the most productive and mistake-free employees are the ones who try to work 80 hours per week, you’re wrong.

      Hope Jahren is a hero and role model to a lot of people, for a lot of very good reasons. But for me, this passage is a reminder, if one were needed, that nobody’s perfect and that somebody can be hugely admirable in many respects while deserving criticism in other respects.

      • I think time is our most precious resource. Junior scientists, relative to their mentors, tend to be blessed with much more of this resource that can be spent on labwork and fieldwork (though everybody’s situation is different). I think a reluctance of a brand-new member of the lab to make a big investment of that resource into an unproductive research task is definitely a bad sign. Before making a big investment in someone, it’s only fair to all parties to assess whether there is commitment and an interest in conducting tasks that may end up being futile.

        We spent hundreds of person-hours in the lab and field last year on a thing that ended up being a total, 100% waste of time. No useful data came out of it at all. Not even close. I want people working with me who won’t see this as an unrecoverable setback.

        Is that something I would do? Well, I haven’t done it, but I’ve done something similar. I’ve given plenty of students tasks to do in the lab that won’t amount to anything, to see how they perform before I make an investment in them. And that’s especially true when I’ve been in an environment where students express an interest in having done research but not actually in doing research. When dealing with irreplaceable samples, expensive analyses, and my own time, I’ve given students mundane things to do (for example, mounting samples which are unlikely to be usable, but they’ve got to practice plenty before they get to that point). Given the materials that I’ve used to construct my house, I’m not prepared to throw any stones.

      • Going to have to agree to disagree with you on this one, Terry. Yes, absolutely, if you have a new student you’re not sure you can trust to take on a big important task, it makes total sense to assign them less important tasks to see how they handle them. And it’s totally fine to make students practice on valueless samples in order to earn your trust as a supervisor. And yes, you do want students who are resilient in the face of setbacks. And yes, I understand that depending on funding, the culture of the field, and other factors, labs will vary in how much rope the supervisor feels students can be given.

        None of which seems like an excuse for lying to your students and tricking them into thinking they’ve screwed up a mundane task in order to discover about them *something you could discover just as easily or better in other ways that don’t involve lying to them.*

        In my mind, there is a *big* difference between what you do and what Hope describes.

      • Yes, I agree that it’s different. We certainly don’t give new undergrads in the lab tasks to work on that, if they mess up, will ruin an entire project. But we also don’t give them something where, no matter what they do, we will declare it a failure (which is my understanding of what Hope Jahren was doing in this passage). Sometimes (say, when learning how to count a sample) we have them practice on a sample where we already know what it contains, as a way of them practicing and us verifying what they’ve done. But, more often, we try to find a project where, if it works, it will give us additional data that will be useful, but where, if it fails, the whole project won’t fail. This can be things like measuring chlorophyll in buckets, or egg ratios on Daphnia, or trying to infect novel genotype pairings.

        I sometimes give an undergrad riskier projects — say, working on a new parasite we’ve never worked on — knowing that it might completely fail. But that failure would be informative, and there’s also a reasonable chance that it will succeed.

      • Just to clarify, Hope isn’t tricking students into thinking they did it wrong. They invent a tedious labeling scheme for an imaginary project. Then, when they go to the student to see if the labeling is done, either Hope or Bill pretends to decide that the experiment will never work, and the tubes are dumped in the trash. This is still hazing, of course, but I didn’t want anything to be misconstrued: they don’t tell the student s/he did it wrong, but rather they say that now they’re not going to use the tubes.

        In Hope’s example, students do not need labeling practice. The distinction in the examples from Terry and Meg is that the lesser tasks are practice before learning something bigger or taking on more responsibility.

      • Thanks for the clarification. Obviously, in light of that I withdraw those bits of my comments that reflect my misunderstanding of the quoted passage. But I stand by the main thrust: I still don’t think it’s a useful or appropriate screening or training exercise, and I still think “hazing” is an appropriate word.

  2. Thanks for a great list of links. I loved Lab Girl, but I would never conduct that hazing in my lab. As much as I can understand wanting a litmus test that could magically predict ability to work through hardship, I think that real science is test enough and with mentorship offers the student a chance to improve. That said, I’ve used some tough love approaches myself.

    I completely agree with this from Meghan:

    “students, say, don’t want to buy a piece of equipment that would save time on the project because they don’t want to spend extra money.”

    This is really important.

    • I’ve used some tough love approaches too. For instance, this old post on the importance of asking tough questions at talks: https://dynamicecology.wordpress.com/2011/08/14/on-asking-tough-questions/. But what Hope’s doing is not tough love in my mind. It’s just pointless, or worse. Giving your students a pointless routine task and then lying to them about whether they did it correctly is not an effective way to teach or select for any trait worth having in a grad student, not even “resilience” or “commitment”. Plus, you know, you really shouldn’t lie to your students.

      I’m reminded of how it used to be the case (don’t know if it still is) that the ESA would require student applicants for the Buell and Braun awards to fill out a separate application form (rather than, e.g., just tick a box when submitting their abstract) and write several hundred words on the “importance” of their work in addition to their abstract. Those essays were never passed to the Buell and Braun award committee; they were just discarded. They were used simply as a hurdle to hold down the number of applicants, on the view that anybody who didn’t want to bother jumping the hurdle wasn’t sufficiently hardworking or committed to deserve an award. I told the ESA staffer who told me this that having students do makework was a terrible way to select for the most “hardworking” or “committed” students. Hardworking, committed students (i.e. the vast majority of all grad students) value their time and don’t want to waste it on makework just for a shot at a modest prize. I never got a response.

      • I’m sort of regretting having used the phrase “tough love” (though I defined it, and kept it in quotes through most of the post, for a reason). Just to clarify, I’m not at all advocating that anyone, especially people from underrepresented groups, be coddled or remain unchallenged, and I think that a lot of what people call tough love (e.g. asking tough questions, as in the post linked above) is vital to really great mentoring.

      • No worries Ambika, I at least think it was perfectly clear from your post what you meant, and that you weren’t suggesting that anyone be coddled or remain unchallenged.

        Also wanted to say I appreciated your posts. The reviews I’d seen of Lab Girl so far were uniformly positive. So I was interested to see someone criticize a passage (obviously, just one in a long book) that seems to deserve criticism. Will be interested to read further thoughts from you and others as you continue to work through the book.

      • Hmm…skimmed the storify. Agree that authors playing games with who they acknowledge, in order to shape reviewer/editor opinion of the ms, or prevent it from being assigned to reviewer X, is totally inappropriate. And also very unlikely to materially affect the fate of the ms, and at least as likely to be seen through and make the author look bad. Fortunately I also think it’s rare, at least in my admittedly-anecdotal experience (I’ve never seen it done, as a reviewer or in my years of editing for Oikos).

        Confess I don’t see how the issues raised in the storify apply to giving HTs to someone who alerted you to something that you yourself decided to share. But I haven’t stopped to think about it much, and I note that you have various reasons for not giving HTs any more. Which by the way I don’t think is a big deal. People get their links in all sorts of ways, giving HTs is far from universal, and as far as I know nobody really cares that much whether they get an HT or not. So personally I don’t think the choice of giving HTs or not is all that big a deal, I think either choice is fine.

      • p.s. It occurs to me that some acknowledgment customs are not unlike tipping customs. How in some countries, you don’t tip for X, because the custom is that whoever provided you with X was just doing their job, and you don’t give somebody a special gift just for doing their job.

        There’s nothing inherently polite or impolite about different tipping customs, it’s only rude not to follow the established custom. Which of course, it’s difficult to do if there *is* no established custom. As perhaps there’s not when it comes to who to acknowledge in scientific papers.

  3. I fully agree with your point that DEB is well run and open and suffers mainly from too many scientists for too little money.

    That said, I thought it was a bit of a punt on the question of whether ratings were meaningful. Their dataset wasn’t really set up to answer that (as they acknowledge). But there have been papers that directly address this, including one by DEB’s own Sam Scheiner (http://onlinelibrary.wiley.com/doi/10.1890/13.WB.017/full). There are also a couple of NIH ones (one, I think, from the cardio section). And the conclusion is pretty clearly no.

    Until we get our heads around the fact that there are decreasing marginal returns on amount of money given, and that our predictive power is very low, we are not going to design optimal granting systems. Once you do, I think you end up with a system that looks a lot more like Canada’s NSERC (many low awards with a low cost to play – i.e. moving towards a lottery and just dividing the money available by N) than the US’s NSF (fewer more concentrated awards that create winners and losers with a belief that you are creating the winners objectively and fairly).
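    To make that concrete, here’s a toy model; every number in it is invented. Give labs concave returns on money, make review scores a noisy read on true quality, and compare spreading the budget thinly across everyone against concentrating it on the top-rated labs:

```python
import numpy as np

rng = np.random.default_rng(1)
n_labs, budget = 1000, 1000.0  # arbitrary money units

quality = rng.lognormal(size=n_labs)                      # true lab quality
rating = quality * rng.lognormal(sigma=1.0, size=n_labs)  # noisy review score

def science_output(money, quality):
    # Concave returns: doubling a grant less than doubles the output.
    return quality * np.sqrt(money)

# NSERC-style: everyone gets an equal small grant.
spread_total = science_output(budget / n_labs, quality).sum()

# NSF-style: the top 10% by (noisy) rating split the whole budget.
top = np.argsort(rating)[-n_labs // 10:]
concentrated_total = science_output(budget / len(top), quality[top]).sum()

print(f"total output, spread evenly: {spread_total:.0f}")
print(f"total output, concentrated:  {concentrated_total:.0f}")
```

    With these particular settings the spread strategy wins; make the ratings less noisy or the returns less concave and concentration can win. Which is exactly why the two empirical facts above matter.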

    • This gets back to issues we’ve discussed before:

      -whether one should care if the (inevitably small) variation in NSF or NIH panel ratings *among funded proposals* predicts the future impact of the work supported by those proposals

      -how predictive panel ratings need to be of future impact (however measured) in order for panels to be worth having

      I agree with you that NSF moving towards an NSERC-type system makes sense and should be possible to some degree, without being sure exactly how far NSF could or should go given the likely effects of any such move (e.g., on funding for postdocs) and the other differences between the US and Canadian funding ecosystems. But I disagree with the idea of doing away with panels entirely and going to a lottery.

      • And this is where I chime in and say that, if the award sizes get too small, it won’t be possible to support technicians and other career scientists. To link this with the discussion above, something I really like is that Hope Jahren has highlighted how valuable technicians can be and has called attention to the difficulties of finding ways to support them. The Bills of this world have a ton to offer to science, and deserve to have much more job security than they currently do.

  4. Meghan, thanks for the tips on em-dashes and the like! Very annoying that it doesn’t work in Word, Chrome, etc. (I think because they are not “Cocoa” apps), but I suppose a workaround could be to do a “find and replace all” post facto. A little less satisfying, admittedly…

  5. “The nonexistent jobs crisis is a reminder of the dangers of taking government data at face value and using them for unintended purposes.”

    So the writer is criticizing statistics from government data (SED) by citing other statistics from government data (SDR)? This, despite revealing that the former is a larger and potentially more representative sample than the latter. Yikes…

    Stating that people with PhDs have low unemployment after 2 years tells us nothing about the nature of their employment. It is not at all surprising that the type of people able to obtain a PhD are also able to obtain some type of employment after 2 years. The true crisis has to do with whether the job actually makes use of the training, and follows from the whole purpose of the “apprenticeship” in the first place.

    I realize the issue is complicated (as Jeremy mentions) but to me, this is one of those click-bait titles that distracts from the actual problem.
