Friday links: collaborating rivals, a good ending to a bad week, and more

Also this week: take the weekend off!, rethinking flipped classrooms, negative results in ecology, the post-tenure doldrums (yes, we know, #firstworldproblem), what good are birds, explaining liberal academia, how to choose grad students, one-stop shopping for data on women and minorities in science, young scientists have young ideas…[forgets to breathe, passes out]

From Meg:

Several people shared this link a little while back, but I only just got a chance to watch it. It’s amazing. Watch it. Mary-Claire King shares the story of the very awful week that led up to her getting her first major grant, starting her work on BRCA1. The story features a totally unexpected babysitter twist near the end. May we all find a Joe when we’re having a really bad week!

The Casting Out Nines blog had a post on evolving thoughts on the flipped classroom, which I found interesting. For the first two evolved thoughts, my current thoughts still match what he used to think. It will be interesting to see if my thinking evolves the way his has!

David Perlmutter had a post on dealing with the post-tenure doldrums. I do know people who’ve really stalled for a bit after getting tenure. I think the major thing is that the tenure hurdle was the main one on the horizon for so long that there’s a tendency to think “Now what?” after clearing it (and also some exhaustion from the long run-up to it).

PsycGirl has a post on her work-life balance secret: taking weekends off. I haven’t tried the approach of taking weekends fully off, though a mentor I respect a lot (who is very productive) recommends an approach of taking one full day off every week. I generally find myself trying to squeeze in an hour or two on weekend days; it would be interesting to know if there’s a cost of doing that in terms of how well-rested I feel on Monday.

This is a great summary of research on how to learn. Hint: reading and rereading are really ineffective (and give a false sense of security). (ht: Bug Gwen)

This app gives students coupons to local businesses if they keep their phone locked during class.

Colleges underreport sexual assaults, even after they’ve been fined. The story includes this quote: “The result is students at many universities continue to be attacked and victimized, and punishment isn’t meted out to the rapists and sexual assaulters.” Absolutely unacceptable.

How should we evaluate prospective graduate students? This analysis from UC-San Francisco says that scores on the general GRE, undergrad GPA, and ranking of undergrad institution did not correlate with performance as a graduate student. (ht: Gina Baucom)

From Jeremy:

Ecology papers report results supporting the tested hypotheses less often than papers in most other fields of science and social science. I leave it to you to decide if that’s good, bad, both, or neither. I also leave it to you to decide if you should believe it. The sample sizes are small, and I haven’t read the underlying paper to check the methods, so treat this with a lot of caution.

Research on why academics tend to be political liberals. Short version: it’s self-selection. My question: is this a problem, and if it is, what could or should be done about it? After all, we worry about self-selection on the basis of other attributes that aren’t, or shouldn’t be, germane to teaching and research (well, not in all contexts), like gender and ethnicity. (ht Economist’s View)

Servedio et al. (open access) is a nice little paper on the importance of “proof of concept models” in evolutionary biology (and any other field). I like their point that asking how such models can be tested is the wrong question to ask. The model is itself a test: a test of our verbal intuitions. And I really like how the paper goes beyond general philosophical remarks and uses specific examples from evolutionary biology to illustrate its points.

Neuroskeptic with the story of an “adversarial collaboration”: neuroscientists with opposing views on a controversial topic collaborated to design an experiment on the topic, preregistered it, and ran it. Interesting and heartening to hear that they had no problems agreeing on a design. Not surprisingly, but still somewhat disappointingly, they wrote separate discussion sections and still don’t agree on the issue, even though the results of the experiment were clear-cut in favor of one side. At least the “losing” side is now less certain that they’re right. Neuroskeptic suggests that in future the best thing would be for the rivals to agree in advance what evidence would settle the issue definitively, and then go collect that evidence. Anyone who then failed to interpret that evidence as previously agreed would obviously be guilty of moving the goalposts. Unfortunately, I think it’d probably be difficult to get rivals to agree in advance on a definitive test. It’s too risky to give up all your wiggle room. And anyway, I think the real point of such adversarial collaborations is not to change the minds of the rival participants; that’d be nice, but it ain’t gonna happen. Rather, it’s to settle the issue in the eyes of others. That way the field as a whole can move on (if necessary by just collectively ignoring recalcitrant stragglers, rejecting their grant applications and papers, etc.). Any suggestions on controversial topics in ecology that would benefit from an adversarial collaboration? I suspect that in ecology it would often be hard to get agreement on exactly what the question is and exactly what evidence would be definitive. But maybe not: Meg has talked in the past about how her collaboration with Spencer Hall is productive because they each tend to have different hypotheses about what might be going on in their system. So here’s one suggestion: an adversarial collaboration on whether species interactions are stronger and more specialized in the tropics. (ht Not Exactly Rocket Science)

Do younger scientists have younger ideas than older ones? To address this, a new preprint text mines every paper title and abstract in the MEDLINE database to determine the age of the key “ideas”. “Ideas” being operationally defined as “all 1-3 word sequences”, like “HIV” or “nitric oxide synthase”. The “birthdate” of each idea is its first occurrence in MEDLINE, and author “age” (really, career stage) is indexed by the number of years that have passed since the author’s first MEDLINE-indexed publication. Turns out that authors about 10 years into their careers are most likely to try out new ideas (the pattern’s surprisingly clear). The paper goes on to address follow-up questions, such as whether the results are driven by young authors jumping on trendy bandwagons (no). I would’ve liked to see plots of the raw data in order to judge if we’re seeing statistically significant but minor effects here, I worry that there might be some artifact driving the results, and artifacts aside I’m sure you can think of all kinds of caveats to the whole text mining approach. But still, I thought this was a creative exercise.

If the reason to conserve biodiversity is to conserve community- and ecosystem-level functions or services, then are we sure we need to conserve birds? Deliberately provocative, of course. But also raises a broader question that comes up in many fields: what’s the optimal allocation of research effort across topics? For instance, one might question whether certain diseases get too much research attention relative to the amount of death and suffering they cause. Should we be asking the analogous question in applied ecology?

A hypothesis about why most Canadians aren’t likely to vote against the current federal government even though they say they care a lot about environmental issues and the government has a terrible record on those issues.

Large corporations still value the outputs of basic scientific research, but they no longer do it themselves, sez a new working paper. (ht Marginal Revolution)

NSF has a nice little website with graphs summarizing their extensive data on women and minorities in science–field of degree, employment, etc.–and how the numbers have changed over time.

And finally, here’s what I’m doing today. Jealous much? 🙂

14 thoughts on “Friday links: collaborating rivals, a good ending to a bad week, and more”

  1. On the post-tenure doldrums, I found myself wondering if treating the pre-tenure time like a 7-year postdoc might help reduce some of the doldrums. My logic is that if you don’t think of getting tenure as overcoming a major hurdle, then there is no letdown when you realize that nothing changes (except getting asked to serve on more useless committees, of course).

  2. Political imbalance is a problem to the extent that a field has a political dimension: I suspect that political imbalance is more problematic in sociology than in ecology. The problem appears in such areas as the questions that are asked, the way the data are analyzed and presented, and even *whether* the data are presented. For example, here’s a study that appeared in a top-tier political science journal, in which all the errors and omissions that I found tilted toward liberal sensitivities.

    For what it’s worth, my recommendation for addressing political imbalance in a discipline is to require more transparency in research that at least includes preregistration of confirmatory research, public posting of data and code for reproducing the analysis, and high-quality venues to publish reproductions so that the work of checking the work of other researchers can result in more than a blog post.

    I recently had a Twitter conversation about the study referenced in the link, in which emails were sent to directors of graduate studies to measure academic bias. The experiment used a treatment of a student who had worked for the McCain campaign because the researchers were worried that an email from a student who had worked for the George W. Bush campaign would “lead some respondents to question the legitimacy of the email” (p. 119). So that’s an interesting idea: there’s no bias against conservative applicants because directors of graduate studies probably won’t even believe that the conservative applicants are real!

    I’d still guess that self-selection is the cause of most of the political imbalance, but the email experiment seemed to be more a measure of professionalism than political bias because the stakes of the experiment were so low and the treatment was so weak. Plus, not receiving a return inquiry email from a director of graduate studies might be one of the least important types of academic bias possible; see Inbar and Lammers 2012 for evidence about more serious types of bias, and see Skitka 2012 for a critique of Inbar and Lammers 2012.

  3. Ok, I’m late to this, but just had a look at that UCSF study of predictors of success in grad school. I worry that people are making inferences from it that aren’t warranted.

    First of all, don’t you want to know how much the grad students in question varied in the attributes being used to predict their success? If some of them have several years of prior research experience and some have none, but they all have reasonably high GREs and GPAs, well then of course the former, but not the latter, is going to do a better job of predicting grad school success.

    Second of all, in selective grad programs the whole reason admission committees look at GREs, GPAs, research experience, etc. is to decide who to admit. Predictor variables that do a good job of discriminating between who to admit and who not to admit won’t necessarily (and shouldn’t be expected to) do a great job of discriminating among the admitted students.

    This is like complaining about how NSF panel rankings of funded grants aren’t good predictors of which funded grants go on to produce the most highly-cited research. That’s not the panel’s purpose; the purpose is to decide what research to fund.

    The fact that it’s difficult to make fine distinctions among any set of individuals or entities using criteria X and Y does not show that criteria X and Y are inadequate for making coarse distinctions.

  4. Pingback: What if coauthors disagree about what their ms should say? | Dynamic Ecology

  5. Pingback: Ask us anything: investing in your scientific beliefs, and applied papers as corporate prospectuses | Dynamic Ecology

  6. Pingback: Friday links: love letters to trees, are invasive species bad, ASN Young Investigator Award applications due soon, Barbara Kingsolver vs. Mary Treat, and more | Dynamic Ecology

  7. Pingback: Friday links: replicability vs. citations, EcoEssays, and more | Dynamic Ecology

  8. Pingback: Scientific bets vs. scientific influence | Dynamic Ecology
