What are you looking forward to as a semester break treat?

I’ve had a really busy fall, and am very happy to be in the home stretch!* There are a lot of things I’m looking forward to about the break between semesters, one of them being:

Philip Pullman's The Secret Commonwealth

I loved Pullman’s His Dark Materials trilogy (which I first listened to — in audiobook form — as a grad student), and the first book in the new trilogy, The Book of Dust: La Belle Sauvage, was a highlight of my semester break two years ago. I know that I will not be able to put down the second book once I get into it, so I’m saving it as a post-semester reward! I also got a book by Martha Grimes. I haven’t read any of her stuff before, but I suspect I’ll like it. And since I’ve decided to reread The Book of Dust prior to reading The Secret Commonwealth, that means I have three books I’m saving for the semester break that I’m really looking forward to.

In conversations with a few folks lately, they’ve talked about what they have in mind as a semester break treat — I’m definitely not the only one with reading plans! I’m curious to hear what others have planned. Please share in the comments!

*I mostly feel like “Home stretch! Finish strong!” but other times this is more accurate:

It’s hard to systematically improve K-12 student outcomes by improving teaching. Are there implications for universities?

The Gates Foundation recently spent 6 years and a lot of money trying to improve student outcomes in 3 big US public school districts and 4 US charter school networks by improving teaching. Metrics of teaching effectiveness were developed, tracked, and incorporated into hiring, retention, promotion, and salary decisions. Teachers received frequent, structured observations of their teaching, and received training in how to teach better. And of course, the total investment in all this was much higher than the Gates Foundation’s expenditures, once you account for all the teacher, administrator, and support staff time involved. In monetary equivalent terms, the total cost was on the order of several thousand dollars per pupil per year. All of which made basically no difference on any measure of student achievement, not even if you restrict attention to initially low-achieving students.

I freely admit I’m not an expert on this stuff. Just thinking out loud here. Here’s my question, to which I don’t know the answer: what would you find if you did something similar to try to improve university faculty teaching? On the one hand, university faculty typically have little pedagogical training, so maybe there’s more room for improvement in their teaching than there is among K-12 teachers. I’m sure my own teaching has room for improvement, and I doubt I’m alone in that! On the other hand, just like in K-12 education, a lot of things that have big effects on university student achievement can’t be addressed by anything professors do in their classrooms.

To be clear, my question is not “Is there any meaningful variation among university faculty in how well they teach?” (I’m sure there is!), or “Can individual faculty ever improve their teaching?” (I’m sure many can!) My question is, “Can you meaningfully improve university student achievement, compared to the status quo, with institution-level initiatives that aim to train, hire, and reward good teachers?” I don’t know the answer. But extrapolating (over-extrapolating?) from the linked report, it seems like “no” might be a possible answer.

I guess the question behind my question is, if a university wants to improve institution-wide student learning outcomes, what sort of initiatives work? I’m sure there must be some research on this, of which I confess to almost complete ignorance. Looking forward to learning from your comments.

Update on recording office hours: It seems to be working!

Over the summer, I wrote a post thinking about how to make office hours more accessible, specifically wondering about whether I should be calling them “student hours” instead of “office hours” and whether I should record them. This post is an update on that. The short version is:

  • I decided to stick with “office hours”, but also explained on the first day of class and in the syllabus what office hours are and who they are for (everyone!).
  • I have been recording office hours and think this has worked well, though it was harder to set up than I anticipated.

The rest of this post will focus particularly on the recordings, since I’ve had a few people ask me for more information about that. As I wrote about in my earlier post, this idea came from my UMich colleague John Montgomery, who records his office hours (which he calls “Open Discussion”) and who reported that student feedback on it was really positive.

At first, it seemed like it would be easy to record them. My lectures are recorded through a university system, so I thought that I would just be able to do the same with office hours. But no rooms with lecture recording capability were available for me to reserve for office hours. “No problem,” I thought, “I’ll just figure out another way.” And I did…eventually. Below, I’ll describe how I’ve been doing this, the response to the recordings, and things that could be improved.


On a bad argument for grant lotteries

Nature recently did an interesting news story on the growing trend for scientific funding agencies to hand out grants via lottery, from among the proposals judged to be fundable. I thought the article did a nice job touching on the various arguments for and against grant lotteries. But I was struck by a quote at the very end from economist Margit Osterloh, an advocate of grant (and publication) lotteries:

“If you know you have got a grant or a publication which is selected partly randomly then you will know very well you are not the king of the Universe, which makes you more humble. This is exactly what science needs.”

Ok, this isn’t a big deal. It’s one quote in one article, and based on a skim of some papers it’s not Margit Osterloh’s main reason for favoring grant lotteries. That said, it’s a very puzzling small deal. So at the admitted risk of talking about something that might be best ignored, I’m going to talk about it a bit.


In praise of Slow Boy science

Back in college, I was on the cross-country team. Williams College was (and is) in NCAA Division III, which for non-US readers basically means the lowest level of intercollegiate athletics. But even in the context of Division III, I was a very slow runner. So slow that I was an official Slow Boy.

The Slow Boys were a self-selecting, tongue-in-cheek club within the men’s cross-country team. They were founded years earlier, when a few of the fastest guys on the track team started proudly calling themselves the Fast Boys. In response, some of the slowest guys on the track team started proudly calling themselves the Slow Boys. They even chose a Latin motto, “Festina lente”.* And somehow over the years, the Slow Boys (i) became a self-perpetuating thing, and (ii) became a cross-country thing rather than a track thing. Every year, at the end of the cross-country season, the current Slow Boys would select some of the first years to join their illustrious slow ranks. And they’d choose a rising senior as the new King Slow Boy, whose symbol of office was not a jeweled scepter but a lead baton. Why yes, I was King Slow Boy in 1994, thanks for asking, please read the footnote to this sentence so I can bore you with an anecdote about that.** 🙂

Why am I telling you all this? Partly because the queue was empty, but mostly because I think the institution of the Slow Boys illustrates some broadly-applicable lessons for science, about creating an environment in which everyone feels like part of the team and can achieve their full potential. We need Slow Boy science. 🙂


How do ecological controversies typically end?

We talk a lot around here about ecological controversies: asking why some ecological ideas become controversial, and polling ecologists on what they think about currently-controversial ideas.

But how do ecological controversies end? I can think of several possibilities:

  • The controversy gets resolved in favor of one side or the other, at least in the eyes of most people. Example: the late-90s controversy over whether effects of biodiversity on ecosystem function in “random draws” experiments are driven by a “sampling effect”. Resolved by widespread adoption of the Loreau & Hector (2001) additive partition and related approaches. (well, resolved except maybe in the eyes of a few holdouts)
  • The controversy gets resolved because everyone agrees that both sides were partially right. That is, everyone agrees that the answer to the controversial question is “it depends”, or “one side is right X% of the time, the other side is right 100-X% of the time”, or “one side is right under conditions Y and Z, the other side is right under other conditions”. This is purportedly the way all ecological controversies get resolved, but based on our poll data it’s actually not easy to find examples of ecological controversies that get resolved in this way! Though I can think of a few. The controversy over whether local-regional richness relationships typically are linear or saturating, for instance.
  • The controversy gets resolved when both sides turn out to be totally wrong. I’m thinking for instance of 19th century debates about the age of the Earth. As far as I know (please correct me if I’m wrong!), everyone involved missed on the low side.
  • The controversy gets resolved by a Hegelian synthesis of both sides’ opposing views. That is, a synthesis that shows that both sides had a point, but they also shared some blind spots. Synthesizing the views of both sides not only resolves the controversy (or points the way to showing how it could be resolved), but also raises new questions that both sides had overlooked. That’s more or less how the density-dependence vs. density-independence debate in population ecology was resolved (Bjornstad & Grenfell 2001).
  • The controversy devolves into two opposing camps who keep repeating the same points, while everyone else stops caring about the issue. Based on our poll results, that’s more or less how the controversy over ratio-dependent functional responses looks to be resolving itself. And further back, I think the SLOSS debate might be another example? As a grad student back in the late 90s, I recall David Ehrenfeld telling a class that, as EiC of Conservation Biology, he’d decided to stop publishing SLOSS papers because nobody had anything new to say on the issue.
  • The controversy stops without being resolved because everybody, including the main participants, stops caring about the issue. Can’t think of any examples of this in ecology off the top of my head.
  • The controversy gets resolved because it’s shown to not actually be a controversy. Ziebarth et al. (2010) is a possible example, showing that a famous old controversy over population regulation should never have been a controversy at all. There was actually no conflict between the claims of the opposing camps. The whole debate was premised on a misunderstanding of the implications of the relevant empirical evidence. An example from evolutionary biology is Robert Mark’s brilliant demonstration that the whole debate over “spandrels” was premised on misunderstandings of architecture on the part of all the main participants.
  • The controversy doesn’t get resolved and just runs forever. P-values, anyone?

It would be interesting (and hard) to compile data on the frequency with which ecological controversies get resolved in different ways, and try to explain why different controversies were resolved differently. It would also be interesting to know if ecological controversies tend to get resolved differently these days than they used to. For instance, do more empirical controversies get resolved in favor of “it depends” these days, thanks to the widespread adoption of meta-analysis?