A curmudgeon’s musings on modern pedagogy

(Warning: this is long – you can skip to the conclusions, or even the bottom-bottom line at the end, if you want.)

I am not an expert on pedagogical methods. But I have been on the teacher side of university education for almost 20 years. And I've literally taught 100-, 200-, 300-, 400-, 500- and 600-level classes. I've taught classes ranging from 630 students to 3, math-oriented to field-based. In short, a pretty typical mid-career teaching history. About 8 years ago, I took over a 600+ student intro bio class (basically BIO 100) and spent a lot of time thinking about goals, which led to my introducing clickers, which led to my becoming the lead academic (working with the campus learning center) on clicker introduction in basic science classes across campus. And I was a TA in a class before and after the introduction of active learning. (My most recent experience with changing pedagogy in a class is discussed below.) So I've formed a few opinions along the way.

I am by no means at a settled state of where I think university education should go. But the following are a few thoughts and musings. (NB: Meg has a series of good posts on this topic as well: here, here, and Friday links here. And Terry has a bunch of good posts over at Small Pond here and here.)

Point #1 – Buzzword blur – we tend to just lump all the trends together, but they are not the same. You can do one without the other. (And there are distinct goals and rationales in each case.) Here is a quick tour:

  • Active learning – activities in which the students are not just passively listening but are actively producing knowledge via inquiry, answering questions, discussing, etc. This was one of the earliest movements (in ascendancy in the late 90s).
  • Peer instruction – a model in which students teach each other. Often students are given a question and then discuss the answer with their peers. This draws on research showing most people learn better in a social context. When tested via before-and-after versions of the same question using clickers, I am astonished at the improvement (often from 10% right to 95% right).
  • Flipped classroom – the buzzword du jour – this starts from the notion that lecturing is a relic from the days when textbooks were rare (hand-copied). Flipping means students do passive learning (reading, watching lectures) at home on their own schedule, and then use the classroom, with the instructor present, for something more active where the instructor can intervene and assist. This can range from something as simple as having students do what used to be their homework in class and raise their hands for help, to much newer approaches like peer instruction.
  • Just-in-Time Teaching – the notion that the teacher will dynamically adapt the material being taught based on real-time feedback on what students are not understanding. This implies an ability to reteach material in a new way. It also implies real-time feedback, either from quizzes just before class, some in-class feedback mechanism (clickers, hands raised), or – although nobody talks about it – old-fashioned sensitivity to puzzled looks on students' faces.
  • Inquiry-based learning/Investigative learning – instead of teaching material, giving students problems (specifically non-trivial problems) to solve. The teacher's role is as a facilitator, helping students discover first the process they need to use and then the answers to the questions themselves.

Point #2 – Clickers – clickers are just a tool – they can be used for any of the above techniques, or for purposes not listed above. At one end, clickers can be used to pose simple multiple-choice questions and then reward or penalize based on attendance (there is a difference, and both are possible). Clickers can also be used in peer instruction (get clicker answers, show what everybody answered, discuss for 2 minutes with peers, then revote – amazing improvement occurs). Clickers can also be an important tool in just-in-time teaching if the teacher is flexible enough (i.e., they're a great way to find out whether the students really understood what you just taught, if you're brave enough to deal with a "no, they didn't" answer). Generally, one should only expect as much out of clickers as one puts into them. And clickers have real issues with cost – old-fashioned methods like hand-raising can do many of the same things (although it's harder to force 100% participation). Honestly, I think the single biggest value of clickers is to serve as a disruptor and force you to think about how and why you teach. And if you don't do that thinking, then clickers aren't doing much.

Point #3 – Remembering why we are doing this – Although often not made explicit, the goal of most of the techniques listed in Point #1 is to elevate learning up Bloom's taxonomy. If this is not the goal, then such techniques are not necessarily the best approach. Bloom's taxonomy was formulated in three domains – cognitive, affective (emotional) & psychomotor (physical) – but the most talked about, and the relevant one here, is the cognitive. This recognizes the simple idea that there are different levels of learning, starting with knowledge (memorizing facts), then comprehension (extrapolating/understanding), then application (using knowledge), then analysis, then synthesis, then evaluation. The last sentence is immensely oversimplified, of course. But this is the central motivation of all of these techniques: to elevate learning up the taxonomy. Much of the origin of these techniques was in physics, where people realized students were memorizing formulas well enough to plug and chug on tests, but had major failures of basic intuition about how physics works. So they began teaching to develop higher-level mastery.

Learning higher up on the taxonomy is obviously a good thing. But the thing I never hear anybody discuss is that it is part of an inherent trade-off – essentially a depth vs. breadth trade-off. Any realistic application of active learning and similar techniques to elevate learning involves covering less material. Covering it better, but covering less. Are there times and places in university courses to cover the breadth rather than the depth? I struggled with this question a lot teaching intro bio. The breadth expected of that course by higher-level courses, and indeed the breadth of life itself, creates a strong demand in the breadth direction. But to cover it meant giving up on deeper understanding of higher-level concepts like homoplasy. Which is more important: a) truly understanding homoplasy rather than just being able to regurgitate a definition of it (e.g., being able to produce new examples of homoplasy, which would probably be the applying or 3rd level of Bloom's taxonomy), or b) remembering platyhelminthes and their acoelomate architecture and basal position (level 1, or remembering)? Maybe some of you out there are such fantastic teachers you can achieve both in a semester. But in my experience this trade-off is very real (not just on these two exact topics, of course, but between these two levels of learning across all of the material to cover in an intro bio class). I never did fully decide what I thought about this, and I'd be curious to hear what others say. But I do strongly believe there is a trade-off between breadth and depth (moving up the taxonomy) that is not talked about enough.

Point #4 – Note-taking – I find it ironic that in this day and age of focus on active learning and moving up the taxonomy, teachers have largely capitulated to giving students copies of PowerPoint slides, eliminating a very effective method of real-time active learning while listening to lectures (many studies show that note-taking is a very effective learning method). And nobody is calling this out.

Point #5 – You can go halfway (or 10%) in – It seems to me the conversation is very binary: all-in flipped/active learning/peer instruction 100% of the time, or boring old traditional. This is totally bogus. If active learning has value, then one five-minute exercise per hour (or even every other class) has value. And practically, it is very possible to choose anywhere on the spectrum from 0% to 100% flipped/active. This is also my reason for being pedantic and breaking apart the ideas in Point #1. One can flip without going inquiry-based, do active learning without just-in-time, etc.

Point #6 – This is not new – Another thing that is not discussed very often is that these techniques are hardly new (but see this link and commentary of Terry's). Socrates was demanding productive/active learning using inquiry-based techniques and peer instruction 2,500 years ago. And many teachers have been doing the same for decades (and millennia).

Point #7 – How hard is it to do? – You can find various opinions about how much work it is to flip a classroom (see Meg here and Terry here). My main experience was also the first time I taught the class, so it is hard to separate the two, and I don't think I have an informed opinion. But I do think that for those of us raised in the traditional lecture mode, it can take more creativity and emotional energy to do something new and different.

Point #8 – Does it work? – My sense of the overall empirical literature on how effective these techniques are is that the answer is complex, which matches my own experiences. There is a lot of evidence that active learning and related approaches match what we know from cognitive psychology about how we learn best, but this is indirect evidence for superior learning occurring. Students on average also enjoy these techniques. This too is indirect evidence (but very relevant in its own right). More directly, studies show statistically significant improvements in level of learning with active approaches, but the pedagogical significance is tougher to assess. A good recent meta-analysis is Freeman et al. They show about a half standard deviation improvement, which amounts to about a 6-point improvement out of 100 (less on traditional exams, more on higher-level concept inventories). But there are a lot of issues with these studies (e.g., are more motivated teachers more likely to adopt active learning techniques but succeed primarily because of the motivation, not the method – or are they likely to teach better because the change in technique forces new energy and attention to teaching, regardless of technique?).
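A quick back-of-envelope conversion may help here: if a half standard deviation corresponds to about 6 points out of 100, the implied exam standard deviation is about 12 points. A minimal sketch (the 0.5 SD and 6-point figures come from the Freeman et al. summary above; the ~12-point SD is back-calculated, not reported):

```python
# Convert a standardized effect size (Cohen's d) into raw exam points.
# An effect of d standard deviations on an exam with the given SD
# corresponds to d * exam_sd raw points.
def points_from_effect_size(d, exam_sd):
    """Raw-score improvement implied by effect size d."""
    return d * exam_sd

implied_sd = 6 / 0.5                             # ~12 points on a 100-point exam
print(points_from_effect_size(0.5, implied_sd))  # → 6.0
```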

My own experience with a partial commitment to such techniques in the BIO 100 course is that the students scored exactly the same average (and I mean to 2 significant digits) on the final exam as they did in the earlier version of the course. It was a rewritten exam, and I would like to argue that it was testing further up the taxonomy. But this was not formally measured. And it wasn't miles up the taxonomy (it was still multiple choice, for goodness' sake). My overall impression is that there is an improvement in "learning" (hard as that is to define and measure), but it is not stunning or even obvious in its magnitude (i.e., I would have to use statistics to tease apart the improvements). It's certainly not like every student is suddenly moving up a grade (e.g., B to A) in the class or anything. Freeman suggests 4–5 points on traditional exams, which might be a B- to a B. This still sounds a little high compared to the experiences I know of, but not outrageously high. But I am more confident (based on experience and literature) that students are enjoying things more, paying attention more, and probably developing a slightly more sophisticated understanding. And that is nothing to sneeze at.

My most recent personal experience with pedagogy reform

This year I abandoned PowerPoint (except for occasional graphs and pictures) and did a lot of chalkboarding, but in the end you would have to say they were "traditional" lecture classes (in fact, really old-school lectures without visual aids except the chalkboard). But the students took lots of notes (no PowerPoints to give out). And I spent a lot of time asking and being asked questions (there were <20 students, so everybody was involved). Indeed, despite always making dialogue during class a top priority, a lot more of it happened this year – somehow PowerPoint seems to introduce a wall and turn the goal into finishing the slides instead of teaching/learning. I did some peer instruction, and also just gave ecological math problems to do individually in class, but most of it was more in the vein of Socratic inquiry (i.e., the teacher asking a question and getting multiple responses back). So I wasn't following too many of the buzzwords, but it felt like a much improved class to me. Was this good pedagogy or bad? NB: I am officially combining Point #4 with this experience to launch a new pedagogical movement called "PowerPoint is evil." If this takes off, you heard it here first! But then again, it's possible that getting rid of PowerPoint was just the disruptor (as mentioned above with clickers) that made me pay more attention to my teaching, and five years from now adding PowerPoint back in will improve my teaching.

Point #9 – Class size – Thinking about class sizes above raises another big point – one that I'm sure administrations won't like. How much can pedagogical innovation do to fundamentally change learning (or lack thereof) in a classroom of 600 (or 300, or even 100) students? Teaching a class of 15 students effectively is pretty easy. Teaching a class of 300 effectively is impossible no matter what. The aforementioned meta-analysis by Freeman showed pretty clearly that active learning is most effective in classes with <50 students and decreases in effectiveness pretty quickly in larger classes. Is pedagogy improvement just a giant distraction from the real issue?


Overall, I think the emphasis on pedagogical methods is fantastic (and largely unprecedented in higher ed – most previous reform movements have focused on curricular reform). And I do think there is something real and of value in the active learning movement. But it's not ginormous. And I also think we have gotten overly simplistic, reducing teaching to a one-dimensional bad (traditional) vs. good (100% active learning) axis. The reality is that even the concept of learning is multidimensional (with the Bloom taxonomy being but a single dimension) and that pro-con trade-offs exist on all of these dimensions. This makes it impossible to say what the "best" teaching method is without specifying the learning goal. In practice, I think we are better off thinking of the traditional vs. active/flipped axis as a dial we should tune depending on the situation and goals. And this dial has positions everywhere between 0 and 100. Nor is it really one-dimensional: it has multiple dimensions – 0–100% flipped, 0–100% just-in-time, 0–100% peer instruction, 0–100% inquiry-based learning – each independent of the others. And, although I haven't fully worked it out for myself, I believe in some contexts breadth is a more important goal than higher-taxonomy learning. We don't have a set of best practices for breadth-oriented learning yet, but I wish we did.

One big thing I hope comes out of all of this is that we spend a lot more time in our departments and among colleagues having discussions about what our learning goals are (and no, I don't mean the kind my university requires me to list on my syllabus under the heading "goals" that are just lists of topics covered). I mean talking about how far up the taxonomy a class should go. What breadth is necessary and appropriate in this class to set up future years? Which classes are appropriate for different kinds of learning? Perhaps ecology and genetics should focus on high-level learning and BIO 100 should focus on memorizing the phyla of life? Or maybe not? How important is a 6-point increase on an exam (and maybe half of that in a large class)? Would we be better off scrapping exams and lectures and active learning and putting students in hands-on labs? Or taking ecology students out in the field to design their own experiments? Recall that there are finite resources, so there are trade-offs and limits. How can we measure and assess whether we are succeeding? We need to start having discussions about pedagogical goals in departments. Logically, that should precede decisions about classroom pedagogical methods, but I'm not sure this is how things have happened.

Bottom bottom line – Modern pedagogy (= active learning/flipped classroom/etc.) is not a silver bullet, and it should not become the good end of a one-dimensional value judgment (flipped = good teaching, not flipped = bad teaching). But these techniques definitely have some benefits. There are probably other issues we should be talking about just as much, ranging from the simple, like the declining art of note-taking, to the difficult, like class sizes. And maybe just mixing up our teaching approach periodically is more important than any specific technique. More broadly, we need to think deeply, and talk regularly, about our pedagogical goals – especially depth vs. breadth – and the best ways to get there.

What are your experiences with the modern pedagogy movement? Has flipping classrooms become a bandwagon? Is this a good thing or a bad thing? Is there a breadth vs depth (=up the taxonomy) tradeoff? Should we ever choose breadth? Which of the techniques in point #1 do you think are most important?

Friday links: scooped by a blog post, the real history of P-values, lab safety, and more

Also this week: the history of #icanhazpdf, microbiome pet peeves, new evidence of widespread p-hacking, the difference between terrestrial and marine ecologists, using software to solve the wrong problem, and more. Oh, and never mess with a manatee.

From Meg:

Jacquelyn Gill has a post that provides an important reminder about the importance of considering lab safety. Fortunately, no one in my lab has had an accident as serious as hers, but someone did cut her hand (while making chemostats) enough to require an immediate trip to the doctor. As a grad student, I had to drive a student to urgent care after she sliced her foot open on something while wading into a lake in a limnology class. I sliced my own foot on a zebra mussel in a different lake. And when I worked as a technician between college and grad school, the other tech had what initially appeared to be a very, very bad cut to her hand, but that fortunately ended up not being very serious. So, it’s clear to me that things can go wrong and, combine that with me being the child of a nurse and a fireman (safety first!), and you’d think my lab would be all about the safety training. But Jacquelyn’s post has me realizing that I probably haven’t thought about this enough. We have some basic safety measures (especially that people should head into the field with a buddy, that you need to get off the lake at the first sign of a thunderstorm, and that no one is allowed to go out in the boat unless they can swim); at Georgia Tech I made sure everyone in the lab knew the number to call if there was an emergency and had this taped to the lab phones. (It wasn’t 911 because that would get Atlanta police, whereas GT police would be able to respond faster.) But how would people respond in a situation like the one the Gill Lab was in? I’m not sure. Then again, what sort of training could we do that would prepare folks for the wide variety of (fortunately unlikely!) situations that could arise? Definitely lots to think about!

On twitter, people use the #icanhazpdf hashtag to ask for pdfs that they can’t get on their own (usually via institutional access). For people at institutions, this is a way to bypass the InterLibrary Loan (ILL) system (and I think often results in getting a pdf more quickly). This paper has an interesting summary of #icanhazpdf, including its history and information on what is being requested. It also includes this depressing sentence:

The current scholarly publishing system is so broken that some researchers are forced to make requests like “Still looking for a pdf of my own paper! Please help.”

From Jeremy:

Sociologist Andrew Lindner on how a paper of his was scooped by a blog post, and what this says about the scholarly publishing system. I wouldn’t overgeneralize from what seems like an unusual coincidence, but still, interesting to think about. (ht Brad DeLong)

Noah Fierer’s pet peeves of microbiome studies.

The potted history of P-values, at least when told by certain sorts of Bayesians, is that they were an invention of R. A. Fisher that set scientific inference on the wrong basis for the better part of a century. For instance, Nate Silver spends a whole chapter on this potted history in his recent book. Statistician Stephen Senn corrects the historical record (emphasis in original):

Fisher did not persuade scientists to calculate P-values rather than Bayesian posterior probabilities; he persuaded them that the probabilities that they were already calculating and interpreting as posterior probabilities relied for this interpretation on a doubtful assumption. He proposed to replace this interpretation with one that did not rely on the assumption.

The upshot, Senn argues, is that Bayesians don’t really have a problem with P-values. Rather, they have a problem with other Bayesians. And many contemporary complaints about the evils of P-values are misdiagnosing the root of the problem. Go read and then join the (already lengthy!) discussion in the comments.

Marine ecologists are organism/system focused, terrestrial ecologists are question-focused. At least, that’s one way to interpret the fact that marine papers name the study organism(s) much earlier in the introduction than do terrestrial papers (Menguia & Ojanguren 2015, open access). Casey terHorst comments. Meg, what do you think the results would be if you looked at freshwater ecology? (ht @hughes_lab)

Head et al. 2015 (open access) text mined all open access papers on PubMed and examined the distribution of P-values <0.05 to look for evidence of p-hacking. It's a very careful study, more careful than most of the casual text mining on this topic that I've linked to in the past (somewhat to my regret). Turns out p-hacking is widespread across the scientific disciplines covered by PubMed. The results also suggest that scientists mostly are studying real effects rather than chasing noise. Head et al. also found evidence of p-hacking in meta-analyses of sexual selection in evolutionary biology, but not enough to dramatically alter the conclusions of the meta-analyses.

A while back we did a post on ecologists who are awesome at things besides ecology. In the same spirit, I give you Ravens offensive lineman John Urschel, who is a serious mathematician. (ht Marginal Revolution)

Straight from the horse’s mouth: the relationship between academic economics and economics blogging. (ht Marginal Revolution)

This week in Treating the Symptom Not the Disease: dude, if you need software to tell you if a scientific paper was computer generated, your journal has problems no software can fix.

And finally, never mess with a manatee. :-)

Ecologists think general ecology journals only want “realistic” theory. And they think that’s bad.

Last week I polled readers on whether they shared my impression that general ecology journals only want to publish “realistic” theory, meaning theories tightly linked to data. I also asked readers if they thought general ecology journals should only publish realistic theory.

The answers were loud and clear: yes to my first question, no to my second.

We’ve gotten 102 responses as of this writing (about 24 h after the poll went up), and from past experience we know that the results won’t change much since most responses come in the first 24 h. It’s not a random sample from any well-defined population, obviously. But it’s large enough to be more than anecdotal, I think.

Respondents were a balanced mix of ecologists who primarily do empirical work (37%), theory (29%), or a mix (32%).

Almost everyone either shares my impression that general ecology journals (besides Am Nat) only want to publish “realistic” theory (43%), or isn’t sure (48%). Only 8% disagree with my impression.

Only 10% think general ecology journals should only publish “realistic” theory. The vast majority (80%) disagree. Another 9% aren’t sure.

Looking at the crosstabs, those who think that general ecology journals only want to publish realistic theory skew towards theoreticians (39%) and people who do both theory and empirical work (41%); only 20% are empiricists. Those who said “not sure” are disproportionately empiricists. And most (8/10) people who think that general ecology journals should only publish “realistic” theory are empiricists. The other 2/10 do both; none are theoreticians.

As discussed in the comments in the previous post, it’s not actually clear if general ecology journals are in fact only interested in publishing realistic theory. It might be a case of author perception becoming reality to some extent. And not all unrealistic theory is created equal; some of it really isn’t of wide interest to ecologists (the same is true of any sort of work, of course). See the excellent comments from Andre de Roos, a theoretician and an editor at Ecology, for what he looks for in theoretical papers submitted to Ecology. But even if general ecology journals only have a perception problem, I think that’s still a problem. You don’t want authors seeing you as unwelcoming to papers that you’d actually welcome.

Not sure what can be done about this. But the fact that most ecologists don’t like this perceived state of affairs would seem to provide an opportunity. A general ecology journal that manages to convincingly signal its receptivity to good theoretical work might reasonably expect to start attracting more of it–work that would otherwise go to specialized theoretical journals. That could be an attractive proposition to both the journal, and to the authors, who presumably want to reach a broad audience*. Convincing signals might include running special features on theoretical work, and publishing theory papers from the journal’s editors.** Andre for instance notes that he publishes his theoretical work in general ecology journals, including Ecology.

*Of course, insofar as people doing “pure” theoretical work see their audience as comprising other theoreticians, they’re going to keep submitting to theoretical journals whether or not they see general ecology journals as receptive to “pure” theoretical work.

**Not that any journal wants to be a house organ for its editors, obviously. But if the theoreticians on the journal’s own editorial board don’t see the journal as an outlet for their own work, why should anyone else?

Communicating about lab finances

As I talked about in yesterday’s post, I’ve been thinking a lot about lab finances lately. For the most part, I’ve done this on my own, staring into what sometimes feels like an abyss of spreadsheets in my office. A lot of the recent spreadsheet crunching has centered around some personnel decisions I need to make soon. After spending a while thinking things through on my own, I decided that this would be a good opportunity to talk through some aspects of lab financial management with my lab members.

Overall, I think I'm somewhere in the middle of the road in terms of discussing finances with my lab. I share grant proposals with lab members, but, because the budgets contain salaries, I don't feel comfortable sharing that portion with the lab, so I remove it from the pdf. But I think there's value in their understanding the general process of how things work, so I wonder if I should find a way to share more information with them. The way that fringe benefits and overhead cause budget numbers to balloon is generally completely shocking to people who haven't heard of them before. I think my lab members are aware of that, but I suspect the numbers I had in yesterday's post (most specifically, that a $40,000 salary takes $82,000 from a grant's bottom line) would still be surprising. They still always shock me a little when I see them!

As I said, a lot of my current budget crunching relates to personnel decisions I have to make soon. At first, I was running through all the options on my own, but eventually decided that I should talk a little more about it with my lab. I think the increased dialogue has been good, both because it directly affects them now, and because it provides them some more information on what life as a PI is like (which is valuable as they consider that as a possible career path).

Should I discuss more specific numbers with them? My inclination is not to. It doesn’t seem right to me to discuss people’s salaries (which is a huge component of the budget for my lab). I’m sure my reticence is influenced by having been raised with the belief that money is not something that is discussed.* While it’s possible that having my lab members slog through some spreadsheets would provide valuable experience, I tend to think it’s not so valuable as to be worth their time. At the same time, it seems problematic that future PIs receive no training in how to manage budgets.

While I mostly haven’t discussed specific numbers with my lab, I have tried to emphasize that people are expensive. I think this is important in trying to avoid penny wise, pound foolish solutions. For example, if something costs $200 but would save them weeks of work, that’s totally worth it. In my experience, there’s often an inclination to undervalue the cost of one’s time when thinking about possible purchases.

If you’re a PI, how much do you discuss lab finances with lab members? If you’re a student, postdoc, or technician: how much does your PI discuss lab finances with you and the rest of the lab? Do you think that is the right amount, or would more/less be preferable?


* Whether or not that is a good thing is debatable. I remember very clearly, as a grad student, when I heard I had received a postdoctoral fellowship; when another grad student heard this, the first thing she asked was what my salary would be. I was shocked at such a forward question. But when I relayed that to a mentor later, he pointed out that being open about salaries is a good thing, given all the evidence for inequities in pay.

Keeping track of lab finances

Recently, I’ve spent a lot of time working through lab finances. I need to make personnel decisions, and what I want to do is:

  1. Make sure the amount of money I think I have to spend is the amount I actually have to spend.
  2. Encumber the salaries/stipends/tuition of people currently in the lab.
  3. Play around with different scenarios (e.g., What if that prospective grad student comes? What if I hire another postdoc?) to see what impacts they have on the budget.

This seems like it should be straightforward. It’s not.

First, to make sure the amount of money I think I have to spend is the amount I actually have to spend: At both Georgia Tech and at Michigan (and I’d guess most other universities), there is an online system for tracking accounts. Here, I log in to something called Wolverine Access, then click on the M-Reports option, and then can see a summary of projects (where “projects” are startup and individual grants, in my case). This system is updated once a month when accounts are reconciled. I can see something labeled “Official Balance as of last month closed” (at the moment: Feb 2015) and the Projected Balance. Those two numbers differ from each other, for reasons that I’m guessing are related in part to salaries of people currently supported on that funding. But those salaries don’t seem to be able to fully account for the differences, so I’m not sure why the projected balance is different. Maybe they’re projecting further than one month? Or maybe they’re assuming we’ll spend a certain percentage of the non-personnel funds each month?

Let's assume, though, that I can figure out whether I should be looking at the "official balance" or the "projected balance". Additional questions that then arise are things like: 1) Has the most recent installment of the grant hit the account? 2) Has the REU supplement hit the account yet? 3) Did the invoice for that spec get paid yet? There are ways to look at the nitty-gritty details of all the transactions associated with my accounts, but it always takes me a long time to find the right spreadsheet and, once it's opened, it's somewhat overwhelming. It's tempting to stick my head in the sand and just totally ignore the details of individual transactions, but I know people who've opened those files to find things like someone else's grad student having accidentally been put on their grant. That's the sort of thing you want to figure out sooner rather than later! I've never found anything that major, but I have found discrepancies that needed to be corrected.

Assuming I get all that worked out, the next thing I want to do is to encumber the salaries/stipends/tuition of people currently in the lab (and also any major planned expenses, such as those associated with molecular analyses). My first priority, before taking any new people into the lab, is to make sure I can properly support the people already in my lab. For grad students, this means stipends, benefits, and tuition. For staff (postdocs and technicians), this means salary and fringe benefits. Plus, when charging to grants, there’s also overhead, which gets tacked on. So, if I want to pay someone in my lab $40,000/year, $82,000/year gets charged to my grant. People are expensive.
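To see how a salary turns into that much larger grant charge, here’s a minimal back-of-the-envelope sketch. The fringe and overhead rates below are made-up round numbers chosen to roughly reproduce the $40k → ~$82k example above; actual rates vary by institution, by year, and by type of expense, so treat them purely as illustrations.

```python
def total_charged(salary, fringe_rate=0.30, overhead_rate=0.577):
    """Salary plus fringe benefits, with overhead (indirect costs) on top of both.

    The default rates are hypothetical, picked only so that a $40,000 salary
    comes out to roughly $82,000 charged to the grant.
    """
    direct_cost = salary * (1 + fringe_rate)   # salary + fringe benefits
    return direct_cost * (1 + overhead_rate)   # overhead applied to the direct cost

print(round(total_charged(40_000)))  # roughly $82,000 charged for a $40,000 salary
```

The point of writing it out is mainly that overhead compounds on top of fringe, which is why the multiplier on salary ends up being roughly 2x rather than the sum of the two rates.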

The way I do this is to take whatever number I came up with above and then go over to Excel to play around with numbers (e.g., let me pay my current technician off funds X & Y for the next two years). I pull up the budget template in Excel that I get from the grants person prior to working on a budget for a grant proposal. And then I go in and play around with numbers. What I would prefer would be to have a system that integrates with the one above, where I can ask it to project the budget through to the end of the grant, assuming the additional yearly increases from NSF and the various encumbered personnel. This should be flexible so that, for example, if it took 6 months longer to hire a postdoc on a grant than was originally planned, the salary encumbrance on the grant gets moved by 6 months. The report I downloaded at Georgia Tech allowed me to play around with numbers like this better than the report I can download here at Michigan does. I’m guessing there’s a way to do this, but I haven’t figured it out yet. I have a meeting set up with the financial folks to get pointers!
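The kind of projection described above is simple enough to sketch in a few lines. This is not any university’s actual system; the function, the 3% annual raise, and the dollar figures are all hypothetical, just to show the shape of the calculation, including how shifting a start date (the delayed-postdoc scenario) changes the end-of-grant balance.

```python
def project_balance(balance, people, months, annual_raise=0.03):
    """Project a grant balance forward month by month.

    `people` is a list of (monthly_cost, start_month, end_month) tuples;
    shifting start_month by 6 models, e.g., a postdoc hired 6 months late.
    A flat annual_raise steps the cost up at the start of each grant year.
    """
    for m in range(months):
        raise_factor = (1 + annual_raise) ** (m // 12)  # step up each grant year
        for monthly_cost, start, end in people:
            if start <= m < end:
                balance -= monthly_cost * raise_factor
    return balance

# Hypothetical 36-month grant: postdoc hired on schedule vs. 6 months late.
on_time = project_balance(300_000, [(7_000, 0, 36)], 36)
delayed = project_balance(300_000, [(7_000, 6, 36)], 36)
```

In this toy scenario the 6-month hiring delay leaves six months of year-1 salary unspent at the end of the grant, which is exactly the kind of what-if a flexible budget report would make easy to see.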

Then, once the current people are taken into account, it’s somewhat straightforward to do a similar thing but with scenarios involving new lab personnel (or, for something like startup, new equipment purchases). But these can sometimes be trickier because there isn’t complete information. This is most notable in the case of prospective grad students. In my department, the expectation is that I will support grad students for two semesters on a grant.* So, if I make offers to two grad students in a given year, between the time when I make those offers and when I hear back from them, I’m not sure if I should budget $0 for them or two years’ worth of support. That can make a big difference when trying to make other budget decisions.

From discussions with other faculty members, budgets are always a source of stress. We don’t get training in accounting, but we need to handle pretty substantial budgets. And I’m always really nervous that I will make a mistake somewhere along the line. So, after poring over the spreadsheets for a while, I emailed the accounting folks in my department to ask them for a meeting. We’ll meet tomorrow so that I can make sure I am doing things correctly. Then, just to be safe, I’ll email them to make sure the outcome I think we came to is the one they think we came to as well. And then, after all that, I’ll hopefully be able to feel comfortable that the lab finances are in order.

But I continue to feel like there has to be a better way. Is there an approach that I’m missing? How do you track lab finances?


*The rest of the time is covered by a mixture of fellowships and TAing. And, yes, I realize we are very lucky to have this much support!

Friday links: Andy Warhol on citations, and (a bit) more

From Jeremy:

Philosopher of ecology Jay Odenbaugh argues that ecological theory doesn’t need to make accurate predictions to be successful science, and that you’re misunderstanding the purpose of theory if you think otherwise. The paper’s a few years old, but you probably missed it at the time and it’s still relevant. Speaking of predictions, I predict a counterargument from Jeff Houlahan in the comments in 3…2…1… :-) (ht Ben Kerr)

In 2003 Wellesley College implemented a policy to combat grade inflation. Here’s a rundown of the effects it’s had since then. It’s an interesting study in part because the policy wasn’t campus-wide–it only affected 2/3 of the departments (most of the science departments, and economics, weren’t affected by the new policy because grades in those departments ran lower). Briefly, the policy worked–the marks that were supposed to drop, dropped, and by just enough to comply with the new policy. Students’ choices of major shifted, at least within the social sciences, in such a way as to suggest that previously students were choosing their majors in part out of a desire for high marks. Students in the affected departments gave their profs poorer course evaluations. And there were other effects.

In the future, everyone’s papers will be cited for 15 minutes. (ht Retraction Watch)

To make up for the paucity of reading material this week, here are some funny pictures I found by googling “ecology meme”:


Should theory published in general ecology journals have to be “realistic”?

At last year’s ESA meeting there was some discussion among members of the Theoretical Ecology section about how, with the exception of Am Nat, leading general ecology journals seem not to publish much theory. And further, that general ecology journals increasingly seem to demand that the theory they do publish be “realistic”. Meaning in practice that there needs to be data supporting the assumptions, estimating (or at least constraining) the model parameter values, and/or testing the model’s predictions.

Which seems problematic, at least to me.* I think leading general ecology journals should seek to publish the best work that ecologists do, including theoretical work. It’s not true that only “realistic” theory is of interest to empiricists, with “pure” theory belonging in theory journals. I don’t think it’s fair to expect theoreticians to test their own models while not expecting empiricists to develop their own models. And think of all the important theoretical papers that rightly have had a big influence on all of ecology without being “realistic”. Bob May’s work on chaos. The Rosenzweig-MacArthur model. Charnov’s marginal value theorem of optimal foraging. Many others. Yes, I know that none of those were published in general ecology journals–but the idea that they couldn’t or shouldn’t have been because they weren’t “realistic” (in the sense of being tightly linked to data) bothers me.

I wonder if the issue here isn’t just an empiricism-vs.-theory thing. I wonder if some of it also reflects the increasing popularity of models over theory in ecology. I wonder if we’re so keen to link models and data, and getting so good at it, that we’re coming to see models not linked to data as either of little value, or as some separate thing that belongs in its own journals.

I’m curious about whether you share my admittedly-anecdotal impression here, and if so, whether you think it’s a bad thing. So here’s a little three-question poll, which I encourage both theoreticians and non-theoreticians to take:

*Although I admit that, when serving as a reviewer for general ecology journals, on at least one occasion I’ve asked authors of a theory paper to add in data demonstrating the real-world applicability of their approach.

Sciencing with a newborn

A little while back, I wrote a post in response to a reader’s request for tips on how to continue being a productive scientist while in her first trimester of pregnancy. This is the follow up post, also on request, that talks about the strategies I used for trying to get work done with a young baby. I want to stress that, as with the last post, what I’m writing about here are simply my experiences. Others will surely have different experiences. Because I know that everyone’s situation is different and what works for one person won’t necessarily work for another, I’ve shied away from writing this sort of post before. But I get asked this quite often, and my hope is that this might help some new moms, while recognizing that it surely won’t be useful to everyone. I’m also hoping that people will share their tips/thoughts in the comments, so that people can read through to get ideas that might work for them.

Before launching into how I approached sciencing* with a newborn, I want to acknowledge that the situation in the US for parental leave is not good. But I imagine that, even if I lived in a country with more generous parental leave policies, I would still try to get work done with a newborn. My real family is my top priority, but my science family is incredibly important to me, too. Even with a newborn, I felt a responsibility to try to continue helping my students and postdocs make progress on their work. In both cases I was relatively lucky, at least for someone living in the US, in that I was able to be released from teaching for the entire semester. That, combined with my babies being born early in the calendar year and having a partner who also has a flexible schedule, made it so that they didn’t need to start daycare until 6 months.

In the first month or so after the baby was born, I felt like any work I got done was a bonus. With my daughter, I had saved making a bunch of high resolution figures for when I had a newborn. I just needed to enter a new set of parameters, hit run, and come back a day later to get the figure that had been output. That was easy. With my second child, my lab actually got three papers submitted in the month after he was born. That sounds impressive, but, really, it was just that we had three papers that were very close to being submitted before he was born, but that didn’t quite make it out the door. In retrospect, I kind of wish I had taken a little more time to be totally off, but I had a student finishing up and a postdoc who was going to be on the job market and a tenure dossier due, so getting those papers out seemed important.

With both children, when they were around a month old, I felt both like I should be doing a bit more work, and like I wanted to be doing some work. These experiences taught me that I am much happier when I get to think about science. The amount I worked increased gradually over time. At one month, I would be happy if I could get in a couple of hours of work. At five months, I wanted to be working several hours a day.

With my daughter, the strategy my husband and I used was to trade off watching her by feedings. We were worried about the possibility of nipple confusion, so I was nursing her for every feeding. Her general routine was nurse-play-sleep. So, I would nurse her, then watch her if it was my turn, then put her down for a nap and then go try to get a little work done before she woke up and needed to eat again. If it was my husband’s turn to watch her, I would nurse her, hand her off to him, and then try to get work done. In theory, that was a longer time to work, but I would often get distracted once he put her down for a nap, since I would start wondering how soon she’d wake up and think I might have to stop working any minute. Overall, I too often felt unable to focus when I was trying to work.

With my son, we used a strategy that worked much better for us: I would work in the mornings while my husband watched our son, and my husband would work in the afternoons while I watched the baby. This allowed me to focus more fully on my work in the morning. This strategy was only possible because we were fine with giving our son bottles when he was little. (We started when he was one month. He was a good nurser, I’d read something indicating there’s no real evidence for nipple confusion**, and my guess is that waiting to introduce a bottle until my daughter was older probably contributed to it being really hard to get her to take a bottle, which was a source of stress as she neared the point of starting daycare.) Me working in the morning worked well for two reasons: 1) I am naturally a morning person (and my husband is not), so this fit in with our natural schedules. 2) Breastmilk supply is highest in the morning, so it was easier to pump then. This strategy let me build up a nice little freezer stash that helped a ton once the baby was in daycare full time.

When using this half-day strategy, I often would stay home while working, since I didn’t want to take up work time with commuting. I would use that time to work on things that really required focus (especially writing, editing, and data analysis). I would save things that I could do with the baby for the afternoons. That included meetings with lab members (which I often did via Skype – without video – so that I could bounce on the yoga ball or nurse the baby to keep him happy without that being awkward); fortunately, my lab members were all really understanding and flexible, and we’d often just schedule meetings for times like “around 2 – I’ll email you once I get the baby down for a nap”. We were also in the process of choosing a new Intro Bio textbook (which involved a surprising number of meetings) and planning for a new building (also lots of meetings); my son came to those meetings with me, worn in a baby carrier.*** He was really happy in the carrier as long as I was swaying, so I would stand in the back or off to the side and sway through the meeting; obviously this strategy wouldn’t work as well with a baby who likes to yell. As far as I know, no one minded this approach, which is surely partially because I’m in an environment that is very supportive of work-life balance.****

I think trading off half days was more effective for me than if we’d tried to alternate days (me working one, my husband the next), since I could focus really completely for the half-day. If it had been a full day, my focus would have lagged during that time. Plus, it let me get into a daily rhythm, which I liked. At the same time, trying to do as much in the afternoons while watching the baby was certainly exhausting (especially since my son was a really bad sleeper, so I wasn’t getting a ton of sleep at night). While it is possible to keep a baby quiet through lab meeting by bouncing away, it’s really tiring. But I’m not sure how to resolve this issue. The first year felt like a constant tug for me between wanting to be with the baby and wanting to do science. I was always trying to juggle taking care of my family, myself, and my lab.

Reading back through what I’ve written, I’m unsure of how to end this post. I left it hanging without an ending for several days, because I still feel somewhat conflicted about the strategies I used for sciencing with a newborn. There’s a part of me that thinks the strategy we used with my son worked well. And there’s a part of me – especially the part that has mommy guilt – that wonders if I did too much when my kids were little babies, and if I should have dialed back more on work. But, when I try to think of what I would have dialed back on, there are no obvious candidates. It feels very hard to put my job fully on hold. But I recognize that I am a product of my culture, which values workaholism. It would be really interesting to hear from other parents, and especially from those in countries where taking time off in general and parental leave in particular is more of a cultural norm: how did you try to balance a baby with work when the baby was very young? Did you feel like you needed/wanted to work with a newborn?


*I am aware that this is still not a real word. I think it should be.

**I can’t find the link now, unfortunately.

*** One key piece of advice for taking the baby to work with you: bring extra clothes for the baby AND yourself. It’s not the end of the world to go to a meeting with spitup on your clothes, but it’s nicer to be able to put a new shirt on! Also: it’s amusing to see how quickly people clear out of your office when you all realize there’s been a diaper blowout.

****People would actually be disappointed when I would show up without the baby, which was really nice.