Also this week: the most beautiful data in the world, the perception of doors, against pre-publication review, and MOAR!
From Jeremy:
The British Ecological Society is marking Black History Month with a series of blog posts by and about black ecologists, their work, and their experiences in ecology. The ESA is doing the same. Lots of good stuff here well worth reading and reflecting on. I’ll also take this opportunity to re-up this excellent guest post from Lynette Strickland from last summer, which speaks to some of the same issues raised in some of these posts.
Oh man, these are the most beautiful data I’ve seen in a long time. Even though they’re the data I think everyone expected.
Sticking with things that give hope: Bree Rosenblum has written a textbook on global change biology that aims to plant “the seeds of a new story: a story in which our actions matter – not because we are racing against the clock fueled by guilt and fear – but because we are bringing our passion and creativity to the grand challenges of our time.” (quoted from an email blurb I received about the book) Sounds intriguing. Related: Meghan’s post on this issue, which has an excellent comment thread. And Brian has various posts critiquing gloom-and-doom stories about global change, such as this one.
Holy s**t: Laurentian University, a public uni in Canada, applied to court for protection from creditors, so that it can financially restructure. The university says it will run out of money to pay salaries at the end of the month. Here is some commentary from Canadian higher education consultant Alex Usher, and here is some more, some of which corrects errors in the previous link.
The ESA on lessons learned from the 2020 virtual annual meeting, and their application to planning the 2021 meeting.
Nick DiRienzo on the status of the two Behavioral Ecology and Sociobiology papers he co-authored with Jonathan Pruitt. The co-authors requested retractions a year ago, on the grounds of serious data anomalies that Pruitt (who collected the data) could not explain to their, or the editors’, satisfaction. And yet, a year later, the papers still haven’t been retracted.
The perception of doors. Very good short essay on macroeconomics. Even if you don’t agree with it, I hope you’ll agree it’s a good example of the general sort of thing you’d want to say to students on the first day of any college or university class. What’s this class all about? Why should you care about it?
New preprint from Insler et al. exploits random assignment of US Naval Academy students to intro course sections (e.g., Calculus I) to estimate how student performance in later courses (e.g., Calculus III) depends on the instructor they were assigned in their intro courses. The study then asks how estimated instructor quality relates to how instructors are viewed on RateMyProfessor.com. I’ve skimmed it, it looks to me like an ideal dataset for addressing the questions asked. And it seems quite carefully done to me, though of course I’m no expert on this literature. Some instructors are indeed better than others (by the measure used in the paper), and poorer instructors are poorer for interesting reasons. Would be particularly interested in comments from folks who know this literature better than me. (ht Marginal Revolution)
Writing in Science, Luis Campos reviews Lara Choksey’s Narrative in the Age of the Genome. It’s literary criticism of a dozen gene-related works of fiction and nonfiction. Based on the review, I can’t decide if I want to read the book or not. On the one hand, I’m interested in how scientific ideas feed into novels and other works of art, so the book sounds right up my alley (for instance). And the review is positive. But on the other hand, the bits of the book quoted in the review make it sound full of jargon. Maybe I’ll wait for Stephen Heard to read it, and then ask him to tell me about it.
Philosopher of science (and other things) Liam Bright’s brief reviews of his own papers. The title–“My Work Is Bad”–gives you an idea of the tone. Not sure what to say about this beyond that I found it striking. Just speaking generally rather than about this piece specifically, there’s a lot to be said for being your own toughest critic, but there’s also a lot to be said for not being too hard on yourself. Finding the optimal balance between those two desiderata isn’t always easy. Anyway, here’s Liam Bright talking briefly about why he wrote and published this piece.
Sticking with Liam Bright, here’s Heesen & Bright (2020) arguing that pre-publication peer review should be abolished (!). I haven’t read it yet, but I’ve found Liam Bright’s other work thought-provoking even when I haven’t agreed with it, so I’m happy to link to this sight unseen.
And here’s a wide-ranging interview with Liam Bright. Includes interesting discussion of the linkages between science, philosophy, and improving human societies.
Kareem Carr on the Monty Hall problem. Caught my eye because he doesn’t think the problem has much to do with probability. I just taught the Monty Hall problem in my intro biostats course, as a fun aside in the unit on probability. I doubt that teaching it did any harm, and it didn’t take much time away from other stuff I could’ve taught instead. But now I’m kind of thinking I either need to stop teaching it, or else get clearer in my own mind why I’m teaching it.
Ha! Nice try, buddy. You don’t get to outsource your blogging effort to me 🙂
It was worth a shot. 🙂
Have to say I completely agree on the Monty Hall door problem. I remember thinking, when my son brought it home, that it’s not a probability problem, it’s a semantics problem – or a problem of prior knowledge of the game show’s structure. In particular, one of the commenters on the Twitter thread nailed it as far as I’m concerned:
“My problem with MHP is that the situation is never well explained. The fact that Monty will ALWAYS open a goat door and ask if you want to switch is usually left out. That’s vital info. This explanation stresses the importance of that info, and therefore nails it.”
When I teach it to my students, I’m always very careful to explain that Monty knows where the car is, and always opens a door with a goat behind it.
Does it still seem paradoxical or surprising to them at that point? I always thought the paradox emerged from not really knowing that.
Yes, it does. Or at least, when I ask the class a clicker question asking if they should stay or switch (or if it doesn’t matter), a substantial minority answer that it doesn’t matter whether you stay or switch. (Very few students say you should stay). And of the majority who correctly say you should switch, an appreciable fraction have seen the Monty Hall problem before. So the fraction who are able to correctly figure it out on their own isn’t close to 100% (it’s not close to 0% either; it’s in the middle).
So, whether or not you think it’s a “paradox”, I do think it’s a *challenging question for undergrad biostats students to figure out*. I think because it’s not immediately obvious to them that Monty *has* given them information about what’s behind the remaining closed doors. Presumably because the information he’s given is indirect. As a contestant, you can’t just think to yourself “Ok, he’s given me the information that there’s a goat behind the door he opened, but who cares, because it’s not as if I’m going to switch to *that* door.” You have to ask yourself “Hey, why *didn’t* he open the door that he left closed? Maybe it’s because he *couldn’t* open it, because it’s got the car behind it.”
Of course, the other way to explain the right answer–and the one I personally find most intuitive–doesn’t appeal to any information that Monty’s given you by opening a door. Imagine a slightly different version of the game: You pick a door. Then, instead of opening one of the other two doors, Monty asks “Do you want to stick with your pick, or would you rather switch and receive *everything that’s behind both of the other doors*?” In that modified game, obviously you should switch, and you’ll have a 2/3 chance of winning the car if you do (plus a goat). Which is the same odds of winning the car as when you switch in the original game.
Here’s another modified version of the game that has the same correct answer. You pick a door. Monty, who knows where the car is and will never open the door it’s behind, opens one of the two other doors to reveal a goat. He then offers you a choice: stay with the door you picked, or switch to the other closed door. If you switch, he’ll throw in the free goat that he just revealed along with whatever’s behind the door you switched to. I think this modified version of the game does an especially good job of making clear the intuition behind switching, without appealing to the information that Monty has provided by opening one of the other two doors.
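If you’d rather see the 2/3 fall out of a computer than out of intuition, here’s a minimal simulation sketch of the standard game (purely illustrative; the code and numbers are mine, not anything from the linked post):

```python
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty always opens a door that is neither the contestant's pick nor the car.
    monty = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == car

trials = 100_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # ~1/3
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # ~2/3
```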
I learned about the MHP in the context of Bayesian probability – not as a paradox. So, the fact that Monty was always going to show you a goat was a key piece of it. When you first select a door there is one chance in three of getting it right, because your prior is that the car will be randomly assigned. As soon as Monty shows the goat, the probability of the car being behind that door drops to zero. That 1/3 probability has to move to one or both of the other two doors. If Monty actually chose which door he opened at random (so that sometimes it was the car and sometimes it wasn’t), the 1/3 probability would be evenly distributed between the remaining two closed doors. So your prior (before deciding to switch or not) would have improved to 0.5, but there would be no good rationale to switch doors. (But 1/3 of the time Monty would throw open the door to the car and ruin the suspense.) Because Monty will always show a goat, though, the entire 1/3 shifts to the door the contestant didn’t choose. So the prior on their original door remains at 1/3 but the prior on the remaining door goes to 2/3 – hence the rationale for switching.
So, it’s a nice example of how changing priors changes probability estimates and thus, decision-making.
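To make that contrast concrete, here’s the same kind of sketch for the “Monty picks a door at random” variant, keeping only the games where he happens to reveal a goat (again just an illustrative toy, not anything from the original post):

```python
import random

def random_monty_trial():
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    monty = random.choice([d for d in doors if d != pick])  # may reveal the car
    if monty == car:
        return None  # suspense ruined; this game gets thrown out
    other = next(d for d in doors if d != pick and d != monty)
    return (pick == car, other == car)  # (staying wins, switching wins)

results = [r for r in (random_monty_trial() for _ in range(200_000)) if r is not None]
print("stay:  ", sum(stay for stay, _ in results) / len(results))      # ~1/2
print("switch:", sum(switch for _, switch in results) / len(results))  # ~1/2
```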
This doesn’t seem like a paradox, but I can’t see how it’s not a probability problem. In fact, it seems like a great illustration of how our rote use of probability estimates occasionally waves away priors. For example, if you ask students what the probability is that a flipped coin will come up heads, they will say 0.5. If you then ask “What about if the coin has heads on both sides?”, they will make a face, as if you played a trick on them…because the assumption of equal priors is so strong. Could somebody explain to me how this is not a useful probability problem? I am just not getting it.
I see it as a lesson in conditional probabilities IF you fully explain Monty’s thinking/knowledge (which is missing in most of the versions I’ve heard). I guess a lesson in conditional probabilities is not too far (even mathematically) from a lesson in Bayesian probability.
That Insler preprint studying teacher effectiveness with data from the Naval Academy should be required reading for all teachers. It really confirms two of my core understandings of my role as a teacher: 1) teaching large intro courses is as much about teaching new college-level expectations and study skills as it is about teaching content, and 2) buying student approval with soft grades is bad teaching. Accurate, demanding feedback draws more out of students.
Probably the most useful result going forward is that student rankings of faculty have to be conditioned on difficulty rankings. After controlling for difficulty rankings, student opinions of faculty have some objective basis in reality (as well as, of course, many biases). But without controlling for difficulty rankings, student opinions of faculty are flat out wrong – i.e. overall scores are inversely related to student learning.
And yeah, what a data set. Randomized assignment. Diverse subject areas. And final exams standardized across sections.
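For readers who want to see how conditioning on difficulty can flip the sign of the rating–learning relationship, here’s a toy sketch with invented simulated numbers (this is not the Insler et al. model or data, just an illustration of the general point):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
quality = rng.normal(size=n)                             # latent instructor quality
difficulty = quality + rng.normal(size=n)                # better instructors demand more
learning = quality + rng.normal(size=n)                  # learning tracks quality
rating = quality - 2 * difficulty + rng.normal(size=n)   # students penalize difficulty

# Raw association between ratings and learning: negative.
print("raw slope:", np.polyfit(rating, learning, 1)[0])

# Conditioned on difficulty: positive.
X = np.column_stack([np.ones(n), rating, difficulty])
beta, *_ = np.linalg.lstsq(X, learning, rcond=None)
print("slope on rating, controlling for difficulty:", beta[1])
```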
The other reason to teach first year intro courses in such a way that students have to learn college-level expectations and study skills is because that’s when students are mentally prepared for it.
Many years ago in my dept. at Calgary, we were getting complaints from students that our first year intro bio courses were too easy–too much overlap with high school biology. The complaints came from second year students who were shocked at how hard they found our second year courses, relative to our first year courses. What our students said in surveys was that they entered university mentally prepared for it to be different than, and harder than, high school, only to find it wasn’t. And *then* to find that, actually, it *is* different and harder–but only starting in second year. The students said they’d rather have gotten a “reality check” in first year, when they were expecting it.
So we revamped our first year courses (which we did for various other reasons too).
Interesting (and confirmatory on some level that honest, accurate feedback is what students want/need, although the Insler paper highlights more – overhighlights, in my opinion – the propensity of students not to know this and to prefer soft grading over accurate information).
Well, if you ask our students now whether they like being challenged in first year bio, many might say no! Some might even say “I wish we could ease into university, and not be challenged too much until second year”! 🙂
Re: students liking soft grading, I’m recalling that a few years ago Cornell (? I think it was Cornell) started publishing the average mark in every major they offered. There was an immediate flow of students out of majors with lower average marks, into majors with higher average marks, IIRC.
Although I do wonder if some of those students ended up unpleasantly surprised, because higher average course marks don’t *always* indicate “easier” courses. Back when I was an undergrad, the student newspaper published data on the average mark in every department. IIRC, Japanese courses had some of the highest average marks on campus–an average of a 3.65 on a 4.0 scale or something. But my best friend was taking Japanese courses–he took them for four years and moved to Japan after he graduated. It was clear from talking to him that the high GPA in Japanese courses wasn’t because they were easy. They were a lot of work! But precisely because they were a lot of work, and were known to be a lot of work, the only students who took Japanese were really serious about it. Nobody took Japanese on a lark. Which is why the average mark was so high–there was self-selection, so that only super-keen students took Japanese.
Ok, commenters, if anyone wants to talk about Simpson’s Paradox, I’ve just given you your opening! 🙂
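Taking that opening: here’s a toy numerical version of the Japanese-courses story, with entirely invented marks and enrolments, showing how self-selection can make the “harder” department’s average mark come out higher even though it gives lower marks to every type of student (a Simpson’s-paradox pattern):

```python
# Hypothetical average marks and enrolments, invented purely for illustration.
avg_mark = {          # (department, student type) -> average mark
    ("Japanese", "keen"):   3.7,
    ("Japanese", "casual"): 3.1,
    ("BigIntro", "keen"):   3.9,
    ("BigIntro", "casual"): 3.2,
}
enrolment = {         # how many students of each type take each department's courses
    ("Japanese", "keen"):   95,
    ("Japanese", "casual"):  5,
    ("BigIntro", "keen"):   20,
    ("BigIntro", "casual"): 80,
}

for dept in ("Japanese", "BigIntro"):
    total = sum(enrolment[dept, t] for t in ("keen", "casual"))
    overall = sum(avg_mark[dept, t] * enrolment[dept, t] for t in ("keen", "casual")) / total
    print(dept, round(overall, 2))
# Japanese comes out around 3.67 and BigIntro around 3.34, even though BigIntro
# marks are higher for keen and casual students taken separately.
```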
More on Laurentian:
https://higheredstrategy.com/laurentian-blues-3-hillary-redux/
One striking tidbit: Apparently they put millions of dollars of research funds from CIHR, NSERC, and other federal agencies into their general operating account–which is now more or less empty. Dear lord.
Another striking tidbit: Laurentian has been more or less lying to its staff and to the general public for years about the state of its budgets. It was running deficits every year for years, but issued press releases trumpeting how it had balanced its budget every year. The Board did indeed adopt balanced budgets every year–but those budgets weren’t adhered to in practice.
“they put millions of dollars of research funds from CIHR, NSERC, and other federal agencies into their general operating account” – wow, this is starting to sound more like full-blown corporate fraud (a la Enron) than a government university slowly getting ground down by declining government funding (an all-too-common story at the moment).
All I know is what I read in the blog posts I’ve linked to, which are from someone with relevant expertise in university finances and who has read Laurentian’s court filings.
With that caveat out of the way: apparently Laurentian co-mingled all their funding in one bank account? And had always done things that way? Or at least, had been doing things that way for years?
I honestly have no idea what to make of this. Is it prima facie financial fraud, for a university to have one bank account in which it intermingles all sources of funding? Or incompetence? Or some mix of the two? Or what?
I mean, if they’d had separate accounts for operating funds, research grant funds, etc., and had moved money out of the research grant account and into the operating account, I’d say that sure seems like deliberate fraud. But if Laurentian had always had just one bank account…I honestly don’t know. Is that still fraud, or just incompetence, or what?
It doesn’t really matter what the literal bank account is. But accounting-wise you run separate balances (aka accounts). NSERC funds are designated for a specific purpose and cannot be used for any other purpose (and must be held to ensure availability when researchers call on them). If you’re spending down NSERC funds to cover the red ink on operating costs (be it salaries, heating or whatever) rather than letting them sit untouched until the NSERC awardees call on them for research, then yes, that’s fraud at the level of Enron. Think about it this way – they took money from NSERC to pay for research, then commingled the funds, then spent the commingled funds all on heating, so there is nothing left to spend on the research that NSERC thought it had funded, which is now never going to happen (or, more likely, will happen only after the province bails them out – which couldn’t happen in a corporation). On one level it sounds like complex accounting. But on another level it is just blatant fraud. From the blog post it sounds like that’s what they did. And one of the commenters asked the right question – where the heck were the auditors, who should have been reporting these deficits to the board?
Even my little $15M/year school district gets this right. We have only one bank account, but the accounting staff track multiple funds – general operating; Title monies from the federal government for specific purposes (mainly helping struggling and poor students); money from a bond voters approved for construction. All separate accounts, and I could ask the balance of any account on any day. And if we said we had a balance of, say, $1M in the Title federal money, but our revenues and bank balance wouldn’t support spending that money, the auditor would be telling the board in red letters what happened, and probably notifying the state and federal governments. This is something nearly every small NGO gets right. It didn’t happen without somebody knowing it was happening.
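For what it’s worth, here’s a bare-bones sketch of the fund accounting described above – one bank account, separate tracked balances – with hypothetical fund names and numbers, not Laurentian’s or any real district’s books:

```python
# Hypothetical fund balances tracked separately, even though the cash sits
# in a single bank account.
funds = {"operating": 2_000_000, "NSERC_research": 5_000_000, "capital_bond": 1_000_000}

def spend(fund: str, amount: int, purpose: str) -> None:
    # Refuse any spending a fund's own balance can't cover, so operating
    # shortfalls can't silently draw down restricted research money.
    if amount > funds[fund]:
        raise ValueError(f"{fund} is short by {amount - funds[fund]:,}; "
                         f"cannot cover '{purpose}' from restricted money")
    funds[fund] -= amount

spend("operating", 1_500_000, "salaries and heating")
# spend("operating", 1_000_000, "more heating")  # would raise: operating is overdrawn,
# even though the single bank account still holds plenty of (restricted) cash.
```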
I agree with all this–absolutely, any university needs to have finance and accounting procedures in place that would prevent it from (say) spending research funding on general operating expenses! And yes, I imagine that not having such procedures would meet the legal definition of fraud. I guess what I was wondering about was more: is this “fraud” in the non-legal sense of “somebody at Laurentian consciously set out to deceive others about the state of the institution’s finances”? Or is this “fraud” only in the sense that the institution has long had accounting and financial procedures that aren’t fit for purpose?
And yes, one does wonder where the auditors were on all this. One of the posts I linked to wonders the same thing–aren’t some of these issues things that any competent auditor should’ve caught in a routine annual audit?
This is not an “anybody could have made this mistake” situation. This is so far outside the norms of conventional practice that, if it was incompetence, it was incompetence at such a mind-blowing level that it doesn’t really matter whether it was fraud or incompetence. If it was incompetence, there are about 5 levels of people who weren’t qualified to be doing the jobs they were doing, because they were unable to look over the shoulders of the people reporting to them and see this. Combine that with the fact that they kept putting out press releases saying their budgets were balanced, while anybody who spent 5 minutes with even a slightly competent audit would note that the bank balance was going down – which necessarily contradicts the notion of a balanced budget. It seems to me like a lot of people were busy pretending they didn’t know what was going on. If this were a public company, shareholders would be preparing lawsuits (and ultimately winning them). Again, all based on the reporting I’m hearing. But if the reporting is accurate, this is way more than mismanagement. Intent to defraud is always hard to prove. But that is what most people would assume.
Latest on Laurentian. Interim report from the Ontario Auditor General: https://higheredstrategy.com/the-auditor-general-on-laurentian/
The tl;dr takeaway is that the Auditor General thinks the President and the Board “drove Laurentian off a cliff” (to quote a phrase from the linked blog post). Over a period of several months, they forced the institution into bankruptcy even though other courses of action were available.
As the linked post notes, the interim report doesn’t include any documents that the Auditor General reviewed in order to come to that conclusion, and the Auditor General isn’t infallible. But if you trust the Auditor General, this is a damning interim report.
Linked post also includes some speculation as to why the President would do this. Why would you drive your own institution off a cliff?