Also this week: the wisdom of Randall Munroe, banning students from emailing you, the benefits of active learning, and more. Oh, and buried in one of the entries is the story of how “functional groups” are a statistical artifact.
Active learning raised average test scores more than 3 percentage points, and significantly reduced the number of students who failed the exams, the study found. The score increase was doubled, to more than 6 percentage points, for black students and first-generation college students.
Will Pearse reanalyzes the raw data from that new study of the declining explanatory power of ecology, and finds that while the mean R^2 value reported in ecology papers is declining over time, there’s so much scatter around the trend that the trend arguably isn’t worth worrying about. Click through for a good discussion in the comments over there. (ht downwithtime, via the comments)
“Finally, we replaced [the] data with random numbers and continued to find very large numbers of apparently statistically significant effects.” Ouch. This check is actually quite generally useful, especially for exploratory analyses or any sort of model selection: randomize the data (or replace it with random numbers), repeat the exploration or model selection process, and see if you still find any patterns. Here, this check reveals that sorting study subjects into two groups generates “significant” effects in subsequent analyses, even if the groupings are completely arbitrary. Reminiscent of the findings of Owen Petchey, and of Wright et al., that standard ways of assigning plant species to “functional groups” have no more explanatory power for ecosystem function than arbitrary assignments. It’s the mere fact that you’re lumping species into groups that does all the “explaining”. (Protip for students: if you read that last sentence and went “Wait, functional groups are an artifact?!”, that’s a sign that you should click through.) (ht Not Exactly Rocket Science)
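To make the randomization check concrete, here’s a minimal sketch in Python. It isn’t taken from any of the studies linked above; the subject counts, variable counts, and the choice of t-tests are all illustrative assumptions. The point is just to show that pure noise, split into two arbitrary groups and tested many times, reliably yields nominally “significant” effects:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical setup: 40 "subjects" measured on 200 variables,
# all pure noise -- there is no real signal anywhere.
n_subjects, n_vars = 40, 200
data = rng.normal(size=(n_subjects, n_vars))

# Arbitrarily split the subjects into two equal groups,
# as a study sorting subjects into groups might.
groups = rng.permutation(n_subjects) < n_subjects // 2

# Run a t-test on every variable and count the nominally
# "significant" hits at the conventional p < 0.05 threshold.
pvals = np.array([
    stats.ttest_ind(data[groups, j], data[~groups, j]).pvalue
    for j in range(n_vars)
])
n_sig = int((pvals < 0.05).sum())
print(f"{n_sig} of {n_vars} noise variables 'significant' at p < 0.05")
```

With 200 tests at a 5% threshold you expect roughly 10 false positives even though the data are random numbers. Running the same script on your real analysis pipeline, with the data replaced by noise, tells you how many “discoveries” your procedure would hand you for free.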
But I’m also wary of people saying “everyone should know” some skill from their area of expertise, because people have their own stuff to deal with. It’s easy for me to imagine an abstract person and then say, “Wouldn’t it be better if that person knew how to program?” And maybe it would. But real people are complicated and busy, and don’t need me thinking of them as featureless objects and assigning them homework. Not everyone needs to know calculus, Python or how opinion polling works. Maybe more of them should, but it feels a little condescending to assume I know who those people are.
Keep this in mind the next time you want to argue that “everybody” (like “all ecologists” or “all ecology students”) should learn X, or more of X. Popular values of “X” for ecology include “natural history”, “programming”, “math”, and “statistics more advanced than GLMs”. In general, if you think that ecologists should learn more of anything, I think that you should also say what ecologists should learn less of in order to free up the time. Curriculum design is always about hard choices. I’ll probably post more on this soon.
BioDiverse Perspectives has an interview with John Harte. Here’s one choice quote:
[E]verywhere we look we see uniqueness, but being a scientist I refuse to accept that and I look for what general underlying patterns and principles govern this wealth of phenomena.
Which makes for a contrast with Tony Ives:
It’s the differences among lakes that I think are interesting. As a theoretical ecologist, you might think that I’m motivated by general laws. But I don’t find general laws very interesting. I really like solving problems.
I’ll try to do a post on this contrast in the near future.
One way to use online preprints is to get feedback on an idea that might be brilliant, or might be wrong/known/silly. Blogs can be used the same way (e.g., this). Of course, whether it works depends on how many people read your preprint or blog post and are willing to comment on it.