I keep a file of ideas for blog posts. Many of the ideas have been in the file for years. Clearly I’m not likely to ever get around to writing them up! So, in the spirit of The Prehistory of the Far Side, in which cartoonist Gary Larson shared a bunch of sketches that he never developed into published cartoons, here are some ideas from my file.
As with Gary Larson’s sketches, you will probably think that some of these ideas would’ve made great posts. And that some would’ve made bad posts. And that others would’ve made…strange posts. Whatever your opinion of each of these ideas: I agree with it. 🙂
Also, you should see the ideas I didn’t include in this post! On second thought, actually you shouldn’t. Which is why I didn’t include them. 🙂
The ideas are written as notes to myself, which is why they may read a bit oddly to you.
- How are ecology papers discussed on Twitter? Is it mostly people, bots, and journals just tweeting announcements or summaries of their own papers? See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5570264/ for a study of this in the context of dentistry papers. If most of the activity is just people/journals tweeting their own stuff, is that a problem? If it is, is there anything that could be done about it?
- In a past poll on this blog, “distributed experiments” stood out as a research approach that many ecologists would like to adopt, and would like to see other ecologists adopt, even though few ecologists currently use the approach themselves. Why aren’t there more distributed experiments, and more ecologists participating in distributed experiments, if everyone thinks they’re so great? Is this a sort of “market failure”: a lot of “unmet demand” for distributed experiments?
- How frequently do leading ecology journals publish comments? Has the frequency declined or increased over time? Could do a full-text search on JSTOR to get at this.
- What common practices from other fields should EEB adopt? Example from philosophy (which may be specific to our philosophy dept rather than the field as a whole): the practice of taking a 5 min. break after a seminar to allow audience members time to think of better questions.
- Compare ecology vs. evolution papers in terms of (i) the covariates used in their meta-analyses, and (ii) whether the hypotheses tested in their papers derive from mathematical theory. My vague anecdotal sense is that evolution research papers and meta-analyses tend to be grounded in mathematical theory much more often than ecology research papers and meta-analyses are. But is that right?
- Tally up the topics covered by Nature and Science papers in ecology over the years. Are Nature and Science papers in ecology disproportionately likely to be about trendy bandwagons, compared to what’s published in leading ecology journals? Are they disproportionately likely to involve global data? Etc.
- The process-pattern two-step. Attempts to infer process from pattern in ecology often morph into muddling of pattern and process. Patterns that start out being taken (often incorrectly) as symptoms of some underlying process come to be interpreted as defining that underlying process.
- Why did some of ecology’s big ideas in the 1960s and 70s subsequently develop into productive research programs that panned out, while others ran into dead ends or turned into zombie ideas?
- What circumstances favor optimism over pessimism? Is either ever favored over an accurate assessment of the downside and upside risks? And can one draw analogies between optimism and pessimism in, say, optimal habitat selection under uncertainty, and optimism and pessimism of scientists when choosing what lines of research to pursue? Pretty sure I’ve seen a paper on this in the context of habitat selection.
- Via a remark of Philip Kitcher: John Dewey has a critique of “knowledge for its own sake”; he says that research programs seeking it can deteriorate (Reconstruction in Philosophy, p. 164). Should look this up, see if it’d be good post fodder.
- A post on the importance of picking away at things you don’t understand. If something is puzzling to you, it’s probably puzzling to other people. And often, it’s not the case that there’s an answer to the puzzle already out there. Example from my own work: the spatial hydra effect. Example I’m still puzzling over: Hatton et al. 2015.
- As a scientist, under what circumstances should you “follow the crowd”? That is, ask questions or adopt approaches just because those questions/approaches are popular? I think about this in the context of being able to maintain my research program in protist microcosms with graduate students (not just summer undergrads, who are happy to work on whatever I tell them to work on, and who aren’t hard to come by at my university). If few/no grad students want to work on protist microcosms, at least not with me, should I switch to working in a more popular study system?
- What if you modeled the entire scientific peer review system with a model analogous to “noise trading” in financial markets? That is, a pure “Keynesian beauty contest”: no paper is intrinsically better than any other, on any dimension. Everybody just tries to guess which papers everybody else thinks are the “best.” Stefano Allesina could probably modify his toy models of the peer review system to consider this weird limiting case. In what respects would this case resemble the real world, and in what respects would it not?
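To make that last idea a bit more concrete, here’s a minimal sketch of what the pure beauty-contest limiting case might look like as a simulation. To be clear, this isn’t Allesina’s model or anyone else’s published model; the function name, parameter values, and updating rule are all my own made-up illustrative choices, chosen only to capture the key assumption that no paper has any intrinsic quality.

```python
import random

def beauty_contest(n_papers=20, n_reviewers=50, n_rounds=30, seed=0):
    """Toy 'Keynesian beauty contest' model of peer review (illustrative only).

    No paper has any intrinsic quality. Each reviewer starts with a random
    private guess of each paper's 'merit', then repeatedly revises that guess
    halfway toward the previous round's consensus (the mean guess across all
    reviewers). Reviewers come to agree on a ranking, but the ranking is just
    amplified initial noise.
    """
    rng = random.Random(seed)
    # Initial guesses: pure noise, one number per (reviewer, paper) pair.
    guesses = [[rng.random() for _ in range(n_papers)]
               for _ in range(n_reviewers)]
    for _ in range(n_rounds):
        # Consensus 'merit' of each paper = mean guess across reviewers.
        consensus = [sum(g[p] for g in guesses) / n_reviewers
                     for p in range(n_papers)]
        # Each reviewer moves halfway toward the consensus.
        guesses = [[0.5 * g[p] + 0.5 * consensus[p]
                    for p in range(n_papers)]
                   for g in guesses]
    consensus = [sum(g[p] for g in guesses) / n_reviewers
                 for p in range(n_papers)]
    # Final ranking: papers sorted by consensus 'merit', 'best' first.
    return sorted(range(n_papers), key=lambda p: -consensus[p])
```

A neat feature of even this crude version: reviewer disagreement shrinks every round, so the model produces apparent expert agreement on which papers are “best” despite there being nothing to be an expert about. Comparing how fast (or whether) real reviewers converge might be one way to ask how far the real world is from this limiting case.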