Also this week: ASN award winners, working from home vs. scientific fraud (?), WEB Du Bois vs. scientific fraud (?), cliches of scientific writing, crowdsourced species interactions database, and more
Kiyoko Gotanda on her experiences searching (successfully!) for a faculty position before and during the pandemic. Which included a Zoom interview from the Galapagos!
Nicola Smith’s powerful story of discovering an (honest, hard-to-detect) mistake that undermined years of her lab’s work, how she did the right thing in response, and how those around her reacted. Related: Meghan’s story of a similar experience. Mistakes happen in science.
Philosopher Liam Bright’s recorded talk on why scientific fraud (and replication failure more broadly) happens. Discusses WEB Du Bois’ proposal for how science should be organized around pure truth-seeking so as to prevent fraud. I had no idea that Du Bois wrote on this, so I found this super-interesting (even though Du Bois, Bright, and I all disagree with one another to some extent about how science should be organized). It’s a super-interesting talk in part because Du Bois had some ideas that sound a lot like many contemporary reform proposals, and some other ideas that are very much out of line with contemporary reform proposals. That’s one of the best reasons to learn about old stuff–to expand your horizons. It helps you realize that there are different ways to mix and match ideas, besides however everybody mixes and matches them today. At the end, the talk raises the difficult but important issue of to what extent the progress of science depends on individual scientists having the right motives. Alternatively, maybe scientific progress requires science to be organized not so as to inculcate the right motives in all scientists, but so that progress happens even if scientists have various motives and some of those motives are bad. Related to Mark Vellend’s recent guest post for us, this guest post from Peter Adler, and this post from Brian.
Falko Buschke weighs in on whether we should trust the Living Planet index. Post includes some nice graphs. Falko’s responding in part to Brian’s recent post.
Writing in Science Advances, Squazzoni et al. look for evidence of gender bias in peer review outcomes, using what looks to be a quite detailed dataset covering 145 journals across all scholarly fields, 1.7 million authors, and 740,000 reviewers. Results suggest that papers by women are very slightly more likely to be accepted than those by men, even after controlling for the reviewer scores, particularly in fields in which women comprise a smaller fraction of authors. As the authors themselves note, their results need to be interpreted carefully, and their data don’t allow them to address many of the questions that one would ideally like to address.
This surprises me: Covid-19 had basically no effect on submissions or peer review at the six British Ecological Society journals through Oct. 2020. Not on the number of submissions, not on the gender balance of submissions, not on the geographic origin of submissions, not on the speed with which submissions were handled. The only (small) effects were that reviewers agreed or declined to review a bit sooner, and those who agreed to review returned their reviews faster. I’ve heard through the grapevine that not all of these results generalize to all ecology journals. I hear there are journals where submissions are up since the pandemic began, and journals for which reviewers are declining to review more often. And of course, it’s possible that the pandemic might have effects that only show up after Oct. 2020. But I wouldn’t dismiss these BES results just because they contradict your intuitions, or what you’ve heard through the grapevine. The six BES journals are leading journals in the field, and between them they get a lot of submissions. These (reassuring!) results can’t be written off as some kind of weird blip just because they aren’t what you (or I) were expecting. I think it would be useful for more journals to publish data on this.
Whistleblower allegations of financial fraud to the US Securities and Exchange Commission are up 31% this year, due to a surge in allegations that began after much of the financial industry switched to working from home in March 2020. At the link, financial commentator and ex-Goldman Sachs banker Matt Levine suggests that allegations are up because people don’t feel so attached to their colleagues when they’re working from home. If you’re working from home, you’re less likely to get acculturated into a dodgy workplace culture, and less likely to get to know your co-workers well enough to be willing to turn a blind eye when they do something dodgy. Here’s my question: does the same apply to scientific fraud? Have allegations of scientific fraud gone up since last March, for the same reason? I have no idea; just musing. Related: my little series of posts on analogies between scientific fraud and financial fraud, which starts here. Coincidentally, I started that series of posts last March…
I know this feels like a formality because everyone long ago decided to stop citing Jonathan Pruitt’s research papers, but for the record, he’s had another one retracted. This one’s Lichtenstein et al. 2016 Am Nat. The retraction is for a long unexplained sequence of duplicated data points, and nearly identical distributions of spider colony sizes across different collection sites. Pruitt did not agree to the retraction. Aside: it’s striking to me (not surprising, but striking) just how little interest there is on social media about #pruittdata these days. I’m old enough to remember when every new scrap of information about #pruittdata would go massively viral.
I’m late to this, but here’s Ken Hughes with a fun data-based dive into cliches of scientific writing.
I’m even later to this: remove cognitive overhead from your papers.
The illusion of explanatory depth. That is, people think they understand complex phenomena with much more precision, coherence, and depth than they actually do. Decade-old psychology paper I just found. Looking forward to reading it. (ht Ken Hughes)
Congratulations to 2020 ASN award winners Sharon Strauss (Sewall Wright Award) and Tia-Lynn Ashman (E. O. Wilson Naturalist Award).
There’s a new board game based on Charles Darwin in the Galapagos. (ht a commenter)
Check out https://speciesconnect.com/, a new community science effort to build a global database of species interactions, put together by a group of scientists including Cristian Solari. It can simply record that an interaction occurs, or store detailed notes and case studies of interactions. It’s off to a quick start by building on some existing databases, but the creators hope the scientific community will add significantly more data.
Hey Jeremy, I’m really interested to watch Liam Bright’s talk because I’ve been part of a grad student seminar for the last couple of years on what separates science from non-science. And the students seem to be closing in on the conclusion that the characteristics separating scientific knowledge from other ways of “knowing” are (1) the belief that there is an objective reality, (2) a real commitment to finding the truth about that objective reality, and (3) predominantly using empirical evidence to support claims about the objective reality. Characteristic 2 allows for various motives, but not motives that conflict with a sincere commitment to ‘truth’.
So tempted to spoil the talk (even more than I already have!) by telling you why Liam Bright disagrees with Du Bois on whether scientists should have truth-seeking as their only motivation….
I didn’t find Liam’s argument against truth-seeking very convincing. Even his example about mask-use wasn’t completely convincing, because up until covid-19, when mask-use received more investigation, the evidence for or against mask-use was ambiguous. My understanding is that there was not strong evidence for mask-use in those early days. Certainly not as strong as the current evidence. I’m not convinced that the reason for the early mask-messaging was as Machiavellian as Liam describes. And he acknowledges that he’s describing a hunch. Further, even if he’s right, there is a distinction between scientists being motivated by truth-seeking and officials trying to affect public policy and behavior. There may be reasonable arguments for misrepresenting science if you believe that the scientific evidence will lead to inferences that will cause public harm. But those decisions are not ones that scientists should make if they are truly going to do science – that is for decision makers.
Lastly, the grad students weren’t trying to sort out how to avoid fraud, or even distinguish bad science from good science. They were trying to distinguish science from non-science. And Popper’s idea about testable hypotheses as an objective way of distinguishing science from non-science didn’t work for them, because they could come up with testable hypotheses about astrology that could be assessed with empirical evidence. But without a true commitment to sorting out whether there was anything to astrology, data could always be cherry-picked to provide support. They were convinced (and I haven’t been able to find an argument against this, but that may be a failure of my imagination) that what would distinguish science about astrology from non-science about astrology would be the researcher’s commitment to finding out what was true about astrology. Without the commitment to finding the truth, there was no other way to ensure that they were doing science.
One last caveat: the students weren’t suggesting that truth-seeking should or could be the only motive of a scientist, or even sufficient motivation, but that it was necessary.
Hmm. What about, say, the QAnon folks? In their own minds, they have a very strong commitment to discovering the truth. All conspiracy theory adherents do.
Do you really think QAnon folks have a strong commitment to discovering the truth? Aren’t they already convinced they know what the truth is? A strong commitment to discovering the truth includes the ability to identify potential empirical evidence that would convince you to change your mind about what you believe right now. For example, I don’t believe in astrology but if you could demonstrate that predictions based on astrological signs and planetary motion predicted future events better than random guesses I would be on the way to changing my mind.
My understanding, from very limited reading, is that popular conspiracy theories have a rabbit hole character. There’s always another layer of the onion to be peeled away. A new, deeper truth to discover. I found this discussion interesting: https://medium.com/curiouserinstitute/a-game-designers-analysis-of-qanon-580972548be5
Do you mean Oct. 2020, by the way? I may be misunderstanding your commentary if you do mean Oct. 2019 in those two places…
I was also super happy to see the talk by Liam linked here, as I hadn’t seen it before. I recently started following him on Twitter. Perhaps most interestingly, the talk was in a seminar from my alma mater, which does not even offer humanities degrees or more than a couple philosophy courses in a semester. Very nice to see that they are engaging with some neat speakers in this seminar series!
Yes, meant Oct. 2020. Will fix it, thx.
Check out bit.ly/stat-thinking for nice illustrations of changing your mind based on evidence; if you like that, the first video in the series is https://youtu.be/OJt-k9h9pmk.