Recently we polled y'all on how you filter the literature and find papers to read. This was a follow-up to a similar poll we did five years ago. People's filtering methods surely are changing, but how, and how fast? Is it only old fogey faculty who still look at journal TOCs these days, or what?
tl;dr: No major changes from last time, although one filtering method in particular seems to be growing in popularity…
We got 304 respondents (thanks everybody!), split about evenly between grad students (30%), postdocs (30%), and faculty (26%), plus 10% government/NGO scientists and 4% others. As with all our polls, it’s not a random sample from any well-defined population. And as I’ll discuss, I suspect that it’s a fairly biased sample for purposes of the question asked. But whatever; it’s still a much bigger and more diverse sample than “people you happen to know”, so let’s talk about it.
Respondents were first asked to indicate every method they use to find papers and/or preprints to read. Here they are in descending order of popularity:
- Following up citations in other papers (used by 87% of respondents)
- Keyword searches/keyword alerts (71%)
- Journal TOCs (67%)
- Twitter (60%)
- Algorithmic recommendations (55%)
- Email and/or face-to-face conversations (53%)
- Author name searches/author name alerts (47%)
- Blogs not associated with journals (26%)
- Journal blogs (11%)
- Other (8%)
- Facebook (7%)
- Human recommendation services such as Faculty of 1000 and Peer Community In Ecology (3%)
The vast majority of respondents use at least four different methods to filter the literature. Only 3% use just one or two methods.
Respondents were then asked to indicate their primary filtering method. Here are their responses:
- Keyword searches/keyword alerts (23%)
- Journal TOCs (23%)
- “I use multiple methods about equally” (20%)
- Following up citations in other papers (12%)
- Twitter (8%)
- Algorithmic recommendations (8%)
- (no other method is primary for more than 2% of respondents)
Finally, respondents were asked to identify any filtering methods they object to and wish no one would ever use. I asked this because years ago, I got into an argument with somebody who objected to anyone using journal TOCs as a way of filtering the literature, because that helped entrench journals and postponed the Journal-Free Publishing Revolution. That made me curious: just how common is it for scientists to have blanket objections to how other scientists decide what papers to read?
Turns out it’s more common than I thought, but not all that common in an absolute sense: 73% of respondents had no objection to anyone using any of the listed filtering methods. The remaining 27% disagreed with one another as to which methods were objectionable. Every single filtering method listed had at least one person object to its use. The most commonly disapproved-of filtering methods were Twitter (disapproved of by 9% of respondents), Facebook (7%), author name searches/author name alerts (4%), and algorithmic recommendations (4%).
Use of journal TOCs increases with seniority: 59% of grad student respondents use journal TOCs, vs. 72% of postdocs and 77% of faculty. However, use of journal TOCs as one’s primary filtering method doesn’t vary much with seniority (22% of grad student respondents, 26% of postdocs, and 25% of faculty).
Use of Twitter as a filtering method doesn’t vary much with seniority: 64% of grad student respondents and 66% of postdocs use it vs. 59% of faculty. Nor does use of Twitter as a primary filtering method vary with seniority.
Grad students seem to be a bit more likely than more senior people to follow up citations in papers they read as their primary filtering method. 21% of grad student respondents say that’s their primary method, vs. 12% of postdocs and 13% of faculty. Presumably that’s because students who are first getting up to speed on the literature, and students studying for candidacy exams, are especially likely to follow up citations in papers they read.
There’s a hint that more senior people may be less judgmental of other people’s choices of filtering method. 30% of graduate student respondents object to at least one of the listed filtering methods, vs. 24% of postdocs and 21% of faculty.
Objection to Twitter (the most objected-to filtering method) doesn’t vary appreciably with seniority. If anything, the trend is for decreasing objections to Twitter with increasing seniority (8% of grad student respondents object to Twitter as a filtering method, vs. 7% of postdocs and 4% of faculty).
- Many of these results are the same as five years ago. Most people still use several filtering methods. Traditional methods like journal TOCs and face-to-face conversations are still among the most widely-used methods. Certain new and new-ish methods like Facebook, journal blogs, and human recommendation services remain little used. And senior and junior people use pretty similar mixes of methods. (aside: the previous poll specifically asked about methods other than searches, which in retrospect was dumb of me. So you can’t compare use of keyword and author searches in this poll to their use in the previous poll.)
- The big difference with five years ago is that far more people now report using Twitter as a way of filtering the literature. Five years ago, Twitter was one of the less-popular filtering methods; in this poll it was one of the most popular. I’m sure that’s a real change. But I’m not at all confident in its magnitude, because of likely sampling bias. I bet our readers are far more likely to use Twitter than are randomly-chosen scientists (only about 20% of US scientists are on Twitter, if memory serves, which it may not).
- I would not assume that use of journal TOCs will decline over time because their use increases with seniority. Those grad students who go on to become postdocs, and those postdocs who go on to become faculty, might be more likely than others to use journal TOCs. And students who don’t currently use journal TOCs might start doing so as they progress in their careers. Might be interesting to poll people on how their own filtering methods have changed over the years.
- I am very curious to hear comments from people who object to anyone using one or more of the listed filtering methods. In particular, I hope to hear from people who object to methods other than social media or algorithms (because I'm pretty sure I know why small minorities object to those filtering methods). Frankly, I'm shocked that anyone would object to other people reading papers suggested to them by their colleagues, or following up citations in papers they read, or doing keyword searches or author name searches. I'm sure people must have their reasons for these objections, but for the life of me I can't imagine what they are! Which of course just shows that I am unimaginative. So if you object to any of these filtering methods, I hope you'll explain why in the comments. I'm not looking to rip anyone, I'm just genuinely curious. It's like if someone told me that they object to anyone riding buses, or eating ice cream, or having a pet dog. My response would be "Really? Why?"
One further thought: human post-publication recommendation services don't seem to fill much of a need, presumably because the recommendations aren't tailored to individual readers. I want to read what *I* want to read, not what (somebody else thinks) some generic ecologist ought to read. I suspect most people feel the same. So I need to either do my own filtering (journal TOCs for the journals *I* choose; keyword/author searches), or outsource my filtering to someone or something that knows enough about me to anticipate my needs and desires (my colleagues; my Twitter friends; the Twitter feeds of the strangers *I* choose to follow; the Google Scholar recommendation algorithm…).
If that’s right, it also helps explain why journal blogs aren’t a particularly popular filtering method. Journal blog posts talking up the journal’s papers are not tailored to individual readers. Which means that, as a filtering method, they’re redundant with the journals themselves. (Note that I do think there are other ways that journal-associated blogs can “add value”, besides helping readers choose papers to read.)
Maybe if you want your recommendation service to be taken up, you need to somehow develop a “journal-like” reputation? The papers you recommend need to be sufficiently interesting/important/etc, and concern a sufficiently coherent/focused range of topics, that you build an audience for your recommendations. Just like how a successful journal has an audience.
Which is of course the idea behind “arXiv overlay journals”, that just recommend preprints on certain topics from the arXiv preprint server. And it’s more or less the idea behind Peer Community in Ecology, as I understand it. I’m curious whether that model will take off. I guess it might depend in part on how many readers only want to read peer-reviewed stuff, and how many of those readers will come to trust the recommenders to serve as peer reviewers.
Great post Jeremy, thanks! Just a detail: at Peer Community in Ecology (and PCI Evolutionary Biology and PCI Paleontology), recommenders are not the peer reviewers of preprints. They act as editors: they organize the peer review of the preprints, invite peer reviewers (who may be from outside PCI), and make decisions on the basis of the reviews. Same as journals, but with one big difference: it disconnects publication (preprint servers) from evaluation.
Meghan has taken to commenting on Twitter rather than on her own blog, apparently.