Note: I’ve been struggling for weeks with how to write this post. I eventually decided it’s just one blog post and I’m worrying too much about it. I just need to write down my scattered thoughts and get them out there so my brain can move on, trusting our great commenters to start a conversation that’s better than the post.
Also, just to be clear: the post isn’t primarily about academic misconduct, although I do refer to examples of that. And I am not saying that all attempts to “game the system” in academia are tantamount to misconduct! For instance, see this old post and this one.
Academic science is competitive: scientists and their work get evaluated relative to other scientists and their work. That’s as it should be and there’s no changing it anyway. But in most contexts evaluating scientists and their work relative to one another necessarily involves judgement calls, shortcuts, and heuristics. Which creates opportunities to “game the system”, which for purposes of this post I’m interpreting in a deliberately-broad sense.
And once in a while the scientific evaluation system does get gamed in a big way. Think of Diederik Stapel, who became a really high-profile, influential social psychologist by faking the data in dozens of papers. Or the recent case of sociology grad student Michael LaCour, who faked all the data in a Science paper and got hired by Princeton as a result (presumably, he’ll soon be unhired and lose his PhD). Further back, think of Mark Spector.
Much more commonly, the scientific evaluation system gets gamed in much smaller ways, many (not all) of which are perfectly ethical. I'm sure you can think of all sorts of practices that could be lumped under the heading of "playing the game":

- "Salami slicing" to increase your publication count.
- Submitting every ms you write to Science, then immediately resubmitting the rejected ms to Nature, then immediately resubmitting that rejected ms to PNAS, and so on down the "journal ladder" one "rung" at a time.
- Including stuff other than peer-reviewed papers in the "publications" section of your cv.
- Self-promotion of various sorts–everything from nominating yourself for awards to blogging and tweeting about your own work.
- Acknowledging people who you think won't like your ms, in the hope that this will prevent the editor from asking them for reviews.
- Trying to spin your work as having more applied relevance than it actually has.
- Going out of your way to meet famous scientists at conferences, in the hope that it will boost your name recognition and thus your career.
- Dressing a certain way when interviewing for a job, so as to make a good impression on the search committee.

And on and on. Not all such practices are problematic. But enough of them are that I've seen complaints that academic science has devolved into nothing but corrupt games-playing, serving no purpose other than naked careerism. Or, less dramatically, complaints that games-playing has come to matter too much, substantially advantaging those who know the obscure, arbitrary rules of the game and are willing to play along, and disadvantaging others who are at least as good at science.
I mostly don’t buy those complaints. I think the scientific evaluation system mostly works reasonably well. And insofar as it fails, I don’t think people gaming the system is a big problem. That’s for a couple of reasons. First, many of the “games” involved in academic science aren’t actually “games” in the sense of fundamentally pointless activities with arbitrary rules. Rather, they have a point, as do their “rules”. Second, trying to game the system mostly doesn’t work. At best, it gains you no appreciable advantage, and quite often it costs you. The signals of scientific merit may well be noisy signals–but they’re mostly honest signals. So paradoxically, the best way to “play the game” of academic science generally is not to approach it as if it were a game, but instead to just do science as best you can. Trying to “game the system” is mostly a bad idea even on narrow, ruthlessly careerist grounds.
To elaborate on the first reason, many of the usual evaluation practices of academic science have a point; they aren’t arbitrary.* There are good reasons for preferring scientists with many publications over scientists with few, all else being equal. There are good reasons for publishing in widely-read selective journals, and for paying attention to what’s published in such journals. There are good reasons to “network” at conferences that have nothing to do with naked careerism or self-promotion. There are good reasons for caring whether a prospective grad student has written a personalized and informative inquiry email. Etc. Yes, there are incentives and opportunities for people to try to game the system. But you can’t eliminate such incentives and opportunities. Evaluating science and scientists differently would just create opportunities for people to game the system in different ways. And there will be incentives to game the system as long as science remains a desirable career.
To elaborate on the second reason, little ways of trying to game the system don’t actually make any appreciable difference to your career prospects, either individually or in aggregate. “Every little helps”, as the saying goes–but so marginally that the effect is swamped by stuff you can’t easily fake, and stuff you can’t even control.** For instance, you think merely introducing yourself to Dr. Famous (even many Dr. Famouses) at the ESA meeting is going to help your career by “getting your name out there”? Think again. And other little ways of trying to game the system are easily seen through, and so are almost sure to backfire.*** Any competent scientist in your field can tell from your cv if you’re salami slicing, will spot it instantly if you try to pass off letters to the editor of Nature as Nature papers, rolls their eyes at over-the-top salesmanship about the applied importance of your work, will notice if you’re self-citing often enough to make a material difference to your citation counts, etc. Sending everything you write to Science, Nature, and PNAS regardless of appropriateness just wastes everyone’s time (including yours) and grows your shadow cv rather than your real one. Trying to increase your publication rate by quickly resubmitting rejected mss without bothering to revise mostly just gets you a reputation you don’t want. The most likely effect of trying to trick editors into using or avoiding certain reviewers is that it will embarrass you. Etc.
Heck, even big ways of trying to game the system mostly don’t work. For instance, I’ve probably spent more time blogging over the past few years than any ecologist in the world, and quite successfully. If you think of blogging as a way of gaming the system, because you think it’s a form of self-promotion or vanity publishing, then I’ve probably gained as many undeserved rewards as it’s possible to gain via blogging. Which is to say, hardly any tangible rewards at all! Or think of how long and hard folks like Stapel and Spector had to work to fake stuff in a big enough way to make a material difference to their careers, and how carefully they had to hide what they were doing–and they still got caught! Bottom line: producing good science is costly, in various ways, and so the production of good science is an honest signal that mostly can’t be reliably faked (even though our systems for evaluating science and scientists are noisy).****
One reason I worry about this is that I hate to see people needlessly anxious about stuff that’s not worth worrying about. At least, not worth worrying about as much as many people seem to. For instance, Thea Whitman has a wonderful blog post on her successful job interview, in which she talks about how she succeeded by just being herself, rather than worrying too much about how to come across as someone or something she’s not. I like that attitude–choose your own path and own your choices.
p.s. For some pushback against this post in the context of universities–rather than individual academics–all gaming the system in a big way, see here.
*Other possible evaluation practices also have a point. That’s why practices sometimes change. But the fact that we don’t all agree on evaluation practices doesn’t show that our current practices are broken or just pointless games or whatever.
**Paradoxically, the very same competitiveness that creates incentives to game the system also helps make the system difficult to game. If you’re in a race with a bunch of very fast, well-trained elite runners, there’s nothing you can easily do either to make yourself faster than the competition, or to cause other people to mistakenly think you’re faster than the competition.
***In this respect, unethical little ways of trying to game the system in academia are like undergraduate academic misconduct. Like many instructors, I’m always struck by how students who commit academic misconduct mostly do so in ways that are easily detected, and mostly would provide very small rewards even if they went undetected.
****Insofar as people think otherwise, I think it’s for a few reasons: (i) They overgeneralize from the rare big exceptions, like the Stapel case. (ii) They mistake noisy signals of merit for dishonest ones. (iii) They mistake noisy signals for completely uninformative noise (often also thinking, incorrectly, that the noise level could be substantially reduced). (iv) They misunderstand what signals the recipients are looking at (e.g., mistakenly thinking that faculty search committees just toss applications from anyone without a Nature or Science paper, or anyone with less than X publications). (v) They misunderstand what unobservable attributes the recipients are looking for signals of, for instance misinterpreting the desire of universities to hire independent scholars as a desire to hire people who don’t collaborate.