Honest signals in academia (or, why academics mostly can’t game the system and shouldn’t try)

Note: I’ve been struggling for weeks with how to write this post. I eventually decided it’s just one blog post and I’m worrying too much about it. I just need to write down my scattered thoughts and get them out there so my brain can move on, trusting our great commenters to start a conversation that’s better than the post.

Also, just to be clear: the post isn’t primarily about academic misconduct, although I do refer to examples of that. And I am not saying that all attempts to “game the system” in academia are tantamount to misconduct! For instance, see this old post and this one.


Academic science is competitive: scientists and their work get evaluated relative to other scientists and their work. That’s as it should be and there’s no changing it anyway. But in most contexts evaluating scientists and their work relative to one another necessarily involves judgement calls, shortcuts, and heuristics. Which creates opportunities to “game the system”, which for purposes of this post I’m interpreting in a deliberately-broad sense.

And once in a while the scientific evaluation system does get gamed in a big way. Think of Diederik Stapel, who became a really high-profile, influential social psychologist by faking the data in dozens of papers. Or the recent case of political science grad student Michael LaCour, who faked all the data in a Science paper and got hired by Princeton as a result (presumably, he’ll soon be unhired and lose his PhD). Further back, think of Mark Spector.

Much more commonly, the scientific evaluation system gets gamed in much smaller ways, many (not all) of which are perfectly ethical. I’m sure you can think of all sorts of practices that could be lumped under the heading of “playing the game”. “Salami slicing” to increase your publication count. Submitting every ms you write to Science, then immediately resubmitting the rejected ms to Nature, then immediately resubmitting that rejected ms to PNAS, and so on down the “journal ladder” one “rung” at a time. Including stuff other than peer-reviewed papers in the “publications” section of your cv. Self-promotion of various sorts–everything from nominating yourself for awards to blogging and tweeting about your own work. Acknowledging people whom you think won’t like your ms, in the hopes that this will prevent the editor from asking them for reviews. Trying to spin your work as having more applied relevance than it actually has. Going out of your way to meet famous scientists at conferences, in the hopes it will boost your name recognition and thus your career. Dressing in a certain way when interviewing for a job, so as to try to make a good impression on the search committee. And on and on. Not all such practices are problematic. But enough of them are that I’ve seen complaints that academic science has devolved into nothing but corrupt games-playing, serving no purpose other than naked careerism. Or less dramatically, complaints that games-playing has come to matter too much, substantially advantaging those who know the obscure, arbitrary rules of the game and are willing to play along, and disadvantaging others who are at least as good at science.

I mostly don’t buy those complaints. I think the scientific evaluation system mostly works reasonably well. And insofar as it fails I don’t think people gaming the system is a big problem. That’s for a couple of reasons. First, many of the “games” involved in academic science aren’t actually “games” in the sense of fundamentally-pointless activities with arbitrary rules. Rather, they have a point, as do their “rules”. Second, trying to game the system mostly doesn’t work. At best, it mostly gains you no appreciable advantage, and quite often it costs you. The signals of scientific merit may well be noisy signals–but they’re mostly honest signals. So paradoxically, the best way to “play the game” of academic science generally is to not approach it as if it was a game, but instead to just do science as best you can. Trying to “game the system” mostly is a bad idea even on narrow, ruthlessly careerist grounds.

To elaborate on the first reason, many of the usual evaluation practices of academic science have a point; they aren’t arbitrary.* There are good reasons for preferring scientists with many publications over scientists with few, all else being equal. There are good reasons for publishing in widely-read selective journals, and for paying attention to what’s published in such journals. There are good reasons to “network” at conferences that have nothing to do with naked careerism or self-promotion. There are good reasons for caring whether a prospective grad student has written a personalized and informative inquiry email. Etc. Yes, there are incentives and opportunities for people to try to game the system. But you can’t eliminate such incentives and opportunities. Evaluating science and scientists differently would just create opportunities for people to game the system in different ways. And there will be incentives to game the system as long as science remains a desirable career.

To elaborate on the second reason, little ways of trying to game the system don’t actually make any appreciable difference to your career prospects, either individually or in aggregate. “Every little helps”, as the saying goes–but so marginally that the effect is swamped by stuff you can’t easily fake, and stuff you can’t even control.** For instance, you think merely introducing yourself to Dr. Famous (even many Dr. Famouses) at the ESA meeting is going to help your career by “getting your name out there”? Think again. Other little ways of trying to game the system are easily seen through, and so are almost sure to backfire.*** Any competent scientist in your field can tell from your cv if you’re salami slicing, will spot it instantly if you try to pass off letters to the editor of Nature as Nature papers, rolls their eyes at over-the-top salesmanship about the applied importance of your work, will notice if you’re self-citing sufficiently often to make a material difference to your citation counts, etc. Sending everything you write to Science, Nature, and PNAS regardless of appropriateness just wastes everyone’s time (including yours) and grows your shadow cv rather than your real one. Trying to increase your publication rate by quickly resubmitting rejected mss without bothering to revise mostly just gets you a reputation you don’t want. The most likely effect of trying to trick editors into using or avoiding certain reviewers is that it will embarrass you. Etc.

Heck, even big ways of trying to game the system mostly don’t work. For instance, I’ve probably spent more time blogging over the past few years than any ecologist in the world, and quite successfully. If you think of blogging as a way of gaming the system, because you think it’s a form of self-promotion or vanity publishing, then I’ve probably gained as many undeserved rewards as it’s possible to gain via blogging. Which is to say, hardly any tangible rewards at all! Or think of how long and hard folks like Stapel and Spector had to work to fake stuff in a big enough way to make a material difference to their careers, and how carefully they had to hide what they were doing–and they still got caught! Bottom line: producing good science is costly, in various ways, and so the production of good science is an honest signal that mostly can’t be reliably faked (even though our systems for evaluating science and scientists are noisy).****

One reason I care about this is that I hate to see people needlessly anxious about stuff that’s not worth worrying about. At least, not worth worrying about as much as many people seem to. For instance, Thea Whitman has a wonderful blog post on her successful job interview, in which she talks about how she succeeded by just being herself rather than worrying too much about how to come off as someone or something she’s not. I like that attitude–choose your own path and own your choices.

p.s. For some pushback against this post in the context of universities–rather than individual academics–all gaming the system in a big way, see here.

*Other possible evaluation practices also have a point. That’s why practices sometimes change. But the fact that we don’t all agree on evaluation practices doesn’t show that our current practices are broken or just pointless games or whatever.

**Paradoxically, the very same competitiveness that creates incentives to game the system also helps make the system difficult to game. If you’re in a race with a bunch of very fast, well-trained elite runners, there’s nothing you can easily do to either make yourself faster than the competition, or cause other people to mistakenly think you’re faster than the competition.

***In this respect, unethical little ways of trying to game the system in academia are like undergraduate academic misconduct. Like many instructors, I’m always struck by how students who commit academic misconduct mostly do so in ways that are easily detected, and mostly would provide very small rewards even if they went undetected.

****Insofar as people think otherwise, I think it’s for a few reasons: (i) They overgeneralize from the rare big exceptions, like the Stapel case. (ii) They mistake noisy signals of merit for dishonest ones. (iii) They mistake noisy signals for completely uninformative noise (often also thinking, incorrectly, that the noise level could be substantially reduced). (iv) They misunderstand what signals the recipients are looking at (e.g., mistakenly thinking that faculty search committees just toss applications from anyone without a Nature or Science paper, or anyone with less than X publications). (v) They misunderstand what unobservable attributes the recipients are looking for signals of, for instance misinterpreting the desire of universities to hire independent scholars as a desire to hire people who don’t collaborate.

48 thoughts on “Honest signals in academia (or, why academics mostly can’t game the system and shouldn’t try)”

  1. Dear Jeremy

    The analogy makes sense, but I don’t buy your unsupported premises that gaming the system is not a problem because the system ignores most of these games. LaCour was offered a job at Princeton and if the fraud had not been discovered, he would be at Princeton this fall. Green was conned into co-authorship. Psychologists have documented for decades that humans are prone to rely on heuristics that create biases in decision making. What we really need are signals that reveal integrity and credibility.


    Sincerely, Dr. R

    • Not sure what you mean by saying the system “ignores” most of these games. For instance, LaCour was caught, and as Jesse Singal’s interviews with Broockman show, he was lucky to get as far as he did and was almost caught earlier. Further, LaCour was caught before many people had had to waste much time and money trying to build on his work. And Princeton and Green were fooled, as far as I know, and Green in particular reacted very fast as soon as he was informed of what was happening. I haven’t read anything to suggest that Princeton and Green had suspicions but ignored them. (Some have argued that Green *should’ve* had more suspicions, but it’s not clear to me how much of that is hindsight, and in any case that’s different than saying he “ignored” LaCour’s games).

      That people rely on signals (or as you put it, heuristics) doesn’t show that those signals are dishonest, or mostly dishonest. In fact, they’re mostly honest, because they’re costly to produce and so difficult to fake. As has often been asked about folks like Stapel and LaCour, why didn’t they just do honest science? It would’ve been just as easy (read: difficult) as what they actually did, and they’d still have careers.

      Let me try a different tack: do you think *most* scientists game the system in any important way? That is, do you think folks like LaCour and Stapel are only unusual because they were caught, and that in fact *most* scientists (quite likely including me!) are doing the same sort of thing but getting away with it?

  2. I agree. Sure, there are frauds and other examples of “gaming”, but they’re so much more interesting than the normal, routine execution of what we do as scientists that we pay lots of attention to them (as does the media). So it’s easy to get the feeling that system-gaming is pervasive – just like reading the tabloids would have you think major crime is at an all-time high (when in fact the opposite is true). Scientists are human, so there will be fakery and gaming and all that; but at low levels, and I think without distorting the progress of knowledge much, or for long.

    • Yes. As I probably should’ve put it in the post, heuristics and cognitive biases don’t just create the possibility of gaming the system. They also cause us (especially those of us who are on Twitter!) to pay too much attention to, and overgeneralize from, rare instances of people gaming the system.

      I think you’re asking the right question: where has gaming the system distorted scientific progress in a big way for a long time? You could maybe argue that Stapel sent a chunk of social psychology down the garden path for many years. I’m not sure, not being a social psychologist myself. In evolution, you could maybe argue that Anders Pape Møller fiddled the data on fluctuating asymmetry enough to create a big bandwagon where none should’ve existed (see this old post and links therein: https://dynamicecology.wordpress.com/2011/11/03/is-scientific-misconduct-especially-rare-in-ecology-and-evolution/). There might be a few other such cases. But yeah, they’re really rare.

      I worry much more about honest practices that lead science astray. For instance, my impression from reading Andrew Gelman’s blog is that social psychology has been led astray much more by “researcher degrees of freedom” than by cheaters like Stapel.

  3. Dear Jeremy,

    there has been quite some discussion, also on this blog, about widespread p-value hacking, post-analysis hypothesis formulation, disguising exploratory as confirmatory studies, etc.
    Aren’t these practices actually quite often successful (in terms of getting a paper published) and therefore a way of gaming that can be done and that is done?

    • Those practices aren’t gaming the system, I don’t think. They’re bad practices, and I worry about them much more than about fakery. But they’re just mistakes, involving honest people doing the best they can.

      • Well. I guess that depends on how consciously it is done. I am not quite sure it is always just an honest mistake.

      • Social psychologists can publish studies based on small samples of undergraduates or based on convenience samples from MTurk, which means that the cost of running an experiment is low. Which means that social psychologists can run multiple experiments and present only the experiments that provide support for their hypothesis. Which some of them do. You can argue that this is the current norm in social psychology, but I don’t see how such practices constitute mistakes in the sense of a good faith error. I also don’t see how such practices are not gaming the system, either, given that social psychology journals appear to have a bias against publishing null results.

      • I think what you’re describing is an entire field going off the rails because they’re looking for the wrong sorts of signals, and/or looking for signals of the wrong sort of thing. An entire field with the wrong norms seems to me importantly different (and worse) than the sort of thing I’m thinking of in the post.

        Don’t misunderstand, I worry a lot about whole fields going off the rails even though every individual working in those fields is well-intentioned and doing their best.

      • I do not understand how a field can develop bad norms when every member of that field is well-intentioned and doing their best. It seems much more likely that bad norms such as selective reporting of results develop from a critical mass of members trying to game the system.

      • Sorry, we’ll have to agree to disagree on that. Bad norms in science mostly develop for good reasons. Not because a field is rife with a critical mass of ruthless careerist jerks who are asking themselves “How can I bullshit my way into a bunch of papers?” Andrew Gelman is good on this in the context of social science, psychology in particular (e.g., the end of this recent post: http://andrewgelman.com/2015/05/31/the-greatest-impediment-to-research-progress-is-not-impediments-to-research-progress-it-is-scientists-reading-about-impediments-to-research-progress/). For instance, I’d imagine that a norm that small samples of undergrads or MTurkers are ok can develop as a side effect of reasonable pragmatism. It’s really difficult, and very expensive, to take a large random sample of everybody on the planet and subject them to the same psychology experiment.

        In ecology, I think this is why so many ecologists repeatedly make the mistake of trying to infer process from pattern and causation from correlation (https://dynamicecology.wordpress.com/2012/10/23/has-any-shortcut-method-in-ecology-ever-worked/). It’s not because there’s a critical mass of jerks all trying to game the system by cranking out quickie oversold papers that they themselves know are rubbish. It’s because ecology is hard, and so people quite naturally and reasonably want to squeeze as much information as they can out of the data they happen to have. Which is often observational data.

        Or take Brian’s recent post on widespread abuse of AIC in ecology. It’s not that ecologists abuse AIC because they’re trying to game the system. It’s because they’re vague in their own minds about what they’re trying to accomplish, statistically.

      • The bad practice that I am writing about is the selective reporting of results; the small undergraduate samples and MTurk samples only make this practice easier to implement.

        John et al. 2012 reported on a survey of psychologists in which 46% of the sample admitted to selectively reporting studies and 63% of the sample admitted to selectively reporting dependent variables. I think that such behavior is better characterized as gaming the system than as a mistake or as researchers doing the best that they can. You are welcome to disagree.

        Link to John et al. 2012: http://www.socio.mta.hu/uploads/files/archive/john_et_al_2012.pdf.

      • @L. J. Zigerell:

        I think most all researchers are selective about which studies to report, if only because of lack of time (https://dynamicecology.wordpress.com/2013/04/16/prioritizing-manuscripts-and-having-data-go-unpublished-for-lack-of-time/). I can certainly imagine circumstances in which selective reporting of studies amounts to gaming the system. Doing an experiment 10 times and only reporting the results for the one run that gave you the results you wanted, without any reason to think the other runs were flawed, for instance. But in other circumstances, like those Meg describes in her post, selective reporting of studies seems hard to avoid and even desirable in some ways. I don’t think it’s useful for everyone to publish every exploratory study they do.

        Re: selective reporting of variables, I’d say the same. There are times when that’s clearly gaming the system, and times when it’s just somebody honest who’s doing the best they can and who doesn’t fully realize the statistical and inferential consequences of their seemingly-reasonable choices.

        As noted in John et al., there’s a continuum from practices that are fine and widely accepted, to practices that are debatable to varying degrees and less widely accepted, to practices that are obviously inappropriate and universally condemned. I wouldn’t take the existence of practices of the intermediate sort as evidence in and of itself of the existence of a critical mass of dishonest bad actors in a scientific field.

        It’s possible that we’re at least partially talking past one another, so let me ask you this. At the end of their ms (left column of p. 531), John et al. comment on what their results say (or don’t say) about the professional ethics and scientific judgement of the psychologists they surveyed. Do you agree with those comments? Or do you find them much too kind to the professional ethics and competence of psychologists (as is my impression, but perhaps I’m misreading you)?

      • The selective reporting of studies in terms of Meg’s post isn’t much of a problem. I’m more concerned about selective reporting of results that produces inferences that are not representative of the data that has been collected. I’m thinking of, for example, cases in which researchers conducted a survey experiment on a nationally-representative sample, did not find evidence to support their hypothesis, and then reported results from smaller convenience samples in support of their hypothesis without ever mentioning the null results from the larger nationally-representative sample.

        I do not doubt that there might be some cases in which researchers were acting in good faith but questionable research practices in their study produced inferences that were not representative of the data that had been collected. In the absence of evidence, I think that it would be a mistake to impute bad motives to these researchers, but I also think that it is a mistake to impute good motives to these researchers. I think that the best course of action in the absence of evidence is to acknowledge the absence of evidence.

        That’s the reservation that I have about the John et al. passage that you referred to, in which John et al. write: “We assume that the vast majority of researchers are sincerely motivated to conduct sound scientific research” (p. 531). I think that we should be charitable and refrain from assuming bad motives, but I don’t think that charitableness requires assuming good motives.

        I’m not entirely sure how carefully to parse that John et al. sentence, either. I have no problem believing that the vast majority of researchers would choose good scientific practices over bad scientific practices, but I’m not sure that the vast majority of researchers would choose to report the most representative description of the data if it meant reporting results that would undermine the researcher’s chance of publishing the data or would undermine the researcher’s public policy goals or would undermine the researcher’s chance at more funding.

  4. This is an intriguing topic, Jeremy. I don’t know how many positions the average scientist works nowadays. I know for the generation that educated me (they are now retired) most had relatively few. Once they got tenure, they stayed. It has been a very different experience for me. The longest position I’ve had is my current one (five years, 9 months & counting). My previous mark was 4 years, and I am almost 54 yo. So what that says is, I’ve worked a heck of a lot of positions in science. That experience, involving two disciplines, has given me exposure to many scientists in many settings.

    I believe you are correct that relatively speaking, there is less of the overt gaming in ecology compared to some other disciplines. Geez Louise, when I was in cell cycle science, the gamesmanship and nearly constant kissing of the rings was nauseating. I can say that because I was deeply imbedded in the science, went to all the big international conferences, and worked with several of the ‘gurus’ in the field. And certainly when it came to funding, you best have kissed the requisite # of rings to get the approvals you needed. It was a feast or famine world, with plenty of good scientists not getting recognition or funding for the reasons I state.

    I recently witnessed a form of gaming in ecology that I did not see mention of in your post. I’ll refer to this person as “Fred,” an identifier that most certainly no one could ever connect to the real person. I worked with Fred for several years. Time & again, his level of incompetence raised pretty big red flags, but Fred was very personable. Fred also talked the talk really really well. He had the whole “I’m a consummate professional” thing down pat. But a disturbing pattern developed relatively quickly: Fred could never, ever make a mistake- much less make amends for one. Fred was always out front on that, and he was blaming someone else for his mishaps before many even knew the mistakes occurred.

    Fred placed two of his students on academic probation because of mistakes Fred made. When study design was compromised by land managers not knowing the protocol, Fred blamed them, not himself for a failure to communicate. When Fred was investigated for less than acceptable practices, Fred once again blamed everyone else. This became a disturbing enough pattern that I decided to look into how Fred made it “up the ladder”. Fred went to a school of little notice and managed to secure free rein over his studies. There was no oversight. Then, Fred got a gig where he was the only ecologist on staff… so getting tenure??? No worries, because no one really understood Fred’s work.

    Fred did not have the publication record justifying his hire, and Fred never aspired to the level of scholarship deserving of tenure. But Fred got it, because no one else was looking in on what was happening. Fred gamed the system in spades.

    • Let’s please restrict this thread to discussion of cases in the public record. This thread isn’t a venue for airing of grievances against individuals who have no opportunity to defend themselves.

      • Please correct me if I am mistaken, but I thought I had taken necessary precautions to not air any grievance of any kind. I also did not identify any particular person. I did however respond to the theme of your post. And suffice it to say there is a record, but out of respect for you, your blog and this person I refrained from citing it.

      • “Please correct me if I am mistaken, but I thought I had taken necessary precautions to not air any grievance of any kind. ”

        You gave a *lot* of details about a series of incidents with which you’re obviously personally familiar. And you posted under your real name. It wouldn’t be too difficult for anyone with sufficient curiosity to figure out who “Fred” is, or at least narrow the field considerably.

      • Well, yes, I can see this from your point of view. Understand I was aware of the sensitivities and tried to confer the respect your blog is deserving of.

  5. Thanks for posting this. I hope it leads to a lot of interesting discussion. I think you have a lot of good points. I’m a bit less optimistic, though, about what this view of meritocracy means in a social context where the playing field isn’t totally level.

    1) The line between getting on people’s radar and getting on their nerves probably isn’t drawn in the same place for all scientists. If senior scientists have less patience for junior researchers that don’t look or sound like them, then doing good science might not always be enough for women, minorities, scientists without impressive academic pedigrees, non-native English speakers, researchers with disabilities, etc.

    2) As your post implied, there’s a huge amount of unwritten or barely-written knowledge that’s basically required to avoid stepping on people’s toes. One’s expertise in these areas may not correlate all that well with one’s ability to do good science. How much of the variance could be explained by being an older, white native English speaker, or someone whose highly-educated parents could afford to send them to an elite university?

    On a more positive note, this blog has done a really fantastic job of making that unwritten knowledge more explicit, which could be a big help in addressing issue #2. So, thank you for all your work on it.

    • Yep. This is what I was thinking when reading the post. Gaming the system is not too much of a problem if “all else is equal,” which it is not. The problem with gaming the system (in unethical ways, since Jeremy lumps a lot of stuff together) is that it can exacerbate inequalities already in the system.

      • @Margaret:

        “The problem with gaming the system (in unethical ways, since Jeremy lumps a lot of stuff together) is that it can exacerbate inequalities already in the system.”

        Hmm…without wanting to disagree with David’s good point, this is an odd way to put it, isn’t it? I mean, it suggests that unethical ways of gaming the system would be ok if it was just disadvantaged people engaging in them. Which can’t be what you meant, surely?

      • Hmm. I take your point. But I suggest that since you lumped a lot of things together from the clearly not unethical (networking) to the clearly unethical (faking data), there is a gray zone where some people would say certain activities are ethical and others would say they are not. We then get into the more thorny world of things like affirmative action: “Is it ethical to give disadvantaged people a leg up in one way or another to even the playing field?” (But let’s not answer this question on this blog post…)

  6. Thanks for the thoughtful post Jeremy. I think it’s interesting that you put social media/blogging (I think of it as personal branding) in as a potential way of gaming the system. My one comment is that I think you are perhaps not the best counter example to that as a way of using self-promotion to game the system. To begin with, the blog has impacted your reputation (undoubtedly for the better) and even your CV (Zombie paper, I’d say the blog gave you a platform to really flesh the idea out, not to say this couldn’t have happened without the blog though). I can’t comment on how this has impacted your career, but on the surface it seems good to neutral. I think a stronger test is how it impacts the careers of younger scientists in the grad student / postdoc pipeline. Of course in the extreme case of all Tweets and no papers, the raw self-promotion and gamesmanship will quickly be seen and dismissed. However, what about cases in the margins? What’s a better time ROI for a young scientist if they want to advance their career: having a strong personal brand through blogs/Twitter etc. and 5 good papers, or no self-promotion and 7 good papers (assuming these two options took equal time)? I think that’s a question that is yet to be answerable but will become more so in the not too distant future.

    Here’s another example of the indirect effects of self-promotion. A couple of years ago I co-authored an unfunded NCEAS proposal (http://figshare.com/articles/Hart_and_Chamberlain_NCEAS_proposal/97215). Several of the suggested participants I found through blogs and Twitter and thought “This person looks like an interesting collaborator”. The pay-off would have been being part of an NCEAS working group, which can rapidly publish many papers boosting the CVs of all the participants (whether working groups are their own sort of gamesmanship is another interesting question). What can be difficult to disentangle when thinking about personal branding/promotion on the internet is the question of how it’s different than your example of “Introducing yourself to Dr. Famous” and other traditional forms of networking (meetings, talks, etc…). Self-promotion through personal internet branding is a way to separate yourself from a competitive field. NSF and foundation program officers are on Twitter, so while they might forget you based on a random introduction at ESA, maybe they meet you there, then you interact over Twitter, and maybe they read a really good blog post–that surely isn’t hurting your chances (especially in the foundation $$ world).

    I’ll freely admit this is mostly rank speculation, and evidence of these effects would be almost impossible to muster beyond anecdotes. As I said earlier, I think personal branding will likely be a boost at the margins. Yes, you can’t be all fluff and no substance, but I think there’s an optimum balance. While I agree with your premise of ‘honest signals’, as you say they are noisy, and given two equally honest signals, the one with more personal branding behind it will be boosted more.

    • Hi Ted,

      Oh, you’re absolutely right that blogging has been a net positive for my career. It’s just that its impact is in ways that mostly aren’t tangible, and that don’t take the form of the few narrow impacts everyone talks about and worries about–publications, grants, jobs. I’m sure blogging gives me much greater impact on the direction of the field than I’d have had had I not been blogging (though as an aside, I have no idea how much impact that is in an absolute sense). And it’s paid off in various other intangible ways that I care about a lot. I now count Meg and Brian as friends and close colleagues, for instance. I’ve learned a lot from them and our commenters. Etc. But the tangible payoffs really have been very modest. My blogging has not had any material effect on my raises or promotions, the reviews my papers or grants receive, etc.

      And even the very modest tangible payoffs couldn’t have been anticipated in advance. For instance, yes, I’ve gotten a couple of papers out of my blogging that I wouldn’t otherwise have written–but that was a happy accident. Had I set out to blog with the conscious intent of getting some papers (or other concrete, traditional outputs or rewards) out of my blogging, I’m sure my blogging would’ve been much less successful and beneficial to me overall.

      Heck, even just within the context of blogging, the reason this blog gets a lot of traffic (traffic being the most tangible payoff of blogging) is that we don’t decide what to post by asking “What would draw the most traffic?” We just blog as best we can, and the traffic follows from that. If we ever started to intentionally chase traffic, I’m sure the effect would be to reduce our traffic.

      “What’s a better time ROI for a young scientist if they want to advance their career: having a strong personal brand through blogs/Twitter etc. and 5 good papers, or no self-promotion and 7 good papers (assuming these two options took equal time)?”

      Good question. I don’t know the answer. But in practice, I think the answer’s so individual that it’s difficult to give general advice. I think you (meaning each of us) need to do what works for you. Blogging works for me (whereas, say, Twitter and Facebook don’t). But it wouldn’t work for everyone, or even most people. I have an old post on this:


      More broadly, there’s this:


      • Not sure why it would be inappropriate to start a blog with the intention of “getting papers” out of blogging. As long as the papers / research is valuable to the research community, why is it a problem to get some promotional points on the way?

        All science is promo work. That’s why names go on papers. The problem arises when the promotional outcome becomes more important than the underlying work.

      • “Not sure why it would be inappropriate to start a blog with the intention of “getting papers” out of blogging. ”

        It’s not a problem at all! It just wouldn’t have worked for me. Nor would I personally regard blogging for the purpose of developing one’s own ideas into paper form as self-promotion, except in the trivial sense that anything with your name on it is self-promotion (as you note). Sorry if that wasn’t clear.

    • Hey Ted- your comments about social media are interesting, and something I studied recently. When I attended a conference in DC in 2013, mostly composed of persons considered to be on the cutting edge of future trends (myself notwithstanding)- it was preached to me time and again that social media is the future of ANY enterprise, scientific or otherwise. As a person who did not touch a computer until his junior year of college, I was skeptical of this claim.

      Recently my organization tested this idea. We had not appreciably engaged in social media of any kind over the first 2 years our website was up and running, so we had a good baseline sample to work with. Then, we actively participated in six science blogs- usually commenting on one or two of them per day, for six weeks. Then we stopped for six weeks, and then we repeated the behavior for another 6 weeks.

      The results were stunning. We more than tripled our website traffic when we were commenting- and mind you, we never promoted our organization. We just participated in ongoing discussions. So just the activity of doing this, free of any self-promotion, self-promoted us. People were interested enough in our comments to look us up. Pretty neat stuff.

      I personally draw the line on sappy self-promotion in science between careerism and ideas. I think there is no issue with promoting a good idea, even if it happens to be your own.

  7. Jeremy,

    I agree that the examples of “gaming” the system that you mention are either very severe & rare and get purged or are relatively small and insignificant violations. I do not see spreading one’s results via blogs and social networks as gaming the system at all. The problematic examples of “gaming” include befriending editors of elite magazines/journals, omitting inconvenient data from manuscripts, putting pressure on students to get “the expected result”, granting authorship for the wrong reasons, and so on. I think that this type of “gaming the system” does undermine the research community and has lasting consequences.

    • “I do not see spreading one’s results via blogs and social networks as gaming the system at all.”

      I actually don’t think many people think of that as gaming the system these days (depending on how it’s done–if all you do on social media is incessantly promote your own work I think that tends to come off badly.)

      “The problematic examples of “gaming” include befriending editors of elite magazines/journals, omitting inconvenient data from manuscripts, putting pressure on students to get “the expected result”, granting authorship for the wrong reasons and so. I think that this type of “gaming the system” does undermine the research community and has lasting consequences.”

      Hmm…I don’t agree that most of those things have big effects or lasting consequences. At least not in the fields I know anything about. I disapprove of those things, but I suspect many of them fall in the category of having consequences too small to get panicked about. For instance, I’m sure authorship criteria are getting looser than they used to be (though practices vary within and among fields, of course). But I don’t think that’s throwing science as a whole off track in any big way.

      • I agree, using social media just for self-promotion comes off badly and it’s ultimately not very effective because it undermines one’s social reach.

        I hope you are right Jeremy for the practices that I find problematic. Your attitude/opinion is uplifting and a healthy one to have.

  8. Maybe this won’t fit in with your definition of gaming the system, but I think the following is an area of concern. I sometimes see examples where scientists provide biased public commentary on controversial conservation topics; e.g. in the media or blogs which don’t go through the rigorous review process of scientific papers. This may arise because of strong personal values and stances on a topic, where they selectively report on arguments in their favour and ignore scientific evidence that doesn’t support their stance. A different situation may arise whereby a scientist chooses not to enter into a public debate in their area of specialised expertise because they are afraid of negative consequences from an industry funding body. Maybe they might choose (or be pressured) to not publish findings if they are unpalatable to their industry collaborators. Would such practices generally be considered unethical? I don’t think there is enough education or discussion around these aspects of ethics in science to guide people’s behaviour. What is and isn’t ok? It is a much greyer area than clearly unethical practices like fabricating results. The general public is likely to take an expert scientist’s statements at face value without necessarily realizing that there might be an equally or more credible alternative viewpoint. Unfortunately, journalists don’t always report on all sides of an argument. Scientists who actively participate in public dialogue can build a big public image. I wouldn’t necessarily call this gaming the system, as their motivations might be good, but I’m guessing a big public profile might still enhance their career prospects.

    • ” Scientists who actively participate in public dialogue can build a big public image. I wouldn’t necessarily call this gaming the system, as their motivations might be good, but I’m guessing a big public profile might still enhance their career prospects.”

      It depends on what sort of career they want. Economist Julian Simon, for instance, got a massive public profile in the wake of his bet on overpopulation with Paul Ehrlich. But he always wanted to be taken more seriously by his academic colleagues and wasn’t. There are many broadly-similar examples: Richard Dawkins, Niall Ferguson…

      “Would such practices generally be considered unethical?”

      Yes. For instance, as far as I know every reputable journal in every field requires disclosure of financial conflicts of interest.

      • Are financial conflicts the only significant sources of bias these days?

        Suppose a scientist who has campaigned for years against logging through political orgs publishes a paper or gives an expert legal opinion that logging was responsible for a deadly slope failure. Hasn’t this scientist effectively professed an anti-logging belief that potentially compromises his/her scientific objectivity? Shouldn’t political activity be as important as financial interests?

    • You raise some interesting points there Sue and, for what it’s worth, I see the role of scientist as advocate/campaigner/whatever as a positive one IF it’s backed up by the science. Ecologists seem to be unique in fretting about this issue. Would we expect cancer biologists to be neutral on their subject, for instance?

      However I’d concur that there’s a fine line to be walked here and I know that not everyone would agree with me – here’s a recent blog post on the issue: https://jeffollerton.wordpress.com/2015/04/03/should-biodiversity-scientists-be-campaigners-and-polemicists/

      I’d also add that the vast majority of the world’s population do not read peer reviewed literature (and often cannot access it anyway), so commenting on important topics via the media, blogs, etc. is important regardless of whether it’s been peer reviewed but, again, as long as it reflects the science.

    • “Maybe they might choose (or be pressured) to not publish findings if they are unpalatable to their industry collaborators. Would such practices generally be considered unethical?”

      UBETCHA, Sue. And it’s not only industry where this happens. I have seen it within government as well. Back in the early 90s I worked as a biologist for the Wisconsin Department of Natural Resources. Our governor at the time, Tommy Thompson, personally overrode legislation prohibiting the development of hard rock mines within 500 feet of a river bank. His justification: “The ore body is fixed with regard to its location”. Meaning, because God wasn’t gonna move the gold away from the river, he was.

      It became a hot button issue. The Sierra Club sued, but they were unable to unearth the scientific evidence they needed to prevail. The state stonewalled them time and again. I was contacted and asked to help. I hesitated, but eventually contacted the state biologist in charge of the environmental review. He was livid, and he immediately sent me all of the scientific documentation needed to halt the project. I informed my employer I was going to turn it over to the Sierra Club. I was given plenty of cold shoulders and innuendos.

      When the case went before the judge, the project was shut down (unfortunately, not permanently, although the action vastly improved protections). The biologist giving me the documents retired the same day, and my position was reduced from FTE to 20 hours per week… and then, phased out.

      Being ethical comes with a cost, and I think anyone walking around claiming to be a scientist needs to be willing to pay that cost.

  9. Great post Jeremy and you clearly touched some nerves! But I do disagree with your statement that “Including stuff other than peer-reviewed papers in the “publications” section of your cv” is a Bad Thing, which I know we’ve discussed before but I think needs to be mentioned again in this context.

    If a scientist is asked to write a “News and Views” type commentary in Nature or Science, are they really not going to include that on their cv? Or a “popular” article in, for example, New Scientist or Scientific American? Or a newspaper opinion piece?

    These are all examples of scientists engaging more widely with their subject and peer group, and society more broadly, which is seen as increasingly important, particularly by funding agencies and university senior management. None of those are peer reviewed but it would be silly not to include them in your list of publications, as long as they are clearly marked as such. Some people use separate sections for papers, book chapters, non-peer reviewed articles, etc. I prefer to list them all together and use asterisks to denote those that were peer-reviewed.

    • “If a scientist is asked to write a “News and Views” type commentary in Nature or Science, are they really not going to include that on their cv? Or a “popular” article in, for example, New Scientist or Scientific American? Or a newspaper opinion piece? ”

      Yes, we’ve discussed this before. Of course you should put this kind of thing on your cv–in a separate section. Or with asterisks indicating what’s peer reviewed, that’s totally fine. As I hope the post made clear, what’s not ok (and will instantly be seen through) is listing that stuff on your cv in the same section as your peer reviewed papers, with no indication that those publications are any different than peer-reviewed papers.

      • Yes, the post you linked to made that clear, but in the present post it was a bit of a bald statement that needed qualifying, hence my comment.

  10. Dear Jeremy

    It’s interesting that you should write about honest signaling specifically in the context of academic reputations and the job market (e.g. in LaCour’s case with the alleged CV fabrications). Just a point of interest which you may very well be aware of, but you made reference to honest signaling in the biology literature by linking to wikipedia’s page on honesty in biological signaling. Honest signaling theory in biology was predated by honest signaling theory in economics. In economics, the first demonstration of how costly honest signaling works, in which a separating equilibrium was found whereby individuals of different “quality” signal this honestly, was about the job market (Spence, M. 1973. Job Market Signaling. Quarterly Journal of Economics 87: 355–374). See signaling section for economics on wiki: http://en.wikipedia.org/wiki/Signalling_%28economics%29 In the canonical signaling model in economics, individuals accurately signal their quality through their academic achievements and employers can use this information to differentiate individuals by quality type. This separating equilibrium is essentially the same result that was later independently derived by Grafen (1990) in biology, and which was (incorrectly as it happens, but that’s another story) taken as vindicating Zahavi (1975). Anyway, this probably doesn’t really contribute to your debate, but just as a point of interest I wanted to point out/remind people of the interesting quirk that, because of Spence’s model, honest signaling of your academic ability and achievements, and its link to your ability to get hired when applying for jobs, is actually at the very foundation of modern honest signaling theory.
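    (For concreteness, here is a minimal sketch of the standard two-type textbook version of Spence’s separating equilibrium; the notation is mine, not Spence’s or the comment’s. Two worker types have productivities $a_H > a_L$; education $e \ge 0$ adds nothing to productivity but costs the low type more per unit, with $c_L > c_H > 0$. Employers pay $w(e^*) = a_H$ to anyone who acquires education level $e^*$ and $w(0) = a_L$ otherwise. Separation requires that neither type wants to imitate the other:

    \[ a_H - c_L e^* \le a_L \quad \text{(low type will not mimic)}, \qquad a_H - c_H e^* \ge a_L \quad \text{(high type still signals)}, \]

    \[ \Rightarrow \quad \frac{a_H - a_L}{c_L} \;\le\; e^* \;\le\; \frac{a_H - a_L}{c_H}. \]

    The signal stays honest only because it is differentially costly: any $e^*$ in that interval is too expensive for the low type to fake, which is the same logic Grafen later formalized for biological signals.)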

    • Thank you for pointing this out, I wasn’t aware of the irony here. @nyuprimatology also pointed this out on Twitter when the post first went up. Glad someone took the time to come by and comment.

      • @James Higham:

        Thanks again for stopping by. I agree that this sort of thing (meaning “a thought of any substance whatsoever”) isn’t best shared on Twitter.🙂
