William Shockley on what makes a person who publishes a lot of papers (and the superstar researcher system)

To me one of the all time most interesting meta-research papers (research about the process of research) was written by William Shockley way back in 1957 (On the statistics of individual variations of productivity in research laboratories).

Shockley won the Nobel prize for the invention of the transistor and later more or less singlehandedly launched Silicon Valley. He was, by any account, a not so nice person. He attempted to steal sole credit for the transistor when he had collaborated with two other people; in later life he veered into eugenics, culminating in founding a sperm bank only for Nobel prize winners. And you can find roots of his eugenics views in the paper I am about to talk about. Despite all this, I have returned many times to this one paper and found worthwhile new ideas in it (I guess it proves science shouldn’t be about personalities).

The paper was written when Shockley was the director of Bell Laboratories back in the 1950s when this was one of the premier research centers in the world. He gathered statistics on all of the research employees in his lab as well as many other national labs, university departments and other research units and showed rather decisively that productivity as measured by total number of publications, rate of publication and number of patents is log-normally distributed (most researchers had low productivity and a few had extremely high productivity, just like the species abundance distribution in ecology). This observation remains true today and applies to other research related topics which are also lognormal like the # of citations a paper generates (the median number of citations for a paper is zero!), the impact factor of journals, etc. It is where he went with this idea that I find interesting.

I want to start with what is for me the secondary point of the paper. Namely, Shockley was the manager of a lab responsible for setting salaries, and he mused on the implications of lognormal productivity for salaries. His observation was pretty simple – salaries are basically additive – the best researcher is likely to be paid only 50-100% more than the average, while their productivity will be an order of magnitude (1000%) or more above the average. Thus his rather inescapable conclusion is that from a manager’s point of view, where one is trying to maximize productivity per dollar spent, one should exploit the additive-salary/multiplicative-productivity disparity by employing the people in the extreme right tail of the productivity curve, even if it means paying them 50-100% more. He was basically railing against the very rigid salary structures of large corporations and governments that prohibited this. Over time, in many intellectually based disciplines (academics, software development, engineering), top managers have followed this advice. And for better and for worse, this is the rationale of the “superstar” system at research-intensive universities, where the very top people seemingly can negotiate any salary and perks they want (it’s probably also the rationale by which the top departments and universities get disproportionately more money). Of course there are countervailing forces – fairness, innate human dignity, encouraging a collaborative culture, the need for glue people who make the place hang together even if they’re not cranking out the papers (we still need department chairs and peer committees), etc. – why this might be a bad idea too. I’m not weighing in on what is the right way to do it – but the logic of the superstar system is laid rather bare in Shockley’s paper.

To me the more interesting discussion is of the mechanisms that might lead to a lognormal distribution in productivity. Shockley presents several possibilities. But the one on the left-hand side of page 286 has stuck with me and informed my own approach to research.

Shockley suggests that producing a paper is tantamount to clearing every one of a sequence of hurdles. He specifically lists:

  1. ability to think of a good problem
  2. ability to work on it
  3. ability to recognize a worthwhile result
  4. ability to make a decision as to when to stop
    and write up the results
  5. ability to write adequately
  6. ability to profit constructively from criticism
  7. determination to submit the paper to a journal
  8. persistence in making changes (if necessary as a result of
    journal action).

Shockley then posits: what if the odds of a person clearing hurdle #i from the list of 8 above is pᵢ? Then the rate of publishing papers for this individual should be proportional to p₁p₂p₃…p₈. This gives the multiplication of random variables needed to explain the lognormal distribution of productivity (Shockley goes on to note that if one person is 50% above average in each of the 8 areas then they will be 2460% more productive than average at the total process).
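This multiplicative structure is easy to check with a quick simulation (my own sketch; the per-hurdle odds below are invented illustrative numbers drawn uniformly, not Shockley's data). Multiplying independent per-hurdle probabilities produces exactly the right-skewed, lognormal-like shape, where the mean sits far above the median because a few researchers account for most of the output:

```python
import random

random.seed(42)

N_HURDLES = 8
N_RESEARCHERS = 100_000

def productivity():
    """Rate of publishing ~ product of per-hurdle odds p_i."""
    rate = 1.0
    for _ in range(N_HURDLES):
        rate *= random.uniform(0.1, 0.9)  # illustrative p_i, not real data
    return rate

rates = sorted(productivity() for _ in range(N_RESEARCHERS))
mean = sum(rates) / N_RESEARCHERS
median = rates[N_RESEARCHERS // 2]

# Right-skewed, lognormal-like: mean well above median
print(f"mean/median ratio: {mean / median:.1f}")

# Shockley's arithmetic: 50% above average on each of 8 hurdles
print(f"1.5**8 = {1.5 ** 8:.1f}")  # ~25.6x average, i.e. ~2460% more
```

The log of the product is a sum of independent terms, so by the central limit theorem log-productivity comes out roughly normal – which is all "lognormal" means.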

But what I really like and take home from this paper is the hurdle model, and how I find it a useful way to think about my paper-writing productivity. The model says writing a paper is not about one thing. It is about a bunch of things. And – the really surprising point – all of those things count more or less equally. I think most academics have a mythos that people who are productive scientists are mostly good at #1 (coming up with ideas) and maybe #5 (writing). I don’t think most people think about the fact that being productive is about knowing when to stop or knowing what is an important result. And especially, #7 and #8 are about rejection and dealing with rejection. Did it ever strike you that being a productive scientist is 1/4 about dealing with rejection well? It probably should – recently at a meeting the VPR (vice president of research) on our campus pointed out that the person with the most grants on campus was the person who had been rejected on the most grants.

Another conclusion is that if you are really bad at just one factor (pᵢ close to zero for just one i), it sinks your overall productivity. This is innate in the multiplicative model (it is analogous to the ecological concept of bet hedging*). Being moderately good at everything is better than being great at some things and terrible at others (the oft-heard “I’m terrible at writing but really good at coming up with ideas” doesn’t cut it, but neither does the opposite).
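A quick numerical illustration of this point (the skill profiles are invented for illustration): three profiles with the same arithmetic mean skill of 0.5 across the 8 hurdles, run through the multiplicative model:

```python
from math import prod

# Three hypothetical skill profiles, each with the SAME arithmetic mean
# (0.5) across the 8 hurdles; numbers are made up for illustration.
balanced = [0.5] * 8                                  # decent at everything
spiky    = [0.8, 0.8, 0.8, 0.8, 0.2, 0.2, 0.2, 0.2]  # great at half, weak at half
one_hole = [0.57] * 7 + [0.01]                        # fine everywhere, terrible at one thing

for name, p in [("balanced", balanced), ("spiky", spiky), ("one_hole", one_hole)]:
    print(f"{name:8s}  mean={sum(p) / len(p):.3f}  rate ~ {prod(p):.6f}")
```

Same average skill, but the balanced profile out-publishes the spiky one several times over, and a single near-zero hurdle sinks the third profile almost entirely.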

In a way the hurdle model is disappointing – I have to be good at lots of areas to get papers out the door. But in another way it is really comforting – I don’t have to be great at any area – I just have to tend to my knitting and plug away at ALL aspects of the process and I’ll do alright. Bottom line – getting a paper published (or a grant or even tenure) is about being pretty good at clearing all of a lot of different types of hurdles but not exceptional at anything.

Concretely, I use this model when I’m worried about my productivity by trying to think about which hurdle is most holding me back (not on one paper but across the sweep of papers).

What do you think? Does the hurdle model work for you? Any steps Shockley should or shouldn’t have included?

*To grossly oversimplify, bet hedging notes that for a sequence of fitnesses over time, evolution maximizes the geometric mean (multiplicative model) rather than the usual arithmetic mean (additive model). As a result, fitness is increased both by increasing the fitness of each component and by reducing variance between components. Thus a steady-as-she-goes strategy without a lot of wild up and down swings is favored.
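The footnote’s point can be seen with two made-up fitness sequences that share the same arithmetic mean:

```python
from math import prod

# Two hypothetical fitness sequences over 20 "years" with the SAME
# arithmetic mean (1.1): one steady, one swinging between good and bad years.
steady   = [1.1, 1.1] * 10
volatile = [1.6, 0.6] * 10

def geo_mean(ws):
    return prod(ws) ** (1 / len(ws))

for name, ws in [("steady", steady), ("volatile", volatile)]:
    print(f"{name:8s}  arith={sum(ws) / len(ws):.2f}  geom={geo_mean(ws):.3f}")

# Despite identical arithmetic means, the volatile lineage has a geometric
# mean below 1 (sqrt(1.6 * 0.6) ~ 0.98) and so shrinks over the long run.
```

This is why, under a multiplicative model, reducing the variance between components pays even at no change in the average.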

65 thoughts on “William Shockley on what makes a person who publishes a lot of papers (and the superstar researcher system)”

  1. Wow Brian, this is really a fascinating idea.

    One thing concerns me though: Let’s assume that each of the eight steps in the process is independent, and that researchers’ abilities at each step are normally distributed.

    This means that a researcher has a 50% chance of being above-average at any one step. More worrying, it means that the probability of being above-average in all of the 8 steps is 0.5^8… that’s a 0.39% chance of being above average at every step of the research-publication process. Only 4 out of every 1000 researchers are above-average!

    As a PhD student, those odds scare me…

    • Well – your 0.39% – those are the national academy members and the people who publish 1000 papers in their lifetime. But of course, just like in evolution, we only have to compare ourselves to the rest of the crowd (https://dynamicecology.wordpress.com/2013/09/24/dont-confuse-absolute-and-relative-fitness/). So I suppose it depends on whether your goal is to be that truly rare person (as your calculations show) who is a superstar, or whether you are happy with being truly “above average”. In a way this paper is a meditation on what it means to be above average on a lognormal curve. It can be a little counter-intuitive. As I noted, I seem to recall hearing the median (one definition of average) number of citations for a paper is zero! We can probably all aim higher than that! But above average on 8 different fronts might be too much to ask (and indeed is very rare).

      • Yes, I agree with you that being above-average at each sub-skill is the sign of a remarkably talented researcher. I was probably putting the cart before the horse.

        Perhaps I should have phrased it in the sense that each step provides a threshold that must be exceeded to move on to the next step (i.e. only above-average ideas are worked on, only above-average researchers have the skills to perform the research and only half of those can identify a promising result… you get the point.) In this sense, only 0.39% of all ideas ever get published and >99% fall out along the way (or only 0.78% of above-average ideas get published).

        P.S. To be “above-average” in the conventional sense only requires that you’re generally better at each sub-skill than the bottom 8.2% of the competitors.

      • “I seem to recall hearing the median (one definition of average) number of citations for a paper is zero”

        I may be wrong but I think you picked that up from a comment I made on a post some time ago. It’s one of my favourite statistics that never fails to elicit surprise and a sharp intake of breath when I quote it to research students. The suggestion that most research is never cited comes from a couple of sources:



  2. Very useful Brian, many thanks.

    Years ago, a very nice professor showed me the structure of folders on his computer. It was a series of folders something like: ideas, proposals, working on, writing up, in review, in revision, published. A project starts as a folder in “ideas”, moves to “proposals”, and onwards as the project moves on. The hurdles you mention, or some similar to them, have to be passed for the project to move on. I’ve used this folder structure for years now. Regularly checking where projects are, and which haven’t moved for too long, helps me prioritise research (sometimes). Perhaps this is what everyone does, or there’s an even better (more functional) way of organising and keeping track of projects?


    • I’ve used something along these lines, though not in folder format. I’ve kept what I refer to as a “project-level to do list”, where I have things broken out into:
      1. ideas for new projects that I need to get started,
      2. projects in the early stages that may or may not work out,
      3. projects that have a reasonable chance of working, but where data collection isn’t yet complete,
      4. projects where data collection is complete and we need to start the manuscript,
      5. manuscripts in prep, and
      6. manuscripts in review or revision.
      I started this as a new faculty member, because I felt like I was starting to forget about projects, and I wanted to make sure that I had enough projects at all those different stages.

      On a related note, something I’ve heard is that a “baby gap” (that is, a decrease in research output on a CV due to having a child) is most likely to appear 1.5-2 years after having a baby. The idea is that what suffers the most while having a newborn is getting data collected and manuscripts submitted the first time, disrupting the flow of things along that pipeline. I have no idea if that’s true, but it seems plausible to me.

      This is a good reminder to me that I’ve been neglecting my project level to do list. I need to pull it up again and get things up to date!

  3. I like the hurdle model a lot too.

    I would only add that timing counts, and some people are better than others at knowing when to allocate time to each step, especially when several papers are running in parallel.

  4. Thanks Brian. I loved this and have just shared it with several of my graduate students (new and old) and a few colleagues. The ideas laid out here and in William Shockley’s paper nicely align with my own about how to publish and stay on top of your research. But I have never seen it laid out quite like this. I also appreciated Owen Petchey’s comment on folders, as I lay my own folders out in a very similar fashion. Unfortunately, no one ever showed me how to do this when I was a graduate student. This is just something I stumbled on during my time as a young assistant professor.
    Kudos for writing about this.

  5. Really interesting post Brian, I’d not come across Shockley’s paper before. There’s a lot of truth in the multiplicative model you/he present. Only thing I’d add is that of all the steps, number 2 (“ability to work on it”) probably has more sub-steps than any of the others, e.g. Can I get funding to do the work? Can I do the work without funding? Can I set up appropriate experiments or make the observations required? Can I analyse the data? etc. Each of these could also be zero.

  6. Very interesting idea and great post. An interesting aspect, if you accept the hurdle model, is to think about how this can interact with collaborations. As you say, for an individual researcher, bet hedging is essential (you have to clear all hurdles by yourself). However, it should be possible to trade off poor skills for some hurdles with collaborations. In that sense, the hurdle model can be used to identify the skills you need in collaborative partners.

    • Really interesting point about collaborations. That is a rigorous model of what people claim comes from collaborations – this complementing of skills. People usually think about this as complementing of scientific skills. But the hurdle model suggests that if I’m bad at writing up I should look for people who are good at writing. Interesting.

      • Yes, need to think more about this, but it seems likely that collaborative ability/social skills can act as a wildcard, which can offset poor abilities in other areas. So that some researchers that would be relatively unproductive working independently could have very high productivity (maybe even more so than if they had an “even” distribution of skills). Feels like such a mechanism would skew the productivity distribution even more.

      • Interesting post, as usual!
        Isn’t collaborating around hurdles exactly what leads to “top” and “bottom” feeders in science, with a few people with good ideas & writing using the groundwork of many people with technical skills (the “working on it” hurdle)?

  7. That’s a really interesting take on a more general phenomenon, which is the selection for tails. These dynamics make a lot of sense, I wonder what that does to research as an endeavor on the whole. Mixing up absolute vs. relative fitness when it comes to individual scientists is actually a real problem – when you have extreme constraints, as imposed by the number of available jobs/funds/etc, it seems that the pool of competition is much broader and you can’t help but end up comparing apples and oranges. Everyone has to compete with superstars for funds, which translate into publications, which translate into more success, in a positive feedback. It’s not just that more productive individuals get a pay hike; it’s that they will gradually accumulate more resources, and those who manage to get into orbit around them will do better. I’ve discussed this among other “selection pressures” on science here: http://thoughtsforbreakfast.wordpress.com/2014/01/21/are-we-evolving-science/

    More on the gravity of stars here:

    • Thanks for the ideas. The positive feedback based on resource acquisition that you describe is a commonly given explanation for the lognormal (especially e.g. in the lognormal distribution of incomes or city sizes in economics), although interestingly Shockley did not give that one. He did give one other alternative, based on thinking power being lognormal due to the multiplicative nature of associative nets (nb my modern wordage but basically his idea). Although the positive feedback idea may be the true answer, I find the hurdles model a more hopeful and personally useful explanation for the somewhat unfortunate, uncomfortable fact that productivity is lognormal, so I hope the reinforcing feedback model is wrong!🙂

  8. I’m not sure the implications of the hurdle model for funding allocation decisions are obvious. Even if there are a few “star” researchers who are much better than others, to make funding allocation decisions you also have to know about how the productivity of individual researchers (their ability to jump a sequence of hurdles, if you like) scales with the funding they’re provided. I talked about this issue in my old post on “shopkeeper science” (where “shopkeepers” basically means “not superstars”): https://dynamicecology.wordpress.com/2013/07/11/in-praise-of-shopkeeper-science/

    I also wonder about alternative explanations here. Lots of underlying processes can generate highly skewed distributions. For instance, instead of thinking about a multiplicative model of “hurdle jumping”, think about the Matthew Effect (http://en.wikipedia.org/wiki/Matthew_effect) EDIT: oops, and now I see that my skim of the thread missed the comments right above mine, making the exact same point. My bad!

    None of which is to question the value of the “hurdle model” as a heuristic for helping one think about how to optimize one’s own scientific work (what hurdles do I need to get better at jumping, or find collaborators to help me jump, or etc.).

    p.s. a barely-relevant technical aside re: absolute vs. relative fitness and Brian’s footnote. Maximizing geometric mean *absolute* fitness is equivalent to maximizing arithmetic mean *relative* fitness (Grafen 1999, if memory serves). It’s relative fitness that natural selection “sees”, not absolute fitness. So evolution by natural selection doesn’t actually work any differently when fitnesses vary over time than when they don’t. Don’t get me wrong, the geometric mean argument is absolutely a useful way to look at the problem, and bet hedging is a real and very interesting thing. But people sometimes have the idea that temporal variation somehow fundamentally changes the evolutionary rules. It doesn’t. Sorry, pedantry over now, carry on.🙂

  9. One implication of your bet hedging analogy is that it’s not just worth it to you to try to improve at jumping those hurdles you’re worst at jumping. It can even be worth it to you to get *worse* at jumping some of the hurdles you’re good at jumping in order to improve at jumping the hurdles you’re less good at jumping!

  10. Irrelevant p.s. on Shockley: Science writer William Poundstone investigated the so-called Nobel Prize Sperm Bank for one of his “Big Secrets” books. As I recall, the only Nobel Prize winner who donated was Shockley himself.

    • I’ve heard various versions of this reply to Shockley from George Wald “If you want sperm that produces Nobel Prize winners you should be contacting people like my father, a poor immigrant tailor. What have my sperm given the world? Two guitarists?!”
      [taken from Stephen Pinker’s account in the Blank Slate]

  11. 3.5 Recognizing that while a line of inquiry seemed promising at first, it’s not going to yield sufficiently worthwhile results in a reasonable amount of time, and you should just suck up the sunk costs and spend your time on something else.

    This, I think, can be one of the hardest things — especially for students — to do. But if time is the biggest constraint (which it often/usually is), then one’s time should only be spent on projects stemming from good ideas that are likely to yield worthwhile results. Easier to say than do.

  12. That is a really nice way of thinking about a scientist’s research process. I like it. This goes right onto the “forward to colleagues” list. Thanks for it!

  14. I might be trying to be the devil’s advocate here, but I have to admit I am not very fond of Shockley’s theory, for several reasons:

    1. It is one thing to analyze and improve one’s performance in academic research; it is something completely different to analyze and improve one’s performance in advancing science. Shockley cannot hope to address the latter with such a formulation of the problem, given that science is really a collective endeavour (“A people’s history of science” by Conner is a good starting point) and aiming to be good at jumping all these hurdles clearly implies doing it on your own (else, why not distribute the need for skills?)

    2. Even a single researcher’s productivity cannot really be used as a proxy for “stardom” or whatever you call it. Researchers evolve in different labs with different contexts that invisibly contribute to great ideas. Darwin did not invent the concept of natural selection out of thin air, but the people with whom he discussed it at the time have all been forgotten (except perhaps Wallace). So yeah, “productivity” might be log-normal, but that just might be due to people’s social skills within a given institute rather than pure academic skill at jumping “hurdles”.

    3. Linking ability with salary in the case of intellectual jobs stems from false premises. It has been shown time and again that increasing salaries above a certain threshold does not improve intellectual workers’ productivity, while it works for manual labor (e.g. watch this http://www.youtube.com/watch?v=u6XAPnuFjJc). So, even if academic ability were log-normally distributed, the argument for or against increasing salary is poor at best.

    4. A very strong argument for the log-normal distribution of productivity is the Matthew effect, which has nothing to do with ability. Simply stated, those that have earned academic credit early in their career will be more rewarded afterwards. It has also been shown (but I can’t find the source anymore…) that silverbacks tend to pay less than upstarts when a paper is retracted or some fraud is detected in a paper. So usually the underlings pay for the big bosses, which again should increase this log-normal distribution.

    5. A final caveat: the price of learning to cope with each of these hurdles is not the same. Functioning as a team usually requires scientists to collaborate with others that have strengths in the skills where they are weakest. For this very reason, I would also expect some positively skewed distribution of academic productivity, with the few researchers specialised in costly skills being associated with a lot of papers and hence providing the bulk of the right tail of the distribution.

    So we might say that productivity is a geometric mean of different things and thus strive to be average at everything. Or we might also recognize things like frequency-dependence in the usefulness of proficiencies, the Matthew effect and the heterogeneous costs of skills that actually make a completely different case.

    • “It has been shown time and again that increasing salaries above a certain threshold does not improve intellectual worker’s productivity…”

      Why is that the relevant measure? If I am running a lab, say, the question I face is “How much should I pay a star to move to my lab?” I am not worried about increasing their productivity: I am worried about what it is worth paying for it.

      • Well, I guess it depends on what exactly one means by “ability” (and “determination”). One might say that the ability to do X doesn’t lead to doing X (I’m able to do lots of things that I won’t do). If one thinks that “ability to write adequately” includes “ability to grind on with the pre-submission rewrite”, the problem disappears.

  17. I like the hurdle model. It gives a nicely intuitive breakdown of how the “fitness of a researcher” arises from its phenotypes 1 through 8 (which indeed seem to be more or less independent of each other and therefore give rise to the log-normal distribution). I see two major problems that arise from it.

    The first and major problem is that progress of science is not measured by the number of papers. Thus, while assembling a team of super-stars is a good strategy for maximizing the number of papers, its impact on the rate of progress is dubious. (As an example, take Peter Higgs whose productivity was clearly in the left part of the distribution.) So, Shockley’s practical conclusions from his model appear rather short-sighted to me.

    The second problem, mentioned above by asianelephant, is of practical importance today because today’s reward structure for researchers is even more skewed than that proposed by Shockley, with super-stars getting literally too much money (I know PIs who really don’t know how to spend the money) while the majority struggle to support even a small lab. Same situation with job searches. All of this already results in extremely harsh bottlenecks which reduce diversity of successful scientists and therefore the diversity of ideas, which in turn leads to a deceleration and deterioration of science.

  18. Excellent article and comments (signal to noise is off the charts!). A possible flaw in Shockley’s analysis is that he observed an ecosystem that was established in relative isolation. Bell Labs was a remarkable place where the company tried to leave the scientists to their own devices, knowing and trusting that they would produce remarkable discoveries and that typical incentives were unlikely to improve matters – all people desired was freedom and some security. But some (and this includes Shockley) learned to analyse and game this idyllic system by becoming useful to others through skilled navigation of non-scientific hurdles – namely those hurdles of process*. There is, of course, value to completion of these processes such that they are providing service, but they are largely artificial constraints that play secondary roles to ideas and ability to carry out and interpret experiments. Many of these additional “hurdles” wouldn’t exist if our system of publication had evolved differently, yet they often accrue respect, people become skilled at them and are often rewarded (arguably inappropriately if assistance in transition of the process hurdles is all they do). Perhaps the revolution in scientific publication will result in a different emphasis on skills than currently, and such developments may well be for the better.

    * I despair at the increased use of professional grant writers who are deemed necessary to provide a certain style of writing to satisfy the reviewers who, presumably, have better abilities in appreciation of the cadence of an experimental description than its actual idea. Yes, scientists need to communicate effectively, but I rail at the new normal of sanitized perfection now expected.

    • Thanks Jim – some good points. Shockley (and others since) have found similar patterns in other institutions, including university departments, national labs, etc. So I don’t think it is unique to Bell Labs. But the point about how Bell rewarded certain individuals who could fill in other individuals’ shortcomings on the hurdles is very interesting and harkens back to some of the earlier comments on how this interacts with collaborations.

      It is a fine line, and probably an interesting discussion, what counts as a purely “process” hurdle. Is writing a process hurdle? Is taking reviewer criticism a process hurdle? I am in agreement with you about how grants are increasingly becoming just a process hurdle.

      • Bell Labs was likely more acute on the spectrum of research environments. The LMB in Cambridge is perhaps another remarkable example. Many have tried to clone it with far more resources (perhaps that poisoned the chalice?). Such acuity helps identify behaviours (and is good science – like studying disease outliers). A greater and more troubling extreme is the feudal institute model (such as in some French centres) where the Director is an author on all papers. But such authorship is obvious and discounted. There is also, as other commenters have noted, a tendency for reinforcement among the “stars”*. A small lab submits a paper/grant which is reviewed by a larger lab head who sees the novelty, raises the hurdle a little by requesting more experiments and whips out a competing study using the army at hand. I like the idea of a cap on lifetime papers (field dependent) that would tend to lead to more substantive publications (coupled with pre-prints so that data are not held up but can also be constructively criticized along the route to final publication).

        * Also seen in celebrity where any news is good news, since it is more important to be in the news than ignored and fans seem to reward or at least are willing to tolerate bad behaviour.

  22. Very interesting indeed. Beyond all the stuff discussed thus far in the comments, the question it raised in my mind was one I hadn’t thought about before: why is the coveted first-author position on a paper given to the person who does the writing? I.e., why is that hurdle, of all the hurdles in the scientific process, given the bulk of the accolades, while the people who contributed to a project by helping to leap the other hurdles all receive less credit? What’s so special about writing? I guess some journals are encouraging a shift from this already, by asking that the contribution of each author be stated, and by allowing equal co-authorship, and so forth, but this post really highlighted the issue for me.

    • Interesting point Ben, though being first author does not have the same cachet in all fields as it does in ecology. In bio-med for instance, being last author is often seen as being more important. In fact one of my younger collaborators was recently told by a non-ecologist academic that his CV looked a bit weak as he “didn’t have any last author papers”.

      Regarding “accolades”, of course all authors benefit from citations and from having another paper to their publication list.

      • Interesting post, Jeremy. Perhaps I will start putting contribution statements in my acknowledgements.

  23. Wonderful post. I agree that the approach of taking into account many hurdles may be a very useful approach for those trying to climb the academic ladder, using the still universally accepted dominant currency of publications and citations. As a PhD I feel I must keep these currencies in mind if I want to compete in nearly any “main stream” sector of science, particularly academia. However, by accepting this currency it seems to me that it further reinforces this culture, which I get the feeling most scientists would agree is not best for science and discovery. There is a wonderful book by Michael Nielsen, Reinventing Discovery, that discusses this and offers an alternative to this culture – “open science.” I think young (and “old”) scientists, and especially the field of science, would benefit more from a reading of this book than to follow what I hope will be the antiquated way of doing science – a focus on how to get more papers published (which is what it would seem the hurdle model focuses on). Less time should be spent on how to play the game and instead more time should be spent on how to change it, especially for those who have tenure and the freedom to be more “revolutionary.”

    Here’s a link to a review of his book: http://www.nature.com/nphys/journal/v7/n10/full/nphys2109.html

    And you can preview his first chapter here: http://press.princeton.edu/chapters/s9517.pdf

    Also, one skill that is missing from the list that was perhaps not relevant during Shockley’s time but is definitely critical to have in the era of the internet and big data, is the ability to filter and organize enormous amounts of information.

  24. There is a closely parallel literature on this in sociology and economics. See:

    Abrahamson, Mark. 1973. ‘‘Talent Complementarity and Organizational Stratification.’’ Administrative Science Quarterly 18:186–93.
    Idson, Todd L. and Leo H. Kahane. 2000. ‘‘Team Effects on Compensation: An Application to Salary Determination in the National Hockey League.’’ Economic Inquiry 38:345–57.
    Jacobs, David. 1981. ‘‘Toward a Theory of Mobility and Behavior in Organizations: An Inquiry into the Consequences of Some Relationships between Individual Performance and Organizational Success.’’ American Journal of Sociology 87:684–707.
    Kendall, Todd D. 2003. ‘‘Spillovers, Complementarities, and Sorting in Labor Markets with an Application to Professional Sports.’’ Southern Economic Journal 70:389–402.
    Kremer, Michael. 1993. ‘‘The O-Ring Theory of Economic Development.’’ Quarterly Journal of Economics 108:551–75.
    Rosen, Sherwin. 1982. ‘‘Authority, Control, and the Distribution of Earnings.’’ Bell Journal of Economics 13:311–23.
    Rossman, Gabriel, Nicole Esparza and Phil Bonacich. 2010. ‘‘I’d Like to Thank the Academy, Team Spillovers, and Network Centrality.’’ American Sociological Review 75:31–51.
    Saint-Paul, Gilles. 2001. ‘‘On the Distribution of Income and Worker Assignment under Intrafirm Spillovers, with an Application to Ideas and Networks.’’ Journal of Political Economy 109:1–37.
    Stinchcombe, Arthur L. 1963. ‘‘Some Empirical Consequences of the Davis-Moore Theory of Stratification.’’ American Sociological Review 28:805–808.
    Stinchcombe, Arthur L. and T. Robert Harris. 1969. ‘‘Interdependence and Inequality: A Specification of the Davis-Moore Theory.’’ Sociometry 32:13–23.

  25. Interesting. The comment that if you do a reasonable job at everything you will do OK is encouraging. I have a good publication record but I have built it on little or no money. I simply do everything myself, and most of my papers are single-authored or written with a few others. Being able to do 1–8 yourself is a key advantage. I have come across colleagues with publication records who have never actually submitted a paper themselves or dealt with referees’ comments. I try to teach my students the entire process.
