How do you train people in a world where standard practices are changing? (UPDATED)

Via a hate-tweet ;-)* a bunch of new readers seem to have discovered my old post on how I decide which journal to submit to. That post prompted a really good discussion of how different people make this decision. Maybe the key point that emerged from that discussion is that practices seem to be changing. So currently, and probably for at least a number of years yet, different people are going to be doing things very differently. And at some point down the road, eventually everyone will be doing things rather differently than the way I do things today.

Which at some level is fine. Everybody should do what works for them. But of course, what works for you might not work for the people you train. That’s especially likely to be the case when standard practices are changing. You need to prepare your trainees for the world as it will be in the future, not just the world as it is today, or as it was when you were trained. Ethan White talked about how he struggles with this in the context of changing practices on where to submit your paper, but that’s just one example of the broader issue.

This problem is perhaps especially difficult for things on which others are going to judge you and your students, and on which we all have some incentives to conform to the practice of the majority. Again, Ethan’s discussion of how he struggles to advise his students on where to submit is a good example. But the issue is much broader than that. It also covers things on which no one will ever judge you, at least not directly. Nobody judges you on how you filter the literature, for instance–they only judge you on things like your knowledge of the literature, the quality of the ideas you’ve thought of (which presumably reflect the reading you’ve done), etc.

I have an old post where I asked “What do you wish you’d learned as a student, but didn’t?” The most common responses were stats, R, and programming. I was kind of surprised that “sequencing and other molecular techniques” wasn’t another common answer. Because as someone who doesn’t know anything about sequencing (perhaps not even enough to recognize opportunities to rope in collaborators who do know sequencing), I’m keenly aware that there’s a risk that students pursuing eco-evolutionary projects in my lab might end up with rather incomplete or outdated training in evolution. So far, I’m reasonably confident that there’s still a place in evolutionary biology, or at the interface of ecology and evolution, for people asking and answering good, interesting questions that can be addressed without sequencing. But I’m not sure how much longer that will be true, or be seen to be true. (At some point, even if it remains the case that you can still do good evolutionary biology without sequence data, many people will think it’s no longer the case simply because papers with sequence data have become common. So they’ll start demanding sequence data from you, even if those data would be irrelevant.)

So I guess my question is: how do you train your students so that, as far as possible, they don’t look back later and say “I wish I’d learned X as a student”? Especially in cases where standard practices are changing?

*UPDATE: In calling this a hate tweet, all I meant to convey, in a lighthearted way, is my bemusement at just how strongly the tweeter in question dislikes that post. But from the comments it appears that that’s not how it came across, hence this update to clarify. I’m actually not bothered at all by really strong disagreement with my views, on that post or any other. In calling it a hate tweet, I certainly didn’t mean to convey that I felt offended, or that I think the person who tweeted it is a jerk, or anything like that! Sincere apologies for any confusion or offense caused by my failed attempt at lightheartedness here.

29 thoughts on “How do you train people in a world where standard practices are changing? (UPDATED)”

  1. “hate-tweet” is a bit strong! “opinionated-tweet” perhaps? The tweeters don’t agree with your position – which you still struggle with. Doesn’t mean they hate you, or are we conflating “hate” with “dares to disagree”?

    • Oh, I don’t mind disagreement at all! But when someone says that I’m perpetuating a “hell” (!) because of how I decide which journals to submit to, I think it’s perfectly accurate to characterize that someone as “hating” the view I described in the post.

      I actually meant “hate tweet” as a joke, just a way of chuckling at how strongly Casey disagrees with me on a topic on which I have pretty common, garden-variety views. But I can see that that didn’t come across, which is my bad. I’ll update the post with a smiley face and a little footnote to make it clear.

      Not sure what you mean by saying I “still struggle with” people who don’t agree with me. What do you mean by “struggle”? Do you mean that I ignore or am dismissive of those who disagree with me, or that I personally attack them? It would help me understand your comment here if you could link to some posts or comments where you think I’ve “struggled” with disagreement.

      Here’s an old post where I talk at length about my approach to writing, and to engagement with commenters.

      • No – struggle in the sense that you appreciate the issues with chasing impact factors etc, but you also appreciate that playing the game is useful career advice.

        I agree the current situation is a “hell” of our own making (well, at least of our complicity with the bean counters controlling job or funding decisions) and propagating this to the next generation of scientists only compounds the error. But I’m sympathetic to the plight of young researchers.

      • Thanks for the clarification.

        My own view is that the “hell” of which you speak ultimately arises from the simple fact that there are many more people who want academic jobs, or substantial grants from funding agencies, than can have them. And it’s actually not the bean counters who control those decisions–academics themselves are the ones who sit on search & selection committees and grant review panels. As long as many people are chasing few jobs or grants, and as long as jobs and grants are handed out on the basis of “merit” (by which I mean “any ranking criterion whatsoever”), then everybody is going to feel enormous pressure to attain the highest “merit” possible, and feel very worried and depressed about their odds of success. And those who don’t get the jobs and grants are going to end up feeling seriously disappointed, while those who do get them will feel tremendously relieved. And I really don’t see how that would change if, say, everyone published everything in Plos One or on ArXiv.

      • If everyone published in a single journal, then merit could not be assigned on the basis of where you published – all researchers would be ranked the same under that metric. Open access is not the issue here (other than OA works tend to be more highly cited than those behind pay-walls, for obvious reasons). All too often funding awards or search committees will look at the journals someone has published in and rank accordingly, without considering the quality of the work itself or its impact. So in that sense, publishing in PLoS One etc. may be detrimental, and there is no supportable scientific justification for that.

        I discussed this with someone who chairs one of NSERC’s committees in a particular field and they said that they have a terrible time making sure committee members *don’t* do this, but it still happens.

        That is the hell we are in.

      • Thanks for the clarification. So you wouldn’t consider it to be “hell” at all if everyone published in Plos One, and then hiring and funding decisions were made (in part or in whole) on, say, the basis of metrics like number of times your papers had been cited, downloaded, viewed, shared, tweeted, blogged about, or whatever? That’s not a rhetorical question, I’m genuinely curious. For you, the “hell” is specific to papers being judged according to where they’re published? And has nothing to do with the fact that, even in a world in which we all published in Plos One, we’d all still be judged according to *some criteria or other*, with the vast majority of us still losing out according to those criteria? Because I could certainly imagine many people seeing both worlds as hellish. In each world, your ultimate professional fate is to a large extent out of your hands and highly stochastic.

        Or maybe I should phrase the question differently: hypothetically, let’s say everyone publishes in Plos One. How should people’s track records of publication be judged by hiring committees and funding agencies in that world? Given that (as is indeed the case) hiring committees and funding agencies do not have time to carefully read and evaluate for themselves all the papers published by all the applicants they are charged with evaluating. Indeed, they arguably don’t even have time to read 2-3 papers from each applicant they’re charged with evaluating. Again, I’m genuinely curious, and I don’t claim to have an easy answer to this question myself. And it seems like a question worth asking, since that may be the direction the world is moving.

      • In an ideal world, people doing good research (we can define good – I’d say impactful in one’s field, novel, etc. – but for purposes of this reply assume it is defined and agreed upon) would all get the funding they need to carry on doing it.

        We don’t live in an ideal world and there isn’t enough money to go round. In such a world, it is right to allocate monies to the “best” research whilst maintaining a broad research portfolio. Allocating money/jobs on the basis of which vanity journal researchers X or Y published in just doesn’t make sense. You can’t judge the quality or impact of the work on the basis of where it is published. Full stop. Sorry – no argument.

        Let’s leave PLoS One out of this. We could just as well talk about us all publishing in Nature or Science or Journal X. This isn’t an OA issue other than the fact that OA papers are on average more widely read and cited than non-OA works, but tend to be viewed with derision or suspicion by the people making decisions.

        In this non-ideal situation we must accept ranking. We don’t have to accept being ranked by an unscientific, fundamentally-flawed metric such as the IF.

        I don’t buy the “don’t have time” argument. If you won’t do your job properly, don’t do the job – leave it to someone who will do the job conscientiously. When you hire someone, you are writing a pretty big cheque these days. It just doesn’t compute that you’d base that decision, even in part, on something as stupid as the IF. It is done though, I know it, but it is just epically stupid!

        If we are going to rank people do it properly so it can be defended. If that needs to be resourced properly (to pay for the time) then it needs to be resourced.

        There’s no point saying well in an ideal world we would evaluate people properly, but we’re not, so tough.

        This is the hell we are in and it has to stop, and it is why people will be critical of your views on this. Because it is a stupid thing to do, to chase the IF. It is a pragmatic thing to do and it may help you land that job, but it is stupid. A counter example is someone like Ted Hart, who did the journal merry-go-round (at the suggestion of mentor/co-authors) to try to chase the impact factor; almost a year and a half later the work still isn’t published, has been scooped, and Ted’s leaving academia. We must be clear to point out the perils of what you might tell graduate students as well as informing them of the rewards.

        I don’t have all the answers either, and yes, a lot of the problems stem from the lack of funding and jobs available, but we don’t need to compound the problem by being unscientific and stupid in our funding decisions.

    • ucfagis – you didn’t answer Jeremy’s question – at least not in a realistic way. You got distracted onto irrelevant points like whether it’s Science or PLOS One that everybody publishes in.

      The two questions Jeremy asked that I am really curious to hear the answer to are:
      1) Are you OK if everybody is judged by some improved, fairer metric – still quantitative and ultimately one-dimensional – that is paper-based rather than journal-based like impact factor?
      2) If not, how exactly do you propose filtering 300 applicants for a job, or ranking the personal merit of the PIs on 50 grants you’re responsible for on a panel? Saying it’s lazy not to read and judge 3 papers from each of 300 people is naive.

      • Thank you Brian for pushing for an answer on this. As I say, I am genuinely curious.

        I would also clarify that I don’t want to limit #1 to just quantitative, paper-based metrics. I actually mean it more broadly. We have many more applicants for jobs and grants than can be hired or funded. And presumably we want to hire and fund the best ones, not just have a lottery. So how do we decide who’s best?

        I suppose that, for jobs, one possibility is just to do what we currently do. Applicants submit a cover letter, cv, research and teaching statements, maybe a couple of sample publications, and three letters of reference. And then search committees just evaluate those materials as best they can, however they see fit. In our hypothetical world in which everybody publishes everything in Plos One, the committee members obviously won’t be able to look at where the applicant has published. So they’ll just have to look at the other information (which might include any altmetrics the candidate chose to provide), and decide who to invite as best they can. It’s of course an empirical question if or how the outcome of that sort of hiring process would differ from the outcome of current hiring processes.

        For funding agencies, I suppose those that seek to fund the best individual research projects (which is most agencies, though not NSERC in Canada) might just choose to ignore the applicant’s track record entirely. Or else demand information on it solely for purposes of establishing that the applicant has the expertise required to carry out the proposed research.

        Those are just the first ideas that came to me, I make no pretense to have thought hard about the alternatives.

      • Sorry that was a bit rambling – blame WordPress as you can’t reply below a certain nested level and the reply window on the dashboard is tiny! Or blame me🙂 And I didn’t bring PLoS One into it – Jeremy did that – I was merely trying to point out that the “One” journal could be Nature or Science and my point still applied. PLoS One has specific connotations (OA, not fully trusted in terms of quality etc) and I felt that issue clouded the discussion.

        OK specifically:

        1) I did, and it is there if you look – I said that I am OK with being ranked using a fair ranking system. We can discuss what a fair ranking system is and what metrics it includes. I’m not convinced that things like number of tweets would make my ranking scheme (s/he who shouts loudest etc.), but it is important to give credit to things like outreach, code, support etc., which Altmetric and ImpactStory try to synthesise. We can’t just rank on the basis of the papers one produces: impact on a field is not solely through academic papers.

        2) I genuinely don’t see why this is so difficult; we rank research projects all the time and use peer review to do that. In the UK, where I have most experience, all research council grants are peer-reviewed and then a committee discusses and ranks the proposals taking into account the reviews. Something like that could work for hiring procedures but it would take more time and would cost more money. We peer-review papers, we do it for proposals, why not jobs?

        I’m not being naive here; I do appreciate the effort involved. But likewise, you can’t bury your head in the sand, stick your fingers in your ears, and scream Lah Lah Lah just because the alternative might involve some effort, or in the vain hope that by ignoring the problems with the ranking system you’re using, you might not hire that dud.

        That is one option.

        To be honest – it might help if position specs were more specific up front in terms of what you want to appoint. A lot of position specs are so non-specific that you do get the 300+ applications. I understand that you might not want to restrict yourself to a specific area of the given field and instead see what gems crop up in the applications, but then you can’t complain about the number of applications.

        You and Jeremy should also try to tackle this question. Ranking people in terms of IFs of the journals in which people publish is not a defensible position. What would you do? Would you still rank in terms of journal IFs even though you know they are flawed?

      • Speaking as someone who’s sat on hiring committees, grant review panels, and served as a journal handling editor, the resemblance between the hiring process and peer review is fairly superficial. So I’m not clear what you mean when you suggest that hiring committees can operate in the same way as peer review, with sufficient expenditure of time and money.

        I’m also unclear why you’re happy to take peer review as a model for how to rank job candidates and grant applicants when the whole point of publishing everything in unselective open access journals is presumably to get away from the supposed non-objectivity of peer review, at least when it comes to judging “importance”, “quality”, “novelty”, “impact”, or whatever.

        Broad job ads aren’t broadly written in the hopes that a great candidate will apply in some area the department wasn’t originally planning to hire in. They’re broadly written because one way to build a strong department is to hire broad-minded researchers. If you want to build, say, a strong ecology group around people who see themselves as ecologists (rather than as “aquatic ecologists” or “mammalian population ecologists” or some other narrower specialty), you advertise for an “ecologist”. So no, sorry, the problem here is not search committees writing over-broad ads and then complaining when they get 300 applicants.

        With respect, in a previous comment I made a tentative suggestion as to how both job search and grant review processes might work in a world in which everyone publishes everything in Plos One or a similar journal. I’ve suggested that job searches might work basically as they do now, save that search committees wouldn’t be able to judge applicants by the journals in which they’d published (since by hypothesis everyone publishes in the same place), but would have information on various altmetrics plus all the usual sorts of information applicants already provide (which can of course include things like information on software you’ve written or whatever—applicants are always free to include whatever they want on their cv’s). And I’ve suggested that funding agencies might simply ignore applicant background entirely except as needed to establish that the applicant has the expertise needed to complete the proposed project. I don’t know that those ways of doing things would be optimal, but they’d be feasible. And I don’t see that they’d be unfair (“stochastic” and “unfair” are two very different things. The outcome of a fair hiring or grant-giving process might nevertheless be very stochastic due to high sensitivity to all sorts of factors). It’s now your turn to more fully flesh out how you would see search committees and granting agencies working in this hypothetical world. I hope you will take the opportunity. You’ve said that you’d like to see, or at least wouldn’t mind, a world in which everyone publishes in the same place. Which is fair enough. I’m merely genuinely curious to hear your thoughts on the knock-on consequences of having that world come about. As I’m sure that (in contrast to me) you’ve thought a lot about those knock-on consequences.

      • Thanks for your answer.

        WRT peer review – the big issue is that on a grant panel I’m given grants I’m an expert in, so I can quickly form a high-quality opinion. That’s not so easy to do when I’m sitting on a search committee for an animal behavior slot, for example.

        I don’t specifically rank by IF and never have. As far as ranking people (e.g. for jobs), I tend to do a first-order cut by some qualitative assessment of number of papers in “good” journals (yes, an overtone of IF, but not strongly – e.g. I consider AmNat an excellent journal even though its IF rank is not as high as it used to be, and I don’t consider Science 4x as good as Ecology or whatever – you can tell I don’t know or care about more than the crude rankings of IF on journals), but it’s a first cut only – you’ll never convince me I can’t take the shortcut of ranking a paper in Ecology Letters higher than one in Northwest Midland Naturalist (even if it’s a shortcut that is only 92% accurate). After that, to go into detail, I look at paper citation rates or h-indices. It’s one-dimensional but easily available and a lot more defensible as a measure of “impact” of one’s work (when adjusted for years in print, disciplinary differences, etc.). I’d be cool with altmetrics as they evolve too. And then, for final cuts, I bring in a lot of other factors, including reading papers, talking to colleagues I trust, etc. Bottom line – crude cuts can be made with crude metrics; fine cuts need to be made with fine, nuanced metrics.

      • For what it’s worth, I’m much the same as Brian, and everyone on every search committee I’ve ever sat on is the same way. I make a crude first cut in part based on both how much and where people have published, relative to their career stage. I emphatically do not consider a Science, Nature, or Ecology Letters paper to be essential, and nor does anyone I know. Do people who have one make it past the first cut? Usually, but only because they invariably have other attributes that would get them past the first cut even without a Science, Nature, or EcoLetts paper. There really are a lot of people out there who *seriously* overestimate the relevance of impact factor (or any other quantitative metric) to hiring decisions! Other considerations for the first cut include things easily spotted on a cv, like the general area the candidate works in, competitive grants and awards received, etc. That first cut usually leaves at least a quarter of the applications, in my experience. Then I read the reference letters and research & teaching statements of the remaining applicants, along with perhaps the abstracts of their papers, and on that basis make a second cut. Then I look over the remaining applicants (usually about 8-10) carefully again, rank them, and make myself some notes on the basis for my ranking. Everyone else on the search committee does the same, and we meet and discuss our rankings to narrow things down to a mutually-acceptable list of 3-4 on-campus interview candidates. Things work a bit differently if, e.g., there are going to be phone interviews first, but that’s more or less how the process works in ecology at most places in N. America.

        I should also say that, if I were ever faced with the cv of a candidate who’d published all of their work in Plos One (I never have been), I’d recognize that that candidate is probably someone who believes strongly in the value of open access publishing. I would then simply do my best to evaluate that candidate on the other information available to me, probably including glancing at a couple of the candidate’s papers, rather than simply binning that candidate’s application because it lacked any papers in selective journals. And if in the end I decided that that candidate was among those on my final list, I’d go into the search committee meeting prepared to clarify to other committee members why this candidate probably doesn’t have any papers in selective journals. In order for me to do that, it would be helpful if in their cover letter or research statement the candidate has explained why they choose to publish exclusively in Plos One or whatever.

        Like Brian, I would be happy to look at whatever altmetric or other information a candidate chose to provide on their cv, though unlike him I don’t ordinarily go so far as to look up such information on Google Scholar or wherever. Indeed, it’s becoming increasingly common for job candidates to provide such information. I evaluate that information according to my own lights, just as I evaluate the other information in the application. So if you want to tell me how many times your papers have been cited or downloaded or shared, or what your h-index is (personally, I don’t care much about that one b/c it’s so tightly correlated with number of publications), or how many people read your blog (I put that on my cv!), or how you wrote a software package that lots of people use, or how you’re heavily involved in public outreach, or that you’re an avid juggler, go ahead!

        Now, will I necessarily evaluate that information the same way you do? Of course not. That’s why I’m leery of talk about making these sorts of decisions “objective” by heavier use of metrics. Just because you can put a number on something doesn’t make it “objective”. For instance, maybe you have some paper that’s been cited a bunch of times. But if in my considered professional *opinion* that paper is rubbish, having that paper on your cv isn’t going to make me inclined to invite you for an interview.

        And by the way, I still haven’t thought of any other way to *radically* change how job searches work that would be both practical and work at least as well as the current approach.

      • Brian, Jeremy –

        If you advertise broadly you’ll get 300+ applicants because, as Jeremy remarks, there just is too little funding and too few jobs for all the people we train to do those jobs (don’t get me started on the fixation on HQPs in Canada & NSERC awards that I’m just discovering!). Our viewpoints perhaps differ due to our backgrounds – I come from a Geography department in the UK with a very broad scope of interests where being more specific in the spec at the outset can help filter. Things may be different in say an Ecology department as you mention where everyone has their own specialism but is working within the same field (not literally).

        It sounds like you and Jeremy are doing the right thing by the selection process and I did not doubt that for one moment. If only everyone were as diligent as you two. My only concern is that you are still making value-judgements based somewhat on the perceived importance of the place in which a work was published, whether you do that quantitatively via an IF or qualitatively based on your judgement of the merits of particular journals. And number of papers is not a good metric to base anything on unless you are willing to assess the individual impact of those papers.

        I think you’ve both not quite understood my peer-review point for search committees. I don’t know if the promotion process in N. America is similar to that in the UK (ignoring tenure) or not? But promotion up from lecturer > senior lecturer/reader > prof often involves national if not international peer review of the impact of one’s work on the field in which one works. Once you’ve whittled down the initial 300 to something more manageable (not using IF) but before you select for interview, you could solicit outside evaluations of the candidates. This also helps with Brian’s point about how to select when not an expert in a particular candidate’s field – should you be making a judgement on their impact using very imperfect knowledge? Outside evaluations would certainly help there, as there are unlikely to be several members of a committee who are experts in each candidate’s field of work.

        I don’t see the hiring process being based solely on peer review – that would be silly, not least because the fit of the candidate and ability to collaborate/be a team player can be just as important as, if not more important than, hiring the on-paper-best candidate. The point is to allow those very good researchers who might not be quite as competitive on paper (when all one looks at is where people published etc.) to make it past the first few rounds of cuts by taking a much broader sense of candidates’ impacts on their respective fields.

        I think we are a long way off having altmetrics be a common feature of candidate selection processes. I would advocate that you use some of these altmetrics on your CV and, like Jeremy, would welcome it if you pointed them out for me – that helps immeasurably. But what happens if not all candidates do this? So there is a training point: encourage your grad students to note citations against papers on CVs, and use altmetrics to highlight where a study has had impact beyond the citations-in-papers metric. Don’t be shy about telling people about the good things you do; I got my new job in part because I noted up front that software packages I helped write have been cited somewhere in the region of 2500 times in the literature.

        As regards ranking PIs on projects – I think this should be much, much lower down the pecking order. Just because PI X is well known and did some good work in field Y, why should you base your funding decision on that? Surely you want to be funding the best projects? PI track record should only come into this in terms of feasibility and risk of the proposed work, not in evaluating the merits of the proposals. I’d rank the projects in terms of my view on the importance of the proposed work and only move things around if the applicant had been overly optimistic in what could be achieved or had not addressed the risks to the project not achieving its goals. All I should judge is the proposal in front of me, not the merits or otherwise of the PIs.

        The main problem with a fairer evaluation system that properly assesses impact is that it only works if everyone follows that system – if all search and grant-awarding committees do what’s right. Until such a time, I worry that the best advice we give to our graduate students is “play the game”. We should certainly be up-front about the potential rewards & the potential costs if things don’t work out.

        I would hope that as a community we could work towards goals as outlined in the San Francisco Declaration on Research Assessment (DORA). Science progresses not by virtue of publications alone. Hopefully what DORA advocates will become the norm during my career – I signed the declaration and will be doing my bit to ensure it does.

      • Thank you for your further comments.

        Re: outside evaluators of some subset of job applicants, it’s a creative idea, but I have to say I think it’s a total nonstarter, for several reasons. First of all, search committees already comprise several people independently evaluating the applicants. What is the point of bringing in additional evaluators? Second, in order for external evaluations of job applicants to be at all helpful, each external evaluator would need to evaluate *all* the applicants being considered, not just one. That means you’re asking external evaluators to look at a bunch of applications, which nobody would ever agree to do. Third, getting letters from external reviewers in support of tenure and promotion decisions is a totally different matter. Candidates for tenure and promotion have a much more substantial public track record than job applicants, and it’s that extensive track record that external reviewers are asked to evaluate (e.g., is this person an international leader in the field?). External reviewers are mostly not in a position to say anything about applicants for an asst. prof. position that cannot be spotted by the search committee members. And as a reviewer for a tenure or promotion candidate, you’re only being asked to evaluate that one candidate, which is why you can get people to agree to do it.

        As for why granting agencies might want to evaluate an applicant’s track record, how about “because it’s one predictor among others of the success of the project”? But the ways in which granting agencies ought to evaluate applicants is actually a much larger issue that goes far beyond this post, as to talk about it sensibly you need to talk about it in the context of the agency’s entire portfolio of programs, the goals of those programs, etc.

        Re: “playing the game”, I think this is a very unfortunate choice of words, albeit a common one. Everything people are asked to provide as part of a job application, they are asked to provide for perfectly sensible reasons. And the ways in which search committee members evaluate those applications–including the quick and dirty criteria that they use to make “first cuts” and “second cuts”–are perfectly reasonable. Search committees desperately want to find the best person for the job and do everything in their power to maximize their chances of that happening! And *they do their jobs well*, as evidenced by the fact that the large majority of people hired into tenure-track positions at N. American universities do in fact end up getting tenure.

        Now, do the people who sit on search committees necessarily weight various factors as you would prefer to have them weighted? Of course not. But that’s their prerogative–just as it’s your prerogative, when you sit on a search committee, to weight publications in Plos One, or software development, or whatever, especially heavily if that’s what you think will help you identify the best candidate for that particular job. I get the sense that you don’t like the fact that many of your colleagues don’t value the same things you do, or don’t value them to the same degree. But with respect (and apologies if I’m misreading you here), your colleagues are merely *different* than you, not *wrong*, and they’re entitled to their own values. Different people are always going to be different. It’s always going to be the case that people entering science are going to need to take note of what sorts of characteristics their more senior colleagues value in a “good scientist”. And some people are always going to have values that place them in a minority, which is always going to make it more challenging for them to get jobs. I myself struggled to get a job in ecology in part because I work in a lab-based model system, not in the field, which puts me in a small minority of ecologists. Brian struggled to get a job in ecology because he was seen as a number cruncher rather than someone who could teach field courses. But I *never* felt that the many search committees that failed to hire me, or even interview me, were being unfair, or failing to properly assess the information provided to them, or forcing me to play some “game” I shouldn’t have had to play, or judging applicants according to the wrong criteria, or otherwise doing their jobs badly. They were doing their jobs perfectly well, and it was perfectly legitimate of them to not hire me!

        Someday, I suspect that people who share your values as to what to look for in academic job candidates are going to be in the majority. And when people with minority values complain about being evaluated based on their Plos One papers, or the number of users of the software they’ve developed, as opposed to whatever it is that they value most, will you sympathize? When they argue that it’s unfair of you to evaluate them based on your preferred criteria, as opposed to theirs, will you agree?

        Re: training people what to put on their cv, sure. I would hope that every competent supervisor already does this. In response to your question “What if not everybody provides the same information on their cv’s?”, the answer is “So what?” It’s not up to search committees to specify cv content, just as it’s not up to them to tell applicants how to write a good research statement, or give a good job talk, or whatever.

        I’m glad that you agree that Brian and I are basically doing our jobs well when we sit on search committees. That means that most everybody is, since as I’ve said before, the way Brian and I operate is the usual way most people sitting on N. American search committees operate. Perhaps this will be reassuring to you.

      • Not sure if you are intentionally misreading what I write or not – I’ll assume the latter. I don’t have a problem *at all* with colleagues weighting various measures of impact differently, as long as the measures they use haven’t been shown time and again to be wrong (the journal IF or some variation thereof, perceived quality of journal etc). That is just plain unscientific.

        Peer review could be done without much hassle, and I don’t see why external reviewers would need to evaluate all candidates – only those remaining after a first cut. Nor would they be in a position to do this. The point is to get some independent, third-party assessment of a researcher’s impact on a field by an expert in that field. Brian made the point that not everyone, or even anyone, on a panel may be in a position to expertly judge the impact of work done at the PhD or PDF level, or to have knowledge of the contributions of a particular candidate. Candidate X may write that they did x and y and published in journal Z, which may look good on paper, but an expert in the specific area may think “so what?”

        Re track record of PIs – I agree, but only on the use of that track record as far as it pertains to seeing projects through to completion, PhD students completing etc. The esteem of a PI should not come into the equation when evaluating new proposals as that biases against new researchers and PIs.

        By “playing the game” again, I use this in reference to advising grad students & ECRs to publish in “good” journals or to chase the IF.

        Why do you keep bringing up PLOS One?🙂

        If you think most people evaluate research like you and Brian do, and how I would hope they did, then why do we have initiatives like DORA? I would say you are seeing things through rose-tinted glasses.

        I must say, it is refreshing to be challenged to defend these ideas. So thanks for stimulating the debate!

      • I don’t think I have misread you, intentionally or unintentionally. Rather, we’ll have to agree to disagree on some things.

        When you say that use of journal IF or publication venue is “wrong”, I’m aware of what you mean. But you are judging its use by different criteria than I am. I do not use publication venue as a predictor of the number of times a paper will be cited, or as a predictor of any other paper-level metric of impact or influence. Rather, papers in leading journals are indicators of several things for me. They indicate something about who the applicant views as the audience for their work. They indicate something about the applicant’s values–that is, they indicate that the applicant is seeking to do novel, important work. They indicate that the applicant is able to be successful at convincing knowledgeable colleagues (namely, the reviewers and editors of those papers) that their work is novel and important. And they’re an indication that the applicant shares my own values about what constitutes good science. Are there other ways an applicant can demonstrate these attributes? Sure. But frankly and with respect, I’m not wrong to use publication venue as an indicator of the things for which I use it as an indicator, and have not been proven wrong to do so. We’ll have to agree to disagree on this.

        We’ll have to agree to disagree on whether it would be feasible or helpful for job search committees to ask for external reviews of candidates in the manner you suggest.

        I keep bringing up Plos One merely as one example among many possible ones, as I believe my writing makes clear. The context of our discussion makes clear that I am merely using Plos One as shorthand for “single journal in which everybody publishes everything”. I noted your previous comment indicating why you choose not to use Plos One as an example illustrating your views. That I continue to use Plos One as shorthand does not mean that I am unaware of your comment. It merely indicates that I couldn’t think of another shorthand.

        I think we may be talking at cross purposes, Jeremy. My comments pertain to “impact”. Who my audience is is not a measure of impact; it might be useful to understand to whom my work applies, but that is not “impact”. I don’t accept the argument that where you publish is in some way indicative of impact, novelty etc. And this is perhaps where we differ most strongly – I don’t hold the peer-review merit badge in such high esteem. So what if I’ve managed to convince two reviewers that my work is interesting – I’m more interested in my peers using and citing my work.

        I actually think the whole novelty issue retards scientific progress – why spend months and months trying to get a paper into a hierarchical sequence of journals, wasting time reformatting etc? I’d much rather see works published and then evaluated. That is why I like the PLOS One criteria for acceptance; sound science, that your methods are acceptable and results are supported by sound analysis. The impact value of the work can then be evaluated by how often your work is cited (ignoring the issue of people having to cite work even if they are demonstrating it is wrong – but that applies to IF as well!). I appreciate this adds a burden to individual researchers (readers) as that “novelty” filter isn’t there – but then again that imperfect novelty filter isn’t there!

        I wasn’t saying you were wrong to use “where you publish” as a measure of things important to you. I said it [IF and hence where you publish] has been shown to be an incorrect metric for “impact”, and if used that way then that is wrong. Some of the things you mention above are bordering on this misuse. Where you publish is not a good indicator of novelty, impact etc. We can disagree on that, but I can support my view with papers (e.g. Brembs et al 2013) – I’d be eager to hear of counter studies in support of your usage of where people publish.

      • Most people who read your paper post-publication don’t do so critically, or even very carefully. That’s why zombie ideas persist and bandwagons get started. If you think “the crowd” always and everywhere knows better than a couple of careful reviewers, sorry, but I respectfully disagree. And there are various lines of evidence to back me up:

        There are various means to ensure better matching of papers to journals at first try, thereby saving time and reducing the burden on the peer review system, than switching to post-publication review – journal cascades, for instance, or systems where journals “bid” on mss. (EDIT: Oh, and IIRC, surveys in ecology (or was it in all of science? Not sure…) report that something like 75% of papers are accepted by the first journal to which they’re submitted.)

        I don’t need studies to support my decisions on where to publish, or the use I make of publication venue in evaluating cv’s. It’s a matter of professional judgment. All the studies you refer to supporting your point of view merely give the appearance of being “objective” because professional judgments about what to measure and how to measure it are buried in the study design. The literature you’re citing on the use of IF doesn’t show that my professional judgments are wrong or that those of the study authors are right. At least I’m being open about the fact that I’m using my professional judgment, and how, rather than pretending that “impact”, “influence”, “quality”, or whatever can be summarized in one, or even many, numbers. Not everything worth caring about in life can be summarized in numbers; the quality of someone’s science is one of those things.

        The counter argument to your critique of post-publication review is made by every bad/wrong paper that makes it through pre-publication peer review. Dare I mention Arsenic?😉 Both systems have issues with quality. I would prefer to see things published and evaluate them later; you see it differently. I’m cool with that, and I read with interest your ideas on bidding etc. when you first posted those thoughts.

        Regarding your professional judgement: well, fair enough. In many researchers’ “professional opinion” in the 1970s (or earlier?), we were heading for the next ice age (all the evidence pointed that way, until it didn’t and science moved on). I thought the scientific method was there to guard us from believing that what we think we know is correct actually is correct when it isn’t. It does worry me that (and I’m not just referring to you here) scientists aren’t very scientific when it comes to such things.

        The meme that many researchers in the 1970s thought an ice age was on the way is a myth. Please do some background research before making silly arguments impugning my professional judgment.

        Leaving aside your unfortunate choice of example, are you seriously using the fact that other people at other times have been wrong as evidence that I am wrong in this particular case?

        With respect, I think you have a very inflated view of how “objective” science can be made to be. Our choices of what questions to ask, and how to answer them, the very starting point of science, are judgment calls. That doesn’t mean they’re completely arbitrary or that all choices are equally good. But it does mean that there’s room for legitimate disagreement on the right call, and that there’s no such thing as the “objectively” right call. The scientific method is mostly there to help keep us from getting fooled once we’ve decided precisely what question to ask and how to answer it. Questions like “who is the best candidate for a job” or “what is ‘good science’” are not the sort of questions that can be sufficiently well-specified to be addressed via the scientific method. It’s very silly to think otherwise. I have described to you at length how I judge job candidates, and you’ve said it’s sensible. So frankly I’m mystified why you’re now turning around and suggesting the opposite.

        I appreciate you taking the time to comment at such length and I’m glad you’ve found this lengthy exchange useful. I hope readers have as well. But I don’t think there’s much more useful to say, so I can’t promise to reply to any further comments.

        Re the ice age thing – Again you misunderstand. Some researchers in all honesty thought that an ice age was imminent. They were so sure that there was going to be one that they wrote to the then President to urge action. That is not myth; it is fact. What you link to is a critique of using the fact that the scientists were wrong then as a reason not to believe scientists now about global warming. Two different things.

        As you’ve revealed more about your process, I’ve found more to argue against. I fully agree that judgements on who is the best candidate for a job are not things that can be reduced to the scientific method. But whether where a work is published indicates its novelty and impact can be quantified and assessed, if one is willing to use certain definitions of impact. When this has been done, journal-level metrics have been found wanting. That is really the main argument I have. If you want to base your judgements on that, even in part, fine, I’m not trying to stop you. But there are issues with that approach.

        I think we can agree to leave this one here. At least I will.

  2. I like the idea of merging classic advisor-based training with training that can come from external sources. I got a lot of training from other folks on my committee and other mentors I sought out as a student. My hope is that I’ll be able to provide good training to my students in the things I know about, while also being open-minded about the things I don’t know well but can acknowledge are useful tools that are important for a student to learn. I think this is some degree of deviation from the classic model where your advisor knows all and does all the training, so it takes the confidence of the advisor to encourage their student to learn something new. Perhaps it’s just the collective body of advisors being humble enough to say “I have no idea how to do that, but the idea seems really cool”. As a student that was enough motivation for me to go seek out help elsewhere.

    On another note, I’m curious what you thought about the most recent eco-evo paper in Ecology Letters. They use molecular tools that I think offer some good insight into what might be going on with their data.

  3. I try to express to my undergrad researchers that they have to get used to learning new techniques as they need them. Once you have a thoughtful, well reasoned question as your guide, find the techniques and collaborators that you need to answer it. As an example, I often point to a senior colleague at my institution, a field-based evolutionary ecologist in phased retirement, who received his PhD in the era of punch cards. Over the years, he assimilated allozymes, gene sequencing, GIS, computer modeling, web-based collaboration, and, in the last couple of years, stable isotope techniques into his research. None of these techniques were prevalent, most of them didn’t exist, and a few could probably not have been imagined when he finished his highest degree.

    So I say, train our students to be thoughtful scientists who know that the technical and cultural standards of their field will always be in flux. That way, instead of saying, “I wish I’d learned X as a student,” they simply smile and say, “Cool! I get to learn X!” (or at least, “Oh well, guess I have to learn X…”). Continuous, challenging learning is just part of the job for a practicing scientist, and to me it is one of the best parts of the job.

    • That’s a really good answer.

      The only question I have is, how do you apply this in the context of things like deciding where to submit your papers? And other contexts where there’s some fairly strong incentive to conform to the practice of the majority? Tricky to give trainees advice about that stuff when the majority consensus is in the process of breaking down. Or am I wrong to see that sort of situation as any different than, say, learning to use new techniques like stable isotopes?

      • Good point, and I thought about getting into that but did not want to ramble. However, to use my elder friend as an example again, he was editor of a journal for decades out of his office handling typed submissions through the mail, but went on to publish papers electronically and produce his own websites, writing his own html. I have had other senior colleagues contact me about what to make of publishing venues like PlosONE.

        The point is that they (our students – heck, us too, so WE) have to be ready for both technical and CULTURAL changes to our science, because both are driven, in part, by technological advances. Like Buzz Holling always points out, we have to expect to be surprised.

        So I guess I don’t see much of a difference.

      • Again, a very good answer. I don’t know if I agree 100%, only because there may be technical and cultural changes to science that we want to train our students to ignore or even resist. But that’s a very good answer.

  4. For techniques and skills, I agree with Drew: train them to be ready to constantly learn new things as needed. Ways to pick up these techniques and skill if not from one’s advisor: others in the department, both other faculty and peers; workshops offered locally or at conferences; sit in on / audit courses in other departments.

    For knowledge and rules-of-thumb sorts of things, like what journal to submit to, encourage students to get multiple perspectives – especially from scientists at different stages in their careers. Blogs like yours are already contributing to this…
