Guest post: many American universities use score sheets to rank faculty job applicants

Note from Jeremy: this is a guest post from evolutionary ecologist Ruth Hufbauer. Thanks Ruth!


In helping a friend with some academic job applications, I recounted my experience on search committees. As it was eye-opening to my friend, I thought it might be useful to others who are applying to academic jobs. The approach taken at my university is not taken everywhere, but my impression is that it is fairly common.

The excellent and thorough post written by Jeremy Fox a couple of years ago covers how search committees for tenure track positions work from start to finish in North America, particularly (given our fields) for Ecology/Biology positions. I will not reiterate what he wrote. It is a useful post even if you are not a biologist.

My experience differs from his in how members of the search committee come up with a short list. At my R1 state school (Colorado State University, ~32,000 students, has a vet school, no med school), part of the process of conducting a search is creating a score sheet to rank applicants.

The job ad is critical because it provides the framework that the search committee uses to create the score sheet. Each of the minimum and preferred qualifications listed in the advertisement can be incorporated into the score sheet. If something is not in the ad at all, it cannot, per Office of Equal Opportunity regulations, be listed in the score sheet. That often leads to individual words being debated in writing the ad, as Fox noted in his post.

The upshot is that we do not have individualized rankings of candidates. The score sheet, like the example from an actual search linked to above, structures how the members of the search committee rate each candidate. The rating is done on a set scale that can be fairly coarse (e.g. out of 20 maximum) or it can attempt finer gradation (e.g. out of 100). Members of the committee score each candidate in each of the different areas: research, teaching, grant writing, postdoc experience, collegiality, etc. These areas are often awarded different point values, according to what the position entails (e.g. a position with more teaching would allot more points toward evidence of teaching experience and ability). There can be heated debate on the search committee about what should be emphasized on the score sheet.
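To make the arithmetic concrete, here is a minimal sketch of how one committee member's tally works. The category names and point caps below are invented for illustration; they are not taken from an actual CSU score sheet:

```python
# Hypothetical score sheet: each category carries a maximum point value
# reflecting how much the position emphasizes it (here totaling 100).
MAX_POINTS = {
    "research": 40,
    "teaching": 25,
    "grant_writing": 15,
    "postdoc_experience": 10,
    "collegiality": 10,
}

def total_score(ratings):
    """Sum one committee member's ratings, capping each category at its max."""
    return sum(min(ratings.get(cat, 0), cap) for cat, cap in MAX_POINTS.items())

# One rater's scores for one candidate:
candidate = {"research": 35, "teaching": 18, "grant_writing": 12,
             "postdoc_experience": 8, "collegiality": 9}
print(total_score(candidate))  # 82 out of a possible 100
```

A teaching-heavy position would simply shift points from "research" to "teaching" in the table above, which is exactly where the heated committee debates come in.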

Using a set score sheet is not unique to my university. A colleague who is a chair at a major west coast research university system confirms that they use something similar there, and that they use it system-wide across all disciplines.

For something like publications, the search committee typically takes time since PhD into account, and indeed, I have even seen committees create graphs of the productivity of the top 20 candidates by time since PhD (separated into first-authored vs. total publications).
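A rough sketch of that kind of normalization, assuming the committee simply divides publication counts by career age (the exact method is surely committee-specific):

```python
def pubs_per_year(total_pubs, first_author_pubs, years_since_phd):
    """Publication rates normalized by time since PhD (hypothetical metric)."""
    years = max(years_since_phd, 1)  # treat brand-new PhDs as one year out
    return {
        "total_per_year": total_pubs / years,
        "first_author_per_year": first_author_pubs / years,
    }

# On this measure, a candidate 3 years past the PhD with 9 papers looks
# comparable to one 6 years out with 18.
print(pubs_per_year(9, 6, 3))    # {'total_per_year': 3.0, 'first_author_per_year': 2.0}
print(pubs_per_year(18, 12, 6))  # {'total_per_year': 3.0, 'first_author_per_year': 2.0}
```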

If a job attracts many applicants (e.g. >150) then often there will be a first cull of candidates who do not meet the minimum qualifications stated in the advertisement. This is typically done by just two members of the search committee. Not meeting the minimum qualifications can be things like not quite yet having a PhD, or having a PhD in an area different from that stipulated in the ad.

I haven’t personally served on a search that I would consider very large (with more than 200 minimally qualified applicants). In the last big search I was on (~90 applicants who met the minimum qualifications) every member of the search committee scored every single file.

The goal with this type of ranking system is to look at each applicant’s qualifications more objectively and holistically – not focusing solely on publications, and considering more completely what applicants bring to the table rather than, for example, their academic pedigree. It is by no means entirely successful in that effort, but it is a start. I find that it forces me, even when tired, to really evaluate each candidate in all the different areas, and not skip over aspects of an application.

A scoring system like this means, however, that applicants are reduced to a number on a spreadsheet. Major differences in the individual scores given by members of the search committee are discussed in detail, but really only for the upper echelon of candidates. The majority of the applicant pool is not discussed; only applications in the top ~10-20 are individually discussed. That group is often divided into a ranked list of potential interviewees and a cut-off below which candidates are not acceptable to the department. The short list of candidates called in for interviews is taken from the top 3 or top 5 (after discussions, during which some scores might be adjusted if another member of the search committee points out something others hadn’t noticed). The upshot is that whether or not you make the top 20 is not at all a judgment of you personally. It is done by the numbers.

I’m curious: for those searching for an academic post, have you heard of this type of rating system before?

And for those who serve on search committees: do you use this approach, or a more individualized/personalized ranking like the one at Jeremy Fox’s university?

Thanks to Josh Drew and Jeremy Fox, whose suggestions and questions improved this post.

43 thoughts on “Guest post: many American universities use score sheets to rank faculty job applicants”

  1. As a job seeker, I’ve heard of the culling, the short list, the short short list, etc.

    Scores on a page from 0-100, however, is news.

  2. I chaired a big search two years ago and this is pretty much exactly what we did. Since many of us are so used to sitting on NIH review panels and getting summary statements back, it was pretty much ingrained in us to use a 9 point system (1=best, 9=worst). We had two committee members review each person and our initial cut/discussions went pretty much like a standard study section.

  3. Nice post Ruth. This is news to me–you learn something new every day!

    “The upshot is that whether or not you make the top 20 is not at all a judgment of you personally. It is done by the numbers.”

    I know what you mean, but the pedant in me would quibble with the phrasing a little. As you say, the categories on the score sheet, and their weightings, are professional judgment calls on which people can and do have reasonable disagreements. And there’s also professional judgment involved in deciding how to score applicant X’s publications or teaching experience or whatever. Not that those judgments are totally subjective or arbitrary–they’re not. But the scores themselves ultimately reflect professional judgments earlier in the process.

    None of which is a criticism of score sheets, by the way. As you say, as a search committee member, it can be very helpful to have a device that forces you to pay attention to every aspect of the candidate’s file, even when you’re tired or whatever.

    I will say that scoring 90 files seems excessive to me. I’m curious if it felt that way to you. I’d have thought that a quick first pass through the cv’s would be able to weed out many applicants who are minimally qualified (i.e. they have a PhD in something related to the area of the search) but who just aren’t competitive compared to the best 10-20 applicants in the pool. Think of it this way: if the scoresheet weights publications, say, 20%, then there’s no way anyone who scores below some threshold value (5? 10?) is going to be among the top scorers in a big applicant pool, even if they have perfect scores in other areas. So I would think that you could do a first pass through the cv’s, eliminating applicants who don’t, or obviously wouldn’t, attain some minimum score in one or two key areas. Wouldn’t this be a more efficient way to end up at the same place you end up if you fully score all 90 minimally-qualified applicants? I mean, I’m guessing that a substantial fraction of those 90 people ended up with scores way below those of the top 10-20 applicants, suggesting that full scoring wasn’t necessary to see that that substantial fraction wasn’t competitive.

    • It was grueling to score all 90! And yes, a more stringent first cut would be much better. I am not sure if/how the Office of Equal Opportunity would allow it, however.

      I think this kind of system, while good in principle, has the potential to creep into a disaster of administrative hoops. That is definitely something to watch out for.

    • There is some variation between campuses. Some campuses don’t bother to score the obvious no fits (or I think more technically they may just give them all 10 out of 100 or something). Then the top pool (but still much larger than the list who would get phone interviews or whatever) get scored.

      Also, this is implicit in what Ruth said but this is driven by EEO (Equal Employment Opportunity) offices as a way of avoiding implicit biases that Meg has written about.

      I personally am very torn about this system. I find that it very often produces rankings of candidates differently (I would say wrongly) in comparison to if I just ranked them. This is because it depends heavily on getting the weights between the categories right, assumes no interactions between scoring categories, etc. On the other hand, the motive is quite good (avoiding implicit biases).

      • “I personally am very torn about this system. I find that it very often produces rankings of candidates differently (I would say wrongly) in comparison to if I just ranked them. ”

        That would be my worry too. Which I assume is why the weightings get debated so much right at the beginning.

        Honestly, if there was a candidate who I really thought needed consideration, I would find a way to score them high enough to get them into the conversation about the top 10-20.

      • Jeremy said: “Honestly, if there was a candidate who I really thought needed consideration, I would find a way to score them high enough to get them into the conversation about the top 10-20.”

        Yes, that definitely happens. We evaluate the importance of publications differently, for example. When scores are different, we discuss them individually, and sometimes people decide to change their score after they’ve thought about an applicant from the perspective of others. Thus, in the end, there’s a very personal element about it, but we’ve at least tried to think about and evaluate all aspects of a candidate, not just one or two.

      • Hi yes – I agree with Ruth’s post and Brian’s reply. We use a ranking system here, driven by EEO policy, and can sometimes do a quick Y/N at first if someone doesn’t have the necessary degree or something like that. But getting the weights right between categories is important.

        And to answer your other question – When I was on the job market, I had no idea that this was how things were done anywhere!

      • I certainly didn’t know either! Though back then (yes, once upon a time, a long long time ago when I got my job) they probably didn’t have score sheets. When I first saw one as a member of a search committee I thought it was a terrible idea, but I’ve come round to appreciating how it makes me look more closely at the applicants. As long as it doesn’t creep into more administrative paperwork, I think that is good.

  4. Reading this strongly highlights how competitive getting an academic job is. About 90 candidates who meet the basic qualifications for a single position is a lot! And it sounds like that number might be on the low end. I realize this is not the point of this blog… but reading this, I am very glad I decided not to go the PhD route. When I was in high school, I figured I’d become a biology professor (probably ecology or botany) because I love science and everyone told me there was a shortage of scientists, mathematicians, and engineers. Now I’m a science teacher at the secondary and primary level. Interestingly, many of my students and their parents still believe that there is a shortage of scientists, mathematicians, and engineers. A lot of my students feel, as I felt when I was a teenager, that studying science is not only interesting for its own sake, but also a good career plan. Don’t get me wrong – I still love science, I think everyone should have good foundations in science, at least through the high school level, and I’m glad to have a bachelor’s in biology. But with merely making the top 20 candidates for an academic science position being framed as a bit of an accomplishment, I think we ought to stop telling kids that there is a big demand for more scientists.

    • I wouldn’t conflate scientists with faculty. There can be intense competition for limited faculty places but society really needs well trained scientists (or whatever you want to call people with good math/science skills & training). We collectively train a large number of PhDs each year and the reality is that there aren’t necessarily positions for every one of them. One complaint that can be levelled is that as mentors/trainers of graduate students we collectively aren’t great at preparing people for those alternative career paths.

      • Most jobs don’t have that many applicants! And yes, there are many other ways to do science outside of the academic route.

      • You make a good point that scientists are not the same as faculty. And I am relieved to hear that most positions don’t actually get 90+ qualified applicants! I would be very interested in reading about alternate career paths for ecologists and other non-medical biologists. There’s tons of work for biologists to do… but I’m not aware of that many opportunities to get paid for doing it. I have looked for alternate career paths, but possibly in all the wrong places. Given the tight job market for scientists that I perceive, I usually advise my students not to plan on a career in biology unless they a) are interested in medicine, b) want to teach at the secondary level, or c) are truly passionate and willing to take a risk.

  5. Score sheets of this type are common in European universities for both hiring and grading tasks. They both depersonalize and standardize practices which were largely “decided among friends”. It’s been one of the positive outcomes of EU attempts to homogenize academic systems and build in some level of transparency where there was none before.
    Unfortunately, the adoption of this practice is far from uniform. From my experience, for example, the French research system is certainly way overdue for reforms of this type.

    • Interesting – thanks. I know the French system better than I know others, as it happens, and yes, it certainly does seem that ‘deciding among friends’ happens a lot!

      • My understanding purely from reading news stories in Nature and Science is that Italian and Spanish academia are this way as well–hiring is all a matter of who you know.

    • In Italy a committee of 3 full Professors decides the criteria. Criteria that are adjusted to favor THE candidate (who may or may not be among the best applicants); I was going to add a “this is an oversimplification” disclaimer, but actually this is how it works. Peer pressure works fairly well unless the peers are friends. This is not even the biggest problem, though, since the biggest problem in Spain and Italy is the lack of positions.

      • Fascinating how it’s done in Italy and Spain. And yes, I agree that the lack of positions is crucial in the end. Therefore the need to (1) lobby for the importance of scientific research and education and (2) prepare for careers outside of academia (see Jeremy’s links above).

      • I absolutely agree with preparing for careers outside of academia. I still remember when my colleague Jark Giske at a lab meeting said: “we would like to ask people who left academia how the life is out there, but they never came back to tell us”.
        As for lobbying, Southern European countries have been cutting funds for research since 2008 and I do not see any increase in funds in the near future. As for US colleagues who dream about working in France, I strongly recommend that they ask about the salaries before making the move…

      • I entirely agree with what you said about French salaries – they are appallingly low, and promotions or mobility are rarer than unicorns. It is probably a good topic for a whole other blog post!

  6. I’ve been on a number of searches, spanning three institutions, two of them R1, and this is the first time I’ve heard of such a score sheet. Interestingly, here at Berkeley we use something like that for hiring staff. We’re required to write up a standardized list of questions asked of every interviewee for a staff position, but there is no standard structure for faculty hiring. I do think that the lack of structure makes the composition of the search committee key to who actually gets hired. I’m not sure, however, that it would be possible to impose such a check sheet at my institution.

    Also, it is interesting to me that, over the years, the number of people applying to tenure track jobs at Berkeley and U Chicago always seems smaller than the numbers I’ve heard were applying to similar jobs at slightly less highly ranked state universities. I think, unfortunately, that there is a lot of self-selection by applicants, so that we don’t actually see all the best candidates. We also rarely see applicant pools with >20% females. I often see a similar phenomenon regarding various society awards – often very few people are nominated, which means that award committees aren’t even seeing the whole field. The moral of this paragraph is that you won’t get that plum job or award if you don’t apply or get nominated.

    • “the number of people applying to tenure track jobs at Berkeley and U Chicago always seems smaller than the numbers I’ve heard were applying to similar jobs at slightly less highly ranked state universities. ”

      The number of people applying for any given tenure track job varies hugely, even among seemingly-similar jobs at seemingly-similar institutions. Everybody remembers the crazy-high numbers they hear (200! 300! I even heard of 400 once). But as you say, those crazy-high numbers aren’t typical. At R1’s, I’d say anything from 40-200 applicants isn’t all *that* unusual. But even that’s just a guess based on anecdata.

      “I often see a similar phenomenon regarding various society awards – often very few people are nominated”

      Yes, many people are very reluctant to nominate themselves for awards, or even ask others to consider nominating them. They see it as “self promotion” in the worst sense. Which still puzzles and surprises me, but there it is.

    • In my department (granted, I’m in the Ag College) we have many many fewer applicants. 40 is considered a big pool, and often only 10 of those are actually viable candidates. We use this ranking system for smaller searches as well.

      I’m surprised that Berkeley doesn’t use a system somewhat like this. I know at least some other UC campuses do.

      And yes! You can’t get the job, grant, or award if you don’t apply. And you need to apply. It’s taken me years to realize that “nominations” for teaching awards at my U actually stem from the person being ‘nominated’, who asks their friends and colleagues to do it. It’s obvious in retrospect, but I took the word ‘nomination’ far too literally.

      • “It’s obvious in retrospect, but I took the word ‘nomination’ far too literally.”

        In the comments on that old post on “self promotion”, it was suggested that awards committees invite people to “apply” for awards, rather than seeking “nominations”. I’m not sure how much, if any, difference this would make in practice, since after all it’s still an award rather than a job or a grant (the latter being things that scientists are used to “applying” for). But it might be worth a shot.

    • One reason might be that in last year’s call for an Asst Prof job in Integrative Biology at UC Berkeley, only one of the 5 short-listed people was a postdoc. The others were either Asst Profs or Assoc Profs at other institutions (info coming from the ecology job wiki). Since the majority of applicants for Asst Prof positions are postdocs, if chances are this thin, fewer postdocs apply and so you see fewer total applicants.

  7. I’ve been involved in searches at three different institutions (a small liberal arts college, a regional state university, and a private teaching-focused university). There was pretty good convergence of methods. In no case did we start out with these quantitative rankings. It wasn’t until we were trying to narrow a long list down to the phone/Skype interview list that we started getting quantitative. Before that, everybody read all of the applications, we all sorted candidates into ‘highly consider’, ‘maybe consider’, and ‘definitely not’, and if anybody was serious about a candidate then we had a discussion about them; we ended up narrowing down to a top 10 or top 20 pretty easily. Then you get into more detailed considerations to narrow things down, using consensus and compromise.

    • Interesting. Thanks. And yes, with discrepancies (i.e. someone is really interested even if others aren’t), we discuss, too, no matter what the score.

  8. I do not know why Institutions are not more transparent regarding their hiring practices. Something like: we assign 0 to 100 points to the following a) publications b) years since PhD c) teaching d) citations etc. Then, a shortlist of 15 people is created and we proceed with phone interviews. Or something else if a different method is used.
    It seems to me that the secrecy surrounding the hiring “material and methods” (including confusing statements like “we are looking for a terrestrial ecologist but anyone interested in spatial ecology is encouraged to apply”) leads to huge inefficiencies, waste of time, and frustration for anyone involved. Postdocs applying to 70 calls each year, faculty members writing 70 letters of recommendation (maybe for 10 different postdocs on the job market, for a grand total of 200 letters sent out each job season by someone like my ex-advisor), committees dealing with 250 applicants for 1 job.

    • “It seems to me that the secrecy surrounding the hiring “material and methods” (including confusing statements like “we are looking for a terrestrial ecologist but anyone interested in spatial ecology is encouraged to apply”) leads to huge inefficiencies, waste of time, and frustration for anyone involved. ”

      See my old post, which Ruth linked to, for comments on why institutions sometimes write quite broad job ads, or ads for which the rationale may be unclear to outsiders. In general, there is almost always a very good reason why the ad is written as it is. But those reasons are invisible to, and cannot easily be made visible to, outsiders.

      FWIW, your ex-advisor is *very* unusual in having to write anything like 200 reference letters in a single job season! And as Ruth noted, very few ecology and evolution jobs get anything like 250 applicants.

      EDIT: I’d add that, insofar as some jobs get huge numbers of applicants, many of whom aren’t at all competitive, I don’t think that has much to do with job ads being overbroad or unclear. Ultimately, it’s because many more people would like to have a faculty position than there are faculty positions. That would be true even if job ads were written more narrowly. And while it’s true that some people don’t apply for jobs for which they’d be competitive, many people apply for many jobs for which they aren’t competitive for all sorts of reasons having nothing to do with the breadth or clarity of the ads. For instance, when I was a grad student, I applied for some faculty positions, just to get the practice of putting together an application. In my experience, having served on multiple search committees, only a minority of uncompetitive candidates are uncompetitive because they don’t do anything remotely like what we’re searching for. In my admittedly-limited experience, most uncompetitive candidates are uncompetitive because their publications and/or overall qualifications just aren’t anywhere close to those of the strongest candidates.

      • My source is the ecology job wiki and between 150 and 200 applications sent to R1s are relatively common (maybe just a few at 250). There is certainly the problem of insiders and outsiders looking at things under a different light (I read your post btw), but I am very confident that lack of transparency is not the right direction, especially in this age of open data, open science, all open. Institutions can get away with it because when you have more than 50 qualified people applying for jobs, 20 of whom are highly qualified (multiple postdocs, grants, several publications, enthusiastic recommendation letters), you can get away with basically anything. Finding a postdoc applying for jobs who is ok with the hiring practices in academia would be very challenging. In other jobs, not so much.

      • Simone says: “I do not know why Institutions are not more transparent regarding their hiring practices. Something like: we assign 0 to 100 points to the following a) publications b) years since PhD c) teaching d) citations etc. Then, a shortlist of 15 people is created and we proceed with phone interviews. Or something else if a different method is used.”

        I think the main issue there is not trying to hide the process, but rather that it’s just ordinary people, overly busy like all of us are, handling the job of writing ads and coming up with the score sheet. Typically the score sheet isn’t quite finalized at the time of the job posting.

        About the letters – many jobs don’t require letters up front (much more civilized that way!) So only the references of people who are on the short list have to write letters. After one is written (a long process), modifying it to keep it both up to date and relevant to the specific job at hand isn’t difficult.

      • Jeremy, let’s start from the goal, because we have a moving target here. Of course, the goal of the institution is to hire the best candidate, so we see all the bizarre things like the same job posted 3 years in a row because the top candidate decided to go somewhere else, and so on. At the same time, in the spirit of small steps towards a better world, and since most research is paid for with public money, I think it is a disservice, and in general not respectful, not to be upfront and transparent regarding the criteria. You say that people apply to all sorts of jobs because of the unfavorable odds. I answer that while some people do not get it, others will. I think it is very bad for science in general that postdocs every fall spend hours and hours of their time preparing application packages (it is not uncommon for otherwise very smart people to send more than 50 job applications every job season, something I always refused to do) and sending requests for recommendation letters to faculty members who are busy and have other people claiming their attention, instead of spending that time doing research (which is what they are paid for). Of course there are all sorts of don’t-ask-don’t-tell situations: calls for Asst Prof positions for which all interviewees are already at least Asst Profs; others in which all interviewees are non-gender-diverse; others in which only people with PhDs from very specific institutions are interviewed. I am not complaining about the choices (I would prefer a system in which universities actively recruit instead of receiving applications containing strange things, like teaching statements in which all applicants love teaching and were praised by students, and in which 100% of postdocs are in the top 5% of postdocs), and I am not specifically discussing my experiences in places where I have applied. But I know a lot of postdocs and there is big discontent with these practices.
        However, whoever does not get a job leaves academia, never to be seen again, and the discontent is kept at manageable temperatures. As I said before, being an academic is an uncommon profession, since you can only work in academia and for a handful of places, and these non-transparent, inefficient practices can be maintained long-term. In different markets, those practices would have a very short life. I understand it is not easy to understand for people on hiring committees, as there is a strong selection bias. I still find it absurd that I spend time preparing applications for places where I have zero chance of getting an interview, for reasons I do not know or find out only after n years of job applications.

    • There really isn’t any literal “secrecy” here. Regardless of the formula of the evaluation matrix, essentially all institutions value the same things (see Jeremy’s post on faculty searches), i.e., mainly science output (quality, quantity, etc.), fit, teaching, and so on. Most search committees search very broadly, not knowing who will apply / what they’ll get (and to make sure the application numbers are high – it would be really embarrassing for a search to get a relatively small number). And also because they rarely know what they are looking for specifically, or at least they are rarely, if ever, in agreement.

      I’ll have to disagree with Jeremy and Ruth on the number of applicants, though. At US institutions, there are usually 150-300 applicants for jobs in ecology/biology. But the odds aren’t as bad as they seem; at least half of the applicants aren’t remotely qualified (e.g., they may have a PhD in physics, or in some sub-discipline of biology). Whenever we do searches for ecologists at UNC, maybe half the applicants are evolutionary biologists, population geneticists, behavioralists, taxonomists, etc., which inevitably leads to debates about who is and isn’t an “ecologist” (my definition: at minimum, you publish in Ecology (or similar) and go to the ESA meeting (ditto)).

  9. It is also “funny” asking faculty members for advice on the application package. Basically, apart from having a great CV, everything else is extremely subjective. Since there are peculiarities in each institution/department, as I gather from this post and Jeremy’s and a fairly large number of other faculty members, each faculty member seems biased toward their own hiring practices.
    Things like proposing research projects with faculty members (for me it is a no; for some faculty members it may help), how long the cover letter should be (from 1 to 3 pages), re-applying to jobs for which you were previously rejected (yes, maybe, try to ask, you never know), how long and detailed the research statement should be (from 2 to 5 pages). Then, since you do not know why you were rejected (especially when you think you are highly qualified for that job), you go back to “maybe I need to focus more on this aspect in my cover letter”, “maybe I should cite this collaboration”, “maybe I should separate my pub list into Ecology and Evolution”: an endless circle of frustration that I am sure is shared by many other postdocs and is helpful to no one.

    • Simone, I don’t know if this will be any comfort, but in my experience your CV counts way more than anything else for a job at a research university. Most of the categories that Ruth is talking about could be rated based only on the CV, without reading the cover letters or statements. Maybe the teaching statement is a little more important, since it conveys information that often isn’t on a CV. Letters can be important to make distinctions among the very top group, but that’s out of your control. It can’t hurt to have a great cover letter and a polished research statement, but those things probably won’t come into play unless your CV gets you into the top group of 10-20.

      • I absolutely agree with Peter here – it’s the CV that counts. The rest is nice but is not the heart of the matter. CV meaning (largely) pubs, funding, talks, teaching, awards.

      • I totally agree. The CV is really all that counts (and I don’t mean the style! It’s the substance that committees zero in on), especially if you are using a matrix to evaluate candidates. Personally, I think letters of recommendation are useless (and a huge source of bias) and I always try to convince colleagues to ignore them.

    • If you think you’re a good fit somewhere and don’t make the short list, then I’d definitely take the time to ask people there what you could most effectively improve in your application the next time around! And if it’s a job you think you’re a good fit for, I’d also recommend contacting the chair of the search committee prior to submitting your application to get a sense of what (with respect to e.g. the formatting issues you mention) is expected in their department. It really is a good idea, and if you’re a good fit, anyone on the search committee should be trying to woo you, and will be happy to talk to you about how to make your application strong.

      • Thanks Peter and Ruth for your comments and advice; I appreciate it.
        I do not necessarily want to write about my own experience with job applications, but I tried a couple of times to ask for clarification and the answer was “there was nothing wrong with your application, but there were so many exceptional candidates”. I must say that I would give the same answer, especially in a sue-prone society like the US. It basically repeats the hilarious content of the rejection letters, which I suspect have been drafted by the same communication agency, since they use almost the same wording, e.g. “we appreciated your application, but this year there was an exceptional pool of applicants…”, which seems to imply that if a less exceptional pool of applicants applies next time, maybe they will give you a call.
        Another aspect that I find very strange is the absence of official documents regarding the selection process and its results, while internally it seems that the committee has to follow strict guidelines regarding equal employment et similia that nobody outside is able to see. I bring the example of my home (not scientific) country, in which after every call an official document is produced and published online with the grading of each candidate, along with an analytical assessment. (The fact that there are problems with the grading and the selection process in my home country is off-topic.) I do not see why institutions adopting the Excel-grading system could not do the same and publish the grading online. Actually, I know: it is because transparency has to be enforced (as in my home country, where universities are all part of the public administration); otherwise all sorts of justifications are used for not being transparent. It is the same with code and data in publications: if it is not enforced, the large majority of scientists publish neither code nor data, mostly because (this is entirely my opinion) they are afraid of having made mistakes (being scooped is mostly an excuse).
        I conclude by saying that I am not blaming hiring committees for hiring processes that in my opinion should be dramatically changed. These inefficient, strange systems are typical of markets in which there is a huge asymmetry of power.

