
Improving Research Assessment in the Triage Phase of Review

Authors: Helen Sitar is the community coordinator for DORA and a Science Policy Programme Officer at EMBO; Anna Hatch is DORA’s program director. DORA receives financial support from PLOS via organizational membership, and PLOS is represented on DORA’s steering committee.

When seeking to fill an open faculty position, it’s common for universities to receive hundreds of applications for a single job. Unsurprisingly, such a high level of competition creates a bottleneck early in the hiring process. The ‘triage’ phase of review therefore becomes a critical point, as it determines which applicants receive further consideration.

The ability to efficiently yet effectively triage applications has been a long-standing challenge for the academic community. Time constraints make triage prone to the influence of unintended biases and proxy measures of success, such as the journals where the candidate has published or their institutional affiliations.

Meanwhile, consistency between reviewers presents another challenge. Even when selecting for the same purpose, processes and standards can vary greatly between individual reviewers or committees, due to differing assumptions about the process or the criteria for making judgments. Reviewers may prefer certain institutions, labs, or journals, or attach different values to funding track records. If reviewers are unaware of their own biases toward certain community affiliations, these can influence decision-making. Ultimately, biased hiring processes impede underrepresented and minoritized scholars from progressing in their academic careers, resulting in inequalities in academic leadership and beyond. Without any intervention, one current model of postdoc-to-faculty transitions suggests that faculty diversity will not increase significantly until 2080.

Testing three approaches to triage

To stimulate discussion of the challenges associated with the triage process, and to identify possible solutions, DORA organized an interactive session for the 2019 ASCB | EMBO Meeting. Through a mock-review exercise, participants worked in small groups to test different approaches to the triage process. The seventeen participants, who represented different groups and career stages (postdocs, faculty members, administrators, publishers, and staff from other non-profit initiatives), were given the description of a faculty position to ‘hire,’ general instructions on how to conduct the triage, and a set of anonymized application materials (CVs and cover letters). Because the session was limited to 75 minutes, other application materials, including letters of reference, teaching statements, and diversity statements, were not included in our informal exercise.

The application materials were anonymized, and the author names from the references listed in the bibliographies were also removed. A note under each reference, however, indicated the applicant’s position in the author list (e.g. “2nd author, 8 authors total”). The CVs and cover letters provided to two of the participant groups were further altered by removing institutional names and journal titles wherever they appeared. One of these groups was provided with an assessment matrix to guide the evaluation of different competency areas: research, teaching and mentorship, equity and inclusion, and service to scientific or academic communities.
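For readers curious how the annotation step could be automated, here is a minimal Python sketch of the reference anonymization described above; the function name and data shapes are assumptions for illustration, not a tool used in the session.

```python
# Hypothetical sketch: strip author names from a reference, keeping only
# the applicant's position in the author list. Not a tool used in the
# DORA exercise.

def anonymize_reference(title: str, authors: list[str], applicant: str) -> str:
    """Return the reference title annotated like '2nd author, 3 authors total'."""
    position = authors.index(applicant) + 1
    # Ordinal suffix: 1st, 2nd, 3rd, 4th, ..., with 11th-13th handled correctly.
    suffix = {1: "st", 2: "nd", 3: "rd"}.get(
        position if position % 100 < 20 else position % 10, "th")
    return f"{title} ({position}{suffix} author, {len(authors)} authors total)"

print(anonymize_reference(
    "Example article title",
    ["A. Researcher", "B. Applicant", "C. Collaborator"],
    "B. Applicant"))
# -> Example article title (2nd author, 3 authors total)
```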

Our goal in informally experimenting with these triage approaches was to understand what information people use to make judgments during the triage phase. We were also keen to surface ideas about improving the triage process. Below we summarize the discussion and insights that emerged from our exercise.

What’s in a name?

CVs are littered with names, including those of applicants’ advisors, collaborators, coauthors, and references, which may yield strong associations for reviewers. These names can divert the reviewer’s focus from the work presented and onto the context and community in which it has been conducted. One way that names have the potential to mislead reviewers is through ingroup favoritism, wherein reviewers may gravitate towards the individuals that present affiliations most like themselves.

Surprisingly, some participants did not notice that names had been omitted from the application materials in our exercise, and others reported that the omission did not impede the review process. Without names, participants found they became more attentive to the remaining details. For example, one group reported paying greater attention to the institutions where candidates received their training. But relying on the names of institutions is also problematic, because they do not speak directly to the accomplishments or contributions of an individual, which should be the focus when hiring.

Institution and journal prestige

Interestingly, the lack of institutional and journal names in materials did not detract from the participants’ ability to understand an applicant’s profile, as long as sufficient detail on their academic record had been included. Specifically, participants were interested in knowing how an applicant had contributed to the community at large, or to their department or research group, as this is a better indicator of their skills than their institutional affiliation.

Journal names were deemed generally unnecessary as well, especially when an applicant had provided a brief 2-4 sentence explanation summarizing what an article achieved and their specific contribution. This is not the first time DORA has received this feedback; it was suggested at a previous conference session in 2018.

The participants who saw journal names in the application materials reported realizing that they were using them as a basis for evaluation; these participants proposed that their assessments might have been fairer had the journal titles been removed. However, participants still wanted access to the articles’ DOIs to make papers easier to find, read, and evaluate.

One group reported that in the absence of journal names, they paid more attention to author order and used this to determine the importance of the applicants’ contributions to each work. But others argued that knowing the authorship order for a publication did little to indicate an applicant’s role in contributing to a paper or project, and was therefore not particularly useful information. Tools like the Contributor Roles Taxonomy (CRediT) can help tease apart how an individual contributed to an article or project.
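As a hedged illustration, the Python sketch below pairs a publication entry with roles drawn from CRediT’s controlled vocabulary of 14 contributor roles; the data structure is an assumption made for this post, not a format defined by CRediT.

```python
# The 14 contributor roles defined by the CRediT taxonomy.
CREDIT_ROLES = {
    "Conceptualization", "Data curation", "Formal analysis",
    "Funding acquisition", "Investigation", "Methodology",
    "Project administration", "Resources", "Software", "Supervision",
    "Validation", "Visualization", "Writing - original draft",
    "Writing - review & editing",
}

# Hypothetical CV entry pairing a DOI with a short summary and CRediT roles.
publication = {
    "doi": "10.1000/xyz123",  # placeholder DOI for illustration
    "summary": ("A 2-4 sentence summary of what the article achieved "
                "and the applicant's specific contribution goes here."),
    "credit_roles": ["Investigation", "Formal analysis",
                     "Writing - original draft"],
}

# Guard against free-text role descriptions drifting from the taxonomy.
assert set(publication["credit_roles"]) <= CREDIT_ROLES
```

A structured record like this makes contribution claims checkable and comparable across applicants in a way that author position alone cannot.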

Details instead of names

The problem with preferring candidates whose CVs name well-known labs, institutions, or awards is that, by valuing reputation, departments narrow their search for talent to known spheres. In doing so, they miss out on hiring great researchers who have yet to be recognized by a prestigious award or hired into a well-known lab. When personal, institutional, and journal names were all removed in our exercise, some participants noticed that many CVs lacked the details necessary to understand applicants’ skill areas. Some of those details may have been in materials, such as teaching statements or reference letters, that were not available for the exercise. Participants’ reaction to the omission of these details from CVs and cover letters underscores the importance of communicating one’s own research interests, approach to science, unique skills, and teaching and mentoring abilities. Only with this information can reviewers accurately evaluate a candidate’s suitability for a particular faculty role.

There was some disagreement about the value of several details common to applicant CVs, particularly whether candidates should be assessed on their receipt of awards or grants. Some participants paid close attention to mentions of grants, attributing great value to the fact that an applicant had received an independent grant. Others debated the level of confidence one should have in a person’s abilities based on their receipt of grants, as this can contribute to the “Matthew effect,” whereby people who are already recognized receive further recognition because of previous awards.

Consistency is key

In the triage process, it is clearly important for reviewers to be able to quickly find relevant information. The necessity of this was a major topic of discussion in our session, particularly when it came to comparing candidates with ease. The lack of consistent organization across CVs made it complicated for participants to find the information needed to judge applicants’ strengths in various areas (e.g. research, teaching, leadership, inclusivity, service). Moreover, some applicants’ CVs were much longer than others, which reviewers found cumbersome, as the extra length made finding specific pieces of information even more challenging.

To address the need for organized CVs, the concept of a structured narrative format was raised in our discussions as an option for building consistency into the triage process. The Royal Society in the United Kingdom, the Dutch Research Council, and the Swiss National Science Foundation are each experimenting with standard structured narrative CV formats. So far, the Dutch Research Council has received mostly positive feedback on its new narrative format. Reviewers report appreciating the added context the narrative format provides, and the Council has found that it leads to a more diverse selection of researchers. Some participants in our session, however, were skeptical of enforcing a standard template CV, noting that it could restrict creativity and limit applicants in describing work that may not fit neatly into any of the preconceived categories.

Standardizing judgments

Every reviewer has their own ‘way’ of sorting through applications, and specific criteria that they look for when deciding whom to ‘keep’ or ‘eliminate’ in a triage round of review. In our session, participants using an assessment matrix advocated for a short discussion between reviewers to establish a common set of standards before embarking on the triage process, as a way to increase the consistency of decision-making and help minimize the effects of implicit bias. German has a word for this process of reaching a collective agreement: ‘abstimmen,’ literally “off-voting,” or voting before one begins something. Asking reviewers to calibrate how they interpret the categories in the rubric would ideally result in more consistent assessment of all individuals.
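As a rough sketch of how an assessment matrix and a calibration step might work together, the Python example below averages reviewer ratings across the four competency areas used in our session; the 1-5 scale and the aggregation rule are assumptions added for illustration, not part of the session’s matrix.

```python
# Four competency areas from the session's assessment matrix, scored on
# an assumed 1-5 scale.
COMPETENCIES = ("research", "teaching and mentorship",
                "equity and inclusion", "service")

def score_applicant(ratings: dict[str, int]) -> float:
    """Average one reviewer's ratings across all competency areas,
    refusing to score if any category was skipped."""
    missing = [c for c in COMPETENCIES if c not in ratings]
    if missing:
        raise ValueError(f"Unrated competencies: {missing}")
    return sum(ratings[c] for c in COMPETENCIES) / len(COMPETENCIES)

# Two reviewers rating the same applicant after a calibration discussion.
panel = {
    "reviewer_1": {"research": 4, "teaching and mentorship": 3,
                   "equity and inclusion": 4, "service": 2},
    "reviewer_2": {"research": 5, "teaching and mentorship": 3,
                   "equity and inclusion": 3, "service": 3},
}
scores = [score_applicant(r) for r in panel.values()]
print(f"Panel mean: {sum(scores) / len(scores):.2f}")  # Panel mean: 3.38
```

A large spread between reviewers’ scores for the same applicant is itself a useful signal that the rubric categories are being interpreted differently and that further calibration is needed.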

The overarching framing for the discussion was the tension between the need for efficient, accurate assessment and the burden of devoting attention to the details of every application. While some participants stated firmly that the use of rubrics might slow down the triage process, the group generally agreed that rubrics allow uniform standards to be applied and that consistency should be valued. Still, a few others were resistant, stating that it was not desirable to ‘over-engineer’ the process.

Looking ahead

Universities and research funders are increasingly reconsidering the relevance and importance of researchers’ contributions when assessing them for hiring, promotion, or funding. Across the DORA and wider research communities, there are numerous discussions on how to demonstrate the value of different contributions, whether via descriptive narratives, article-level metrics, teaching evaluations, or other means. The triage process is the gateway to further review and consideration, and is therefore both important and influential in determining the makeup of the research community and its leadership.

The scholarly community has not yet reached a consensus regarding which methods for evaluation are best suited to different purposes or situations, and many believe that there will never be a ‘one-size-fits-all’ solution to research evaluation. Yet demands for employment and funding continue to grow, resulting in more challenges for applicants and review committees in every selection process. And as the number of PhD graduates continues to exceed the number of available positions in academia, academics can expect the triage process for applications to become ever more challenging.

Instilling standards and structure into research assessment processes, especially during the triage phase of review for faculty searches, appears to be a particularly robust option for reducing bias. To achieve this, universities can ask applicants to use a structured narrative CV template. Institutions can also develop assessment matrices and hold conversations among reviewers to set standards and identify desirable applicant qualities and characteristics. Removing journal titles and replacing them with 2-4 sentence article summaries that articulate key findings and specify the researcher’s contributions would provide reviewers with information better suited to judging researchers’ contributions at the triage stage. When researchers and institutions reflect on and carefully choose what types of information are used to judge scholarly contributions, evaluation processes can be brought into alignment with community values.

Acknowledgements

We thank the participants at our career enhancement programming session at the 2019 ASCB|EMBO Meeting for a robust and thoughtful discussion about research assessment. Thanks to Stephen Curry, Chris Pickett, and Gary Ward for very helpful comments.
