
Rethinking Research Assessment: Addressing Institutional Biases in Review, Promotion, and Tenure Decision-Making (Part II)

Authors: Ruth Schmidt, Associate Professor at IIT’s Institute of Design, and Anna Hatch, Program Director for DORA. Disclosure: PLOS supports DORA financially via organizational membership, and PLOS is represented on DORA’s steering committee.

In part one of our series on institutional biases in review, promotion, and tenure decision-making, we showed how seemingly objective comparisons are not necessarily equitable. In this second installment, we’ll show how individual data points can inadvertently distract from the whole.

Some pieces of data, such as the reputation of an applicant’s mentor or institution, or publication in highly respected journals, are “shinier” than others. This can give initial or exceptional data points and personal reference points more perceived importance than they warrant, privileging them over other available information.

The notion of anchoring, described in our first post, can keep us from making even-handed or equitable judgments because we are typically heavily swayed by initial pieces of data. In a personal context, for example, when the first item on a menu costs $15, other offerings at $12-13 seem like a pretty good deal. If, by contrast, the first item costs $7, that $13 item is likely to feel comparatively overpriced. An academic version of this occurs when the first person to interview for a position sets the bar, for better or worse, for everyone who interviews after them. Just as we can never truly “unhear” something once we’ve heard it, initial anchors for value persist longer than we might expect and tend to color our perception of the information that follows.

An additional factor that can compromise our ability to accurately gauge data points is the halo effect, in which positive impressions of individual attributes shape our overall opinions of people or products; for example, a candidate from a prestigious institution may be assumed to have more potential than one from a lesser-known university. But giving preferential treatment to people based on attributes they inherited, or that signify only a small component of their overall value, can reinforce inequitable norms by using current hierarchies as a stand-in for more thoughtful deliberation. It keeps us from considering otherwise worthy candidates, or from evaluating all individuals against equal criteria.

What can institutions do?

Create intentional diversity: The halo effect and anchoring can be amplified when the definition of what is valuable in academic assessment remains implicit and unquestioned. Assembling diverse teams, across gender, seniority, culture, and under-represented and minoritized populations, brings a range of perspectives and experiences into decisions and forces us to interrogate our own assumptions about value and quality.

Question institutional norms: Becoming more institutionally open-minded can be challenging when everyone is steeped in the same environment and set of processes. Looking outside your own institution or discipline can provide a reality check on established processes and broaden a sense of what’s “normal,” or even possible.

First things last: Academics are traditionally used to seeing classic “halo” data — schools, affiliations, and publications — early and prominently in review, promotion, and tenure materials or dossiers. Placing reputation-based indicators at the end of applicant materials can reduce preconceived notions or premature perceptions of quality.

To encourage the adoption of more equitable hiring, review, promotion, and tenure processes, the authors are collaborating on a series of tools for DORA to help institutions experiment with new processes, indicators, and principles. The tools are available on the DORA website.

Acknowledgements
We thank Stephen Curry for very helpful comments. We also thank Stephen, Olivia Rissland, and Stuart King for the accompanying briefing document on the DORA webpage.
