by Veronique Kiermer, Iain Hrynaszkiewicz, & James Harney.
Today we’ve posted a report, along with accompanying data, on qualitative research we conducted into how researchers assess the credibility and impact of research. The study, which has not yet been peer reviewed, was supported by a grant from the Alfred P. Sloan Foundation and conducted with the assistance of the American Society for Cell Biology. The findings will inform future PLOS activities to support improved research assessment practices, specifically efforts to shift emphasis toward individual research outputs and away from journal-level metrics.
As we wrote in October 2020, we are interested in how researchers evaluate research outputs when (1) conducting their own research and (2) taking part in committees for hiring or grant review. In particular, we were interested in how researchers make judgments about the credibility and impact of the research outputs they encounter in these contexts, including papers, preprints, and research data.
We interviewed 52 cell biology researchers. Our approach focused on the goals they are trying to achieve (e.g. “identify impactful research to read”) rather than the tools they presently use to carry out these tasks. By focusing on researchers’ goals (the what) rather than how they achieve them (the how), we sought to better understand how we might influence those practices. This qualitative research will be followed by survey work to quantify our findings and identify opportunities for better solutions for improved research assessment. In particular, we want to understand which signals of credibility and impact might give researchers more useful ways than journal impact factor or journal prestige to assess the quality and credibility of individual studies and individual researchers.
Our results confirmed our initial hypothesis that the credibility (or trustworthiness) of research outputs is the central concern for researchers when conducting their own research, and that impact is a strong focus when researchers serve on hiring or grant review committees. But we also established that researchers assess attributes of research outputs related to reproducibility, quality, and novelty.
In addition, we found that researchers said they assessed credibility in committees more frequently than we anticipated, given that impact considerations — including journal impact factor — are prevalent in committee guidance and research assessment objectives (see for example McKiernan et al. (2019), Niles et al. (2020), Alperin et al. (2020), and Sugimoto & Larivière (2018)).
Our interviews confirmed that convenient proxies for credibility and impact, usually based on journals, are used pervasively in both research discovery and committee activities.
Our research also indicates that when researchers inspect publications to evaluate credibility, they try to minimize the time they spend reading and understanding them. Their tactics included selective reading of the abstract, figures, and methods sections. They also said they sometimes look for signals such as whether data were available and had been reused, whether peer-reviewed versions of preprints had been published, and whether open peer review reports were available.
Insights into researchers’ goals and how they make judgments about credibility when discovering and reading research may offer opportunities to provide signals that are more reliable, and better tailored for credibility judgments, than journal-level metrics. The importance that researchers who participate in assessment committees place on credibility also suggests an opportunity for funders and institutions to better align their guidelines with the practices and motivations of committee members.
After our follow-up survey work to validate these preliminary findings, we will report back. We hope this research helps others understand and develop better methods of research assessment.