How do researchers evaluate research?

PLOS has received a grant from the Alfred P. Sloan Foundation to study how researchers evaluate both the credibility and impact of research outputs (e.g. articles, preprints, data, and code). We will be conducting this research in partnership with the American Society for Cell Biology (ASCB).
We’ll be looking at this in two contexts: (1) when researchers are discovering and reading these outputs in the process of their own research, and (2) when they are assessing these outputs while participating in grant application review panels and hiring committees.
We are interested in characterizing the steps that researchers go through to form judgments of both credibility and impact in these two contexts.
Why is this new research needed?
Previous research has explored the factors that influence trust and how researchers decide which articles to read and cite, showing that personal inspection, social cues, and peer review are important. Recently, preprints have created new challenges for researchers, who must evaluate large amounts of new information outside the traditional framework of journal peer review. A survey by the Center for Open Science suggested that cues related to Open Science content (e.g. signaling open availability of data) and independent verification are important for judging the credibility of preprints.
Yet the current system for assessing research is dominated by signifiers of impact. Since impact cannot be known until time has passed, various proxies that signal perceived or potential impact are used instead, such as publication in a high Impact Factor journal.
PLOS and other organizations, in particular the multi-stakeholder organization DORA, have stressed the negative consequences of such a narrow focus on proxies of impact and the need for reform. Most problematically, researchers are pushed to place a higher priority on pursuing these proxies than on making their research credible, reproducible, and reusable.
Many have suggested that this focus on proxies for impact in research assessment is at least in part due to the practical limitations in evaluating credibility and impact. By understanding what truly matters to researchers in forming these judgments, we hope to find insights that can inform better practice in research assessment, inspire better tools for researchers, and help us evolve how we signal the markers of credibility in the articles we publish.
Collaboration with ASCB
We are not undertaking this project alone! We are excited to be conducting this research in collaboration with the ASCB, a partnership that helps connect the project directly to the research community. As the organizational sponsor of DORA, ASCB is also a scientific association demonstrably dedicated to advocating for sound research policies and improving research culture.
What comes after the research?
We will publish a report on the insights obtained through this research. The report will be released under a CC BY license, together with anonymized, aggregated data.
Future research might include a quantitative study to validate the findings with a broader group of researchers. Ultimately, we would like to understand whether there are opportunities to serve researchers' needs to assess and discover new research better than currently available methods do. We therefore also hope that our initial report will prompt further research by others, which we believe is important for the research and scholarly communication communities. We are very thankful to the Alfred P. Sloan Foundation for their interest in this project and for their dedicated support of scholarly communication in general.
We are confident that improvements to research assessment culture and practice are possible. And we believe there could be no more important moment to be doing this work, as we experience a global crisis during which unbiased, rigorous, and credible research will play an unprecedented role.
Acknowledgements
The successful research proposal was inspired and improved by conversations with many representatives from the research, funding, and publishing communities. We wish to acknowledge and thank, in particular, Jessica Polka and Ron Vale for insightful discussions.
Comments

For many (often applied) researchers, a key factor in deciding where to publish, and in evaluating the value of published research, is whether it will be read by practitioners (e.g. I will publish in regional journals with low Impact Factors to achieve this) and whether it actually delivered change or benefit on the ground (e.g. I will often co-author with stakeholders to ensure that our recommendations meet real needs and can be implemented).
As an expert in research impact, I would be interested in advising the project if there is any interest in expanding its scope to consider these broader conceptions of societal impact. Please get in touch.
This research is very interesting, particularly because many novice researchers are unaware of the rules and therefore often lack the experience needed to meet the standards established by scientific journals. In that sense, I think it would have a great impact to validate the information produced by research and to understand how other researchers assess those advances.
Research into research is a hot topic and deserves more attention than it has received so far. The quality of research was already under question due to the proliferation of "predatory" and other journals, a concern now exacerbated by preprints. The basic challenge is being able to assess the quality of the research itself, not just what is reported.
Having said that, in a pandemic situation like the one now seen with COVID-19, preprints have served well by providing quick information to scientists around the world. Time was crucial, and preprints have been invaluable in advancing the science despite questions about their quality.