
The Open Science Atlas

Note: PLOS is delighted to once again partner with the Einstein Foundation Award for Promoting Quality in Research, with applications and nominations for this year’s awards open until April 30th, 2023 at 10:00 pm UTC. The Einstein Foundation Awards program honors researchers whose work reflects rigor, reliability, robustness, and transparency. To get a flavor of what the Einstein Foundation is looking for, we have asked last year’s finalists to write about their work (watch this space for other finalists in the coming weeks). First up are four researchers (bios and photos at the end of the blog) who are introducing a platform to monitor transparent research practices across scientific disciplines. You can watch a video here and/or read more below.

What?

The Open Science Atlas[1] will be a web platform that enables continuous, large-scale monitoring of transparent research practices across the scientific ecosystem. The platform will be driven by a unique data extraction engine that combines global crowdsourcing and automated algorithms to produce living maps of the research transparency landscape. These maps can be explored at various levels of detail, from high-level scientific disciplines down to individual institutions or journals, and will evolve over time to reveal temporal trends.

This bird’s eye view of the transparency landscape will provide an empirical measure of progress, helping us to avoid becoming unduly despondent (because we think nothing is improving) or complacent (because we think everything is fine). The Open Science Atlas will also support further meta-research on the implementation of transparent research practices; for example, whether preregistrations are being properly reported [1] and whether shared data are Findable, Accessible, Interoperable, and Reusable (FAIR) [2,3]. In certain situations, the data we gather may support causal inferences about the effectiveness of specific interventions (e.g., journal policy) intended to promote transparency [4]. The results will be available to all on a public data dashboard, motivating and informing community efforts to improve research transparency [5].

Why?

Transparency is a fundamental principle of scientific methodology that is widely neglected in practice. Our research has found that scientists in psychology [6] rarely share data (2%), analysis scripts (1%), or research materials (14%), seldom use preregistration (3%), and only occasionally disclose funding sources (62%) or conflicts of interest (39%) (the situation is similar in the social sciences [7] and biomedicine [8]).

Although many initiatives have been introduced to try to increase the adoption of rigorous and transparent research practices [9], there is usually no plan in place to monitor progress. This limits our ability to gauge effectiveness, detect unintended side effects, and identify areas in need of additional attention or support. Monitoring research transparency will facilitate a virtuous cycle in which policy initiatives are strategically designed and refined in response to empirical data, maximizing their benefit [5].

How?

Previous efforts to monitor transparent research practices [6,7] have been limited in scope by the amount of manual labour required to extract information from scientific articles. Text-mining algorithms have shown promise in automating this process [8], but algorithms still require validation against a manually extracted gold standard (i.e., human judgment). Validation also needs to be an ongoing exercise, as reporting of transparent research practices varies across contexts (disciplines, journals, etc.) and will shift over time. The Open Science Atlas will address this problem through its unique data extraction engine that combines (1) automated algorithms and (2) global crowdsourcing.
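
As an illustration of what that validation involves (a minimal, hypothetical sketch in Python; the data and function names are invented and not part of the Atlas), agreement between an algorithm and human coders for a single indicator could be summarised like this:

# Hypothetical sketch: estimate how well an algorithm reproduces a
# manually extracted "gold standard" for one transparency indicator
# (e.g., "this article contains a data-sharing statement").

def validation_stats(human: list[bool], algorithm: list[bool]) -> dict:
    """Compare algorithmic labels against human judgments for one indicator."""
    tp = sum(h and a for h, a in zip(human, algorithm))          # both say yes
    tn = sum(not h and not a for h, a in zip(human, algorithm))  # both say no
    fp = sum(not h and a for h, a in zip(human, algorithm))      # algorithm over-calls
    fn = sum(h and not a for h, a in zip(human, algorithm))      # algorithm misses
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else None,
        "specificity": tn / (tn + fp) if (tn + fp) else None,
        "precision":   tp / (tp + fp) if (tp + fp) else None,
        "accuracy":    (tp + tn) / len(human),
    }

# Eight articles coded by trained humans (gold standard) and by the algorithm.
human_coded     = [True, True, False, False, True, False, False, True]
algorithm_coded = [True, False, False, False, True, True, False, True]
print(validation_stats(human_coded, algorithm_coded))

In practice the gold-standard sample would need to be large and drawn from the same contexts (disciplines, journals, time periods) that the algorithm is applied to, which is why validation has to be ongoing rather than a one-off exercise.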

Our algorithmic approach will be based on the prior work of our team member Dr Serghiou, who led the development and validation of text-mining algorithms to measure transparency indicators across 2.75 million open access biomedical articles [8]. We plan to use algorithms because they are functional, not because they are flashy, and we will always be clear about their capabilities and limitations. Whenever the Open Science Atlas uses algorithms, the relevant validation and accuracy statistics will be prominently displayed so users can calibrate their confidence in our results. Moreover, our algorithms will always be open source, so they can be verified and reused by others.
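
To give a flavour of the general idea (a deliberately toy sketch; the validated algorithms described in [8] use far richer phrase sets, document structure, and extensive validation), a simple indicator detector might search article text for characteristic phrases:

import re

# Toy illustration only: the patterns and indicator names below are invented
# for this example and are not the algorithms used in [8] or by the Atlas.
INDICATOR_PATTERNS = {
    "data_sharing":      re.compile(r"data (are|is) (openly |publicly )?available|deposited (in|at) (osf|zenodo|dryad)", re.I),
    "code_sharing":      re.compile(r"(analysis )?(code|scripts?) (are|is) available", re.I),
    "preregistration":   re.compile(r"pre-?registered|registered report", re.I),
    "coi_disclosure":    re.compile(r"conflicts? of interest|competing interests?", re.I),
    "funding_statement": re.compile(r"funded by|funding (was|statement)|grant (no\.|number)", re.I),
}

def detect_indicators(full_text: str) -> dict[str, bool]:
    """Flag which transparency indicators appear to be reported in an article."""
    return {name: bool(pattern.search(full_text)) for name, pattern in INDICATOR_PATTERNS.items()}

article = ("The study was preregistered on the OSF. Data are openly available "
           "and deposited in Zenodo. The authors declare no competing interests.")
print(detect_indicators(article))
# {'data_sharing': True, 'code_sharing': False, 'preregistration': True,
#  'coi_disclosure': True, 'funding_statement': False}

The hard part is not writing rules like these but knowing how often they are right, which is exactly why the accompanying validation statistics will always be displayed alongside any algorithm-derived results.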

Our crowdsourcing approach will be based on a project led by our team member Dr Mody known as ‘repliCATS’ [10], and similar projects, like Cochrane Crowd. These projects have demonstrated the feasibility of building large volunteer communities willing to collaborate on meta-scientific projects. For example, repliCATS successfully generated credibility judgements for 4,000 social science articles, drawing on over 1,000 researchers located in over 30 countries.

Contributors to the Open Science Atlas will receive training from our core team, be eligible for various prizes, and be acknowledged for their work on any relevant scientific publications. We will build a semi-automated workflow for article selection, quality control, and data management, which will enable volunteers to contribute to the Atlas whenever it is convenient for them. We will also arrange virtual and in-person hackathons around the world so folks can work alongside each other and get to know each other — data extraction is much more fun over a shared pizza.
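
Purely as an illustration of one quality-control step such a workflow could include (the names and thresholds here are hypothetical, not a committed design), extractions from multiple volunteers for the same article might be reconciled by agreement, with disagreements routed to an experienced coder:

from collections import Counter

# Hypothetical quality-control step: each article is coded independently by
# several volunteers; items where coders agree are accepted, the rest are
# flagged for review by an experienced coder.
def reconcile(extractions: list[dict[str, bool]], min_agreement: float = 1.0):
    """Merge multiple volunteer extractions for one article."""
    accepted, flagged = {}, []
    for item in extractions[0]:
        votes = Counter(e[item] for e in extractions)
        value, count = votes.most_common(1)[0]
        if count / len(extractions) >= min_agreement:
            accepted[item] = value
        else:
            flagged.append(item)  # send to an experienced coder for adjudication
    return accepted, flagged

coders = [
    {"data_shared": True,  "preregistered": False},
    {"data_shared": True,  "preregistered": True},
    {"data_shared": True,  "preregistered": False},
]
print(reconcile(coders, min_agreement=2/3))
# ({'data_shared': True, 'preregistered': False}, [])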

What’s next?

We’re super excited about the Open Science Atlas, but alas, we did not win the Einstein Award[2]. So we are currently seeking funding to help us make our vision a reality. We’re also keen to hear from folks with expertise in developing and validating algorithms in scientific contexts who might be interested in working with us. And if you’re excited about joining our global community of contributors, let us know and we’ll get in touch when we’re up and running. You can drop us a line at open.sci.atlas@gmail.com and follow us on Twitter (@OpenSciAtlas) or Mastodon @OpenSciAtlas@mas.to.

Tom Hardwicke, PhD, is a Research Fellow at the University of Melbourne, Australia.

Stylianos Serghiou, PhD, is the Head of Data Science at Prolaio, Inc.

Robert Thibault, PhD, is a Research Fellow at Stanford University, USA.

Fallon Mody, PhD, is a Research Fellow at the University of Melbourne, Australia.

References

1.    TARG Meta-Research Group. Discrepancy review: a feasibility study of a novel peer review intervention to reduce undisclosed discrepancies between registrations and publications. Royal Society Open Science 9, 220142 (2022).

2.    Roche, D. G., Kruuk, L. E. B., Lanfear, R. & Binning, S. A. Public data archiving in ecology and evolution: how well are we doing? PLOS Biology 13, e1002295 (2015).

3.    Towse, J. N., Ellis, D. A. & Towse, A. S. Opening Pandora’s Box: Peeking inside psychology’s data sharing practices, and seven recommendations for change. Behavior Research Methods 53, 1455–1468 (2020).

4.    Hardwicke, T. E. et al. Data availability, reusability, and analytic reproducibility: evaluating the impact of a mandatory open data policy at the journal Cognition. Royal Society Open Science 5, 180448 (2018).

5.    Hardwicke, T. E. et al. Calibrating the scientific ecosystem through meta-research. Annual Review of Statistics and Its Application 7, 11–37 (2020).

6.    Hardwicke, T. E. et al. Estimating the prevalence of transparency and reproducibility-related research practices in psychology (2014–2017). Perspectives on Psychological Science 17, 239–251 (2022).

7.    Hardwicke, T. E. et al. An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014–2017). Royal Society Open Science 7, 190806 (2020).

8.    Serghiou, S. et al. Assessment of transparency indicators across the biomedical literature: How open is open? PLOS Biology 19, e3001107 (2021).

9.    Munafò, M. R. et al. A manifesto for reproducible science. Nature Human Behaviour 1, 1–9 (2017).

10.    SCORE Collaboration et al. Systematizing Confidence in Open Research and Evidence (SCORE). Preprint at https://doi.org/10.31235/osf.io/46mnb (2021).


[1] Our project was previously called the “Open Science Observatory”; however, we recently became aware that this name is already in use.

[2] Check out our amazing co-finalists: Dr Elisa Bandini and Dr Sofia Forss’s winning Ape Research Index; Dr Jessica Flake and Dr Nicolas Coles’s Translated Instruments Validation Initiative; and Dr Nicholas DeVito’s Trials Tracker.
