
Reflections on Preregistration: Core Criteria, Badges, Complementary Workflows

Published on May 15, 2023

Abstract

Clinical trials are routinely preregistered. In psychology and the social sciences, however, only a small percentage of studies are preregistered, and those preregistrations often contain ambiguities. As advocates strive for broader uptake and effective use of preregistration, they can benefit from drawing on the experience of preregistration in clinical trials and adapting some of those successes to the psychology and social sciences context. We recommend that individuals and organizations who promote preregistration: (1) Establish core preregistration criteria required to consider a preregistration complete; (2) Award preregistered badges only to articles that meet the badge criteria; and (3) Leverage complementary workflows that provide a similar function to preregistration.

Keywords: badges, blind data analysis, Open Science, prospective registration, preregistration

Clinical trials are routinely preregistered1 (Al-Durra et al., 2020). However, in other fields, such as psychology and the social sciences, only a small percentage of studies are preregistered,2 and those preregistrations often contain ambiguities in their descriptions of the study design, hypotheses, and analysis plans (Bakker et al., 2020; Akker et al., 2022). As advocates strive for broader uptake and more effective use of preregistration, the research community could benefit from drawing on the experience of clinical trials, where preregistration is commonplace.3 Preregistered clinical trials contain itemized and relatively explicit outcome measures, and most report their results.4

We propose three actions for the research community to consider in order to improve the function of preregistration in psychology and the social sciences (see Table 1 and Box 1). These proposals stem from insights developed while conducting research on preregistration across disciplines, including meta-analyses of discrepancies between preregistrations and published manuscripts (TARG Meta-Research Group & Collaborators, 2021) and a feasibility study of a peer review intervention to address these discrepancies before publication (TARG Meta-Research Group & Collaborators, 2022). We discuss the function of preregistration in terms of reducing bias and making risk of bias transparent (as outlined in Hardwicke & Wagenmakers, 2023), as well as the auxiliary benefit of improved research quality.5 Our proposals are by no means exhaustive; more comprehensive overviews of preregistration are available elsewhere (e.g., Hardwicke & Wagenmakers, 2023; DeVito, 2022).

We propose that advocates for preregistration consider the following actions:

  1. Establish core preregistration criteria (i.e., a minimum amount of information required to consider a preregistration complete—as the World Health Organization’s International Clinical Trials Registry Platform has done for clinical trial registration).

  2. Award preregistered badges only to articles that meet the badge criteria (which few currently do).

  3. Leverage complementary workflows that provide a similar function to preregistration (e.g., blinded data analysis to minimize data-dependent analytical decisions).

Table 1: Problems and proposed solutions for preregistration in psychology and the social sciences.

Problem

  1. Low uptake of preregistration

  2. Imprecise and ambiguous preregistrations

  3. Poor alignment between preregistrations and manuscripts

Proposed solutions*

A. Establish core preregistration criteria
B. Award preregistered badges only to articles that meet the badge criteria
C. Leverage complementary workflows that provide a similar function to preregistration

(1) Preregistration provides additional benefits once a substantial proportion of studies are preregistered—such as facilitating evidence synthesis and reducing duplication. (2) Imprecise language and ambiguities in preregistrations leave them open to various interpretations and, in turn, limit their ability to reduce bias and transparently communicate study plans. (3) Poor alignment between preregistrations and manuscripts—both in terms of the overall structure of the documents and their specific content—makes it difficult to compare the texts to assess risk of bias. The solutions we propose are partial in the sense that they are unlikely, on their own, to fully resolve the shortcomings of preregistration. They can be implemented individually or alongside other efforts.

*We itemize the problems with numbers and the proposed solutions with letters to indicate that they are not aligned in a one-to-one manner; each proposed solution could impact each of the three problems to different extents.

1. Establish core preregistration criteria

For clinical trials to be considered fully registered, they must provide information regarding 24 specific items, known as the Trial Registration Data Set (World Health Organization, 2017). Although these 24 items do not include a detailed analysis plan, they set a minimal standard that organizations such as the International Committee of Medical Journal Editors can promote (ICMJE, 2022; ICMJE, 2023). This itemized standard laid the foundation for regulations and institutional infrastructure, which in turn drove the widespread uptake of preregistration in clinical trials.6 It allows for transparent updating of preregistrations and makes comparisons between preregistrations and publications relatively easy (see Figure 1). The structure is sufficiently clear-cut that the Health Research Authority in the UK now uses information from ethics applications to register trials on behalf of clinical trialists (NHS Health Research Authority, 2021). These researchers can go beyond the minimum 24 items, add as many details as they would like to the registration, and append a study protocol.7

In contrast to clinical trial registrations, which include discrete items followed by short responses (e.g., primary outcome; sample size), preregistration templates in psychology and the social sciences often include broad headers followed by blocks of text (e.g., hypotheses, analysis plan—see Figure 1).8 Single hypotheses can contain multiple elements that would be better divided into several distinct hypotheses. Preregistrations sometimes list several variables and analyses but provide a sample size calculation for only one analysis. Within a preregistration, aligning a single hypothesis to its outcome measure and analysis can be far from trivial. Matching these to text in the manuscript presents an additional challenge. Thus, the less structured information provided in many psychology preregistrations can obscure a reader’s understanding of what the researchers planned to do and whether they did it.9,10
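To make this contrast concrete, the minimal sketch below shows what an itemized preregistration entry could look like, with each hypothesis explicitly linked to one outcome measure and one planned analysis, plus a simple completeness check. The field names, core items, and example content are hypothetical illustrations rather than any established standard.

```python
# A hypothetical, itemized preregistration entry in which each hypothesis is
# explicitly linked to one outcome measure and one planned analysis.
# Field names and content are illustrative only; they follow no established standard.
preregistration = {
    "study_title": "Example study",
    "sample_size": {"n": 120, "justification": "Power analysis for H1 (d = 0.5, power = .90)"},
    "hypotheses": [
        {
            "id": "H1",
            "statement": "Condition A scores higher than condition B on the primary outcome.",
            "outcome": {"measure": "Questionnaire X total score", "timepoint": "post-test"},
            "analysis": {"test": "independent-samples t-test", "alpha": 0.05, "sidedness": "one-sided"},
        },
    ],
}

# Hypothetical core items that every hypothesis entry must contain.
CORE_ITEMS = {"statement", "outcome", "analysis"}

def missing_core_items(prereg: dict) -> dict:
    """Return, for each hypothesis, the core items that are absent or empty."""
    return {
        h["id"]: sorted(CORE_ITEMS - {key for key, value in h.items() if value})
        for h in prereg.get("hypotheses", [])
    }

print(missing_core_items(preregistration))  # {'H1': []} -> no core items missing
```

A structure of this kind would allow readers—or software—to trace each hypothesis to its outcome and analysis and to flag missing elements, in a way that free-text blocks cannot.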

Figure 1

Comparison of a clinical trial preregistration excerpt (left) to an OSF preregistration excerpt (right)*. The clinical trial preregistration excerpt demonstrates several features that the psychology and social sciences community could benefit from considering. These include: (1) The option for an itemized and tabular format; (2) Clear demarcation of the 24 items contained in the WHO Trial Registration Data Set, which is supported by the International Committee of Medical Journal Editors and demarcated with the superscript ICMJE; (3) Clear demarcation of the primary outcome measure and time frame of assessment; (4) Easy identification of updates to the primary outcome measure (e.g., several time points were added); (5) Easy identification of core items that are not provided (e.g., secondary outcome measures); and (6) A link to a Change History log, which looks similar to a Microsoft Word document with track changes. The OSF has recently begun to provide a function allowing researchers to update their preregistration. The updated preregistration identifies sections that were updated (e.g., “Hypotheses”), but does not provide the track-changes style functionality that clinicaltrials.gov does. OSF preregistrations also often contain a statistical analysis plan whereas clinical trial preregistrations rarely do.
* These excerpts are copied from Arnold and DeBeus (2013) and Berent (2021). We selected the clinical trials registration because the lead author (RTT) was familiar with it, and it clearly depicts several benefits of clinical trial preregistrations in a relatively small screenshot. The OSF registration was selected by going to www.osf.io/registries and selecting Provider: “OSF Registries” and OSF Registration Type: “OSF Preregistration,” and then choosing a recent preregistration that depicts the text-block response format.

Given the low prevalence of preregistration in psychology and social sciences research (Hardwicke et al., 2020; Hardwicke et al., 2021; Scoggins & Robertson, 2023), alongside the difficulty of comparing preregistered study details to published study reports (TARG Meta-Research Group & Collaborators, 2022; Akker et al., 2022), we argue that establishing core preregistration criteria would complement ongoing initiatives that strive for ideal practice.11

Efforts have been made to create standard preregistration templates in psychological science (e.g., Open Science Framework, AsPredicted), but these can vary substantially, and there is no broad agreement regarding the details they must include. In a move toward standardization, a Preregistration Task Force consisting of the American Psychological Association (APA), the British Psychological Society (BPS), and the German Psychological Society (DGP), supported by the Center for Open Science (COS) and the Leibniz Institute for Psychology, developed a consensus template12 for the preregistration of quantitative psychology research (the PRP-QUANT template; Bosnjak et al., 2022). On the one hand, the template is exhaustive and was purposefully designed to parallel the structure of the APA Style Journal Article Reporting Standards (Appelbaum et al., 2018); its proper use would constitute a highly effective implementation of preregistration. On the other hand, there is no evidence that user testing informed the template,13 and its uptake remains limited at this time.14

Comparable guidelines in clinical trials have been developed through formal consensus processes that involve diverse stakeholders, include a user-testing stage (i.e., piloting), and are widely used by researchers.15 These documents were designed to apply across clinical trials research, regardless of the specific discipline. Their structure is such that researchers can create extensions to the guidelines to target their specific disciplines more fully (e.g., traditional Chinese medicine: Zhang et al., 2020; pilot trials: Thabane et al., 2016). Core preregistration criteria could be developed through a similar process and designed to accommodate the diversity of study types in psychology and the social sciences. They could facilitate broad adoption of preregistration by setting a minimum standard that is relatively easy to achieve and a benchmark upon which publishers, funders, and institutions can develop regulations.

2. Award preregistered badges only to articles that meet the badge criteria

As of April 2023, the Center for Open Science website lists 80 journals that award badges to articles that claim to have used open science practices such as preregistration, open materials, and open data (www.cos.io/initiatives/badges). To receive a preregistered badge, a publication should have no undisclosed discrepancies from the preregistration (COS, 2023). And yet, two studies analyzing psychology publications with preregistered badges found that 89% of 27 articles contained at least one undisclosed discrepancy (Claesen et al., 2021)16 and 67% of 258 articles selectively reported at least one hypothesis (Akker et al., 2022).17 The organization that developed the badges—the Center for Open Science—describes two ways to award badges: author self-disclosure or peer review (COS, 2016).

Disclosure

Some journals, in their instructions for authors, state that they use the self-disclosure method to award badges (e.g., Psychological Science, Journal of Experimental Social Psychology), but the disclosure statement provided by the COS, which these journals use, does not align with the preregistered badge criteria. The four criteria for a preregistered badge are: “(1) A public date-time stamped registration is in an institutional registration system; (2) Registration pre-dates the intervention; (3) Registered design corresponds directly to reported design; and (4) Full disclosure of results in accordance with registered plan” (COS, 2023). However, the disclosure form used by these journals asks authors to complete five disclosure items (COS, 2016), none of which match the third and fourth badge criteria. Thus, authors can truthfully complete the disclosure form and still not meet the badge criteria. Even if the disclosure items were realigned to match the badge criteria, it remains unclear whether the proportion of badged papers that fully meet all criteria would rise in the absence of a verification mechanism.

Peer review

We are not aware of any journal that systematically peer reviews articles to ensure they meet the criteria for a preregistered badge. Moreover, based on our experience (TARG Meta-Research Group & Collaborators, 2022) and that of other researchers who have systematically examined publications awarded a preregistered badge (Olmo van den Akker, personal communication, 2021; Aline Claesen, personal communication, 2019), we feel it is very difficult to confidently state that the “Registered design corresponds directly to reported design” or that there was “Full disclosure of results in accordance with registered plan.” Indeed, one study found that coders agreed on the number of hypotheses in only 14% of preregistrations (Bakker et al., 2020). Another study, which used a strict operationalization of what constituted a hypothesis, found 54% agreement between coders regarding the number of hypotheses (Akker et al., 2022). The lack of itemized core preregistration criteria, alongside differences in the structure of preregistrations and manuscripts, renders many comparisons ambiguous.

One could argue that the issues we present regarding inaccurate awards are outweighed by the benefit that badges may have on the uptake of preregistration. Indeed, the badge criteria were designed to represent a high aspirational standard rather than setting a minimal bar. However, there is a possibility that badges, in their current implementation, have negative effects. Given that most articles awarded a preregistered badge do not fully meet the criteria for earning that badge, awarding badges can create a false impression that rigorous research practices are being used and therefore lend undue trust to these studies. This practice could also have downstream impacts on the trustworthiness of these types of initiatives more broadly.18

Changing the criteria for the preregistered badge could be one way to make clearer what the badge signals. The criteria could be revised, for example, to require only the existence of a permanent and public preregistration in a repository that provides a DOI, without requiring that the preregistration was followed. This criterion would be easy to audit19 and would achieve at least one of the main functions that preregistration was designed to serve—putting a timestamp on study plans to help demarcate confirmatory research (Nosek et al., 2018). An additional criterion could require that the preregistration include all the items in an established set of core preregistration criteria, as outlined earlier in this commentary. Machine-readable preregistration statements could also be employed to facilitate automated compliance monitoring by funders or institutions. Under the current badge criteria, a publication whose preregistration has almost no detail could earn a preregistered badge, whereas a publication with a very detailed preregistration and a minor discrepancy would not qualify.20
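To illustrate the machine-readable statements mentioned above, the sketch below shows how such a statement could support automated checks of two easily auditable criteria: the existence of a public, DOI-bearing registration and a registration date that pre-dates the start of data collection. The field names and checking logic are hypothetical and do not correspond to any existing registry or badge infrastructure.

```python
from datetime import date

# Hypothetical machine-readable preregistration statement; the field names are
# illustrative and do not correspond to any existing registry or badge schema.
statement = {
    "registration_doi": "10.17605/OSF.IO/XXXXX",   # placeholder identifier
    "registration_public": True,
    "registration_date": date(2023, 1, 10),
    "data_collection_start": date(2023, 2, 1),
}

def passes_auditable_criteria(s: dict) -> bool:
    """Check two easily auditable criteria: a public, DOI-bearing registration
    whose date pre-dates the start of data collection."""
    has_public_doi = bool(s.get("registration_doi")) and s.get("registration_public", False)
    pre_dates_data = s["registration_date"] < s["data_collection_start"]
    return has_public_doi and pre_dates_data

print(passes_auditable_criteria(statement))  # True for this illustrative example
```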

Taken together, current practices for awarding preregistered badges reward researchers even if their preregistration is of low quality and aligns poorly with the associated publication. We commend the development and testing of new initiatives; at the same time, we advocate for follow-up and evaluation to investigate whether they work as intended.

3. Leverage complementary workflows that provide a similar function to preregistration

Preregistration can reduce bias, increase transparency, and may also improve research quality (Hardwicke & Wagenmakers, 2023; Sarafoglou, Kovacs, et al., 2022). However, there are no checks and balances to evaluate whether the study outlined in a preregistration is well designed or clearly described (except when using the Registered Reports format; see Chambers & Tzavella, 2022). Journal policies and peer review can improve the quality of reporting in relation to a preregistration, but they occur too late in the research pipeline to impact the study design or preregistration quality. Complementary research workflows could achieve some of the same functions as preregistration and may come with additional benefits (e.g., blind data analysis, Experimental Design Assistants, protocol peer review).21

For observational research, data management organizations could employ workflows that necessitate open research practices. For example, they could provide researchers with a synthetic dataset, which researchers could use to develop an analysis script. The researchers would then run their analysis in a Trusted Research Environment (TRE) where the results are output, the real data remain hidden, and the analysis is logged and made public (e.g., as done at OpenSAFELY.org). If a Trusted Research Environment is not available, data management organizations could simply provide the complete dataset after the researchers register their analysis script (e.g., as done in Sarafoglou, Hoogeveen, et al., 2022, and surveyed in Thibault et al., 2023). These workflows make executable analyses—as opposed to sometimes ambiguous blocks of text—publicly available, while also protecting researchers from making data-dependent analytical decisions. They also overcome arguments raised against preregistration for observational research, including that the data often already exist, that knowledge of the data may be necessary to devise a reasonable analysis plan, and that registration can inhibit exploration (Lash & Vandenbroucke, 2012).
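A minimal sketch of such a workflow, assuming a simple two-group comparison, is shown below: the same registered analysis function is first developed against a synthetic dataset and would later be executed, unchanged, inside a Trusted Research Environment that logs the run and releases only aggregate results. The file names and analysis are placeholders, not the OpenSAFELY implementation.

```python
import numpy as np
import pandas as pd
from scipy import stats

def planned_analysis(csv_path: str) -> dict:
    """The registered analysis: compare the primary outcome between two groups.
    The same function is developed on synthetic data and later run, unchanged,
    on the real data inside the Trusted Research Environment; only the path differs."""
    df = pd.read_csv(csv_path)
    exposed = df.loc[df["group"] == "exposed", "outcome"]
    control = df.loc[df["group"] == "control", "outcome"]
    t, p = stats.ttest_ind(exposed, control)
    # Only aggregate results leave the environment; row-level data stay hidden.
    return {"n": len(df), "t": round(float(t), 2), "p": round(float(p), 4)}

if __name__ == "__main__":
    # Small synthetic dataset standing in for the one a data management
    # organization might release for development purposes.
    rng = np.random.default_rng(seed=1)
    synthetic = pd.DataFrame({
        "group": ["exposed"] * 50 + ["control"] * 50,
        "outcome": np.concatenate([rng.normal(1.0, 1.0, 50), rng.normal(0.0, 1.0, 50)]),
    })
    synthetic.to_csv("synthetic_data.csv", index=False)
    print(planned_analysis("synthetic_data.csv"))
    # Inside the TRE, the environment itself would execute, log, and publish:
    # planned_analysis("real_data.csv")
```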

Researchers can execute a comparable workflow for experimental studies by writing an analysis script based on simulated data and preregistering it before beginning data collection. In other words, the preregistration would include a results section based on a simulated dataset, and the numbers would simply be updated after running the analysis on the real data.
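As a hedged example of this approach, assuming a simple two-group design, the sketch below simulates plausible data, runs the planned analysis, and fills in a templated results sentence; once the real data were collected, the same code would be rerun and only the numbers would change. All parameters and numbers here are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Simulated stand-in for the data we plan to collect (two groups of n = 40);
# the effect size and sample size are invented for illustration.
simulated = {
    "treatment": rng.normal(loc=0.4, scale=1.0, size=40),
    "control": rng.normal(loc=0.0, scale=1.0, size=40),
}

def results_sentence(data: dict) -> str:
    """Run the planned analysis and fill in a templated results sentence.
    Before data collection this runs on simulated data; afterwards the same
    function is rerun on the real dataset and only the numbers change."""
    t, p = stats.ttest_ind(data["treatment"], data["control"])
    pooled_sd = np.sqrt((data["treatment"].var(ddof=1) + data["control"].var(ddof=1)) / 2)
    d = (data["treatment"].mean() - data["control"].mean()) / pooled_sd
    df = len(data["treatment"]) + len(data["control"]) - 2
    return (f"The treatment group differed from the control group, "
            f"t({df}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}.")

print(results_sentence(simulated))
```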

Another research tool—the Experimental Design Assistant (EDA)—can be employed in a similar manner. This web application, developed by the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs), uses a graphical interface to walk preclinical animal researchers through designing their experiment. Unlike preregistration, the EDA is an interactive tool that prompts users to input thorough information and gives warnings when the inputs do not align with one another. It then outputs a PDF, which the NC3Rs encourages researchers to append to funding applications sent to their organization (NC3Rs, 2021). This tool holds the potential to simultaneously help researchers design effective experiments and reduce their workloads by using the PDF output as a component of a preregistration.

These examples hold the potential to increase the uptake of preregistration and improve the precision of preregistrations. They do so by embedding user-friendly workflows that document precise study plans directly into the research pipeline.

Box 1. Agents of change

Our three recommendations target the research community broadly and differ in their implementation pathways. Establishing core preregistration criteria would require coordination across various stakeholders, including publishers, funders, institutions, learned societies, researchers, and other end-users of research findings. Such an initiative would take a concerted effort and could gain momentum through a grassroots push from researchers or a top-down mechanism from major funders. In contrast, any journal can improve its own use of preregistered badges, and the organization that created them—the Center for Open Science (COS)—holds the ability to redefine the badge criteria. As for our final recommendation, any individual research group, funder, data management organization, or other stakeholder can explore the use of complementary workflows. Highly successful workflows could then be adopted more widely.

Conclusion

In psychology and the social sciences, preregistrations can reduce bias and improve transparency. At the same time, they remain underused, can lack clarity, and are often difficult to compare directly with their associated publication. Current efforts to promote the uptake of preregistration (e.g., badges) and improve preregistration quality (e.g., the PRP-QUANT template) rely largely on the willingness and scrupulousness of research teams alone. We propose that the research community consider parallel initiatives to simplify and standardize preregistration (e.g., adopt itemized core preregistration criteria), and to leverage complementary workflows that necessitate open research practices.

Contributions

RTT wrote an initial draft. All other authors were involved in relevant discussions and contributed to the final draft.

Funding

Robert Thibault is supported by a general support grant awarded to METRICS from Arnold Ventures and a postdoctoral fellowship from the Canadian Institutes of Health Research. Marcus Munafo and Robert Thibault are part of the MRC Integrative Epidemiology Unit (MC_UU_00011/7). The funders have no role in the preparation of this manuscript or the decision to publish.

Acknowledgements

We thank Gustav Nilsonne, Steven Goodman, Mario Malički, Marton Kovacs, and Lisa Spitzer for feedback on earlier drafts of this commentary.

Competing interests

All other authors declare no conflict of interest.

References

Abrams, E., Libgober, J., & List, J. (2020). Research Registries: Facts, Myths, and Possible Improvements. Artefactual Field Experiments No. 00703. https://ideas.repec.org//p/feb/artefa/00703.html

Al-Durra, M., Nolan, R. P., Seto, E., & Cafazzo, J. A. (2020). Prospective registration and reporting of trial number in randomised clinical trials: Global cross sectional study of the adoption of ICMJE and Declaration of Helsinki recommendations. BMJ, 369, m982. https://doi.org/10.1136/bmj.m982

Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board Task Force report. The American Psychologist, 73(1), 3–25. https://doi.org/10.1037/amp0000191

Arnold, L. E., & DeBeus, R. (2013). Double-Blind 2-Site Randomized Clinical Trial of Neurofeedback for ADHD (ClinicalTrials.gov identifier NCT02251743). www.clinicaltrials.gov.

Bakker, M., Veldkamp, C. L. S., van Assen, M. A. L. M., Crompvoets, E. A. V., Ong, H. H., Nosek, B. A., Soderberg, C. K., Mellor, D., & Wicherts, J. M. (2020). Ensuring the quality and specificity of preregistrations. PLOS Biology, 18(12), e3000937. https://doi.org/10.1371/journal.pbio.3000937

Berent, M. (2021). Candidate priming. https://doi.org/10.17605/OSF.IO/F39KX

BIH QUEST. (2023). Charité Dashboard on Responsible Research. https://quest-dashboard.charite.de/#tabStart

Bosnjak, M., Fiebach, C. J., Mellor, D., Mueller, S., O’Connor, D. B., Oswald, F. L., & Sokol, R. I. (2022). A template for preregistration of quantitative research in psychology: Report of the joint psychological societies preregistration task force. American Psychologist, 77(4), 602. https://doi.org/10.1037/amp0000879

Campbell, L., Harris, K., Flake, J. K., Fried, E. I., Beck, E. D., Struhl, M. K., Etz, A., Lindsay, D. S., Feldman, G., van ’t Veer, A., & Vazire, S. (2019). https://osf.io/xv5rp/

Chambers, C. D., & Tzavella, L. (2022). The past, present and future of Registered Reports. Nature Human Behaviour, 6(1), 29–42. https://doi.org/10.1038/s41562-021-01193-7

Claesen, A., Gomes, S., Tuerlinckx, F., & Vanpaemel, W. (2021). Comparing dream to reality: An assessment of adherence of the first generation of preregistered studies. Royal Society Open Science, 8(10), 211037. https://doi.org/10.1098/rsos.211037

COS. (2016). Badges to Acknowledge Open Practices. OSF. https://web.archive.org/web/20230420043737/https://osf.io/tvyxz/wiki/2.%20Awarding%20Badges

COS. (2023). Badges to Acknowledge Open Practices. OSF. https://web.archive.org/web/20230508205525/http://web.archive.org/screenshot/https://osf.io/tvyxz/wiki/1.%20View%20the%20Badges/

De Angelis, C., Drazen, J. M., Frizelle, F. A., Haug, C., Hoey, J., Horton, R., Kotzin, S., Laine, C., Marusic, A., Overbeke, A. J. P. M., Schroeder, T. V., Sox, H. C., & Weyden, M. B. V. D. (2004). Clinical Trial Registration: A Statement from the International Committee of Medical Journal Editors. New England Journal of Medicine, 351(12), 1250–1251. https://doi.org/10.1056/NEJMe048225

DeVito, N. J. (2022). Trial registries for transparency and accountability in clinical research [Doctoral Thesis]. University of Oxford.

DeVito, N. J., Bacon, S., & Goldacre, B. (2019). FDAAA TrialsTracker: A live informatics tool to monitor compliance with FDA requirements to report clinical trial results. BioRxiv, 266452. https://doi.org/10.1101/266452

Gabelica, M., Bojčić, R., & Puljak, L. (2022). Many researchers were not compliant with their published data sharing statement: mixed-methods study. Journal of Clinical Epidemiology.

Hardwicke, T. E., Thibault, R. T., Kosie, J. E., Wallach, J. D., Kidwell, M. C., & Ioannidis, J. P. A. (2021). Estimating the Prevalence of Transparency and Reproducibility-Related Research Practices in Psychology (2014–2017). Perspectives on Psychological Science, 1745691620979806. https://doi.org/10.1177/1745691620979806

Hardwicke, T. E., & Wagenmakers, E. J. (2023). Reducing bias, increasing transparency and calibrating confidence with preregistration. Nature Human Behaviour, 7, 15–26. https://doi.org/10.1038/s41562-022-01497-2

Hardwicke, T. E., Wallach, J. D., Kidwell, M. C., Bendixen, T., Crüwell, S., & Ioannidis, J. P. A. (2020). An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014–2017). Royal Society Open Science, 7(2), 190806. https://doi.org/10.1098/rsos.190806

Hopewell, S., Boutron, I., Chan, A.-W., Collins, G. S., de Beyer, J. A., Hróbjartsson, A., Nejstgaard, C. H., Østengaard, L., Schulz, K. F., Tunn, R., & Moher, D. (2022). An update to SPIRIT and CONSORT reporting guidelines to enhance transparency in randomized trials. Nature Medicine, 1–4. https://doi.org/10.1038/s41591-022-01989-8

ICMJE. (2022). Journals stating that they follow the ICMJE Recommendations. https://web.archive.org/web/20230508211643/https://www.icmje.org/journals-following-the-icmje-recommendations/

ICMJE. (2023). Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. https://web.archive.org/web/20230508211128/https://www.icmje.org/icmje-recommendations.pdf

Lash, T. L. (2022). Getting Over TOP. Epidemiology. https://journals.lww.com/epidem/fulltext/2022/01000/getting_over_top.1.aspx

Lash, T. L., & Vandenbroucke, J. P. (2012). Should Preregistration of Epidemiologic Study Protocols Become Compulsory? Reflections and a Counterproposal. EPIDEMIOLOGY, 23(2), 184–188. https://doi.org/10.1097/EDE.0b013e318245c05b

McPhetres, J. (2020). What should a preregistration contain? PsyArXiv. https://doi.org/10.31234/osf.io/cj5mh

NC3Rs. (2021). NC3Rs Funding Schemes Applicant and Grant Holder Handbook. https://www.nc3rs.org.uk/sites/default/files/documents/Funding/Handbook.pdf

NHS Health Research Authority. (2021). Make it Public: Transparency and openness in health and social care research. https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/research-transparency/make-it-public-transparency-and-openness-health-and-social-care-research/

Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. https://doi.org/10.1073/pnas.1708274114

Nosek, B. A., & Lindsay, D. S. (2018). Preregistration becoming the norm in psychological science. APS Observer, 31.

Sarafoglou, A., Hoogeveen, S., & Wagenmakers, E.-J. (2022). Comparing Analysis Blinding With Preregistration In The Many-Analysts Religion Project. PsyArXiv. https://doi.org/10.31234/osf.io/6dn8f

Sarafoglou, A., Kovacs, M., Bakos, B., Wagenmakers, E.-J., & Aczel, B. (2022). A survey on how preregistration affects the research workflow: Better science but more work. Royal Society Open Science, 9(7), 211997. https://doi.org/10.1098/rsos.211997

Scoggins, B., & Robertson, M. P. (2023). Measuring Transparency in the Social Sciences. Political Science and International Relations, 14.

Serghiou, S., Contopoulos-Ioannidis, D. G., Boyack, K. W., Riedel, N., Wallach, J. D., & Ioannidis, J. P. A. (2021). Assessment of transparency indicators across the biomedical literature: How open is open? PLOS Biology, 19(3), e3001107. https://doi.org/10.1371/journal.pbio.3001107

Spitzer, L., Mueller, S., & Bosnjak, M. (2021). Preregistration: Testing the Usability of the Psychological Research Preregistration-Quantitative (PRP-QUANT) Template.

Srivastava, S. (2018). Sound Inference in Complicated Research: A Multi-Strategy Approach. PsyArXiv. https://doi.org/10.31234/osf.io/bwr48

TARG Meta-Research Group, & Collaborators. (2021). Estimating the prevalence of discrepancies between study registrations and publications: A systematic review and meta-analyses. https://doi.org/10.1101/2021.07.07.21259868

TARG Meta-Research Group, & Collaborators. (2022). Discrepancy review: A feasibility study of a novel peer review intervention to reduce undisclosed discrepancies between registrations and publications. https://doi.org/10.1101/2022.01.18.22269507

Thabane, L., Hopewell, S., Lancaster, G. A., Bond, C. M., Coleman, C. L., Campbell, M. J., & Eldridge, S. M. (2016). Methods and processes for development of a CONSORT extension for reporting pilot randomized controlled trials. Pilot and Feasibility Studies, 2(1), 25. https://doi.org/10.1186/s40814-016-0065-z

Thibault, R. T., Kovacs, M., Hardwicke, T. E., Sarafoglou, A., Ioannidis, J. P. A., & Munafo, M. R. (2023). Reducing bias in secondary data analysis via an Explore and Confirm Analysis Workflow (ECAW): A proposal and survey of observational researchers. https://doi.org/10.31222/osf.io/md2xz

van den Akker, O., van Assen, M. A. L. M., Enting, M., de Jonge, M., Ong, H. H., Rüffer, F., Schoenmakers, M., Stoevenbelt, A. H., Wicherts, J., & Bakker, M. (2022). Selective Hypothesis Reporting in Psychology: Comparing Preregistrations and Corresponding Publications. MetaArXiv. https://doi.org/10.31222/osf.io/nf6mq

Weaver, E. J., & Rehbein, S. T. (2022). Durchgeführt wie geplant? Ein detaillierter Vergleich zwischen Studien und ihren prä-registrierten Plänen [Conducted as planned? A detailed comparison between studies and their preregistered plans]. https://psycharchives.org/en/item/2462b05e-5d58-426b-8b43-a26556294a32

World Health Organization. (2017). WHO Trial Registration Data Set (Version 1.3.1). https://www.who.int/clinical-trials-registry-platform/network/who-data-set

Zhang, X., Lan, L., Chan, J. C. P., Zhong, L. L. D., Cheng, C.-W., Lam, W.-C., Tian, R., Zhao, C., Wu, T.-X., Shang, H.-C., Lyu, A.-P., & Bian, Z.-X. (2020). WHO Trial Registration Data Set (TRDS) extension for traditional Chinese medicine 2020: Recommendations, explanation, and elaboration. BMC Medical Research Methodology, 20(1), 192. https://doi.org/10.1186/s12874-020-01077-w
