
Three Persistent Myths about Open Science

Published on Apr 08, 2024

Abstract

Knowledge and implementation of open science principles and behaviors remain uneven across the sciences, despite over 10 years of intensive education and advocacy. One reason for the slow and uneven progress of the open science movement is a set of closely held myths about the implications of open science practices, bolstered by recurring objections and arguments that have long been addressed. This paper covers three of these major recurring myths: 1) that open science conflicts with prioritizing diversity, 2) that “open data” is a binary choice between being fully open and accessible and being completely closed off, and 3) that preregistration is only appropriate for certain types of research designs. Putting these myths to rest is necessary as we work towards improving our scientific practice.

Keywords: open science, diversity, data sharing, preregistration, meta-science

I am hoping that we can at least all agree that we have some problems with how we conduct our science. Serious problems. In my home discipline of psychology, these problems include, but are not limited to, publication bias, low statistical power, p-hacking/questionable research practices/researcher degrees of freedom, HARKing, fraud, lack of diversity, weak measurement, weak theory, mistakes/sloppiness, jingle jangle fallacies, and problematic incentive structures (Chambers, 2017; Spellman, 2015; Syed, 2019).

It would be a mistake to believe that these problems only exist in psychology, or even more narrowly in social psychology. The problems are much larger and more widespread. Researchers across the natural sciences, social sciences, and humanities have all voiced concerns about the quality of research in their fields (Baker, 2016; Knöchelmann, 2019; Munafò et al., 2017). Similarly, efforts to improve research practices are evident in economics (Askarov et al., 2023), biomedicine (Errington et al., 2021), educational sciences (Fleming et al., 2021), ecology (Fraser et al., 2018), qualitative research (Humphreys et al., 2021), and many others.

Accordingly, none of us are immune from the problems that have been identified. For this reason, I use an all-inclusive “we” throughout this essay. Yes, that includes you. We know there are problems, and although we can debate how relevant and prevalent they are across different fields or sub-areas of individual fields, our time would be more productively used by admitting that there are problems and getting down to the work of improving our science, whatever that might mean for the area you work in.

Unfortunately, progress towards improving our science has been rather slow. This is, in part, because of the sheer scale of the problems and the fact that they infect all aspects of the scientific ecosystem (researchers, journals, institutions, funders, etc.). This makes change understandably hard. Another reason for the slow progress, however, is that not everyone is in agreement that change needs to happen. We don’t all agree that there are actually problems in need of fixing. That we don’t all agree is not what troubles me—that should be expected and even welcomed. No, what troubles me is the nature of some of the arguments.

The purpose of this essay is to take up a few recurring arguments to demonstrate that they are without merit. These arguments are 1) that diversity and open science conflict with one another, 2) that open data is a binary choice between being fully open or fully closed, and 3) that preregistration is only relevant for experimental designs. I refer to these arguments as “myths” because of their enduring nature and seeming resistance to counter-evidence. This is an opinion essay, which means that there could of course be some disagreements with my opinions, but the thrust of my arguments against the myths is based on the best available evidence or just plain facts. In making these arguments, I largely draw from psychology, but it would be a mistake to think that this essay is not relevant to you if you are from another field. Although many of the examples come from psychology, the terrain covered by the myths themselves is relevant to just about any discipline.

Brief background and rationale

Within psychology, most people will point to the year 2011 as the beginning of the “replication crisis.” There were three notable events that year. First was Bem’s (2011) empirical evidence for the existence of extra-sensory perception published in the Journal of Personality and Social Psychology, an article that was met with widespread confusion and outrage. Second was the False-Positive Psychology paper published in Psychological Science by Simmons et al. (2011), which unknowingly provided the “key” for how Bem was able to empirically demonstrate the impossible (i.e., researcher degrees of freedom). Third was the discovery of rampant fraud by the respected social psychologist Diederik Stapel (Wicherts, 2011). Although problems in the field had been known long before 2011, they had also been mostly ignored by the mainstream of the field (see Syed, 2019, for several examples). As of 2011, that was no longer the case, and the three focal events begot a cascade of events that would eventually be known as the replication crisis (even though the three focal events were not about replication, per se, subsequent events were focused squarely on replication), as well as a groundswell of reform efforts that would come to be referred to as the open science movement.

The current year is 2024, which means it has been 13 years since our collective consciousness has been raised. It has been 13 years of learning more and more of the details and specifics of what is wrong with how we do our science, and it has been 13 years of proposed and implemented reforms. Thirteen years is a long time–what have we gained in 13 years?

On my optimistic days, I would say quite a lot. Terms like publication bias, p-hacking, questionable research practices, HARKing, and preregistration are no longer on the fringes of the field. An increasing number of scientific journals are supporting—and in some cases requiring—sharing of study data and materials (Kidwell et al., 2016; Nosek et al., 2018). Over 300 journals across a wide range of disciplines have adopted Registered Reports (Chambers & Tzavella, 2021) as a submission option, which to me is the single most effective intervention to fix our science that we currently have at our disposal. And, importantly, early career researchers are acutely aware of the problems and the need to address them (Farnham et al., 2017). We have made a lot of advances, and in many ways, the future of our science looks bright.

But the pessimism also lurks. As I think about the 300 journals across the sciences that have adopted Registered Reports, I also think about the denominator. How many journals are there? I find it amusing that nobody seems to know, but I have seen several references to 30,000 (e.g., Brembs, 2018). From personal experience, and the stories of others, there is a lot of resistance among journal editors towards adopting Registered Reports (see Chambers & Tzavella, 2021). It is not just editors’ resistance to Registered Reports, but the general reluctance among scientists to change the way they go about their business. Beyond resistance, which could be for very good reasons, one thing that really gets me down is seeing the same arguments circulating again and again across these 13 years. Arguments that have been addressed and shown to be without merit. Recognizing those moments makes me think we have not made as much progress as I would like to think.

The current essay is about some of these recurring arguments against open science that seemingly just will not die: 1) that diversity and open science conflict with one another, 2) that open data is a binary choice between being fully open or fully closed, and 3) that preregistration is only relevant for experimental designs. I have chosen to focus on these three arguments based on my own extensive experience being part of open science conversations, giving open science presentations, and engaging in advocacy around scientific reform, particularly with respect to journal operations. I am not claiming that these are the only recurring arguments or even that they are the most important. Someone could just as easily have selected different ones. They are, however, the ones I tend to come across most often and felt were worthy of taking head-on. For each of the three topics, I describe the basic background, followed by my argument for why the expressed concerns are misplaced, or in some cases, simply wrong.

Beyond these specific arguments, the purpose of this paper is to make a broader point about progress and criticism in the field. If you are going to have strong opinions about some topic, if you are to use your platform to argue against a particular initiative, then you best be informed about that topic. Too often, arguments are rooted in misunderstandings, faulty assumptions, and ignorance. This is unacceptable, and arguments made from this stance should no longer be treated as reasonable objections that must be addressed.

To be clear, I am not arguing that people should avoid criticizing open science practices. They should criticize. And I am not arguing that reasonable criticisms should not be taken seriously. They should. I am arguing that we should differentiate between informed and uninformed criticisms. Additionally, too often concerns are framed as a way to end the conversation, rather than as a way to begin a conversation about how the concerns can be addressed.

Myth #1: Diversity and Open Science are in conflict

The lack of diversity in psychological science, both in terms of global diversity and within-country racial/ethnic diversity, has been a persistent problem in the field. Indeed, the problem has been long-recognized, but also long-ignored. Many folks in psychology did not seem to think too much about the issue until Henrich et al.’s (2010) paper on the overreliance on WEIRD (Western, Educated, Industrialized, Rich, and Democratic) samples and then again with Roberts et al.’s (2020) article on racial inequality in publishing and editing. But the first major work to highlight the lack of racial/ethnic diversity in the field was Guthrie’s (1976) book, Even the Rat was White, and since then there has been a consistent stream of papers raising the issue (e.g., Arnett, 2008; Draper et al., 2022; Graham, 1992; Green et al., 2022; Hall & Maramba, 2001; Hartmann et al., 2013; Lin & Li, 2022; Moriguchi, 2022; Nielsen et al., 2017; Ponterotto, 1988; Thalmayer et al., 2021). I think at this point we have a pretty good sense of the problem, and maybe—maybe—we are now serious about our efforts to actually make some changes (see Syed, 2023, for a discussion of the complexities of the issue).

The timing of the Henrich et al. (2010) WEIRD article is notable: 2010—awfully close to the 2011 “ground zero” year for the replication crisis and ensuing open science movement. Thus, interest and energy around diversifying the field was occurring alongside heightened awareness of the many problematic research practices widespread in the field.

These dual concerns, not surprisingly, largely existed in parallel with one another, and I don’t think it is controversial to say that diversity was not a primary concern in the early days of the replication crisis (e.g., Beer et al., 2023; Lewis, 2017). Furthermore, because diversity was not a core component of the replication crisis, it did not figure heavily into proposed reforms of the time. The result was a familiar dynamic, where folks were advocating for field-wide shifts in how we go about our business without a strong consideration of the implications for diversity in the field. In other words, those with interests and concerns about diversity in open science were not part of the conversation.

Indeed, as has been discussed elsewhere in detail (Syed & Kathawalla, 2022), the open science movement can be understood as a structural movement, seeking to change the governing structure from an oligarchy to a democracy. However, it is not a social-structural movement, in that social power dynamics (e.g., racial and gender) were not part of the movement. Accordingly, the open science movement runs the real risk of reproducing, rather than disrupting, existing power imbalances in the field, despite its democratic focus (Grzanka & Cole, 2021).

Does the fact that diversity was not central to the open science movement mean that the two are inherently in conflict? No, and the distinction between diversity being included in the movement and diversity being in conflict with open science is one that is critical to maintain when considering this important issue. Here, I elaborate on two primary reasons why the claims that diversity and open science are in conflict are without merit and, therefore, a myth.

First, the arguments that have been advanced have lacked evidence or compelling argumentation. This statement may be taken as dismissive or even disrespectful. It is not at all intended that way—I greatly appreciate that researchers who are concerned about diversity and representation are engaging with issues around open science. But at the end of the day, the central arguments made are without merit and do not square with the actual open science practices that they criticize (e.g., Bahlai et al., 2019; Fox Tree et al., 2022; Fuentes et al., 2022; Grzanka & Cole, 2021). These concerns include worries about sharing sensitive or identifiable data and about sharing data that were resource-intensive to collect, as well as concerns that exploratory studies are devalued, that qualitative research will be further marginalized, and that open science practices could lead to researchers’ ideas being “scooped.” All of these concerns have been addressed in the literature and either have clear solutions, have been demonstrated to be unfounded, or are based on an incomplete understanding of the core issue. That is to say, it’s not that the aforementioned concerns are not important—they very much are. Rather, the issue is that they are all concerns that have clear solutions and are, in a sense, resolved. What I find interesting about these stated concerns is that they are not at all unique to diversity-related research, and indeed are the same concerns expressed by folks generally skeptical of open science reforms (see Tackett et al., 2017, for a discussion in the context of clinical psychology). This is, in fact, why they have since been addressed.

One specific example that is frequently stated across the sciences is concern about article-processing charges (APCs) associated with open access publishing (e.g., Bahlai et al., 2019). Concern about APCs is framed as a diversity issue because researchers at under-represented institutions and/or who do work on under-represented populations and topics may have less access to the resources needed to cover the APC, which can indeed be ridiculously large. Such discussions, however, are often framed as though there is a mandate to pay APCs to make articles freely available. This so-called “gold open access” is but one route to making research products openly available. An option that is possible for nearly all journals in psychology is to publish a “green open access” version of an article, which usually consists of an author-formatted version of the article that is made publicly available by posting it to an institutional or organizational repository (e.g., OSF, bioRxiv, PsyArXiv) or on the author’s personal website (see Moshontz et al., 2021, for details). Moreover, there is an increasing number of “diamond open access” journals that do not charge any APCs at all (for example, journals published by PsychOpen or listed with the Free Journal Network). APCs and their role in scientific publishing are a major issue that needs to be addressed, but there is no mandate within the open science movement for researchers to pay them to make their work open. That is just false and is an erroneous argument that is used to support the claim that diversity and open science are in conflict.

The second reason why the claim that diversity and open science are in conflict is a myth is that it overlooks the substantial work that has successfully integrated them. Indeed, we need to be careful of cautionary narratives about the relation between diversity and open science because they can inadvertently erase the efforts of the people who actually are working in this area. When Lee (2017) was the Editor of Cultural Diversity and Ethnic Minority Psychology, he instituted a whole array of open science practices—and it was the first journal published by the American Psychological Association to do so! The Psychological Science Accelerator, which is a collaborative network of research teams from 84 countries around the world, has contributed some of the most substantial and rigorous work in the field, while elevating the contributions of scholars from under-represented regions (Moshontz et al., 2018). In a similar vein, the Framework for Open and Reproducible Research Training (FORRT; Azevedo et al., 2019) is a large global network of researchers and educators working together to produce educational resources and original research, with diversity central to their mission and their work (Elsherif et al., 2022). Syed & Kathawalla (2022) outlined how various open science practices could contribute to cultural psychology, but also how cultural psychology could help further push open science. Many others have contributed empirical, conceptual, and structural analyses about open science and diversity (Humphreys et al., 2021; Ledgerwood et al., 2022; Lui et al., 2022). Taking these examples together—which is not an exhaustive list—it is important to recognize that there are those who do not view diversity and open science as in conflict, and indeed that there are researchers showing, with their actions, just how the two can work together quite well.

Diversity and open science are clearly not in conflict with one another. Remember, though, the crucial distinction between the potential for conflict and the fact that diversity has never been a central concern of the movement at large. The latter continues to be an issue, but concerns about lack of inclusion in decision-making, mistrust, and resource constraints are persistent structural problems in the field, and not new problems that have popped up in the context of open science. Because the open science movement never took issues of diversity as central to its mission, it is understandable that folks for whom that is a prime concern will be skeptical of the effort and perhaps see it as something they should be wary of getting involved in. Adopting this perspective would be a huge mistake. Open science is a structural reform, and should take diversity seriously, but that can only happen if people with such expertise participate. If diversity-focused researchers opt out of open science, they run the risk of even greater marginalization within the field (see Causadias et al., 2021, for a similar argument). Moreover, the mainstream of the field would greatly benefit from their expertise.

To repeat myself from earlier, rather than being the end of the conversation, concerns that are raised should be the beginning of a conversation on how we can successfully move forward. In the next sections I take on the specific practices of open data and preregistration.

Myth #2: Open data is a binary: fully open or fully closed

The myth about diversity and open science is clearly complex, and I admittedly have extreme views on the matter that others will certainly not share. What I am trying to do is clearly lay out my arguments, and most critically, not base them on lack of information, misunderstanding, or unfounded fears. That is not to say that any of the preceding is absent from my arguments—I do not claim to be all-knowing and always reserve doubt—but that is what I am attempting to do, at least.

The myth around open data is an entirely different matter. To me, this is a clear-cut issue, and one that commentators get wrong repeatedly. This is a recurring concern raised in the context of diversity and open science (e.g., Fox Tree et al., 2022; Grzanka & Cole, 2021), as well as clinical psychology (see Tackett et al., 2017) and researchers working with qualitative data (see Field et al., 2021), to name just a few. I just stated that I always reserve doubt—and that is true here, too—but not all reservations are of the same magnitude, and here it is quite tiny.

Open data refers to making available the data that are reported in a research product (e.g., published article, preprint, technical report). The central issue is that open data is viewed as a binary at the extremes: either the data are fully open and publicly available for anyone to access, or they are fully closed, never to be seen by anyone except for members of the research team (or, in reality, the person responsible for collecting the data and/or the data analyst). The first clue that this construction is false is that it is a binary, which is rarely an accurate representation of beliefs or practices. The same can be said about open science in general, which is often seen as an “all or nothing” enterprise, rather than a wide variety of policies and practices that can be implemented (see Bergmann, 2023; Kathawalla et al., 2021; Silverstein et al., 2024).

If your understanding of what “open data” means is the extreme end of openness, where the full data are publicly available for anyone to access if they so choose, then I certainly understand why you would have concerns. Concerns about sensitive details in the data, re-identification risk of participants, further plans for using the data, betraying what the participants consented to be a part of, and so on, are all serious and important concerns when we are talking about fully open and publicly accessible data.

Such concerns, however, are divorced from the reality of the practice. The reality of the practice is not a binary, but a continuum (Figure 1). There are many meanings of open data, with gradations of practice that fill the wide space between radically open and radically closed. Readers are directed to Meyer’s (2018) excellent paper on the topic, as well as the discussion in Syed & Kathawalla (2022) with respect to work with marginalized populations. Here, I just briefly describe some key options along the continuum and why one might choose each option:

Figure 1

A continuum of data sharing

  • Freely open (full dataset)
    This is the situation I described previously: the full dataset is publicly posted and available for download and reuse. I have no data to support this claim, but I believe strongly that this is what people have in mind when they argue against data sharing.

  • Freely open (analytic dataset)
    An argument that I hear frequently from people is that they would be ok with making their data available, but they are not yet done with it. That is, they have plans for further analyses that could lead to publication, so they do not want someone else to publish their ideas. This is a totally reasonable concern and comes up frequently in the context of large datasets. That said, I suspect that we all vastly overestimate the degree to which we are “not yet done” with our datasets. Speaking for myself, I have had many plans for analyzing data that never came to fruition. More to the point, this concern seems to stem from the belief that the full dataset needs to be shared. Alternatively, authors could share only the data used for the analyses reported in the paper. It is extremely unlikely that a research team would have further plans to publish with the exact same data (at least, I hope so), but making the analytic data available would be useful for error detection or other researchers interested in fitting alternative models to the same data. The latter can even generate an additional publication, with the advent of the Verification Report format (Chambers, 2020). If there are no privacy or ethical concerns, then there is little reason for authors to not do this.

  • Freely open, other than sensitive data
    Of course, oftentimes there are privacy and ethical concerns with making data available. In this case, an option that can be paired with making the full dataset or analytic dataset available is to not share any of the sensitive variables but make all other data available. For example, if researchers are concerned that including demographics could lead to participant re-identification—which is a real risk for minority participants—then those data could be held back, with a specified process outlined for how they could be obtained and under what conditions (see next).

  • Open following a specified process
    Indeed, sometimes the data simply cannot be made openly available for a variety of reasons. We can now make an important distinction between freely and publicly open data, and data that are openly available only following a specified process. This process could involve an application, which might require researchers to specify how they will store the data and for what purpose they will use it. Researchers can specify, in advance, the conditions under which the data will be shared. This is a suitable option for when the data are potentially identifiable, include sensitive information, or where the consent form may not permit public sharing or certain types of analyses. This option also addresses a concern that arises in discussions of diversity and open science, that sharing data could lead to exploitation of marginalized research participants, as the terms of use of the data can be set by the research team to prevent such a thing. The good news is that researchers do not need to develop this process themselves. The Inter-university Consortium for Political and Social Research (ICPSR) has long provided this service. There are also discipline-specific protected repositories, such as Databrary (Gilmore et al., 2016) for developmental research and LDbase (Hart et al., 2020) for educational and developmental research. This option will also typically be a strong fit for qualitative data which, more often than not, will contain identifying information. In addition to the previously named repositories, the Qualitative Data Repository was developed for precisely this purpose.

  • Synthetic dataset
    Creating a specified process for data sharing is a great way to share data, but it is also time-consuming to set up and manage requests (although I would say a worthy investment of resources). Quintana (2020) described an alternative option that uses the R package synthpop, which generates simulated data that reproduce various statistical properties of the original dataset while preserving participant confidentiality. This simulated file is shared in lieu of the actual data (see the sketch following this list).

  • “Data available upon request”
    It is now well-documented that statements of “data available upon request,” which are commonly found in published articles, effectively mean that the data are not available at all (Gabelica et al., 2022; Miyakawa, 2020; Wicherts et al., 2006). Accordingly, researchers should not use these statements, and journals should not allow them, as markers of data sharing.

  • Functionally closed
    Unfortunately, this is the state of much of the research data in psychology, and there is little justification for it. I say “little” justification because there are of course some situations in which the data are so sensitive that they need to be heavily restricted (e.g., some genetic data) or an institutional ethics board has taken a hard line, but these situations are the exception rather than the rule. Sometimes data are functionally closed even within a research team. If you are part of a project, and the data are not available for your inspection, absent compelling rationale, you should be very concerned.
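
To make a few of the middle options concrete, here is a minimal sketch in R (the language of the synthpop package referenced above) of preparing data for sharing: keeping only the analytic variables, holding back demographics that pose a re-identification risk, and generating a synthetic copy to post in lieu of the raw data. The file name and variable names are hypothetical, and the exact steps will depend on the dataset and what participants consented to.

```r
# Minimal sketch (hypothetical file and variable names) of preparing data for sharing:
# (1) keep only the analytic variables, (2) hold back sensitive demographics,
# (3) generate a synthetic copy with synthpop to share in lieu of the raw data.

library(synthpop)  # install.packages("synthpop")

full_data <- read.csv("study_data.csv")  # hypothetical full dataset

# Analytic dataset: only the variables used in the reported analyses
analytic_vars <- c("wellbeing", "identity_commitment", "age", "gender", "ethnicity")
analytic_data <- full_data[, analytic_vars]

# Hold back demographics that pose a re-identification risk; these could instead
# be made available through a specified request process (e.g., a protected repository)
shareable_data <- analytic_data[, !(names(analytic_data) %in% c("gender", "ethnicity"))]

# Synthetic copy: preserves key statistical properties, but rows do not
# correspond to real participants' records
synth_obj <- syn(shareable_data, seed = 2024)
compare(synth_obj, shareable_data)  # check that distributions roughly match the original

write.csv(synth_obj$syn, "synthetic_data_for_sharing.csv", row.names = FALSE)
```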

All of the above, with the exception of “data available upon request” and functionally closed, are perfectly acceptable forms of data sharing for different data situations. Readers are encouraged to embrace the maxim, “as open as possible, as closed as necessary,” and seek to make their data FAIR (Findable, Accessible, Interoperable, and Reusable; Wilkinson et al., 2016). Data are too complex and varied to have a one-size-fits-all approach, and I tried to highlight some of the options along the continuum of fully open to fully closed. I urge you to be wary of criticisms about data sharing that do not acknowledge these complexities—or even better, I urge you to actively push back on them, because if they present a binary of extremes, they are simply presenting false information. This is not acceptable.

Myth #3: Preregistration is only for experimental designs

I often think about a parallel universe in which the replication crisis did not start in social psychology, but in a different area like developmental or clinical psychology. In many ways, it is unfortunate that social psychology was the origin, because it is one of the most methodologically narrow subfields in psychology. Now, some of you will take issue with that statement and ask for empirical evidence, but plenty of social psychologists have discussed the fact that simple lab experiments dominate the field (Baumeister et al., 2007; Cialdini, 2009; Rozin, 2001). Accordingly, the focus of the initial reforms was on the kinds of issues that come up in lab experiments. That meant that we thought about reforms starting with the simplest case, and then had to layer the complexity on top of it. That has certainly been the case with preregistration.

Preregistration involves specifying the research questions, hypotheses, methods, and analyses before conducting a study, via a time-stamped and unalterable repository (Nosek et al., 2018). There are quite a lot of claims about why we should be skeptical about preregistration (MacEachern & Van Zandt, 2019; McDermott, 2022; Pham & Oh, 2021; Szollosi et al., 2020). These include that it will stifle creativity, that it devalues exploratory work, that it does not allow you to make mistakes or change your mind, that it is unnecessary if you have strong theory, that it is redundant with the ethics application, that it is too bureaucratic, and that it is not appropriate for ________ work (fill in the blank with qualitative, longitudinal, secondary data, etc.). These are all false claims based on limited understanding of what preregistration is and why we should do it1, but my focus here is on the last one, that it is not appropriate for certain types of work.

One of the challenges of understanding preregistration—and the criticisms of it—is that there are different rationales for why researchers should do it. These include clearly distinguishing decisions that were made prior to seeing the data (“confirmatory” analyses) from decisions that were made after seeing the data (“exploratory” analyses), and preventing the latter from being framed as the former in a research report (Wagenmakers et al., 2012); reducing the prevalence of undisclosed data-dependent decision-making (i.e., p-hacking, questionable research practices, researcher degrees of freedom; Srivastava, 2018); evaluating the severity of a test (Lakens, 2019); and serving as formal documentation of the study design and analysis plan (Haven et al., 2019).

Of course, these rationales are not mutually exclusive, and can all work together. Indeed, for me it is the final rationale—to serve as formal documentation of the design and analysis plan—that is the most functional way to think about preregistration and subsumes all of the other rationales. Thinking of it in this way has rather massive implications for the practice. If you think of preregistration as formal specification of the study design, then it is clearly applicable to any type of research. There is no form of research in which absolutely no plans or intentions exist prior to data collection or analysis. That is just not possible. Even a radically exploratory qualitative study involves the specification of a research question, a plan for recruiting participants into the study, some idea of what questions will be asked or how observations will be made, and ideally some plans for how the responses will be analyzed. All of this can be specified ahead of time, to the great benefit of the project.

Some might counter this argument by stating that preregistration is not necessary to achieve the aforementioned goals. Rather, researchers can maintain a rigorous and detailed “lab notebook” approach in which they include all details of a project prior to implementation (see Crüwell et al., 2019, for a discussion of this approach in relation to preregistration). I think that this is technically true, but there is also something quite different about posting the plans to a permanent, unalterable, and public repository. Doing so makes the plans feel somehow more real, and it is also a clear step in the project development phase, marking the transition to project execution. After working with many students on preregistrations over many years, I can assure you that they hold a very different meaning than if they were only kept internally.

Indeed, a major reason why I am so keen on preregistration is that I have witnessed the benefits time and again when working with students. I assure you that I am quite averse to anything that increases the bureaucratic nature of our work. What preregistration does, more than anything, is make you think about what you are doing. If you have to specify your plans in advance, then you have to think about why you are doing what you plan to do and how you plan to do it. These are questions that are relevant for all types of research. Since we started preregistering the work in my research group, I have had much more challenging and generative discussions with students about conceptualization, study design, and data analysis.

That is not to say that preregistration can be implemented with the same ease across study designs. I have experience with many different types of designs, and so have developed a continuum (once again) of difficulty with the practice (Figure 2). Difficulty here corresponds to the quantity of details and degrees of uncertainty involved in the preregistration process. At the most difficult end, we have longitudinal designs, meta-analyses, and most qualitative designs. At the least difficult end, we have simple experiments and exploratory studies. Somewhere in the middle fall observational work and projects using secondary data. This continuum is not meant to be taken as a formal pronouncement of the relative difficulty of preregistering these specific designs that is applicable to all situations. Rather, it is based on my own experience using all of these designs, and more specifically my experience preregistering these designs. All study designs can be preregistered, but the details and difficulty will vary. The value, as far as I am concerned, is high throughout all situations.

Figure 2

Another continuum, this time of preregistration difficulty

Thus, any statement that “preregistration is not appropriate for ________” is simply wrong. Preregistration can be used for any study design, although of course it will look different for different designs. Once again, we should actively reject any arguments—both for and against open science practices—that take a one-size-fits-all approach. This may have been the case in the early days of advocacy for the practice, but our collective thinking has evolved on the issue, and criticisms must take that progress into account. There are some very smart and industrious people working on how to handle preregistration across the variability in study designs. For example, there are now templates and guides for how to preregister qualitative studies (Haven et al., 2019), secondary data analyses (Weston et al., 2019), and meta-analyses (Moreau & Gamble, 2022), among many others. These folks have embraced the attitude I have repeated several times, that criticisms should not end a conversation, but serve as a launching point to continue the conversation and generate solutions.

Importantly, preregistration is not about getting everything right or perfect. We are fallible humans, and mostly mediocre scientists, so we will make mistakes. Deviations from a preregistration plan are perfectly acceptable so long as they are transparently reported (Willroth & Atherton, 2024). At that point, it is up to the reader to determine how the deviation impacts the credibility of the study. On that note, I want to stress that just because a study is preregistered, that does not mean that it is a quality study. I have seen plenty of bad preregistered studies, including those that have undisclosed deviations (see Claesen et al., 2021). Preregistration does, however, give readers more information about how the study was planned relative to how it was reported, and thus facilitates stronger interpretation of the stated claims.

Conclusion

Thirteen years since the start of the replication crisis in psychology, along with the many efforts to identify problems in other fields, tells me that we should all know and admit that we have problems with how we do our science. As I stated from the outset, I very intentionally use “we” because it is true for all of us. None of us are immune, within psychology for sure, but also across the sciences. To paraphrase Simine Vazire, a prominent leader in the scientific reform movement, if you don’t think your field has a problem, that’s probably because you haven’t looked.2

At this point, given all that we know, it is irresponsible to be an active psychological scientist and to not be informed about the problems that have been unearthed via the replication crisis, and the solutions that have been proposed as part of the open science movement. The keyword is informed. Opinions are cheap and plentiful, and it is easy to have a negative reaction about a new practice that is quite different from what you have been accustomed to. Such reactions, however, are often based on uninformed, surface understandings of the issues, rather than careful study and consideration of the details. To reiterate, criticism is good and needed, and it is far from clear that certain open science practices are an unqualified good or benefit to the quality of research. But uninformed criticism and criticism that retreads previously resolved issues is not helpful. Here, I highlighted three recurring debates—about diversity and open science, data sharing, and preregistration—that I argue have been based on insufficient information. As active scientists in the field, it is our responsibility to reject and correct such false claims if we are to have any hope for progress.

References

Arnett, J. J. (2008). The neglected 95%: Why American psychology needs to become less American. American Psychologist, 63(7), 602–614. https://doi.org/10.1037/0003-066X.63.7.602

Askarov, Z., Doucouliagos, A., Doucouliagos, H., & Stanley, T. D. (2023). Selective and (mis)leading economics journals: Meta-research evidence. Journal of Economic Surveys, 1–26. https://doi.org/10.1111/joes.12598

Azevedo, F., Parsons, S., Micheli, L., Strand, J., Rinke, E., Guay, S., Elsherif, M., Quinn, K., Wagge, J. R., Steltenpohl, C., Kalandadze, T., Vasilev, M., de Oliveira, C. F., Aczel, B., Miranda, J., Galang, C. M., Baker, B. J., Pennington, C. R., Marques, T., & FORRT. (2019). Introducing a Framework for Open and Reproducible Research Training (FORRT). OSF Preprints. https://doi.org/10.31219/osf.io/bnh7p

Bahlai, C., Bartlett, L., Burgio, K., Fournier, A., Keiser, C., Poisot, T., & Whitney, K. (2019). Open science isn’t always open to all scientists. American Scientist, 107(2), 78. https://doi.org/10.1511/2019.107.2.78

Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature, 533, 452–454. https://doi.org/10.1038/533452a

Baumeister, R. F., Vohs, K. D., & Funder, D. C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2(4), 396–403. https://doi.org/10.1111/j.1745-6916.2007.00051.x

Beer, J., Eastwick, P., & Goh, J. X. (2023). Hits and misses in the last decade of open science: Researchers from different subfields and career stages offer personal reflections and suggestions. Social Psychological Bulletin, 18, 1–23. https://doi.org/10.32872/spb.9681

Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100(3), 407–425. https://doi.org/10.1037/a0021524

Bergmann, C. (2023). The buffet approach to open science. CogTales. https://cogtales.wordpress.com/2023/04/16/the-buffet-approach-to-open-science/

Brembs, B. (2018). Prestigious science journals struggle to reach even average reliability. Frontiers in Human Neuroscience, 12, 37. https://doi.org/10.3389/fnhum.2018.00037

Causadias, J. M., Korous, K. M., Cahill, K. M., & Rea-Sandin, G. (2021). The importance of research about research on culture: A call for meta-research on culture. Cultural Diversity and Ethnic Minority Psychology, 29(1), 85–95. https://doi.org/10.1037/cdp0000516

Chambers, C. D. (2017). The seven deadly sins of psychology: A manifesto for reforming the culture of scientific practice. Princeton University Press.

Chambers, C. D. (2020). Verification Reports: A new article type at Cortex. Cortex, 129, 1–3. https://doi.org/10.1016/j.cortex.2020.04.020

Chambers, C. D., & Tzavella, L. (2021). The past, present and future of Registered Reports. Nature Human Behaviour, 6, 29–42. https://doi.org/10.1038/s41562-021-01193-7

Cialdini, R. B. (2009). We have to break up. Perspectives on Psychological Science, 4(1), 5–6. https://doi.org/10.1111/j.1745-6924.2009.01091.x

Claesen, A., Gomes, S., Tuerlinckx, F., & Vanpaemel, W. (2021). Comparing dream to reality: An assessment of adherence of the first generation of preregistered studies. Royal Society Open Science, 8(10), 211037. https://doi.org/10.1098/rsos.211037

Crüwell, S., Stefan, A. M., & Evans, N. J. (2019). Robust Standards in Cognitive Science. Computational Brain & Behavior, 2(3–4), 255–265. https://doi.org/10.1007/s42113-019-00049-8

Draper, C. E., Barnett, L. M., Cook, C. J., Cuartas, J. A., Howard, S. J., McCoy, D. C., Merkley, R., Molano, A., Maldonado-Carreño, C., Obradović, J., Scerif, G., Valentini, N. C., Venetsanou, F., & Yousafzai, A. K. (2022). Publishing child development research from around the world: An unfair playing field resulting in most of the world’s child population under-represented in research. Infant and Child Development, 32(6), Article e2375. https://doi.org/10.1002/icd.2375

Elsherif, M. M., Middleton, S. L., Phan, J. M., Azevedo, F., Iley, B. J., Grose-Hodge, M., Tyler, S. L., Kapp, S. K., Gourdon-Kanhukamwe, A., Grafton-Clarke, D., Yeung, S. K., Shaw, J. J., Hartmann, H., & Dokovova, M. (2022). Bridging neurodiversity and open scholarship: How shared values can guide best practices for research integrity. https://doi.org/10.31222/osf.io/k7a9p

Errington, T. M., Denis, A., Perfito, N., Iorns, E., & Nosek, B. A. (2021). Challenges for assessing replicability in preclinical cancer biology. eLife, 10, Article e67995. https://doi.org/10.7554/eLife.67995

Farnham, A., Kurz, C., Öztürk, M. A., Solbiati, M., Myllyntaus, O., Meekes, J., Pham, T. M., Paz, C., Langiewicz, M., Andrews, S., Kanninen, L., Agbemabiese, C., Guler, A. T., Durieux, J., Jasim, S., Viessmann, O., Frattini, S., Yembergenova, D., Benito, C. M., & Hettne, K. (2017). Early career researchers want Open Science. Genome Biology, 18(1), Article 221. https://doi.org/10.1186/s13059-017-1351-7

Field, S. M., van Ravenzwaaij, D., Pittelkow, M. M., Hoek, J. M., & Derksen, M. (2021). Qualitative open science–pain points and perspectives. OSF Preprints. https://osf.io/e3cq4/

Fleming, J. I., Wilson, S. E., Hart, S. A., Therrien, W. J., & Cook, B. G. (2021). Open accessibility in education research: Enhancing the credibility, equity, impact, and efficiency of research. Educational Psychologist, 56(2), 110–121. https://doi.org/10.1080/00461520.2021.1897593

Fox Tree, J., Lleras, A., Thomas, A., & Watson, D. (2022). The inequitable burden of open science. https://featuredcontent.psychonomic.org/the-inequitable-burden-of-open-science/

Fraser, H., Parker, T., Nakagawa, S., Barnett, A., & Fidler, F. (2018). Questionable research practices in ecology and evolution. PLOS ONE, 13(7), Article e0200303. https://doi.org/10.1371/journal.pone.0200303

Fuentes, M. A., Zelaya, D. G., Delgado-Romero, E. A., & Butt, M. (2022). Open science: Friend, foe, or both to an antiracist psychology? Psychological Review, 130(5), 1351–1359. https://doi.org/10.1037/rev0000386

Gabelica, M., Bojčić, R., & Puljak, L. (2022). Many researchers were not compliant with their published data sharing statement: mixed-methods study. Journal of Clinical Epidemiology, 150, 33–41. https://doi.org/10.1016/j.jclinepi.2022.05.019

Gilmore, R. O., Adolph, K. E., & Millman, D. S. (2016). Curating identifiable data for sharing: The Databrary project. In 2016 New York Scientific Data Summit (NYSDS) (pp. 1–6). IEEE. https://doi.org/10.1109/NYSDS.2016.7747817

Graham, S. (1992). “Most of the subjects were White and middle class”: Trends in published research on African Americans in selected APA journals, 1970–1989. American Psychologist, 47(5), 629–639. https://doi.org/10.1037/0003-066X.47.5.629

Green, K. H., Van De Groep, I. H., Te Brinke, L. W., van der Cruijsen, R., van Rossenberg, F., & El Marroun, H. (2022). A perspective on enhancing representative samples in developmental human neuroscience: Connecting science to society. Frontiers in Integrative Neuroscience, 16, Article 981657. https://www.frontiersin.org/articles/10.3389/fnint.2022.981657

Grzanka, P. R., & Cole, E. R. (2021). An argument for bad psychology: Disciplinary disruption, public engagement, and social transformation. American Psychologist, 76(8), 1334–1345. https://doi.org/10.1037/amp0000853

Guthrie, R. V. (1976). Even the rat was white: A historical view of psychology. Allyn & Bacon.

Hall, G. C. N., & Maramba, G. G. (2001). In search of cultural diversity: Recent literature in cross-cultural and ethnic minority psychology. Cultural Diversity and Ethnic Minority Psychology, 7(1), 12–26. https://doi.org/10.1037/1099-9809.7.1.12

Hart, S. A., Schatschneider, C., Reynolds, T. R., Calvo, F. E., Brown, B. J., Arsenault, B., Hall, M. R. K., van Dijk, W., Edwards, A. A., Shero, J. A., Smart, R., & Phillips, J. S. (2020). LDbase. https://doi.org/10.33009/ldbase.

Hartmann, W. E., Kim, E. S., Kim, J. H. J., Nguyen, T. U., Wendt, D. C., Nagata, D. K., & Gone, J. P. (2013). In Search of Cultural Diversity, Revisited: Recent publication trends in cross-cultural and ethnic minority psychology. Review of General Psychology, 17(3), 243–254. https://doi.org/10.1037/a0032260

Haven, T. L., & Van Grootel, D. L. (2019). Preregistering qualitative research. Accountability in Research, 26(3), 229–244. https://doi.org/10.1080/08989621.2019.1580147

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. https://doi.org/10.1017/S0140525X0999152X

Humphreys, L., Lewis, N. A., Sender, K., & Won, A. S. (2021). Integrating qualitative methods and open science: Five principles for more trustworthy research. Journal of Communication, 71(5), 855–874. https://doi.org/10.1093/joc/jqab026

Kathawalla, U.-K., Silverstein, P., & Syed, M. (2021). Easing into open science: A guide for graduate students and their advisors. Collabra: Psychology, 7(1), 18684. https://doi.org/10.1525/collabra.18684

Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L.-S., Kennett, C., Slowik, A., Sonnleitner, C., Hess-Holden, C., Errington, T. M., Fiedler, S., & Nosek, B. A. (2016). Badges to Acknowledge Open Practices: A simple, low-cost, effective method for increasing transparency. PLOS Biology, 14(5), Article e1002456. https://doi.org/10.1371/journal.pbio.1002456

Knöchelmann, M. (2019). Open Science in the Humanities, or: Open Humanities? Publications, 7(4), 65. https://doi.org/10.3390/publications7040065

Lakens, D. (2019). The value of preregistration for psychological science: A conceptual analysis. PsyArXiv. https://doi.org/10.31234/osf.io/jbh4w

LeBel, E. P., & Peters, K. R. (2011). Fearing the future of empirical psychology: Bem’s (2011) evidence of psi as a case study of deficiencies in modal research practice. Review of General Psychology, 15(4), 371–379. https://doi.org/10.1037/a0025172

Ledgerwood, A., Hudson, S. K. T. J., Lewis Jr, N. A., Maddox, K. B., Pickett, C. L., Remedios, J. D., Cheryan, S., Diekman, A. B., Dutra, N. B., Goh, J. X., Goodwin, S. A., Munakata, Y., Navarro, D. J., Onyeador, I. N., Srivastava, S., & Wilkins, C. L. (2022). The pandemic as a portal: Reimagining psychological science as truly open and inclusive. Perspectives on Psychological Science, 17(4), 937–959. https://doi.org/10.1177/17456916211036654

Lee, R. M. (2017). Editorial. Cultural Diversity and Ethnic Minority Psychology, 23(3), 311. https://doi.org/10.1037/cdp0000172

Lewis, N. (2017). Reflections on SIPS (guest post by Neil Lewis, Jr.). The Hardest Science. https://thehardestscience.com/2017/08/11/reflections-on-sips-guest-post-by-neil-lewis-jr/

Lin, Z., & Li, N. (2022). Global diversity of authors, editors, and journal ownership across subdisciplines of psychology: Current state and policy implications. Perspectives on Psychological Science, 18(2), 358–377. https://doi.org/10.1177/17456916221091831

Lui, P. P., Gobrial, S., Pham, S., Giadolor, W., Adams, N., & Rollock, D. (2022). Open science and multicultural research: Some data, considerations, and recommendations. Cultural Diversity and Ethnic Minority Psychology, 28(4), 567–586. https://doi.org/10.1037/cdp0000541

MacEachern, S. N., & Van Zandt, T. (2019). Preregistration of modeling exercises may not be useful. Computational Brain & Behavior, 2(3), 179–182. https://doi.org/10.1007/s42113-019-00038-x

McDermott, R. (2022). Breaking free: How preregistration hurts scholars and science. Politics and the Life Sciences, 41(1), 55–59. https://doi.org/10.1017/pls.2022.4

Meyer, M. N. (2018). Practical tips for ethical data sharing. Advances in Methods and Practices in Psychological Science, 1(1), 131–144. https://doi.org/10.1177/2515245917747656

Miyakawa, T. (2020). No raw data, no science: another possible source of the reproducibility crisis. Molecular Brain, 13(1), 1–6. https://doi.org/10.1186/s13041-020-0552-2

Moreau, D., & Gamble, B. (2022). Conducting a meta-analysis in the age of open science: Tools, tips, and practical recommendations. Psychological Methods, 27(3), 426–432. https://doi.org/10.1037/met0000351

Moriguchi, Y. (2022). Beyond bias to Western participants, authors, and editors in developmental science. Infant and Child Development, 31(1), Article e2256. https://doi.org/10.1002/icd.2256

Moshontz, H., Binion, G. E., Walton, H., Brown, B. T., & Syed, M. (2021). A guide to posting and managing preprints. Advances in Methods and Practices in Psychological Science, 4(2), 1–11.

Moshontz, H., Campbell, L., Ebersole, C. R., IJzerman, H., Urry, H. L., Forscher, P. S., Grahe, J. E., McCarthy, R. J., Musser, E. D., Antfolk, J., Castille, C. M., Evans, T. R., Fiedler, S., Flake, J. K., Forero, D. A., Janssen, S. M. J., Keene, J. R., Protzko, J., Aczel, B., & Chartier, C. R. (2018). The Psychological Science Accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science, 1(4), 501–515. https://doi.org/10.1177/2515245918797607

Munafò, M. R., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C. D., Percie du Sert, N., Simonsohn, U., Wagenmakers, E.-J., Ware, J. J., & Ioannidis, J. P. A. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), Article 0021. https://doi.org/10.1038/s41562-016-0021

Nielsen, M., Haun, D., Kärtner, J., & Legare, C. H. (2017). The persistent sampling bias in developmental psychology: A call to action. Journal of Experimental Child Psychology, 162, 31–38. https://doi.org/10.1016/j.jecp.2017.04.017

Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. https://doi.org/10.1073/pnas.1708274114

Pham, M. T., & Oh, T. T. (2021). Preregistration is neither sufficient nor necessary for good science. Journal of Consumer Psychology, 31(1), 163–176. https://doi.org/10.1002/jcpy.1209

Ponterotto, J. G. (1988). Racial/ethnic minority research in the Journal of Counseling Psychology: A content analysis and methodological critique. Journal of Counseling Psychology, 35(4), 410–418. https://doi.org/10.1037/0022-0167.35.4.410

Quintana, D. S. (2020). A synthetic dataset primer for the biobehavioural sciences to promote reproducibility and hypothesis generation. eLife, 9, Article e53275. https://doi.org/10.7554/eLife.53275

Roberts, S. O., Bareket-Shavit, C., Dollins, F. A., Goldie, P. D., & Mortenson, E. (2020). Racial inequality in psychological research: Trends of the past and recommendations for the future. Perspectives on Psychological Science, 15(6), 1295–1309. https://doi.org/10.1177/1745691620927709

Rozin, P. (2001). Social psychology and science: Some lessons from Solomon Asch. Personality and Social Psychology Review, 5(1), 2–14. https://doi.org/10.1207/S15327957PSPR0501_1

Silverstein, P., Elman, C., Montoya, A., McGillivray, B., Pennington, C. R., Harrison, C. H., Steltenpohl, C. N., Röer, J. P., Corker, K. S., Charron, L. M., Elsherif, M., Malicki, M., Hayes-Harb, R., Grinschgl, S., Neal, T., Evans, T. R., Karhulahti, V.-M., Krenzer, W. L. D., Belaus, A., … Syed, M. (2024). A guide for social science journal editors on easing into open science. Research Integrity and Peer Review, 9(1). https://doi.org/10.1186/s41073-023-00141-5

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632

Spellman, B. A. (2015). A short (personal) future history of revolution 2.0. Perspectives on Psychological Science, 10(6), 886–899. https://doi.org/10.1177/1745691615609918

Srivastava, S. (2018). Sound inference in complicated research: A multi-strategy approach. PsyArXiv. https://doi.org/10.31234/osf.io/bwr48

Syed, M. (2019). The open science movement is for all of us. PsyArXiv. https://doi.org/10.31234/osf.io/cteyb

Syed, M. (2023). The slow progress towards diversification in psychological research. PsyArXiv. https://psyarxiv.com/bqzs5/

Syed, M., & Kathawalla, U. K. (2022). Cultural psychology, diversity, and representation in open science. In K. C. McLean (Ed.), Cultural methods in psychology: Describing and transforming cultures (pp. 427–454). Oxford University Press. https://psyarxiv.com/t7hp2/

Szollosi, A., Kellen, D., Navarro, D. J., Shiffrin, R., van Rooij, I., Van Zandt, T., & Donkin, C. (2020). Is preregistration worthwhile? Trends in Cognitive Sciences, 24(2), 94–95. https://doi.org/10.1016/j.tics.2019.11.009

Tackett, J. L., Lilienfeld, S. O., Patrick, C. J., Johnson, S. L., Krueger, R. F., Miller, J. D., Oltmanns, T. F., & Shrout, P. E. (2017). It’s time to broaden the replicability conversation: Thoughts for and from clinical psychological science. Perspectives on Psychological Science, 12(5), 742–756. https://doi.org/10.1177/1745691617690042

Thalmayer, A. G., Toscanelli, C., & Arnett, J. J. (2021). The neglected 95% revisited: Is American psychology becoming less American? American Psychologist, 76(1), 116–129. https://doi.org/10.1037/amp0000622

Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7(6), 632–638. https://doi.org/10.1177/1745691612463078

Weston, S. J., Ritchie, S. J., Rohrer, J. M., & Przybylski, A. K. (2019). Recommendations for increasing the transparency of analysis of preexisting data sets. Advances in Methods and Practices in Psychological Science, 2(3), 214–227. https://doi.org/10.1177/2515245919848684

Wicherts, J. M. (2011). Psychology must learn a lesson from fraud case. Nature, 480, 7. https://doi.org/10.1038/480007a

Wicherts, J. M., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61(7), 726–728. https://doi.org/10.1037/0003-066X.61.7.726

Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., & Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3(1), 1–9. https://doi.org/10.1038/sdata.2016.18

Willroth, E. C., & Atherton, O. E. (2024). Best laid plans: A guide to reporting preregistration deviations. Advances in Methods and Practices in Psychological Science, 7(1), 1–14. https://doi.org/10.31234/osf.io/dwx69
