
Questionable Metascience Practices

Published on Apr 24, 2023

Abstract

Metascientists have studied questionable research practices in science. The present article considers the parallel concept of questionable metascience practices (QMPs). A QMP is a research practice, assumption, or perspective that has been questioned by several commentators as being potentially problematic for metascience and/or the science reform movement. The present article reviews ten QMPs that relate to criticism, replication, bias, generalization, and the characterization of science. Specifically, the following QMPs are considered: (1) rejecting or ignoring self-criticism; (2) a fast ‘n’ bropen scientific criticism style; (3) overplaying the role of replication in science; (4) assuming a replication rate is “too low” without specifying an “acceptable” rate; (5) an unacknowledged metabias towards explaining the replication crisis in terms of researcher bias; (6) assuming that researcher bias can be reduced; (7) devaluing exploratory results as being more “tentative” than confirmatory results; (8) presuming that questionable research practices are problematic research practices; (9) focusing on knowledge accumulation; and (10) focusing on specific scientific methods. It is stressed that only some metascientists engage in some QMPs some of the time, and that these QMPs may not always be problematic. Research is required to estimate the prevalence and impact of QMPs. In the meantime, QMPs should be viewed as invitations to ask questions about how we go about doing better metascience.

Keywords: metascience, open science, questionable research practices, replication crisis, science reform

In 2011, Simmons et al. demonstrated that researchers can present “anything as significant” (p. 1359) by conducting numerous analyses (e.g., using different outcome variables, sample sizes, and/or covariates) and then selectively reporting only those analyses that yield significant results. A year later, John et al. (2012) published the results of a survey which purported to show that questionable research practices (QRPs), such as HARKing and p-hacking, are prevalent among psychologists. A few years later, an attempt to replicate 100 psychology studies found that only 39% of effects were rated as replicable (Open Science Collaboration, 2015).
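To make the mechanism concrete, the following minimal simulation is a sketch (in Python, not Simmons et al.’s actual procedure; the sample size, number of outcomes, and number of simulated studies are assumed for illustration) of how testing several outcome variables and reporting whichever one is significant inflates the false-positive rate well above the nominal 5%.

```python
# A minimal sketch of selective reporting: even with no true effect, testing five
# independent outcomes and counting a study as "significant" if any of them works
# inflates the false-positive rate from the nominal 5% to roughly 23%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n, n_outcomes = 5_000, 30, 5  # illustrative values
false_positives = 0

for _ in range(n_studies):
    # Two groups with no true difference on any of the five independent outcomes.
    control = rng.normal(0, 1, size=(n, n_outcomes))
    treatment = rng.normal(0, 1, size=(n, n_outcomes))
    p_values = [stats.ttest_ind(treatment[:, k], control[:, k]).pvalue
                for k in range(n_outcomes)]
    # "Selective reporting": the study counts as significant if any outcome does.
    if min(p_values) < 0.05:
        false_positives += 1

print(f"False-positive rate with selective reporting: {false_positives / n_studies:.2f}")
# Typically prints about 0.23 rather than the nominal 0.05.
```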

In light of this and other work, some metascientists have concluded that QRPs play a significant role in increasing the publication of “false positive” results and, therefore, lowering replication rates (e.g., Bishop, 2019; Bishop, 2020; Munafò et al., 2017; Nosek et al., 2012; Open Science Collaboration, 2015; Schimmack, 2020; Spellman et al., 2018). Partly in response, science reformers have advocated new “open science” research practices that are intended to reveal and reduce QRPs (e.g., preregistered research plans, publicly accessible research data and materials, Munafò et al., 2017).

In the present article, I consider questionable research practices in the field of metascience. A questionable metascience practice (QMP) is a research practice, assumption, or perspective that has been questioned by several commentators as being potentially problematic for metascience and/or the science reform movement. I outline ten QMPs that are grouped into the five broad categories of (a) criticism, (b) replication, (c) bias, (d) generalization, and (e) science characterization.

Please note that I have not provided an exhaustive list of QMPs (for some additional QMPs, please see, Devezer et al., 2021, p. 2). In addition, unlike John et al.’s (2012) study of QRPs, I have not attempted to estimate the prevalence of the QMPs that I consider. It is possible that only a few metascientists have engaged in the QMPs, and that they have engaged in only a few QMPs a few times. Nonetheless, under some circumstances, a few low frequency QMPs may be quite influential and problematic, especially when they are undertaken by prominent metascientists who are regarded as leaders and role models in the field. Hence, it is worthwhile considering QMPs even if they have a low prevalence.

Finally, in my view, QMPs are not always problematic. They are merely “questionable” in the sense that they warrant questioning before a conclusion is reached about whether they are problematic in any given situation. Hence, my aim is not to cast aspersions on the field of metascience but, instead, to encourage a deeper consideration of its more questionable research practices, assumptions, and perspectives.1

Criticism-Related QMPs

Rejecting or Ignoring Self-Criticism

As several commentators have noted, some metascientists react particularly negatively and defensively towards criticisms of their proposed science reforms (Bastian, 2021; Gervais, 2021; Malich & Rehmann-Sutter, 2022, p. 5; Walkup, 2021, p. 132). For example, as Flis (2022) explained, there was a rather extreme negative reaction on social media to an article by Szollosi et al. (2020) that criticized the open science practice of preregistration. Flis suggested that this highly negative reaction may have represented a defensive response that was learned during metascientists’ interactions with so-called “status-quoers” who questioned the reality of the replication crisis and opposed the need for science reform. In other words, some first-generation metascientists and reformers may have adopted a particularly negative reaction to self-criticism because they perceived it to be a challenge to their raison d’être.

Instead of rejecting self-criticism, some metascientists may simply ignore it, especially in the more authoritative space of the published literature. For example, as of February 2023, 228 articles have cited a pro-preregistration article by Nosek et al. (2019) that was published around the same time and in the same journal as Szollosi et al.’s (2020) critical article. However, only 17% of these 228 articles (k = 39) have also cited Szollosi et al. (To identify these 39 articles, I clicked on “cited by” in Google Scholar for the Nosek et al. article and then selected “search within citing articles” and searched for “Is preregistration worthwhile?”). This low co-citation rate may reflect a citation bias against an article that is critical of a prominent science reform (for another example of potential citation bias, please see, Flis, 2022, p. 6). This type of citation bias creates an illusion of consensus in the literature, and it may obstruct the motive for theory improvement by giving the impression that current theories are adequate and undisputed (see also, Bishop, 2020; Hoekstra & Vazire, 2021, p. 1604). Hence, “failing to cite publications that contradict your beliefs” is regarded as a QRP (Allum et al., 2023, p. 8). To prevent this QRP from becoming a QMP, metascientists should encourage self-criticism, cite their critics’ work, and respond in a thoughtful manner (Altenmüller et al., 2021; Gervais, 2021, p. 828; Haig, 2022, p. 235; Hoekstra & Vazire, 2021, p. 1604). To be clear, metascientists do not always need to concede to their critics’ arguments. However, they do need to engage with those arguments publicly, formally, and carefully (see also, Longino, 1990).

Fast ‘n’ Bropen Criticism

Concerns have also been raised about the style and tone of some metascientists’ interactions with scientists, especially on social media (e.g., Fiske, 2016; Hamlin, 2017, p. 692; Pownall & Hoerst, 2022; Whitaker & Guest, 2020). For example, Whitaker & Guest (2020) coined the term bropenscience to refer to a dismissive, mocking, school-yard style of scientific criticism that some metascientists sometimes use on social media (e.g., Anonymous, 2021; see also, Derksen & Field, 2022; Pownall et al., 2021, pp. 529-530). Similarly, Pownall (2022) noted that, in contrast to the appeal for more thoughtful and “slower” science, there is a “growing culture of fast, hostile, and superficial critiques of research” on social media.2

Although a fast ‘n’ bropen criticism style may be used rarely and by few metascientists, it can be problematic if it is used by relatively prominent metascientists who are regarded as being representative of the field. In particular, it may (a) distract from and/or deter legitimate criticism, (b) cause scientists to feel personally attacked and/or excluded (e.g., Derksen & Field, 2022; Hamlin, 2017, p. 692; Pownall et al., 2021), (c) damage the reputation of metascience, and/or (d) reduce the uptake of beneficial science reforms (Gervais, 2021). Metascientists should undertake thoughtful, “critical evaluation with civility and mutual respect” (Society for the Improvement of Psychological Science, 2022).

Replication-Related QMPs

Overplaying Replication

Some metascientists assume that direct replications are a method for assessing the “truth” of a claim or effect. For example, Nosek et al. (2012, p. 617) stated that “replication is a means of increasing the confidence in the truth value of a claim”; Nelson et al. (2018, p. 520) stated that, “to a scientist, a true effect is one that replicates under specifiable conditions”; and Simmons et al. (2021, p. 153) stated that “many published findings do not replicate under specifiable conditions and so are, by the standards of science, untrue” (for further examples, see, Devezer et al., 2021, pp. 6-8). Some metascientists also regard replication as an essential and defining aspect of science. For example, the Open Science Collaboration (2015, p. 1) described reproducibility as “a defining feature of science,” and Zwaan et al. (2018, p. 13) explained that replication is “an essential component of science…a foundational principle of the scientific method” (see also, Asendorpf et al., 2013, p. 108; Chambers, 2017, p. 48; Nosek et al., 2012, p. 618; for further examples, see, Drummond, 2019, p. 64; Haig, 2022, p. 226; Maxwell et al., 2015, p. 487). In response, critics have argued that these sorts of statements overplay the role of replication in science (De Boeck & Jeon, 2018; Devezer et al., 2019; Devezer et al., 2021; Feest, 2019; Greenfield, 2017; Guttinger, 2020; Haig, 2022; Iso-Ahola, 2020; Leonelli, 2018; Norton, 2015).

Replication does not indicate whether research claims or findings are true. Replicable results may be “false” due to model misspecification, reliable but invalid measures, or overly liberal evidence thresholds, and “true” results may be nonreplicable due to model misspecification, unreliable methods, or irreversible changes in the population over time (Bak-Coleman et al., 2022; Buzbas et al., 2023; De Boeck & Jeon, 2018; Devezer et al., 2019; Devezer et al., 2021; Errington, Mathur, et al., 2021; Guttinger, 2020; Iso-Ahola, 2020; Norton, 2015; Nosek et al., 2022, p. 739; Rubin, 2021a; D. J. Stanley & Spence, 2014). Furthermore, replication is not an essential component of science. Scientists often use other methods to demonstrate the reliability of their results, such as robustness analyses (Haig, 2022; Leonelli, 2018). Alternatively, they may provide a repeat demonstration of the existence of a phenomenon within the same study using a different set of variables that are nonetheless representative of the theoretical constructs that were used in the original demonstration.
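The following sketch illustrates how replicability and truth can come apart. It uses purely simulated data with assumed parameter values and is not drawn from any of the cited studies: a constant confound shared across studies yields a highly replicable but invalid “effect,” whereas a small true effect studied at low power fails to replicate most of the time.

```python
# A minimal sketch (illustrative parameters assumed): replicability and truth can
# come apart. (a) No true effect, but a confound shared by every study produces a
# reliably significant result; (b) a small true effect studied at low power fails
# to replicate in most attempts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def significance_rate(effect, bias, n, reps=2_000, alpha=0.05):
    """Proportion of simulated studies whose group comparison is significant."""
    hits = 0
    for _ in range(reps):
        treatment = rng.normal(effect, 1, n) + bias  # bias = shared confound
        control = rng.normal(0, 1, n)
        if stats.ttest_ind(treatment, control).pvalue < alpha:
            hits += 1
    return hits / reps

# (a) No true effect, but every study shares the same confound: highly "replicable".
print("Replicable but false:", significance_rate(effect=0.0, bias=0.8, n=50))
# (b) True but small effect with n = 20 per group: replicates only rarely.
print("True but nonreplicable:", significance_rate(effect=0.2, bias=0.0, n=20))
```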

Certainly, replication is important in some areas of science. However, it is a QMP to overplay replication as an “essential” aspect of science that indexes the “truth” of findings (Devezer et al., 2021, p. 10).

Unspecified Replication Rate Targets

Some metascientists claim that replication rates need to be improved. For example, the Open Science Collaboration (2015, p. 7) concluded that “there is room to improve reproducibility in psychology,” and Munafò et al. (2017, p. 1) explained that “data from many fields suggests reproducibility is lower than is desirable.” However, it is unclear how replication rates can be judged to be “low” and in need of improvement in the absence of clear targets for “acceptable” replication rates. Logically, this reasoning represents an incomplete comparison.

In their recent review, Nosek et al. (2022) found that 64% of 307 replications reported statistically significant evidence in the same direction as the original studies. Is this replication rate “too low” or is it “acceptable?” Nosek et al. were unsure, asking: “what degree of replicability should be expected?” (p. 730) and “what is the optimal replicability rate at different stages of research maturity?” (p. 738). They suggested that these questions should be addressed in future metascience research (see also, Open Science Collaboration, 2015, p. 7). However, the deferral of this question implies that metascientists are trying to solve a problem that they are not yet sure exists. After all, future research may reveal that current replication rates are “acceptable” (Bird, 2020; Freiling et al., 2021, p. 692; Guttinger, 2020, p. 8; Lewandowsky & Oberauer, 2020). Alternatively, the meaningfulness of quantifying replication rates may be called into question (Buzbas et al., 2023; Rubin, 2021a).
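As a concrete illustration of this incomplete comparison, the sketch below takes the 64% rate across 307 replications reported by Nosek et al. (2022) and shows that any verdict of “too low” depends entirely on the target one assumes; the candidate targets in the loop are hypothetical.

```python
# A minimal sketch of the "incomplete comparison" point: the same observed rate
# can meet or fall short of the standard depending on the assumed target.
import math

rate, total = 0.64, 307  # figures reported in the text (Nosek et al., 2022)

# Normal-approximation 95% confidence interval for the observed rate.
se = math.sqrt(rate * (1 - rate) / total)
low, high = rate - 1.96 * se, rate + 1.96 * se
print(f"Observed replication rate: {rate:.2f} (approximate 95% CI {low:.2f}-{high:.2f})")

# The verdict flips with the (hypothetical) "acceptable" target that is assumed.
for target in (0.50, 0.70, 0.90):
    verdict = "meets" if rate >= target else "falls short of"
    print(f"The observed rate {verdict} an assumed target of {target:.0%}")
```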

In the absence of clear targets for “acceptable” replication rates, it is not surprising that several commentators have questioned whether current replication rates are at “crisis” levels (e.g., Barrett, 2015; Bird, 2020; Buzbas et al., 2023; Fanelli, 2018; Firestein, 2016; Freiling et al., 2021; Haig, 2022; Maxwell et al., 2015; Morawski, 2019; Shrout & Rodgers, 2018; Stroebe & Strack, 2014; Wood & Wilson, 2019). Certainly, claiming that a replication rate is “too low” without specifying an “acceptable” replication rate represents a QMP.

Bias-Related QMPs

Metabias

As several commentators have observed, contemporary metascientists tend to be concerned with how bias and motivated reasoning influence scientists’ methods, analyses, and interpretations (Field & Derksen, 2021; Flis, 2019; Morawski, 2019; Morawski, 2022; Peterson & Panofsky, 2020, p. 7; for examples, see, Bishop, 2020; Chambers, 2017, chapter 1; Chambers & Tzavella, 2022; Hardwicke & Wagenmakers, 2023; Ioannidis et al., 2014; Munafò et al., 2017; Nosek et al., 2012; Simmons et al., 2021, p. 153). Indeed, Morawski (2022) has suggested that metascientists may be biased towards explaining the replication crisis in terms of researcher bias because psychologists, who tend to be familiar with cognitive and motivational biases, are overrepresented among metascientists (Moody et al., 2022; see also, Flis, 2019; Malich & Rehmann-Sutter, 2022); that is, the metabias may reflect a type of availability heuristic. Consistent with Morawski’s interpretation, it is interesting to note that psychologists’ metabias may also explain their emphasis on researcher bias during the 1960s-1970s crisis of confidence in social psychology (Peterson & Panofsky, 2021, p. 600; Rosnow, 1983). In this previous crisis, psychologists were concerned about researchers biasing the behavior of their participants (e.g., experimenter expectancy effects). In the current replication crisis, they are more concerned about researchers biasing their methods and analyses.

To be consistent with their concerns about researcher bias, metascientists should acknowledge their own metabias towards explanations of the replication crisis that refer to researcher bias. There are multiple mutually compatible explanations for failed replications that do not refer to researcher bias, including data errors, fraud, a base rate fallacy, low power, unreliable measurement, poor validity, hidden moderators, and heterogenous effects (e.g., Bird, 2020; De Boeck & Jeon, 2018; Fabrigar et al., 2020; Maxwell et al., 2015; Rubin, 2021a; D. J. Stanley & Spence, 2014). Researcher bias and associated QRPs represent only one potential explanation, yet they have been given a disproportionate amount of attention in explanations of, and solutions to, the replication crisis (e.g., Hardwicke & Wagenmakers, 2023; Munafò et al., 2017; Schimmack, 2020, p. 372). Focusing on researcher bias at the expense of other viable explanations represents a form of causal reductionism (Devezer et al., 2019, p. 17), and an acknowledgement of metabias may help to produce a more balanced and comprehensive multicausal account of the replication crisis.
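To illustrate how some of these non-bias explanations can, on their own, produce “low” replication rates, the following sketch combines a base rate of true hypotheses with statistical power, in the spirit of Bird’s (2020) base-rate argument. The prior probabilities, alpha, and power values are assumptions chosen for illustration.

```python
# A minimal sketch: with no researcher bias at all, a low base rate of true
# hypotheses plus modest power is enough to yield a "low" replication rate.
def expected_replication_rate(prior_true, alpha, power):
    """Expected rate of significant replications among significant original findings."""
    # Positive predictive value: share of significant originals that are true effects.
    ppv = (prior_true * power) / (prior_true * power + (1 - prior_true) * alpha)
    # A replication run at the same alpha and power then succeeds at this expected rate.
    return ppv * power + (1 - ppv) * alpha

for prior, power in [(0.5, 0.8), (0.2, 0.5), (0.1, 0.35)]:  # illustrative values
    rate = expected_replication_rate(prior_true=prior, alpha=0.05, power=power)
    print(f"prior={prior:.2f}, power={power:.2f} -> expected replication rate = {rate:.2f}")
```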

The Bias Reduction Assumption

Some metascientists believe that preregistration and registered reports reduce researcher bias. For example, Hardwicke & Wagenmakers (2023, p. 15) explained that “preregistration…reduces the risk of bias by encouraging outcome-independent decision-making”; Vazire et al. (2022, p. 166) explained that “the aim of the Registered Report format is to reduce bias by eliminating many of the avenues for undisclosed flexibility in research”; and Chambers (2018) described “Registered Reports as a vaccine against research bias” (see also, Chambers & Tzavella, 2022, p. 32; Scheel et al., 2021, p. 2; for commentary, see, Field & Derksen, 2021). There are three problems with this claim.

First, researcher bias influences not only the post hoc selection of hypotheses, data, analyses, and results (i.e., selective reporting), but also the a priori selection of hypotheses, methods, analyses, evidence thresholds, and interpretations (i.e., selective questioning; Rubin & Donkin, 2022), and considering selective reporting without also considering selective questioning may lead to a biased evaluation of researcher bias. For example, preregistering the number of times that a researcher will toss a coin may help to identify and reduce any selective reporting of their results (e.g., only reporting when the coin lands heads and not when it lands tails). However, the reduction of this selective reporting will not reduce researcher bias if the researcher’s preregistered decision rule is “heads I win, tails you lose!” As Clark et al. (2022) put it, “the dice have often been loaded before pre-registration” (p. 13; see also, Dellsén, 2020; Jamieson et al., 2023). Consequently, it is a QMP to assume that a preregistered study is less biased than a non-preregistered study, because selective questioning in the preregistered study may be more problematic than selective reporting in the non-preregistered study (for similar concerns, see, Devezer et al., 2021, p. 16; Freiling et al., 2021, p. 698; Jamieson et al., 2023; McDermott, 2022; Oberauer, 2019; Pham & Oh, 2021, p. 167; Rubin & Donkin, 2022; Szollosi et al., 2020, p. 95; Wiggins & Christopherson, 2019, p. 212).
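A minimal sketch of the coin-toss example above (all numbers are illustrative): preregistering the number of tosses removes selective reporting, because every toss is reported, yet the preregistered “heads I win, tails you lose” decision rule remains biased.

```python
# A minimal sketch of the coin-toss example: full, preregistered reporting of all
# outcomes, combined with a decision rule that favours the researcher regardless
# of what the coin does (selective questioning rather than selective reporting).
import random

random.seed(3)
N_TOSSES = 100  # the preregistered number of tosses: every outcome gets reported

tosses = [random.choice(["heads", "tails"]) for _ in range(N_TOSSES)]
print(f"Heads: {tosses.count('heads')}, Tails: {tosses.count('tails')}")  # full reporting

def preregistered_decision(toss):
    """The preregistered but biased decision rule."""
    return "I win" if toss == "heads" else "you lose"

# Every single toss ends up supporting the researcher's position.
favourable = sum(preregistered_decision(t) in ("I win", "you lose") for t in tosses)
print(f"Tosses counted in the researcher's favour: {favourable} / {N_TOSSES}")
```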

Second, it might be argued that preregistration reduces selective reporting when all other variables are held constant, including variables associated with selective questioning. However, even if, ceteris paribus, preregistration reduces selective reporting, it may also increase other types of researcher bias, such as (a) the researcher commitment bias (sticking with a planned research approach, even when it is inappropriate), (b) the researcher prophecy bias (misattributing a researcher’s lucky, atheoretical prophecy to a theory’s predictive power), and (c) a bias towards committing data fraud (for a discussion, please see, Rubin & Donkin, 2022). Again, it is a QMP to consider bias reduction in terms of selective reporting per se and ignore other forms of researcher bias.

Finally, and more generally, the metascientific concept of “bias reduction” assumes that researchers can get closer to an “unbiased” evaluation, which smacks of naïve objectivism, naïve empiricism, naïve realism, and value-free science (Field & Derksen, 2021; Morawski, 2019, p. 228; Reiss & Sprenger, 2020; Strong, 1991; Dijk, 2021; Wiggins & Christopherson, 2019). According to these philosophical positions, scientists can observe an immutable reality directly and in an unbiased and objective manner. However, contrary to these positions, research is always undertaken from one perspective or another, so it is always “biased” from one perspective or another, and what are seen as decreases in bias from one perspective may be regarded as increases in bias from another. Consequently, a more tenable position is that open science practices help to reveal different perspectives rather than to reduce bias (Field & Derksen, 2021; Grossmann, 2021; Jamieson et al., 2023; Pownall, 2022). For example, a robustness or multiverse analysis allows readers to understand how different analytical approaches produce or “enact” different results (Del Giudice & Gangestad, 2021; Morey, 2019; Rubin, 2020; for a discussion of the “enactment” perspective, see, Derksen & Morawski, 2022). In addition, researcher positionality statements can reveal researchers’ perspectives rather than reduce their biases (Jamieson et al., 2023).
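As a rough illustration of this “revealing perspectives” alternative, the following multiverse-style sketch uses simulated data; the two analytic decisions and the way they are crossed are assumptions chosen for illustration. Rather than presenting a single “unbiased” estimate, it reports the group-effect estimate under every combination of choices so that readers can see how different defensible approaches enact different results.

```python
# A minimal multiverse-style sketch: cross two analytic choices (covariate
# adjustment, outlier trimming) and report the full spread of effect estimates.
import itertools
import numpy as np

rng = np.random.default_rng(4)
n = 200
covariate = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)                       # 0 = control, 1 = treatment
outcome = 0.2 * group + 0.5 * covariate + rng.normal(0, 1, n)
outcome[rng.choice(n, 5, replace=False)] += 4       # a few extreme scores

estimates = {}
for adjust, trim in itertools.product([False, True], [None, 3.0]):
    y, g, c = outcome.copy(), group, covariate
    if trim is not None:                             # outlier-handling choice
        keep = np.abs((y - y.mean()) / y.std()) < trim
        y, g, c = y[keep], g[keep], c[keep]
    if adjust:                                       # covariate-adjustment choice
        X = np.column_stack([np.ones_like(y), g, c])
    else:
        X = np.column_stack([np.ones_like(y), g])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS estimate; beta[1] = group effect
    estimates[(adjust, trim)] = beta[1]

for (adjust, trim), est in estimates.items():
    print(f"adjust_covariate={adjust!s:5} trim_z={trim} -> group effect = {est:.3f}")
```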

Sweeping Generalization QMPs

Devaluing Exploratory Hypothesis Tests

Some metascientists devalue unplanned exploratory tests of post hoc hypotheses relative to preregistered confirmatory tests of a priori hypotheses, even when the exploratory tests are correctly reported as being exploratory. For example, relative to the results of confirmatory hypothesis tests, the results of exploratory tests are supposed to have a “higher risk of bias” (Hardwicke & Wagenmakers, 2023, p. 19) and entail greater “uncertainty” (Nosek et al., 2018, p. 2601), which makes their associated conclusions more “tentative” (Errington, Denis, et al., 2021, p. 19; Ioannidis et al., 2014, p. 238; Nelson et al., 2018, p. 519; Nosek & Lakens, 2014, p. 138; Simmons et al., 2021, p. 154). Consequently, “confirmatory analyses…have much greater evidential impact than exploratory analyses” (Wagenmakers, 2012, p. 13), and research conclusions should be “appropriately weighted in favour of the confirmatory outcomes” (Chambers & Tzavella, 2022, p. 36). There are two problems with this perspective.

First, critics have argued that the distinction between exploratory and confirmatory hypothesis tests is unclear and irrelevant, both from a statistical perspective (Devezer et al., 2021; Rubin, 2020; Rubin, 2021b) and from a philosophical standpoint (Rubin, 2020; Rubin, 2022; Rubin & Donkin, 2022; Szollosi & Donkin, 2021). In particular, it has been shown that the “double use” of the same data to (a) generate hypotheses and then (b) test those hypotheses is not necessarily problematic (Devezer et al., 2021), and that any “circular reasoning” involved in this process can be identified by checking the contents of the reasoning without needing to know the timing of the reasoning (Rubin & Donkin, 2022).

Second, even if we accept the validity of the confirmatory-exploratory distinction and agree that, all other things being equal, exploratory results tend to be more tentative than confirmatory results, it would be a fallacy of the general rule to conclude that all exploratory results are more tentative than all confirmatory results. For example, an exploratory result may be evaluated as being less tentative than a confirmatory result when it is based on higher quality theory, methods, and analyses than the confirmatory result and when it is accompanied by greater transparency vis-à-vis robustness analyses and open data and materials (Devezer et al., 2021; Morey, 2019; Rubin, 2020; Szollosi et al., 2020). Consequently, it would be a QMP to argue that “exploratory studies cannot be presented as strong evidence in favor of a particular claim” (Wagenmakers et al., 2012, p. 635), because high quality exploratory studies can provide stronger evidence than low quality confirmatory studies (see also, Rubin, 2017b, p. 314).

Presuming QRPs are Problematic

Another sweeping generalization QMP is to presume that questionable research practices are always problematic research practices. For example, Hartgerink & Wicherts (2016, p. 1) defined QRPs as “practices that are detrimental to the research process…[and that] harm the research process”; Chambers (2014) described QRPs as “soft fraud”; and Schimmack (2020, p. 372) proposed that “the most obvious solution [to the replication crisis] is to ban the use of questionable research practices and to treat them like other types of unethical behaviours.” There are two problems with this position.

First, QRPs can be perfectly acceptable research practices (Fiedler & Schwarz, 2016; Moran et al., 2022, Table 6; Rubin, 2022, p. 551; Sacco et al., 2019). For example, the QRP of “failing to report all of a study’s dependent measures” (John et al., 2012, p. 525) may not indicate p-hacking if (a) there are good reasons to exclude the measures from the research report and (b) the excluded measures are irrelevant to the final research conclusions (Fiedler & Schwarz, 2016, p. 46; John et al., 2012, p. 531; Rubin, 2017b; Rubin, 2020). As their name implies, QRPs need to be “questioned” by other researchers and interpreted in specific research situations before they can be judged to be potentially problematic.

Second, even potentially problematic research practices such as HARKing and p-hacking may not always be problematic for research credibility and replicability (e.g., Bak-Coleman et al., 2022; Devezer et al., 2019; Fanelli, 2018; Leung, 2011; Rubin, 2017a; Rubin, 2017b; Rubin, 2020; Rubin, 2022; T. D. Stanley et al., 2018; Ulrich & Miller, 2020; Vancouver, 2018). Hence, a more tenable position is to assume that only some QRPs are potentially problematic in specific research situations, and only some potentially problematic research practices are actually problematic under some conditions.

Science Characterization QMPs

Focusing on Knowledge Accumulation

Some metascientists assume that the goal of science is to accumulate knowledge (e.g., Errington, Mathur, et al., 2021, p. 1; Munafò et al., 2017; Nosek et al., 2012; Vazire, 2018). For example, Nosek et al. (2012, p. 617) explained that “the primary objective of science as a discipline is to accumulate knowledge about nature,” and Vazire (2018, p. 416) explained that “the common goal among all scientists is to accumulate knowledge.” Commentators have noted that, from this perspective, some metascientists view low replication rates as indicating an “inefficient” accumulation of knowledge (Morawski, 2022; Peterson & Panofsky, 2021; for examples, see, Errington, Mathur, et al., 2021; Munafò et al., 2017; Nosek et al., 2012; Vazire, 2018; for discussions, see, Hostler, 2022; Uygun Tunç et al., 2022). The proposed open science reforms are supposed to improve the efficiency of knowledge accumulation (e.g., Chambers & Tzavella, 2022, p. 37; Nosek et al., 2012, p. 626). For example, Nosek et al. (2012, p. 626) concluded that “scientific practices can be improved to enhance the efficiency of knowledge building.”

However, there are two reasons that knowledge accumulation may not be regarded as the primary objective of science. First, different philosophies of science emphasize different goals. For example, besides knowledge accumulation, Dellsén (2018) described three alternative goals of science: truth-seeking, problem-solving, and understanding. Second, any philosophy of science that posits knowledge as a goal should also acknowledge the complementary role of ignorance, expressed in questions such as “What does this unexpected effect mean?” and “Why did we find a null result in this study?” These sorts of known unknowns are essential for scientific progress because they motivate the generation of hypotheses for future studies. Hence, according to this “knowledge-and-ignorance” perspective, scientific progress is achieved through not only knowledge accumulation, but also specified ignorance (Firestein, 2012; Merton, 1987; Open Science Collaboration, 2015, p. 7; Rubin, 2021a, p. 5826; Smithson, 1996).

Importantly, knowledge accumulation and specified ignorance have opposite associations with replicability. Successful replications represent scientific progress by confirming current hypotheses. However, failed replications also represent scientific progress by motivating the generation of new hypotheses that explain why the replications failed (e.g., by positing boundary conditions; for an example, see, Firestein, 2012). Hence, although low replication rates may indicate poor knowledge accumulation, they may also represent scientific progress vis-à-vis greater specified ignorance.

In summary, definitions of scientific progress depend on the types of goals to be achieved (Haig, 2022, p. 236). Metascientists who assume that knowledge accumulation is central to scientific progress should also acknowledge that (a) other philosophies of science regard other objectives as being more important, and (b) specified ignorance is equally as important as knowledge accumulation.

Table 1. Questionable Metascience Practices (name, definition, and recommended practice for each QMP)

1. Rejecting or ignoring self-criticism. Definition: Rejecting or ignoring criticisms of metascience and/or science reform. Recommended practice: Encourage self-criticism, cite critics’ work, and respond in a thoughtful manner.

2. Fast ‘n’ bropen criticism. Definition: A quick, superficial, dismissive, and/or mocking style of scientific criticism. Recommended practice: Undertake careful “critical evaluation with civility and mutual respect” (Society for the Improvement of Psychological Science, 2022).

3. Overplaying replication. Definition: Assuming that replication is essential to science, and that it indexes “the truth.” Recommended practice: Qualify and contextualize claims about the centrality and role of replication in science.

4. Unspecified replication rate targets. Definition: Assuming that a replication rate is “too low” without specifying an “acceptable” rate. Recommended practice: Elaborate on the meaning of “low” when discussing “low replication rates.”

5. Metabias. Definition: A bias towards explaining the replication crisis in terms of researcher bias. Recommended practice: Undertake a more balanced and comprehensive assessment of explanations for the replication crisis.

6. The bias reduction assumption. Definition: Focusing on selective reporting as the primary form of researcher bias and assuming that it can be reduced without increasing other forms of bias. Recommended practice: Consider other forms of researcher bias (e.g., selective questioning, researcher commitment bias) and reveal different research perspectives (e.g., through robustness analyses and researcher positionality statements).

7. Devaluing exploratory hypothesis tests. Definition: Devaluing an exploratory result as being more “tentative” than a confirmatory result without considering other relevant issues (e.g., quality of associated theory, methods, analyses, transparency). Recommended practice: Acknowledge that some exploratory results can be less tentative than some confirmatory results.

8. Presuming QRPs are problematic. Definition: Presuming that questionable research practices are always problematic research practices. Recommended practice: Acknowledge that only some QRPs are potentially problematic in specific research situations, and only some potentially problematic research practices are actually problematic under some conditions.

9. Focusing on knowledge accumulation. Definition: Conceiving knowledge accumulation as the primary objective of science without considering (a) the role of specified ignorance or (b) different objectives in other philosophies of science. Recommended practice: Acknowledge that (a) knowledge accumulation and specified ignorance go hand-in-hand and (b) different philosophies of science define scientific progress differently.

10. Homogenizing science. Definition: Focusing on specific approaches as “the scientific method.” Recommended practice: Diversify membership in the metascience community and embrace scientific diversity and pluralism.

Homogenizing Science

As several commentators have noted, some metascientists appear to assume that there is a single scientific method rather than a collection of diverse methods (for commentators, see, Drummond, 2019; Malich & Rehmann-Sutter, 2022; Peterson & Panofsky, 2020, p. 21; see also, Guttinger, 2020, p. 2). Malich & Rehmann-Sutter (2022, pp. 4-6) argued that this “homogenizing view” is apparent every time a metascientist refers to “the scientific method” in the singular and without qualification (e.g., Munafò et al., 2017, p. 7; Nosek et al., 2012, p. 618; Zwaan et al., 2018, p. 13; for further examples, see, Drummond, 2019, p. 64).

In addition, and at the risk of homogenizing metascience (Field, 2022), some (not all) metascientists focus their concerns on particular aspects of “the scientific method” (Flis, 2019). In particular, the contemporary metascientific view of science tends to focus on:

  1. a priori predictions (e.g., Chambers & Tzavella, 2022, p. 36; Simmons et al., 2021, p. 154);

  2. quantitative methods (Bennett, 2021; Hamlin, 2017, p. 691; Pownall et al., 2021, p. 530);

  3. rigorous statistical analyses (for a review, see, Moody et al., 2022);

  4. replicable effects (e.g., Nosek et al., 2012, p. 617; Simmons et al., 2021, p. 153);

  5. unbiased interpretations (e.g., Hardwicke & Wagenmakers, 2023; Vazire et al., 2022, p. 166); and

  6. a Popperian philosophy of science (Flis, 2019; Grossmann, 2021, p. 74; Morawski, 2019; Morawski, 2022; for examples, see, Derksen, 2019).

However, from a critical perspective, these foci may be associated with:

  1. predictivism: the view that a priori predictions are superior to post hoc inferences (Oberauer & Lewandowsky, 2019, p. 1605; Rubin, 2017b; Rubin, 2022);

  2. methodolatry/methodologism: the prioritizing of methodological rigor over other research concerns, such as theory (Chamberlain, 2000; Danziger, 1990, p. 5; Gao, 2014);

  3. statisticism/mathematistry: an overemphasis on statistics as both a problem and a solution in science (Boring, 1919; Brower, 1949; Fiedler, 2018; Proulx & Morey, 2021);

  4. naïve empiricism (Strong, 1991): the view that science progresses through the accumulation of replicable effects (Flis, 2022; Proulx & Morey, 2021; van Rooij & Baggio, 2021);

  5. naïve objectivism: the view that it is possible for scientists to adopt unbiased and objective perspectives (Field & Derksen, 2021; Penders, 2022; Wiggins & Christopherson, 2019); and

  6. a fairly narrow and outdated philosophy of science (Derksen, 2019; Flis, 2019, p. 170; Grossmann, 2021, p. 74; Morawski, 2019, p. 226, p. 233).

Furthermore, several commentators have noted that these metascientific foci may have the unintended consequence of alienating scientists whose work does not fit with this particular view of science (Bennett, 2021; Kessler et al., 2021; Levin & Leonelli, 2017; Malich & Rehmann-Sutter, 2022; McDermott, 2022, p. 58; Penders, 2022; Pownall et al., 2021, p. 530; Prosser et al., 2022; Wentzel, 2021, p. 170). To address this problem, and to facilitate the recognition of their own biases, metascientists should continue to diversify their membership and embrace scientific diversity and pluralism (Andreoletti, 2020; Flis, 2022; Gervais, 2021; Grossmann, 2021; Leonelli, 2022; Pownall, 2022).

Table 1 summarizes the 10 QMPs that I have discussed and includes recommended practices in relation to each one.

Conclusion

Paralleling John et al.’s (2012) concept of questionable research practices, the present article considered a nonexhaustive list of 10 questionable metascience practices. Readers may disagree about the importance of specific QMPs. However, in my view, it remains useful for metascientists to consider the basic concept of QMPs and to reflect on the ways in which they (a) handle criticism, (b) conceptualize replication, (c) consider researcher bias, (d) avoid sweeping generalizations, and (e) acknowledge the diversity and pluralism of science.

In discussing QMPs, we should be careful not to homogenize metascience (Field, 2022) or to presume that QMPs are necessarily problematic. It is likely that only some metascientists engage in some QMPs some of the time and that QMPs are only problematic in some situations. Future metascientific research may wish to assess the prevalence and impact of various QMPs in order to obtain a clearer understanding of these issues. In the meantime, QMPs should be regarded as invitations to reflect on metascientific practices, assumptions, and perspectives and to ask “questions” about how we go about doing better metascience.

References

Allum, N., Reid, A., Bidoglia, M., Gaskell, G., Aubert-Bonn, N., Buljan, I., ... & Veltri, G. (2023). Researchers on research integrity: a survey of European and American researchers. F1000Research, 12(187), 187. https://doi.org/10.12688/f1000research.128733.1

Altenmüller, M. S., Nuding, S., & Gollwitzer, M. (2021). No harm in being self-corrective: Self-criticism and reform intentions increase researchers’ epistemic trustworthiness and credibility in the eyes of the public. Public Understanding of Science, 30(8), 962-976. https://doi.org/10.1177/09636625211022181

Andreoletti, M. (2020). Replicability crisis and scientific reforms: Overlooked issues and unmet challenges. International Studies in the Philosophy of Science, 33(3), 135-151. https://doi.org/10.1080/02698595.2021.1943292

Anonymous. (2021, November 25). It’s 2021… and we are still dealing with misogyny in the name of open science. University of Sussex School of Psychology Blog. https://blogs.sussex.ac.uk/psychology/2021/11/25/its-2021-and-we-are-still-dealing-with-misogyny-in-the-name-of-open-science/

Asendorpf, J. B., Conner, M., De Fruyt, F., De Houwer, J., Denissen, J. J., Fiedler, K., ... & Wicherts, J. M. (2013). Recommendations for increasing replicability in psychology. European Journal of Personality, 2(27), 108-119. https://doi.org/10.1002/per.1919

Bak-Coleman, J. B., Mann, R. P., West, J., & Bergstrom, C. T. (2022, April 28). Replication does not measure scientific productivity. SocArXiv. https://doi.org/10.31235/osf.io/rkyf7

Barrett, L. F. (2015, September 1). Psychology is not in crisis. The New York Times. https://www3.nd.edu/~ghaeffel/ScienceWorks.pdf

Bastian, H. (2021, October 31). The metascience movement needs to be more self-critical. PLOS Blogs: Absolutely Maybe. https://absolutelymaybe.plos.org/2021/10/31/the-metascience-movement-needs-to-be-more-self-critical/

Bennett, E. A. (2021). Open science from a qualitative, feminist perspective: Epistemological dogmas and a call for critical examination. Psychology of Women Quarterly, 45(4), 448-456. https://doi.org/10.1177/03616843211036460

Bird, A. (2020). Understanding the replication crisis as a base rate fallacy. The British Journal for the Philosophy of Science, 72(4), 965-993. https://doi.org/10.1093/bjps/axy051

Bishop, D. (2019). Rein in the four horsemen of irreproducibility. Nature, 568(7753), 435-436. https://www.nature.com/articles/d41586-019-01307-2

Bishop, D. V. (2020). The psychology of experimental psychologists: Overcoming cognitive constraints to improve research: The 47th Sir Frederic Bartlett Lecture. Quarterly Journal of Experimental Psychology, 73(1), 1-19. https://doi.org/10.1177/1747021819886519

Boring, E. G. (1919). Mathematical vs. scientific significance. Psychological Bulletin, 16(10), 335-338. https://doi.org/10.1037/h0074554

Brower, D. (1949). The problem of quantification in psychological science. Psychological Review, 56(6), 325–333. https://doi.org/10.1037/h0061802

Buzbas, E. O., Devezer, B., & Baumgaertner, B. (2022, August 12). The logical structure of experiments lays the foundation for a theory of reproducibility. bioRxiv. https://doi.org/10.1101/2022.08.10.503444

Chamberlain, K. (2000). Methodolatry and qualitative health research. Journal of Health Psychology, 5(3), 285-296. https://doi.org/10.1177/135910530000500306

Chambers, C. (2014, June 10). Physics envy: Do ‘hard’ sciences hold the solution to the replication crisis in psychology? The Guardian. http://www.theguardian.com/science/head-quarters/2014/jun/10/physics-envy-do-hard-sciences-hold-the-solution-to-the-replication-crisis-in-psychology

Chambers, C. (2017). The seven deadly sins of psychology: A manifesto for reforming the culture of scientific practice. Princeton University Press.

Chambers, C. (2018, January 25). Registered Reports as a vaccine against research bias: Past, present and future. Presentation at Registered Reports Workshop, Trier, Germany. https://doi.org/10.23668/psycharchives.797

Chambers, C. D., & Tzavella, L. (2022). The past, present and future of Registered Reports. Nature Human Behaviour, 6, 29–42. https://doi.org/10.1038/s41562-021-01193-7

Clark, C. J., Tetlock, P. E., Frisby, R. E., O’Donohue, W. T., & Lilienfeld, S. O. (2022). Adversarial collaboration: The next science reform. In C. L. Frisby, R. E. Redding, W. T. O’Donohue, & S. O. Lilienfeld (Eds.), Political bias in psychology: Nature, scope, and solutions. Springer.

Crețu, A.-M. (2019). Perspectival realism. In M. A. Peters (Ed.), Encyclopedia of educational philosophy and theory. Springer.

Danziger, K. (1990). Constructing the subject: Historical origins of psychological research. Cambridge University Press.

De Boeck, P., & Jeon, M. (2018). Perceived crisis and reforms: Issues, explanations, and remedies. Psychological Bulletin, 144(7), 757-777. https://doi.org/10.1037/bul0000154

Del Giudice, M., & Gangestad, S. W. (2021). A traveler’s guide to the multiverse: Promises, pitfalls, and a framework for the evaluation of analytic decisions. Advances in Methods and Practices in Psychological Science, 4(1). https://doi.org/10.1177/2515245920954925

Dellsén, F. (2018). Scientific progress: Four accounts. Philosophy Compass, 13(11), e12525. https://doi.org/10.1111/phc3.12525

Dellsén, F. (2020). The epistemic impact of theorizing: Generation bias implies evaluation bias. Philosophical Studies, 177, 3661–3678. https://doi.org/10.1007/s11098-019-01387-w

Derksen, M. (2019). Putting Popper to work. Theory & Psychology, 29(4), 449-465. https://doi.org/10.1177/0959354319838343

Derksen, M., & Field, S. (2022). The tone debate: Knowledge, self, and social order. Review of General Psychology, 26(2), 172-183. https://doi.org/10.1177/10892680211015636

Derksen, M., & Morawski, J. (2022). Kinds of replication: Examining the meanings of “conceptual replication” and “direct replication”. Perspectives on Psychological Science, 17(5), 1490-1505. https://doi.org/10.1177/17456916211041116

Devezer, B., Nardin, L. G., Baumgaertner, B., & Buzbas, E. O. (2019). Scientific discovery in a model-centric framework: Reproducibility, innovation, and epistemic diversity. PloS one, 14(5), Article e0216125. https://doi.org/10.1371/journal.pone.0216125

Devezer, B., Navarro, D. J., Vandekerckhove, J., & Ozge Buzbas, E. (2021). The case for formal methodology in scientific reform. Royal Society Open Science, 8(3), Article 200805. https://doi.org/10.1098/rsos.200805

Drummond, C. (2019). Is the drive for reproducible science having a detrimental effect on what is published? Learned Publishing, 32(1), 63-69. https://doi.org/10.1002/leap.1224

Errington, T. M., Denis, A., Perfito, N., Iorns, E., & Nosek, B. A. (2021a). Reproducibility in cancer biology: Challenges for assessing replicability in preclinical cancer biology. Elife, 10, Article e67995. https://doi.org/10.7554/eLife.67995

Errington, T. M., Mathur, M., Soderberg, C. K., Denis, A., Perfito, N., Iorns, E., & Nosek, B. A. (2021b). Investigating the replicability of preclinical cancer biology. Elife, 10, Article e71601. https://doi.org/10.7554/eLife.71601

Fabrigar, L. R., Wegener, D. T., & Petty, R. E. (2020). A validity-based framework for understanding replication in psychology. Personality and Social Psychology Review, 24(4), 316-344. https://doi.org/10.1177/1088868320931366

Fanelli, D. (2018). Opinion: Is science really facing a reproducibility crisis, and do we need it to? Proceedings of the National Academy of Sciences, 115(11), 2628-2631. https://doi.org/10.1073/pnas.1708272114

Feest, U. (2019). Why replication is overrated. Philosophy of Science, 86(5), 895-905. https://doi.org/10.1086/705451

Fiedler, K. (2018). The creative cycle and the growth of psychological science. Perspectives on Psychological Science, 13(4), 433-438. https://doi.org/10.1177/1745691617745651

Fiedler, K., & Schwarz, N. (2016). Questionable research practices revisited. Social Psychological and Personality Science, 7(1), 45-52. https://doi.org/10.1177/1948550615612150

Field, S. M. (2022, July 13). Charting the constellation of science reform. PsyArXiv. https://doi.org/10.31219/osf.io/udfw4

Field, S. M., & Derksen, M. (2021). Experimenter as automaton; experimenter as human: Exploring the position of the researcher in scientific research. European Journal for Philosophy of Science, 11, Article 11. https://doi.org/10.1007/s13194-020-00324-7

Firestein, S. (2012). Ignorance: How it drives science. Oxford University Press.

Firestein, S. (2016, February 14). Why failure to replicate findings can actually be good for science. LA Times. https://www.latimes.com/opinion/op-ed/la-oe-0214-firestein-science-replication-failure-20160214-story.html

Fiske, S. T. (2016, October 31). A call to change science’s culture of shaming. APS Observer, 29. https://www.psychologicalscience.org/observer/a-call-to-change-sciences-culture-of-shaming

Flis, I. (2019). Psychologists psychologizing scientific psychology: An epistemological reading of the replication crisis. Theory & Psychology, 29(2), 158-181. https://doi.org/10.1177/0959354319835322

Flis, I. (2022). The function of literature in psychological science. Review of General Psychology, 26(2), 146-156. https://doi.org/10.1177/10892680211066466

Freiling, I., Krause, N. M., Scheufele, D. A., & Chen, K. (2021). The science of open (communication) science: Toward an evidence-driven understanding of quality criteria in communication research. Journal of Communication, 71(5), 686-714. https://doi.org/10.1093/joc/jqab032

Gao, Z. (2014). Methodologism/methodological imperative. In T. Teo (Ed.), Encyclopedia of critical psychology. Springer. https://doi.org/10.1007/978-1-4614-5583-7_614 

Gervais, W. M. (2021). Practical methodological reform needs good theory. Perspectives on Psychological Science, 16(4), 827-843. https://doi.org/10.1177/1745691620977471

Giere, R. N. (2006). Scientific perspectivism. University of Chicago Press.

Greenfield, P. M. (2017). Cultural change over time: Why replicability should not be the gold standard in psychological science. Perspectives on Psychological Science, 12(5), 762-771. https://doi.org/10.1177/1745691617707314

Grossmann, M. (2021). How social science got better: Overcoming bias with more evidence, diversity, and self-reflection. Oxford University Press.

Guttinger, S. (2020). The limits of replicability. European Journal for Philosophy of Science, 10(2), 1-17. https://doi.org/10.1007/s13194-019-0269-1

Haig, B. D. (2022). Understanding replication in a way that is true to science. Review of General Psychology, 26(2), 224-240. https://doi.org/10.1177/10892680211046514

Hamlin, J. K. (2017). Is psychology moving in the right direction? An analysis of the evidentiary value movement. Perspectives on Psychological Science, 12(4), 690-693. https://doi.org/10.1177/1745691616689062

Hardwicke, T. E., & Wagenmakers, E. (2023). Reducing bias, increasing transparency, and calibrating confidence with preregistration. Nature Human Behaviour, 7, 15–26. https://doi.org/10.1038/s41562-022-01497-2

Hartgerink, C. H. J., & Wicherts, J. M. (2016). Research practices and assessment of research misconduct. ScienceOpen Research, 0(0), 1-10. https://doi.org/10.14293/S2199-1006.1.SOR-SOCSCI.ARYSBI.v1

Hoekstra, R., & Vazire, S. (2021). Aspiring to greater intellectual humility in science. Nature Human Behaviour, 5(12), 1602-1607. https://doi.org/10.1038/s41562-021-01203-8

Holcombe, A. O. (2021). Ad hominem rhetoric in scientific psychology. British Journal of Psychology, 113(2), 434-454. https://doi.org/10.1111/bjop.12541

Hostler, T. (2022). Open research reforms and the capitalist university’s priorities and practices: Areas of opposition and alignment. SocArXiv. https://doi.org/10.31235/osf.io/r4qgc

Ioannidis, J. P., Munafo, M. R., Fusar-Poli, P., Nosek, B. A., & David, S. P. (2014). Publication and other reporting biases in cognitive sciences: Detection, prevalence, and prevention. Trends in Cognitive Sciences, 18(5), 235-241. https://doi.org/10.1016/j.tics.2014.02.010

Iso-Ahola, S. E. (2020). Replication and the establishment of scientific truth. Frontiers in Psychology, 11, Article 2183. https://doi.org/10.3389/fpsyg.2020.02183

Jamieson, M. K., Pownall, M., & Govaart, G. H. (2023). Reflexivity in quantitative research: A rationale and beginner’s guide. Social and Personality Psychology Compass, Article e12735. https://doi.org/10.1111/spc3.12735

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524-532. https://doi.org/10.1177/0956797611430953

Kessler, A., Likely, R., & Rosenberg, J. M. (2021). Open for whom? The need to define open science for science education. Journal of Research in Science Teaching, 58(10), 1590-1595. https://doi.org/10.1002/tea.21730

Leonelli, S. (2018). Rethinking reproducibility as a criterion for research quality. Including a symposium on Mary Morgan: Curiosity, imagination, and surprise. Research in the History of Economic Thought and Methodology, 36B (pp. 129-146). Emerald Publishing. https://doi.org/10.1108/S0743-41542018000036B009

Leonelli, S. (2022). Open science and epistemic diversity: Friends or foes? Philosophy of Science, 89(5), 991-1001. https://doi.org/10.1017/psa.2022.45

Leung, K. (2011). Presenting post hoc hypotheses as a priori: Ethical and theoretical issues. Management and Organization Review, 7(3), 471–479. https://doi.org/10.1111/j.1740-8784.2011.00222.x

Levin, N., & Leonelli, S. (2017). How does one “open” science? Questions of value in biological research. Science, Technology, & Human Values, 42(2), 280-305. https://doi.org/10.1177/0162243916672071

Lewandowsky, S., & Oberauer, K. (2020). Low replicability can support robust and efficient science. Nature Communications, 11, Article 358. https://doi.org/10.1038/s41467-019-14203-0

Longino, H. E. (1990). Science as social knowledge: Values and objectivity in scientific inquiry. Princeton University Press.

Malich, L., & Rehmann-Sutter, C. (2022). Metascience is not enough – A plea for psychological humanities in the wake of the replication crisis. Review of General Psychology, 26(2), 261-273. https://doi.org/10.1177/10892680221083876

Massimi, M. (2022). Perspectival realism. Oxford University Press.

Maxwell, S. E., Lau, M. Y., & Howard, G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? American Psychologist, 70(6), 487–498. https://doi.org/10.1037/a0039400

McDermott, R. (2022). Breaking free: How preregistration hurts scholars and science. Politics and the Life Sciences, 41(1), 55-59.  https://doi.org/10.1017/pls.2022.4

Merton, R. K. (1987). Three fragments from a sociologist’s notebooks: Establishing the phenomenon, specified ignorance, and strategic research materials. Annual Review of Sociology, 13(1), 1-29. https://doi.org/10.1146/annurev.so.13.080187.000245

Moody, J. W., Keister, L. A., & Ramos, M. C. (2022). Reproducibility in the social sciences. Annual Review of Sociology, 48, 65-85. https://doi.org/10.1146/annurev-soc-090221-035954

Moran, C., Richard, A., Wilson, K., Twomey, R., & Coroiu, A. (2022). I know it’s bad, but I have been pressured into it: Questionable research practices among psychology students in Canada. Canadian Psychology, 64(1), 12–24. https://doi.org/10.1037/cap0000326

Morawski, J. (2019). The replication crisis: How might philosophy and theory of psychology be of use? Journal of Theoretical and Philosophical Psychology, 39(4), 218–238. https://doi.org/10.1037/teo0000129

Morawski, J. (2022). How to true psychology’s objects. Review of General Psychology, 26(2), 157-171. https://doi.org/10.1177/10892680211046518

Morey, R. (2019). You must tug that thread: Why treating preregistration as a gold standard might incentivize poor behavior. Psychonomic Society. https://featuredcontent.psychonomic.org/you-must-tug-that-thread-why-treating-preregistration-as-a-gold-standard-might-incentivize-poor-behavior/

Munafò, M. R., Nosek, B. A., Bishop, D. V., Button, K. S., Chambers, C. D., Percie du Sert, N., ... & Ioannidis, J. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), 1-9. https://doi.org/10.1038/s41562-016-0021

Nelson, L. D., Simmons, J., & Simonsohn, U. (2018). Psychology’s renaissance. Annual Review of Psychology, 69, 511-534. https://doi.org/10.1146/annurev-psych-122216-011836

Norton, J. D. (2015). Replicability of experiment. Theoria. Revista de Teoría, Historia y Fundamentos de la Ciencia, 30(2), 229-248. https://doi.org/10.1387/theoria.12691

Nosek, B. A., Beck, E. D., Campbell, L., Flake, J. K., Hardwicke, T. E., Mellor, D. T., ... & Vazire, S. (2019). Preregistration is hard, and worthwhile. Trends in Cognitive Sciences, 23(10), 815-818. http://dx.doi.org/10.1016/j.tics.2019.07.009

Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600-2606. https://doi.org/10.1073/pnas.1708274114

Nosek, B. A., Hardwicke, T. E., Moshontz, H., Allard, A., Corker, K. S., Dreber, A., ... & Vazire, S. (2022). Replicability, robustness, and reproducibility in psychological science. Annual Review of Psychology, 73, 719-748. https://doi.org/10.1146/annurev-psych-020821-114157

Nosek, B. A., & Lakens, D. (2014). Registered reports. Social Psychology, 45(3), 137-141. https://doi.org/10.1027/1864-9335/a000192

Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615-631. https://doi.org/10.1177/1745691612459058

Oberauer, K. (2019, January 15). Preregistration of a forking path – What does it add to the garden of evidence? Psychonomic Society. https://featuredcontent.psychonomic.org/preregistration-of-a-forking-path-what-does-it-add-to-the-garden-of-evidence/

Oberauer, K., & Lewandowsky, S. (2019). Addressing the theory crisis in psychology. Psychonomic Bulletin & Review, 26(5), 1596-1618. https://doi.org/10.3758/s13423-019-01645-2

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), Article aac4716. https://doi.org/10.1126/science.aac4716

Penders, B. (2022). Process and bureaucracy: Scientific reform as civilisation. Bulletin of Science, Technology & Society, 42(4), 107-116. https://doi.org/10.1177/02704676221126388

Peterson, D., & Panofsky, A. (2020, August 4). Metascience as a scientific social movement. SocArXiv. https://osf.io/preprints/socarxiv/4dsqa/

Peterson, D., & Panofsky, A. (2021). Arguments against efficiency in science. Social Science Information, 60(3), 350-355. https://doi.org/10.1177/05390184211021383

Pham, M. T., & Oh, T. T. (2021). Preregistration is neither sufficient nor necessary for good science. Journal of Consumer Psychology, 31(1), 163-176. https://doi.org/10.1002/jcpy.1209

Pownall, M. (2022, June 14). Is replication possible for qualitative research? PsyArXiv. https://doi.org/10.31234/osf.io/dwxeg

Pownall, M., Azevedo, F., Aldoh, A., Elsherif, M., Vasilev, M., Pennington, C. R., ... & Parsons, S. (2021). Embedding open and reproducible science into teaching: A bank of lesson plans and resources. Scholarship of Teaching and Learning in Psychology. https://doi.org/10.1037/stl0000307

Pownall, M., & Hoerst, C. (2022). Slow science in scholarly critique. The Psychologist, 35, 2. https://thepsychologist.bps.org.uk/volume-35/february-2022/slow-science-scholarly-critique

Prosser, A. M. B., Hamshaw, R., Meyer, J., Bagnall, R., Blackwood, L., Huysamen, M., ... & Walter, Z. (2022). When open data closes the door: Problematising a one size fits all approach to open data in journal submission guidelines. British Journal of Social Psychology. https://doi.org/10.1111/bjso.12576

Proulx, T., & Morey, R. D. (2021). Beyond statistical ritual: Theory in psychological science. Perspectives on Psychological Science, 16(4), 671-681. https://doi.org/10.1177/17456916211017098

Reiss, J., & Sprenger, J. (2020). Scientific objectivity. The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/scientific-objectivity/

Rosnow, R. L. (1983). Von Osten's horse, Hamlet's question, and the mechanistic view of causality: Implications for a post-crisis social psychology. The Journal of Mind and Behavior, 4(3), 319-337. http://www.jstor.org/stable/43852983

Rubin, M. (2017a). An evaluation of four solutions to the forking paths problem: Adjusted alpha, preregistration, sensitivity analyses, and abandoning the Neyman-Pearson approach. Review of General Psychology, 21(4), 321-329. https://doi.org/10.1037/gpr0000135

Rubin, M. (2017b). When does HARKing hurt? Identifying when different types of undisclosed post hoc hypothesizing harm scientific progress. Review of General Psychology, 21(4), 308-320. https://doi.org/10.1037/gpr0000128

Rubin, M. (2020). Does preregistration improve the credibility of research findings? The Quantitative Methods for Psychology, 16(4), 376–390. https://doi.org/10.20982/tqmp.16.4.p376

Rubin, M. (2021a). What type of Type I error? Contrasting the Neyman-Pearson and Fisherian approaches in the context of exact and direct replications. Synthese, 198, 5809–5834. https://doi.org/10.1007/s11229-019-02433-0

Rubin, M. (2021b). When to adjust alpha during multiple testing: A consideration of disjunction, conjunction, and individual testing. Synthese, 199, 10969–11000. https://doi.org/10.1007/s11229-021-03276-4

Rubin, M. (2022). The costs of HARKing. British Journal for the Philosophy of Science, 73(2), 535-560. https://doi.org/10.1093/bjps/axz050

Rubin, M., & Donkin, C. (2022). Exploratory hypothesis tests can be more compelling than confirmatory hypothesis tests. Philosophical Psychology. https://doi.org/10.1080/09515089.2022.2113771

Sacco, D. F., Brown, M., & Bruton, S. V. (2019). Grounds for ambiguity: Justifiable bases for engaging in questionable research practices. Science and Engineering Ethics, 25(5), 1321-1337. https://doi.org/10.1007/s11948-018-0065-x

Scheel, A. M., Schijen, M. R., & Lakens, D. (2021). An excess of positive results: Comparing the standard psychology literature with Registered Reports. Advances in Methods and Practices in Psychological Science, 4(2), 1-12. https://doi.org/10.1177/25152459211007467

Schimmack, U. (2020). A meta-psychological perspective on the decade of replication failures in social psychology. Canadian Psychology, 61(4), 364–376. https://doi.org/10.1037/cap0000246

Shrout, P. E., & Rodgers, J. L. (2018). Psychology, science, and knowledge construction: Broadening perspectives from the replication crisis. Annual Review of Psychology, 69, 487-510. https://doi.org/10.1146/annurev-psych-122216-011845

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359-1366. https://doi.org/10.1177/0956797611417632

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2021). Pre-registration: Why and how. Journal of Consumer Psychology, 31(1), 151-162. https://doi.org/10.1002/jcpy.1208

Smithson, M. (1996). Science, ignorance and human values. Journal of Human Values, 2(1), 67-81. https://doi.org/10.1177/097168589600200107

Society for the Improvement of Psychological Science. (2022). Mission statement. https://improvingpsych.org/mission/

Spellman, B. A., Gilbert, E. A., & Corker, K. S. (2018). Open science. In J. T. Wixted & E.-J. Wagenmakers (Eds.), Stevens’ handbook of experimental psychology and cognitive neuroscience, learning and memory: Volume 5 methodology (4th ed., pp. 729-775). Wiley.

Stanley, D. J., & Spence, J. R. (2014). Expectations for replications: Are yours realistic? Perspectives on Psychological Science, 9(3), 305-318. https://doi.org/10.1177/1745691614528518

Stanley, T. D., Carter, E. C., & Doucouliagos, H. (2018). What meta-analyses reveal about the replicability of psychological research. Psychological Bulletin, 144(12), 1325-1346. https://doi.org/10.1037/bul0000169

Stroebe, W., & Strack, F. (2014). The alleged crisis and the illusion of exact replication. Perspectives on Psychological Science, 9(1), 59-71. https://doi.org/10.1177/1745691613514450

Strong, S. R. (1991). Theory-driven science and naïve empiricism in counseling psychology. Journal of Counseling Psychology, 38(2), 204–210. https://doi.org/10.1037/0022-0167.38.2.204

Szollosi, A., & Donkin, C. (2021). Arrested theory development: The misguided distinction between exploratory and confirmatory research. Perspectives on Psychological Science, 16(4), 717-724. https://doi.org/10.1177/1745691620966796

Szollosi, A., Kellen, D., Navarro, D. J., Shiffrin, R., van Rooij, I., Van Zandt, T., & Donkin, C. (2020). Is preregistration worthwhile? Trends in Cognitive Sciences, 24(2), 94-95. https://doi.org/10.1016/j.tics.2019.11.009

Ulrich, R., & Miller, J. (2020). Meta-research: Questionable research practices may have little effect on replicability. eLife, 9, Article e58237. https://doi.org/10.7554/eLife.58237

Uygun Tunç, D., Tunç, M. N., & Eper, Z. B. (2022). Is open science neoliberal? Perspectives on Psychological Science. https://doi.org/10.1177/17456916221114835

Vancouver, J. N. (2018). In defense of HARKing. Industrial and Organizational Psychology, 11(1), 73–80. https://doi.org/10.1017/iop.2017.89

van Dijk, T. (2021, June 22). How to tackle confirmation bias? Delta: Journalistic Platform TU Delft. https://www.delta.tudelft.nl/article/how-tackle-confirmation-bias#

van Rooij, I., & Baggio, G. (2021). Theory before the test: How to build high-verisimilitude explanatory theories in psychological science. Perspectives on Psychological Science, 16(4), 682-697. https://doi.org/10.1177/1745691620970604

Vazire, S. (2018). Implications of the credibility revolution for productivity, creativity, and progress. Perspectives on Psychological Science, 13(4), 411-417. https://doi.org/10.1177/1745691617751884

Vazire, S., Schiavone, S. R., & Bottesini, J. G. (2022). Credibility beyond replicability: Improving the four validities in psychological science. Current Directions in Psychological Science, 31(2), 162-168. https://doi.org/10.1177/09637214211067779

Wagenmakers, E. J. (2012). A year of horrors. De Psychonoom, 27, 12-13.

Wagenmakers, E. J., Wetzels, R., Borsboom, D., van der Maas, H. L., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7(6), 632-638. https://doi.org/10.1177/1745691612463078

Walkup, J. (2021). Replication and reform: Vagaries of a social movement. Journal of Theoretical and Philosophical Psychology, 41(2), 131-133. https://doi.org/10.1037/teo0000171

Wentzel, K. R. (2021). Open science reforms: Strengths, challenges, and future directions. Educational Psychologist, 56(2), 161-173. https://doi.org/10.1080/00461520.2021.1901709

Whitaker, K., & Guest, O. (2020). #bropenscience is broken science. The Psychologist, 33, 34-37. https://thepsychologist.bps.org.uk/volume-33/november-2020/bropenscience-broken-science

Wiggins, B. J., & Christopherson, C. D. (2019). The replication crisis in psychology: An overview for theoretical and philosophical psychology. Journal of Theoretical and Philosophical Psychology, 39(4), 202–217. https://doi.org/10.1037/teo0000137

Wood, W., & Wilson, T. D. (2019, August 22). No crisis but no time for complacency. APS Observer, 32(7). https://www.psychologicalscience.org/observer/no-crisis-but-no-time-for-complacency

Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, M. B. (2018). Making replication mainstream. Behavioral and Brain Sciences, 41, Article E120. https://doi.org/10.1017/S0140525X17001972
