Discussions around transparency in open science focus primarily on sharing data, materials, and coding schemes, especially as these practices relate to reproducibility. This largely quantitative perspective on transparency does not align with all scientific methodologies. Indeed, qualitative researchers also care deeply about how knowledge is produced, what factors influence the research process, and how to share this information. Explicating a researcher’s background and role allows researchers to consider their impact on the research process and on the interpretation of the data, thereby increasing both transparency and rigor. Researchers may engage in positionality and reflexivity in a variety of ways, and transparently sharing these steps allows readers to draw their own informed conclusions about the results and the study as a whole. Imposing a limited, quantitatively informed set of standards on all research can harm researchers and the communities they work with if the impact of such standards is not carefully considered. This paper argues for the importance of avoiding strong defaults around transparency (e.g., always share data) and builds upon previous work on qualitative open science. We explore how transparency in all aspects of our research can both project and confirm the rigor of our work.
Keywords: open science, transparency, rigor, qualitative, quantitative
The social sciences have been undergoing a credibility revolution, also known as the open science movement, within the last decade. This movement emphasizes greater transparency and openness through specific practices, such as preregistration and replication, and improving the quality and quantity of evidence used in making scientific claims (Vazire, 2018). The concepts of transparency and rigor are important to these conversations. Transparency can be defined as “the obligation to make data, analysis, methods, and interpretive choices underlying their claims visible in a way that allows others to evaluate them” (Moravcsik, 2019). Rigor can be defined as “the strict application of the scientific method to ensure unbiased and well-controlled experimental design, methodology, analysis, interpretation and reporting of results” (National Institutes of Health [NIH], n.d.). Within the open science movement, discussions around rigor and transparency have largely come from a positivist, quantitative perspective that focuses on the transparency of outputs, namely open data, open materials, open code, and open access to manuscripts (Hagger, 2019; Lyon, 2016; Powers & Hampton, 2019). For example, Fecher and Friesike (2014) describe five schools of open science, which focus on (1) creating openly available platforms, tools, and services for scientists; (2) making science accessible for citizens; (3) developing alternative measures of impact; (4) making knowledge freely available for everyone; and (5) making the knowledge creation process more efficient. Notably, their description of open science does not explicitly discuss transparency regarding the decisions researchers make during the research process. More inclusive definitions of open science have emerged over time (e.g., United Nations Educational, Scientific and Cultural Organization [UNESCO], 2021).
Indeed, much of the conversation around transparency in the open science movement has focused on standardizing outputs or processes related to data sharing, such as the Transparency and Openness Promotion (TOP) Guidelines (Nosek et al., 2015), authored by researchers who conduct largely, if not entirely, quantitative research projects. The TOP Guidelines include recommendations for citations, data transparency, analytic methods (code) transparency, research materials transparency, design and analysis transparency, preregistration of studies, preregistration of analysis plans, and replication. These guidelines feature three levels of implementation; lower levels recommend practices or require only statements about a measure of transparency and reproducibility (e.g., articles state whether code is available), while higher levels require authors to engage in a specific practice (e.g., code must be posted to a trusted repository).
When standards are created, it is important to consider both who is at the table and who is not. Open science guidelines fail to account for research based on epistemologies that are not strictly positivist and methods that are not strictly quantitative in nature, such as qualitative and interpretivist approaches. As such, they have unfortunately had limited applicability to those kinds of research. Feminist and qualitative scholars have long maintained that there are multiple ways of understanding, yet evangelists of the open science movement have commonly made assumptions that there is a shared understanding of a specific type of research (e.g., empiricism, deductive reasoning). This problematic assumption halts progress in the integration of perspectives for open science and contributes to possibly ignoring systematically marginalized voices (see Bennett et al., 2022).
Default standards can be useful because they automate processes and reduce cognitive load when making decisions. However, they can also be dangerous: if researchers are not making thoughtful and informed decisions, defaults can lead individuals and research teams to move forward under assumptions that ultimately harm research participants (Sakaluk, 2021; Steltenpohl et al., 2021). Strict data sharing guidelines, for example, may create problems for researchers working with qualitative data, which, without careful attention to confidentiality safeguards (e.g., pseudonyms, redacting personally identifiable information), may be more identifiable than, say, Likert-scale responses on a survey. These concerns are also relevant to researchers who have built trust through prolonged engagement with a participant community (Ross et al., 2018) and to those whose research perspective differs from the seemingly common conceptualization of research as being strictly “right” or “wrong” (Lash, 2015).
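As a deliberately simplified illustration of one safeguard named above, the sketch below shows pseudonym substitution in a transcript excerpt. All names and replacements are hypothetical, and naive string substitution is not a substitute for professional curation: surrounding context can still identify participants even after names are replaced.

```python
# Minimal sketch of pseudonymization, one confidentiality safeguard
# for qualitative data. All names here are hypothetical examples.
# NOTE: simple substitution is NOT a vetted de-identification method;
# contextual details can still identify participants.

def pseudonymize(transcript, name_map):
    """Replace each real name or place with its assigned pseudonym."""
    for real, pseudo in name_map.items():
        transcript = transcript.replace(real, pseudo)
    return transcript

name_map = {"Maria": "P1", "Springfield Clinic": "[local clinic]"}
raw = "Maria said she visits Springfield Clinic every week."
print(pseudonymize(raw, name_map))
# P1 said she visits [local clinic] every week.
```

Even this toy example surfaces the deeper issue discussed below: deciding what counts as identifying information is an interpretive judgment, not a mechanical one.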
Previous work within the qualitative research community suggests it may be helpful to think about the relationship between transparency and rigor (e.g., Billups, 2014; Davies & Dodd, 2002; Mill & Ogilvie, 2003; Rolfe, 2006) and to examine how transparency in all aspects of research can project and confirm the rigor of qualitative inquiry work. Opening the black box of the research process in this way also allows readers from all educational backgrounds to better understand how research is done, what kinds of decisions are made during a research project, and best practices within the researchers’ respective fields. As such, practices that promote transparency and rigor in qualitative science could (and, we would argue, should) be considered in line with the tenets of open science. For example, qualitative researchers are encouraged to provide detailed descriptions of their methods (e.g., sampling, data collection, analysis) to the extent that other researchers could follow their methods with other samples in other contexts (Creswell et al., 2018). This guidance aligns with guidance for the open science practice of replication, where researchers redo a study to see if the same results arise in a different population or at a different time.
Due to the use of purposive sampling and sociocultural contextual factors, replicating qualitative inquiry methods with different samples in different contexts is not as straightforward as replicating null hypothesis significance testing, which in and of itself is not always straightforward. What may be construed as a “failed” replication (often implying the previous study’s results were “wrong”) within quantitative paradigms is often interpreted simply as evidence that warrants further study in qualitative methodologies, because qualitative paradigms do not stem from a hypothesis but from inquiry. In qualitative research, we tend to sample to understand context, rather than to generalize; we think of validity in terms of trustworthiness, rather than replicability. At the end of the day, quantitative, qualitative, and mixed methods researchers share the goal of advancing knowledge in the field through rigorous and transparent processes. However, guidelines intended to promote these processes may have the unintended consequence of becoming gatekeepers that limit the ability of qualitative researchers to publish and obtain funding.
Taking a closer look at “the why” behind these open science practices could help researchers across all methodologies understand the benefits of these practices and aspects of each methodology that should be considered before engaging in the practices. This conversation may not only help qualitative researchers engage in open science practices, but also open doors for quantitative researchers to employ rigor and transparency strategies, such as positionality or reflexivity statements, that have traditionally been used exclusively by qualitative researchers (e.g., Jamieson et al., 2022). The purpose of the current paper is to reframe the conversation within the open science movement around transparency and rigor to include considerations from qualitative research that may be of use in determining best practices for rigor and transparency in open science. These concerns may or may not be unique to qualitative research; we are only speaking from this perspective as researchers who have engaged with qualitative and mixed methods research.
Much of the discussion about transparency in open science revolves around data, which is relevant to both qualitative and quantitative research. There are increasing requirements for sharing data, particularly through major funding agencies (e.g., National Institutes of Health [NIH], 2020). For example, a recent U.S. memo dictates that scientific data from projects funded through federal agencies must be available upon publication of the manuscript unless the data fall under specific limitations (Office of Science and Technology Policy, 2022).
Additionally, some researchers have argued that secondary analysis of qualitative data holds the potential to relieve the burden on vulnerable populations and community partners who collaborate with researchers (e.g., Ruggiano & Perry, 2019). It also may help foster new findings through the application of a different lens or through a focus on particular elements of the dataset (Long-Sutehall et al., 2011). Publicly available qualitative data could potentially be beneficial when individuals or groups are particularly difficult to access (Fielding, 2004) or when working with groups that might face trauma from multiple rounds of research participation (Ruggiano & Perry, 2019). Some research has revealed that participants are generally willing to allow researchers to archive and share their qualitative data (e.g., Cummings et al., 2015; Mozersky et al., 2020; VandeVusse et al., 2022) out of a desire to help others and improve research.
Although the TOP guidelines for data sharing do suggest that materials should be stored in a trusted digital repository (Nosek et al., 2015, p. 5), these guidelines do not address potential issues that could arise in the future (e.g., guidelines for updating data; Lash, 2015) or other concerns around data sharing. Mozersky et al. (2020) found that less than four percent of researchers reported sharing qualitative data, largely due to concerns about breaching participant confidentiality and about discouraging participation in future qualitative studies. The sharing of sensitive and potentially identifiable data, even when approved by participants, comes with immense ethical responsibility, both for the researchers sharing the data and for those using the data. The rush to mandate open data leaves many ethical ambiguities unaddressed. Researchers are not routinely trained in how to ethically share data, evident in part from researchers’ anecdotes of finding identifiable data in data repositories (e.g., Elson, 2021). Even when researchers are careful, participants may be more identifiable than they believe, especially in certain circumstances, such as belonging to a small identity group or discussing place-specific content in interviews (Gow et al., 2020). It is vital that participants understand how likely they are to be identifiable, especially vulnerable participants or those living in dangerous conditions (Ross et al., 2018; Small et al., 2014). In fact, ensuring participants truly understand what data sharing means is likely to be one of the largest barriers to ethically sharing data (VandeVusse et al., 2022).
A more fundamental issue complicating data transparency is the question of what constitutes data. At first glance, it may seem obvious that a qualitative researcher’s data would comprise transcripts and direct observational notes. However, qualitative researchers are ideally trained to provide detailed audit trails that incorporate notes from the research process (e.g., reflexivity, field notes, methodological notes), instrument development (e.g., observation protocols), and evidence of how codes, categories, and themes are synthesized. These secondary data may be necessary for observers to fully understand research findings and how conclusions were drawn from the data. Secondary data (e.g., memos, protocols) support the reporting process, but traditional journal space limitations may not provide a way to share these data.
To further complicate matters, many qualitative researchers would posit that while secondary data are a combination of the researcher’s perceptions and observations, even primary data, such as interview transcripts, are filtered to some extent through the researcher. This is because, in qualitative research, the researcher is an instrument of both data collection and analysis (e.g., Peredaryenko & Krauss, 2013). Open science standards do not take these complications into account, nor would many quantitatively focused researchers and reviewers necessarily know what to do with these types of data, were qualitative researchers mandated to share them. Moving to open science practices and transparency will highlight the need to manage extensive and complicated data, and various data types, more effectively. For example, data management is an often-ignored nuance in longitudinal quantitative research, and many studies use “trial and error” to determine best practices (Youngblut et al., 1990). Decisions about data collection and storage must be made as part of the research design before the study begins, and participants and researchers will need to explore and understand the risks. Furthermore, large sets of data can be difficult to maintain and manage, or even remember. Khan et al. (2018) determined that when researchers reviewed the study files they had stored in their cloud storage, more than 50% of the files were “forgotten” and 14% were unrecognizable. These results reveal the need for extensive training around perpetual data sharing requirements.
The researcher-as-instrument tradition also complicates discussions around reproducibility (i.e., the ability for another researcher to look at someone’s data and reproduce the analyses), one of the key components of rigor as it is currently discussed in the open science movement (NIH, n.d.). Quantitative researchers’ focus on reproducibility is often contrary to the tenets of qualitative research, particularly in methodologies aiming to uncover new ways of knowing, such as constructivist and grounded theory approaches. If one understands the researcher as a data collection instrument and a filter through which data is processed, strict quantitative-focused reproducibility becomes less likely—not through misconduct or error, but because ultimately, people conduct research, and people are not likely to have exactly the same perspectives. Guidelines that reinforce reproducibility without addressing this tension are not going to be useful for all researchers.
A common refrain within the open science movement is that “preregistration is not a prison” (e.g., DeHaven, 2017; Mellor, 2021), which we ultimately think is true in intent. However, the goal of preregistration does appear to be putting guardrails on research processes, to ensure researchers do not significantly deviate from their analysis plans. Indeed, in their development of the useful qualitative preregistration template, Haven et al. (2020) describe preregistrations as being useful for qualitative researchers who are involved in some level of testing, where it would be important to communicate hypotheses that were drawn prior to the start of the study. For researchers who do not do this kind of research, however, preregistration may have limited utility as a method for introducing guardrails to the research process. Preregistration may, however, be useful in other contexts, which we describe later. Journal requirements for preregistration, then, may exclude entire areas of research if not carefully crafted.
The narrow view of rigor described by the National Institutes of Health (n.d.) and others can lead people to assume there is one right way to conduct research. There are numerous decision points in quantitative research design that are susceptible to researcher bias, contextual factors such as community history or current events, and conceptual assumptions that may result from different training paradigms. Adhering to a single, limited set of standards in the pursuit of rigor therefore risks privileging the positivist position that one objective reality could be apprehended if only the proper, rigorous methods were used, which would limit theory development and the advancement of knowledge generally.
Concerns arise if all qualitative research is subject to quantitative expectations of reproducibility to obtain funding or be published in peer-reviewed journals. This would eliminate important qualitative methodologies, many of which have historically been a foundation for amplifying voices of understudied individuals and groups. Focusing on only one metric, such as replicability or reproducibility, will likely lead to increased oppression of research and knowledge, instead of meeting a primary aim of open science to improve access to information which would enable people to better examine scientific claims. Further, it is our belief that many quantitative researchers would benefit from considering qualitative-born methods for determining rigor and transparency in their own work, which we will now discuss.
It is difficult to separate rigor from transparency. One cannot fully determine the rigor of work that is not transparent, and not all transparent work is rigorous. Importantly, reproducibility alone cannot determine the rigor of a study. When discussing rigor, qualitative researchers often focus on trustworthiness, which includes confirmability, credibility, dependability, and transferability (Lincoln & Guba, 1986; Stahl & King, 2020). Qualitative researchers establish confirmability by providing evidence that the conclusions they have made are drawn from the data, but this evidence relies on attention to credibility, dependability, and transferability (Lincoln & Guba, 1985; Nowell et al., 2017; Tobin & Begley, 2004). Using approaches such as extended engagement, triangulation, and member checking, qualitative researchers can address credibility by ensuring alignment between participants’ views and how those views are presented in the research report. Dependability can be established by explicitly and transparently documenting the research process, which can include codebooks, tables, or figures that show the research process. Transferability requires researchers to provide enough information to allow research consumers to make determinations about whether the findings might be relevant to other contexts. A common thread between each of these elements of trustworthiness is transparency. As interest in open science practices grows, it may be useful to reexamine the research process with these concepts in mind and integrate qualitative and quantitative perspectives as appropriate.
Importantly, qualitative researchers are often concerned not just with research results, but also with the process by which those results are produced. This could be an exciting area of growth for many quantitative scholars who are open to learning from qualitative researchers; most, if not all, practices would be just as beneficial for quantitative researchers to employ (see, for example, Hope et al., 2019). For example, positionality (i.e., an examination of who is doing the interpreting) and reflexivity (i.e., an examination of how researchers will reflect on, consider the impact of, and/or mitigate bias where necessary) are staple concepts in qualitative research courses.
Positionality statements are prose that allow researchers to reflect on their role within the data interpretation process (Clancy, 2013; Guillemin & Gillam, 2004; Lazard & McAvoy, 2020; Makel et al., 2022; Patton, 2014; Rooney, 2015; Savin-Baden & Howell-Major, 2013). These statements are an opportunity for researchers to think about and discuss decisions that were made, why they were made, and how those decisions might have impacted the research process. Researchers are encouraged to consider both strengths and weaknesses of their approaches. Positionality statements do not give researchers permission to conduct a certain kind of research, but rather provide additional context to help readers understand the decisions a research team has made. Many journals do not require positionality statements to go beyond reflecting on one’s background vis-à-vis the research topic, but rigorous transparency compels us to confront how we will address potential biases, which is where reflexivity becomes especially important.
Researchers may engage with reflexive practices in a variety of ways, including but not limited to member checking (Caretta, 2016; Creswell & Miller, 2000; Goldblatt & Band-Winterstein, 2016; Lincoln & Guba, 1986; Stake, 1995), critical team discussions (Mao et al., 2016), memos (Birks et al., 2008), and external audits (Wolf, 2003). Most researchers do not use all of these methods in every single study, though a qualitative paper with none of these practices would likely raise questions from careful reviewers.
Member checking can help researchers ensure they have collected and are interpreting data in a way that is true to participants’ experience (e.g., Caretta, 2016). Bornstein et al. (2022) engaged in member checking by returning to participants after a first round of interviews, sharing the themes the researchers had derived, and asking participants whether these themes were accurate reflections of their experiences. This gave participants the opportunity to deviate from, add nuance to, and reflect more deeply on the researchers’ interpretations. Caretta and Perez (2019) note that member checking is especially crucial for establishing trustworthiness in qualitative methods, because participants may (and likely will) not all agree on a topic. By understanding the context of participants’ opinions and differences, researchers can strengthen the validity of their interpretations; however, the ever-present potential for disagreement reminds us that no one researcher or participant can or should hold epistemological authority over the data.
Critical team discussions, particularly when colleagues come from a variety of backgrounds, are another practice for sharing epistemological authority. While member checking involves sharing data with actual participants, critical team discussions can involve sharing data among colleagues who either share a background with the participant or, at the very least, can offer a different perspective from that of the research team. Mao et al. (2016) designed a practice for critical team discussions in which a group of graduate students met regularly to share dissertation data and provide feedback on each other’s interpretations of these data. In this way, the students were able to benefit from others’ impressions of their participants and reflect on their own blind spots. The authors call this critical reflexivity and recommend it for any researchers conducting qualitative research (e.g., Fook & Askeland, 2007; Mao et al., 2016).
Through memos, researchers can incorporate relevant literature, context, or background knowledge to analyze a particular set of qualitative data (e.g., interview transcripts). Similarly, through journaling, researchers record their thoughts and feelings in response to data and determine how they can keep these personal reactions from biasing their data interpretation (Meyer & Willis, 2019). Thick description (Geertz, 1973) also allows us to be as context-specific about our data and ourselves as possible, and, thus, be as rigorously transparent as possible. While thick descriptions provide detail, they can be difficult to fit within traditional publication word and page limits. However, with online publications and supplementary opportunities, thick descriptions of nuanced situations (e.g., historical events that may impact analysis for a small number of participants) can be included to provide additional transparency.
External audits, where a research team invites an external expert to review research-generated data, can also enhance rigor by providing an outside examination of the research process and products (including meeting notes, memos, and other reflections) and the extent to which interpretations and/or findings are supported by the data (Creswell & Miller, 2000; Lincoln & Guba, 1986; Rodgers & Cowles, 1993; Wolf, 2003). These audits can be beneficial by adding to the synthesis of ideas and bringing additional perspectives to a specific research topic. Similarly, a research team may engage in peer debriefing, in which they review the research thus far with someone who is familiar with the work but not on the team; the peer reviewer is encouraged to challenge the research team on their findings (Creswell & Miller, 2000). External audits and peer debriefing lend themselves to establishing the credibility of the findings and require researchers to be transparent about how they have conducted their research and are thinking about the data.
It is clear to us that there is potential for alignment between current open science practices, such as preregistration, and practices in which qualitative researchers already engage. Haven et al. (2020) briefly discuss one such alignment in their development of the qualitative preregistration template, when they say that preregistration could be helpful to “make visible the connections between analytical assumptions, evidence, and decisions that form a particular interpretation of the data” (p. 2). We interpret this to mean that preregistration could be helpful as a means to encourage reflexivity: a way for researchers to document how they conceptualized their study at the beginning of the project and to reflect on those assumptions, perspectives, and potential misunderstandings throughout the rest of the research process. Too often, across all methodologies, the why behind study decisions is not documented clearly (Mackenzie & Knipe, 2006). From the inception of a study, it should be clear what the theoretical framework and paradigm are, since these drive, or should drive, methodology and methods. Specifically,
It is the paradigm and research question, which should determine which research data collection and analysis methods (qualitative/quantitative or mixed methods) will be most appropriate for a study. In this way researchers are not quantitative, qualitative or mixed researchers, rather a researcher may apply the data collection and analysis methods most appropriate for a particular research study.
(Mackenzie & Knipe, 2006, pp. 7–8)
The preregistration template, then, is an immensely helpful tool in any researcher’s toolbox, if it is viewed as a space for clearly documenting the why in addition to the how of a study, and if journal editors and reviewers understand that qualitative researchers are not using the preregistration as guardrails, but rather as a mirror or window to look back at how the study team conceptualized their research at the beginning of the study. If reviewers punish authors for deviating from what was preregistered (for example, if a researcher changes their interview protocol because they realized participants were understanding a question differently than how it was meant to be understood), preregistration templates will not be useful to qualitative researchers. Two key elements are required to ensure the value of the preregistration process for all researchers, regardless of methodological approach. First, the TOP Guidelines and other guidance on preregistration must clearly outline preregistration requirements for qualitative research in addition to quantitative research so that a positivist framework is not pushed onto qualitative research. Second, reviewers, editors, and other partners in the publication process must be clearly trained in these differences so that qualitative researchers are not inadvertently denied opportunities for publication due to inappropriate requirements, such as requiring hypotheses or denying researchers the ability to add or change data collection methods. Ultimately, preregistration is not the only way to achieve the goal of reflexivity or transparency in the research process; however, if journals subscribe to Level 3 of the TOP guidelines, all studies must be preregistered.
This kind of guideline may be less useful to qualitative researchers than a broader guideline to provide an audit trail (or similar concept), which would allow researchers to provide many kinds of evidence, including preregistrations and preregistration updates (Corker et al., 2022), illuminating how they moved from study conceptualization to their results.
Privacy, participant rights, consent, and the potentially identifiable nature of qualitative data must always be at the forefront of data practices. Within those constraints, discussion of what transparency and rigor mean in qualitative research has become increasingly detailed.
One traditional method for increasing transparency is data sharing. When considering qualitative data sharing, we have ethical obligations to participants regarding how, with whom, and when these data can and should be shared. Existing recommendations include (a) professional data curation and archiving processes to optimize privacy (how data are shared), (b) ensuring necessary materials and details are included in the data storage/archiving to enhance contextual and data understanding (what is shared), and (c) providing graded data access (when and with whom data are shared).
Regarding data curation and archiving, the Qualitative Data Repository (QDR) first provides professional curation to optimize de-identification prior to storage. This is an important step, though it can easily be counteracted by the “thick description” characteristic of rigorous contemporary (i.e., interpretivist) qualitative data (Geertz, 1973). When we consider when and with whom data can be shared, we must always balance privacy with transparency. One method that holds promise for striking this balance is graded access to the data, whereby availability depends on researcher qualifications (e.g., ICPSR, n.d.) and the ability to meet security standards (e.g., ICPSR, n.d.; Qualitative Data Repository, 2022). For example, data stored in the Qualitative Data Repository can only be used for research or teaching, but depositors can designate the data as standard access (i.e., all registered users can access), special access (i.e., conditional, depositor-approved, restricted offline, or embargoed access), or depositor-approved access (i.e., QDR staff and the depositor review access requests), which allows researchers to understand how potentially sensitive data may be used (Access Controls, 2020). Setting gradations for how much access researchers have to data could allow researchers to meet open science requirements without unnecessarily compromising participant anonymity, an important balance to strike when working with sensitive data and with organizations that are protective of their data (e.g., governmental agencies, industry partners). Additionally, researchers can share data with the participants themselves to provide transparency to the community of focus (Humphreys et al., 2021).
Next, QDR recommends having clear and detailed guidance for archiving qualitative data, such as detailed codebooks, processes, and contextual information. Specifically, they suggest including files and artifacts that help to “document the context in which information was gathered and/or data were created, the collection and generation processes, and (when applicable) how the data were analyzed” (Qualitative Data Repository, n.d., Documentation Files section). This is critical to optimizing future understanding of the larger context foundational to qualitative data and ensuring transparency. Taken together, these techniques for qualitative data sharing are critical for both maintaining open science standards and protecting the identities of research participants.
A recent article in the American Journal of Pharmaceutical Education defines rigor in qualitative research as “ensuring that the research design, method, and conclusions are explicit, public, replicable, open to critique, and free of bias” (Johnson et al., 2020). Qualitative research demonstrates rigor not only by sharing raw transcripts, but also—if not primarily—through thick description of data and detailed explication of the research design. Journal word limits often mean these details are squeezed out of articles during the editing process. One promising solution is to produce a supplementary “data paper” (Schöpfel et al., 2019), which describes data in much more detail than a traditional manuscript allows. For instance, in a qualitative data paper, a researcher could describe the community and historical context in which data were collected, how the data were collected, and any additional information about positionality and reflexivity that other researchers would need to know before using the data. This approach meets many researchers’ professional obligations to publish in peer-reviewed journals while also upholding ethical obligations to provide data and contextual information about those data.
Another strategy often used by qualitative researchers to ensure rigor is triangulation, the process by which “researchers search for congruence among multiple and different sources of information” (Creswell & Miller, 2000, p. 126). The data produced from such methods are necessary for knowledge acquisition and theory advancement. Scholars might triangulate multiple methods of data collection (e.g., interviews, observations, archival data), sources (e.g., different communities), researchers’ accounts, and/or theoretical approaches to see whether a particular finding holds up across contexts. This strategy strengthens the rigor of our work by testing how well findings hold up to scrutiny under different conditions. Engaging in triangulation may reveal important disconfirming evidence that advances, questions, or adds nuance to a theory.
Quantitative researchers often use the term “materials” to refer to stimuli, models, programs, and other tools through which raw data are processed. In qualitative research, we might additionally consider journal entries, analytic or reflective memos, critical conversations, and member checking to be materials – that is, products we create while processing data. How we interpret our data and draw conclusions depends on how well we understand our participants and mitigate any bias we bring to the analysis. These materials are therefore as important as the data themselves; they are the means by which researchers transform raw data into findings. Part of our responsibility in qualitative open science—and, arguably, all open science—is to ensure that we are transparent not only with our data, but also with the materials, context, and strategies we use to interpret findings and draw conclusions. While these recommendations are echoed in current quantitative critical theory (QuantCrit; e.g., Gillborn et al., 2018), they still tend to be heeded more often by qualitative researchers than by quantitative and mixed methods researchers (Hope et al., 2019).
Researchers have a habit of creating divisions where none need exist, a notorious example being the myth of incompatibility between qualitative and quantitative methods (Ercikan & Roth, 2006; Malterud, 2001). While these two approaches bring different perspectives to the research process, there are opportunities to enhance the quality of our work by learning from both. The open science movement and the research community in general can benefit from many of the practices qualitative researchers use to maintain rigor and transparency – namely, providing a high level of contextual detail and using reflexivity practices to mitigate bias. By expanding open science guidelines to leverage a broader array of rigor- and transparency-promoting practices (e.g., reflexivity), we can truly begin to advance scientific practice as a whole.
The rigor and transparency we call for are different from, and at times in opposition to, ideas of rigor that rely on generalizability alone. A study need not be replicable or reproducible to be rigorous. Instead, researchers need to be transparent about the context and reflexive processes used to draw conclusions from the data so readers can see their line of reasoning, determine the extent to which they trust the findings, and judge whether those findings might transfer to other contexts (e.g., trustworthiness; Lincoln & Guba, 1985). While another researcher may not be able to replicate a study in another context and arrive at identical findings, they may be able to align their study design with the original if they understand the contextual factors the analysis should incorporate. Preregistrations, journal entries, memos, member checks, and critical conversations are all documentation that a researcher can upload, along with data, to an open science repository, with the appropriate permissions clearly described. These materials would help justify how the data were interpreted and facilitate conceptual or theoretical replications. Given the diversity of practices across disciplines, one may question, as Clarke (2022) does in her review of Heidi Levitt’s Reporting Qualitative Research in Psychology: How to Meet APA Style Journal Article Reporting Standards, whether general reporting standards are even necessary or possible. It may be that a one-size-fits-all approach will not work for the social sciences, let alone science more broadly. If one comes to this conclusion, we would argue that guidelines need to clearly articulate the kinds of research to which they apply. If one agrees with Levitt (2020) that general reporting standards are necessary and possible, then hopefully they also agree that guidelines advertised as “general science guidelines” need to reflect all forms of scientific inquiry, not just lab-based quantitative research.
Intentions alone are not enough to move science forward. Creating responsible, considered processes for rigorously transparent open science requires involving interested parties from a wide range of backgrounds, perspectives, research areas, and training paradigms. If open science practitioners truly want their practices to become more mainstream, they must invite researchers with very different perspectives to the table, and everyone at the table must discuss these issues in good faith. If we can come to a mutual understanding of our various paradigms and agree upon guidelines that respect each other’s epistemologies, we will be much more successful in moving the field forward.
Access Controls. (2020). Qualitative Data Repository. https://qdr.syr.edu/guidance/human-participants/access-controls
Bennett, C., Fitzpatrick-Harnish, K., & Talbot, B. (2022). Collaborative untangling of positionality, ownership, and answerability as white researchers in indigenous spaces. International Journal of Music Education, 40(4), 628–641.
Bennett, E. A. (2021). Open science from a qualitative, feminist perspective: Epistemological dogmas and a call for critical examination. Psychology of Women Quarterly, 45(4), 448–456. https://doi.org/10.1177/03616843211036460
Billups, F. D. (2014). Trustworthiness and the quest for rigor in qualitative research. NERA Researcher, 52, 10–12. https://www.nera-education.org/docs/TNR_Fall_2014_Color_Final.pdf
Birks, M., Chapman, Y., & Francis, K. (2008). Memoing in qualitative research: Probing data and processes. Journal of Research in Nursing, 13(1), 68–75. https://doi.org/10.1177/1744987107081254
Bornstein, J., Lustick, H., Shallish, L., Hannon, L., & Okilwa, N. (2022, April). Active accountability for disproportionate discipline and disability classification highlights student agency, contextualization, and racialization. American Educational Research Association 2022 Conference.
Caretta, M. A. (2016). Member checking: A feminist participatory analysis of the use of preliminary results pamphlets in cross-cultural, cross-language research. Qualitative Research, 16(3), 305–318. https://doi.org/10.1177/1468794115606495
Caretta, M. A., & Perez, M. A. (2019). When participants do not agree: Member checking and challenges to epistemic authority in participatory research. Field Methods, 31(4), 359–374. https://doi.org/10.1177/1525822X19866578
Clancy, M. (2013). Is reflexivity the key to minimising problems of interpretation in phenomenological research? Nurse Researcher, 20(6), 12–16. https://doi.org/10.7748/nr2013.07.20.6.12.e1209
Clarke, V. (2022). Navigating the messy swamp of qualitative research: Are generic reporting standards the answer? Qualitative Research in Psychology, 19(4), 1004–1012. https://doi.org/10.1080/14780887.2021.1995555
Corker, K. D., Whylly, P. E. K., & Steltenpohl, C. N. (2022). The importance of updating registrations: A round table discussion. Center for Open Science. https://www.youtube.com/watch?v=6JfsBC31en4
Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory Into Practice, 39(3), 124–130. https://doi.org/10.1207/s15430421tip3903_2
Creswell, J. W., Miller, D. L., & Poth, C. N. (2018). Qualitative inquiry & research design: Choosing among five approaches. SAGE.
Cummings, J. A., Zagrodney, J. M., & Day, T. E. (2015). Impact of open data policies on consent to participate in human subjects research: Discrepancies between participant action and reported concerns. PLoS ONE, 10(5), e0125208. https://doi.org/10.1371/journal.pone.0125208
Davies, D., & Dodd, J. (2002). Qualitative research and the question of rigor. Qualitative Health Research, 12(2), 279–289. https://doi.org/10.1177/104973230201200211
DeHaven, A. (2017). Preregistration: A plan, not a prison. Center for Open Science. https://www.cos.io/blog/preregistration-plan-not-prison
Elson, M. (2021). I’m all in favor of data sharing, even mandatory where possible. https://web.archive.org/web/20210929155622/https://twitter.com/maltoesermalte/status/1390758338321952770
Ercikan, K., & Roth, W. M. (2006). What good is polarizing research into qualitative and quantitative? Educational Researcher, 35(5), 14–23. https://doi.org/10.3102/0013189X035005014
Fecher, B., & Friesike, S. (2014). Open science: One term, five schools of thought. In S. Bartling & S. Friesike (Eds.), Opening science. Springer. https://doi.org/10.1007/978-3-319-00026-8
Fielding, N. (2004). Getting the most from archived qualitative data: Epistemological, practical and professional obstacles. International Journal of Social Research Methodology, 7(1), 97–104. https://doi.org/10.1080/13645570310001640699
Fook, J., & Askeland, G. A. (2007). Challenges of critical reflection: ‘Nothing ventured, nothing gained’. Social Work Education, 26(5), 520–533. https://doi.org/10.1080/02615470601118662
Geertz, C. (1973). Thick description: Toward an interpretive theory of culture. In C. Geertz (Ed.), The interpretation of cultures: Selected essays (pp. 3–30). Basic Books.
Gillborn, D., Warmington, P., & Demack, S. (2018). QuantCrit: Education, policy, ‘big data’ and principles for a critical race theory of statistics. Race Ethnicity and Education, 21(2), 158–179. https://doi.org/10.1080/13613324.2017.1377417
Goldblatt, H., & Band-Winterstein, T. (2016). From understanding to insight: Using reflexivity to promote students’ learning of qualitative research. Reflective Practice, 17(2), 100–113. https://doi.org/10.1080/14623943.2015.1134471
Gow, J., Moffatt, C., & Blackport, J. (2020). Participation in patient support forums may put rare disease patient data at risk of re-identification. Orphanet Journal of Rare Diseases, 15(1), 1–12. https://doi.org/10.1186/s13023-020-01497-3
Guillemin, M., & Gillam, L. (2004). Ethics, reflexivity, and “ethically important moments” in research. Qualitative Inquiry, 10(2), 261–280. https://doi.org/10.1177/1077800403262360
Hagger, M. S. (2019). Embracing open science and transparency in health psychology. Health Psychology Review, 13(2), 131–136. https://doi.org/10.1080/17437199.2019.1605614
Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., Piñeiro, R., Rosenblatt, F., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19, 1–13. https://doi.org/10.1177/1609406920976417
Hope, E. C., Brugh, C. S., & Nance, A. (2019). In search of a critical stance: Applying qualitative research practices for critical quantitative research in psychology. Community Psychology in Global Perspective, 5(2), 63–69. https://doi.org/10.1285/i24212113v5i2p63
Humphreys, L., Lewis, N. A., Jr., Sender, K., & Won, A. S. (2021). Integrating qualitative methods and open science: Five principles for more trustworthy research. Journal of Communication, 71(5), 855–874. https://doi.org/10.1093/joc/jqab026
ICPSR. (n.d.). Accessing restricted data at ICPSR. https://www.icpsr.umich.edu/web/pages/ICPSR/access/restricted/
Jamieson, M. K., Govaart, G., & Pownall, M. (2022). Reflexivity in quantitative research: A rationale and beginner’s guide. PsyArXiv. https://doi.org/10.31234/osf.io/xvrhm
Johnson, J. L., Adkins, D., & Chauvin, S. (2020). A review of the quality indicators of rigor in qualitative research. American Journal of Pharmaceutical Education, 84(1), 138–146. https://doi.org/10.5688/ajpe7120
Khan, M. T., Hyun, M., Kanich, C., & Ur, B. (2018). Forgotten but not gone: Identifying the need for longitudinal data management in cloud storage. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–12. https://jglobal.jst.go.jp/en/detail?JGLOBAL_ID=202002290369007676
Lash, T. L. (2015). Declining the Transparency and Openness Promotion Guidelines. Epidemiology, 26(6), 779–780. https://doi.org/10.1097/EDE.0000000000000382
Lazard, L., & McAvoy, J. (2020). Doing reflexivity in psychological research: What’s the point? What’s the practice? Qualitative Research in Psychology, 17(2), 159–177. https://doi.org/10.1080/14780887.2017.1400144
Levitt, H. M. (2020). Reporting qualitative research in psychology: How to meet APA style journal article reporting standards. American Psychological Association.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage.
Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Program Evaluation, 30, 73–84.
Long-Sutehall, T., Sque, M., & Addington-Hall, J. (2011). Secondary analysis of qualitative data: A valuable method for exploring sensitive issues with an elusive population? Journal of Research in Nursing, 16(4), 335–344. https://doi.org/10.1177/1744987110381553
Lyon, L. (2016). Transparency: The emerging third dimension of open science and open data. LIBER Quarterly, 25(4), 153–171. https://doi.org/10.18352/lq.10113
Mackenzie, N., & Knipe, S. (2006). Research dilemmas: Paradigms, methods and methodology. Issues in Educational Research, 16(2), 193–205. http://www.iier.org.au/iier16/mackenzie.html
Makel, M. C., Meyer, M. S., Pei, M. A., Roberts, A. M., & Plucker, J. A. (2022). Replication is relevant in qualitative research. Educational Research and Evaluation, 27(1–2), 215–219. https://doi.org/10.1080/13803611.2021.2022310
Malterud, K. (2001). Qualitative research: Standards, challenges, and guidelines. The Lancet, 358(9280), 483–488. https://doi.org/10.1016/S0140-6736(01)05627-6
Mao, L., Mian Akram, A., Chovanec, D., & Underwood, M. L. (2016). Embracing the spiral: Researcher reflexivity in diverse critical methodologies. International Journal of Qualitative Methods, 15(1), 1–8. https://doi.org/10.1177/1609406916681005
McIntosh, T., Walsh, H. A., Parsons, M. V., Goodman, M., & DuBois, J. M. (2021). Barriers and facilitators to qualitative data sharing in the United States: A survey of qualitative researchers. PLoS ONE, 16(12), e0261719. https://doi.org/10.1371/journal.pone.0261719
Mellor, D. T. (2021). Preregistration and transparency in the research process. PsyArXiv. https://doi.org/10.31219/osf.io/8rq3t
Meyer, K., & Willis, R. (2019). Looking back to move forward: The value of reflexive journaling for novice researchers. Journal of Gerontological Social Work, 62(5), 578–585. https://doi.org/10.1080/01634372.2018.1559906
Mill, J. E., & Ogilvie, L. D. (2003). Establishing methodological rigour in international qualitative nursing research: A case study from Ghana. Journal of Advanced Nursing, 41(1), 80–87. https://doi.org/10.1046/j.1365-2648.2003.02509.x
Moravcsik, A. (2019). Transparency in qualitative research. SAGE Publications. https://doi.org/10.4135/9781526421036
Mozersky, J., Walsh, H., Parsons, M., McIntosh, T., Baldwin, K., & DuBois, J. M. (2020). Are we ready to share qualitative research data? Knowledge and preparedness among qualitative researchers, IRB Members, and data repository curators. IASSIST Quarterly, 43(4), 13–27. https://doi.org/10.1002/eahr.500044
National Institutes of Health [NIH]. (2020). NIH data sharing policy and implementation guidance. U.S. Department of Health and Human Services. https://sharing.nih.gov/data-management-and-sharing-policy/about-data-management-and-sharing-policy/data-management-and-sharing-policy-overview#after
National Institutes of Health [NIH]. (n.d.). Enhancing reproducibility through rigor and transparency. U.S. Department of Health and Human Services. https://grants.nih.gov/policy/reproducibility/index.htm
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S., Chambers, C. D., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E., Freese, J., Glennerster, R., Goroff, D., Green, D. P., Hesse, B., Humphreys, M., & Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374
Nowell, L. S., Norris, J. M., White, D. E., & Moules, N. J. (2017). Thematic analysis: Striving to meet the trustworthiness criteria. International Journal of Qualitative Methods, 16(1), 1–13. https://doi.org/10.1177/1609406917733847
Office of Science and Technology Policy. (2022). Ensuring free, immediate, and equitable access to federally funded research. https://www.whitehouse.gov/wp-content/uploads/2022/08/08-2022-OSTP-Public-Access-Memo.pdf
Patton, M. Q. (2014). Qualitative research & evaluation methods. Sage Publications.
Peredaryenko, M. S., & Krauss, S. E. (2013). Calibrating the human instrument: Understanding the interviewing experience of novice qualitative researchers. The Qualitative Report, 18(43), 1–17. https://doi.org/10.46743/2160-3715/2013.1449
Powers, S. M., & Hampton, S. E. (2019). Open science, reproducibility, and transparency in ecology. Ecological Applications, 29(1), e01822. https://doi.org/10.1002/eap.1822
Qualitative Data Repository. (2022). Human participants general guidance. https://qdr.syr.edu/guidance/human-participants
Qualitative Data Repository. (n.d.). Preparing data files. https://qdr.syr.edu/guidance/managing/preparing-data
Rodgers, B. L., & Cowles, K. V. (1993). The qualitative research audit trail: A complex collection of documentation. Research in Nursing & Health, 16(3), 219–226. https://doi.org/10.1002/nur.4770160309
Rolfe, G. (2006). Validity, trustworthiness and rigour: Quality and the idea of qualitative research. Journal of Advanced Nursing, 53(3), 304–310. https://doi.org/10.1111/j.1365-2648.2006.03727.x
Rooney, V. M. (2015). Consent in longitudinal intimacy research: Adjusting formal procedure as a means of enhancing reflexivity in ethically important decisions. Qualitative Research, 15(1), 71–84. https://doi.org/10.1177/1468794113501686
Ross, M. W., Iguchi, M. Y., & Panicker, S. (2018). Ethical aspects of data sharing and research participant protections. American Psychologist, 73(2), 138–145. https://doi.org/10.1037/amp0000240
Ruggiano, N., & Perry, T. E. (2019). Conducting secondary analysis of qualitative data: Should we, can we, and how? Qualitative Social Work, 18(1), 81–97. https://doi.org/10.1177/1473325017700701
Sakaluk, J. K. (2021). Response to commentaries on Sakaluk (2020). Archives of Sexual Behavior, 50(5), 1847–1852. https://doi.org/10.1007/s10508-021-02020-w
Savin-Baden, M., & Howell-Major, C. (2013). Qualitative research: The essential guide to theory and practice. Routledge.
Schöpfel, J., Farace, D., Prost, H., & Zane, A. (2019). Data papers as a new form of knowledge organization in the field of research data. Knowledge Organization, 46(8), 622–638. https://halshs.archives-ouvertes.fr/halshs-02284548
Small, W., Maher, L., & Kerr, T. (2014). Institutional ethical review and ethnographic research involving injection drug users: A case study. Social Science & Medicine, 104, 157–162. https://doi.org/10.1016/j.socscimed.2013.12.010
Stahl, N. A., & King, J. R. (2020). Expanding approaches for research: Understanding and using trustworthiness in qualitative research. Journal of Developmental Education, 44(1), 26–28. https://files.eric.ed.gov/fulltext/EJ1320570.pdf
Stake, R. E. (1995). The art of case study research. Sage.
Steltenpohl, C. N., Montilla Doble, L. J., Basnight-Brown, D. M., Dutra, N. B., Belaus, A., Kung, C. C., Onie, S., Seernani, D., Chen, S., Burin, D. I., & Darda, K. (2021). Society for the Improvement of Psychological Science global engagement task force report. Collabra: Psychology, 7(1), 22968. https://doi.org/10.1525/collabra.22968
Tobin, G. A., & Begley, C. M. (2004). Methodological rigour within a qualitative framework. Journal of Advanced Nursing, 48(4), 388–396. https://doi.org/10.1111/j.1365-2648.2004.03207.x
United Nations Educational, Scientific and Cultural Organization. (2021). UNESCO Recommendation on Open Science. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000379949
VandeVusse, A., Mueller, J., & Karcher, S. (2022). Qualitative data sharing: Participant understanding, motivation, and consent. Qualitative Health Research, 32(1), 182–191. https://doi.org/10.1177/10497323211054058
Vazire, S. (2018). Implications of the credibility revolution for productivity, creativity, and progress. Perspectives on Psychological Science, 13(4), 411–417. https://doi.org/10.1177/1745691617751884
Wolf, Z. R. (2003). Exploring the audit trail for qualitative investigations. Nurse Educator, 28(4), 175–178. https://journals.lww.com/nurseeducatoronline/Fulltext/2003/07000/Exploring_the_Audit_Trail_for_Qualitative.8.aspx
Youngblut, J. M., Loveland-Cherry, C., & Horan, M. F. (1990). Data management issues in longitudinal research. Nursing Research, 39(3), 188–189. https://doi.org/10.1097/00006199-199005000-00019