
The Invisible Workload of Open Research

Published on May 04, 2023


It is acknowledged that conducting open research requires additional time and effort compared to conducting ‘closed’ research. However, this additional work is often discussed only in abstract terms, a discourse which ignores the practicalities of how researchers are expected to find the time to engage with these practices in the context of their broader role as multifaceted academics. In the context of a sector that is blighted by stress, burnout, untenable workloads, and hyper-competitive pressures to produce, there is a clear danger that additional expectations to engage in open practices will add to the workload burden and increase pressure on academics even further. In this article, the theories of academic capitalism and workload creep are used to explore how workload models currently exploit researchers by mismeasuring academic labour. The specific increase in workload resulting from open practices and associated administration is then outlined, including via the cumulative effects of administrative burden. It is argued that, without intervention, increased expectations to engage in open research practices are likely to lead to unacceptable increases in demands on academics. Finally, the individual and systemic responsibilities to mitigate this are discussed.

Keywords: academic capitalism, workload, burnout, administrative burden, open research

It is widely accepted that conducting open research can improve the endeavour of collaborative human knowledge generation. Here, “open research”1 refers to a variety of practices that make the plans, procedures, labour, and outputs of research publicly available, although the phrase covers broader meanings elsewhere (Fecher & Friesike, 2014). Any one individual open practice such as preregistration or data sharing may have a variety of axiological benefits (Uygun Tunç et al., 2022), but taken together, transparency of the research process improves the epistemic reliability of a piece of research, which facilitates incremental knowledge generation. It also supports an environment by which epistemically unreliable research (whether through errors or bias) can be discounted or ignored (Lakens & Evers, 2014).

Compared to ‘closed’ research, where the only publicly available element of research is a final journal report, open research involves transparently cataloguing as much of the research process as possible. It has been argued that transparent workflows can reduce inefficiency and save time once implemented (Lowndes et al., 2017). However, in practice, conducting open research involves additional time and effort compared to closed research, not only in the process of sharing materials to disciplinary standards, but also in the development of new skills and knowledge to enable this. Generally, the additional work required to conduct open research has been acknowledged (and justified) by proponents of open research reforms (e.g., Allen & Mehler, 2019; Robson et al., 2021; Scheliga & Friesike, 2014; A. J. Stewart et al., 2021).

However, the current discourse promoting open research fails to engage sufficiently with how additional workload impacts the practicalities of academic labour (Callard, 2022). The majority of literature discussing open research reforms is in the field of ‘meta-research’ and has typically viewed closed research as a systemic or cultural problem: characterizing researchers as fallible human agents working in systems and contexts that encourage bias and closed practices. Examples include discussion of issues such as the incentives for conducting open practices (Nosek et al., 2012), biases in publication workflows (Chambers & Tzavella, 2022), recognition and reward of open research practices (Munafò, 2019) and compliance with open research mandates (Gabelica et al., 2022). Whilst these are important issues, the common perspective is that researchers exist solely to conduct research and are primarily judged and motivated by success in this domain. This neglects the fact that the majority of researchers are employed not solely as researchers but rather as academics, a role that involves a large number of other activities that compete for time and resources, including teaching, administration, income generation, knowledge exchange, and supervision. Even for academics who are primarily researchers, transparency may not be a priority concern given other important and competing demands such as increasing research regulation (P. M. Stewart et al., 2008), novel ethical issues (Havard et al., 2012), or grappling with fundamental issues in theory development (Eronen & Bringmann, 2021) and methodology (Uher, 2023). By solely focusing on “open research” as a separable pursuit, meta-research neglects to acknowledge that the additional workload required by open research cannot always be practically accommodated in the day-to-day duties of academics and the time and resources they have available.

This is a critical issue, given the increasing systemic degradation of working standards across academia: there is ample evidence that many academics are already at “capacity” in terms of the amount of work they do (Long et al., 2020), and yet workloads are still increasing. This has led to endemic levels of stress and burnout in the sector (Urbina-Garcia, 2020), mental health crises (Nicholls et al., 2022), recourse to industrial action to protest against overwork (University & College Union, 2022), and a recognition that the sector is haemorrhaging talent to industry, where working conditions are seen to be more favourable (Gewin, 2022; Seidl et al., 2016). It is therefore crucial to explore this blind spot in meta-research: how the additional workload of open research may negatively impact the working conditions of academics.

The oversight can be addressed by utilising research from the field of higher education studies, including the theory of academic capitalism (Jessop, 2018). This theory suggests the scholarly ecosystem can be viewed as a type of market, where institutions are capitalist actors in competition with one another. From this perspective, the way that universities (as the primary employers of most academics) are organised, and subsequently their priorities, policies, and relationship with (and potentially exploitation of) academic labour can be examined. The lens of academic capitalism can therefore offer new insights on the way that open research reforms may be practically prevailed upon academics (Hostler, 2022), in order to anticipate problems and provide solutions.

The rest of the paper is structured as follows: first, I explain the theoretical framework of academic capitalism, including how academic labour is typically controlled by universities using a “workload model” and exploited via “workload creep”. Second, I explain how various open research practices add to the time burden and workload of conducting research, including via the cumulative and unnecessary effect of administrative burden. Third, I will explore how additional open research activities may not be sufficiently accounted for in a workload model, leading to detrimental effects on academics’ well-being. Finally, I will conclude with a discussion of potential solutions and the responsibilities of both individuals and institutions to address these issues.

Theoretical Background

Academic Capitalism

The theory of academic capitalism comes from higher education research and refers to the tendency for universities to operate in competition with each other in markets, competing over both economic and social capital. This tendency is manifested in their priorities, activities, internal organisation, and management. Whilst there are various specific forms of academic capitalism (Jessop, 2018), as an overarching theory it provides a framework which enables universities to be considered as strategic actors, rather than passive organizational units (Münch, 2014).

Through this lens, universities (like all capitalist actors) seek to maximise the utilization of their resources to remain competitive against one another in a variety of zero-sum ‘markets’ including student recruitment, research funding, national research evaluation exercises, and national and global university rankings (Collini, 2012). Primary among a university’s resources is its academic labour, which it attempts to “steer” towards its own goals through its internal management policies (Rees, 2015), and find innovative ways of organizing to improve its (economic) efficiency, for example through the use of fixed-term or part-time contracts (Macfarlane, 2011).

The strategic deployment of academic labour is not necessarily exploitative, and managerialist organization in universities is increasingly tolerated and accepted by academics (Kolsaker, 2008). However, increased oversight and capitalist logic are also seen to normalize exploitative practices when financial considerations are prioritized over traditional academic professional values (Vican et al., 2020). This is epitomized by the finding that the majority of casualized academic staff are required to work more hours than they are paid for in order to complete the work required in their contract (i.e., marking essays to a suitable academic standard; University & College Union, 2019).

Universities are not purely capitalist actors and have many competing interests, and the drivers of these interests are dynamic and set by the broader economic and political conditions in which they operate. Often, a university’s specific goals are congruent with the metrics and conditions tied to these drivers, for example the criteria used to judge research excellence in national research evaluation exercises. Changes to these metrics subsequently influence the university’s strategic plans, leading to the re-allocation of resources and new instructions to academics. The work by open research advocates to change these drivers to reward openness, such as funder mandates for open data (Hefce et al., 2016) or changes to university ranking criteria (Pagliaro, 2021), is therefore among the most powerful tools for system-wide adoption of open research practices.

However, efforts to change a university’s strategic priorities to support open research do nothing to alter the underlying capitalist framework, and so do not tackle the issues of working conditions raised in this article. Academic labour can also potentially be exploited to serve the interests of open research: the competitive pressures of “publish or perish” can easily become “publish open research or perish”. Indeed, any changes to working practices or expectations can provide perfect cover (intentionally or not) for increased exploitation (Hostler, 2022). To understand how the promotion of open research can lead to negative changes in working conditions, a closer look at how academic labour is currently organized is required.

The Workload Model

An academic’s job typically consists of a large number of activities in addition to research. A comprehensive analysis of an academic’s typical work over a three-month period is provided by Miller (2019), covering five main areas of teaching, research, administration, community service, and ‘other’. Whilst academics have responsibility for organising when they perform each of these duties, universities are increasingly using managerial practices to assign the range and volume of the tasks themselves (Kenny & Fluck, 2022), in the form of a “workload model”: a system for “projectifying” time into limited, measurable slots that can be allocated to different activities (Dollinger, 2020). Workload models are typically based on an annual measure of time that is allocated across different activities, forming a “split” of work across a year: for example, 40% of time dedicated to research, 40% to teaching, and 20% to administration. Certain components of workload models are due to regulatory requirements (Kernohan, 2019), but they are also a useful tool for a capitalist university seeking to understand and maximise the efficiency of the deployment of its human resources. The use of such models is divisive. Some academics view them as a threat to their autonomy (Boncori et al., 2020), and whilst for some they represent a degree of protection against being assigned too many tasks, many others view them as a mechanism for universities to demand unrealistic levels of work from academics by underestimating the time taken for different activities (Papadopoulos, 2017). There is also a general acknowledgement that workload models are not comprehensive and that a significant proportion of the actual work that academics do is unaccounted for (Kenny & Fluck, 2019; Miller, 2019).
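The arithmetic of such a model can be made concrete with a short, purely illustrative sketch. The 1,650-hour contractual year and the 40/40/20 split below are assumptions chosen for the example, not figures drawn from any particular institution:

```python
# Illustrative sketch of a percentage-based workload model.
# The 1,650-hour contractual year and the 40/40/20 split are assumed
# figures for this example, not taken from any specific institution.

ANNUAL_HOURS = 1650  # assumed contractual hours per year


def allocate(split, annual_hours=ANNUAL_HOURS):
    """Convert a percentage split into hours allocated per activity."""
    assert abs(sum(split.values()) - 100) < 1e-9, "split must total 100%"
    return {activity: annual_hours * pct / 100 for activity, pct in split.items()}


hours = allocate({"research": 40, "teaching": 40, "administration": 20})
# e.g. research and teaching each receive 660 hours; administration, 330
```

Whatever the assumed figures, the model's output is a fixed annual budget of hours per activity; any work that falls outside these categories, or takes longer than the allocation assumes, is by construction invisible.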

Whilst the time spent on different activities is allocated via workload models, academic performance is typically assessed via departmental or individual targets for outcomes or outputs of academic work. These outcomes are often cascaded from university-level metrics, for example, the number of papers published and in which journals, and the amount of research funding accrued. For researchers employed on temporary contracts, these are targets that are required to secure the next job and “survive” in academia (Anderson et al., 2007). The discrepancy between the time available to work on different academic activities and the expectations of performance is a key driver of stress and discontent. Many academics already feel that their workloads are at “untenable” levels and that additional time is needed to meet expectations, leading to burnout (Beatson et al., 2021). Within this context, expectations to perform additional duties to conduct open research, whether from formal mandates by funders or universities, or informal social expectations to remain competitive, have the potential to make things worse if insufficiently accounted for (made ‘invisible’) in a workload model. Unfortunately, historical trends suggest that open research practices will not be accounted for in workloads, as I explain below.

Workload Creep

One likely way in which open research may be insufficiently accounted for is via its inclusion in “workload creep”. This phenomenon exploits the fact that workload models do not provide a granular breakdown of activities, meaning that expectations around what should be achieved in a given time can be subtly changed without a corresponding change in the number of hours dedicated to a task (Long et al., 2020). This is particularly common in the case of research, where research expectations (in terms of quantity or quality of outputs) may be raised without extra time or resources made available. In the case of open research, there is a high potential for expectations to engage in open research to become widespread, but without additional time on academics’ workloads dedicated to the activity. These expectations may originate either from employers’ performance standards, for example requiring evidence of open research in hiring and promotion criteria (Gärtner et al., 2022; Robson et al., 2021), or from mandates for openness from funders, journals, or legislation (Nosek, 2019).

The broader discourse around academic workload acknowledges the reality of workload creep. Advice to early career academics is to simply learn how to “say no” to requests to perform additional work (Somerville, 2021), or to ask a manager “what would you like me to stop doing?” (Williams, 2022). However, it is difficult to apply either of these approaches to changes to research expectations. It is unlikely that open research practices will be explicitly requested by an individual such as a manager to whom one can say “no”: academics will either be encouraged or mandated to adopt them by anonymous university or journal policies, which are difficult to contest, or they will do so of their own volition to remain competitively employable. This then hampers bargaining power in discussions with managers about workloads, making it difficult to secure changes in workload models to accommodate the additional time required.

The issue of workload creep can already be seen in the open research practice of ‘open access’, where funded research outputs are mandated to be made publicly available, requiring additional work by academics to understand and comply with these requirements (Research Consulting, 2014). However, this additional work is not reflected in workloads, either as an increase in the number of hours per year given for research (which in ‘full capacity’ workloads would require other tasks to be removed) or as an explicit decrease in the number of publications expected per year. Complying with an open access requirement may only take about 30 minutes (Reimer, 2014), but conducting other open research practices (such as sharing materials, data, code, or preregistrations) would impose a much greater time burden if expected or mandated in the same way. In the next section I provide several examples of how open research practices may lead to a significantly increased time burden.

Additional Workload of Open Research

There are numerous open research practices, the benefits of which are discussed in detail elsewhere (e.g., Munafò et al., 2017; Nosek et al., 2018), and not all need be applied to every piece of research. However, adopting any new practice typically involves making changes to a researcher’s existing practices, bringing additional workload. This includes time spent learning and applying ‘open’ practices, the cumulative workload of novel administrative work, and the indirect labour of teaching and mentoring open research. These are explained in turn below:

Workload of Specific Open Practices

Preregistration

Preregistration involves providing a detailed explanation of a researcher’s planned data collection procedure, hypotheses, and analysis plan, so that researcher degrees of freedom in analysis decisions can be observed. However, in order for a preregistration to achieve this functionality, it must be “precise”, “specific” and “exhaustive” (Bakker et al., 2020). This involves communicating plans in a substantially greater level of detail than required in traditional research administration (e.g. for the purposes of ethical review, grant applications), as multiple alternative analysis strategies need to be considered and explained (including what will not be done) depending on different data collection outcomes including outliers, missing data, and violation of statistical assumptions (Bakker et al., 2020). The checklist by Wicherts et al. (2016) presents 34 different degrees of freedom that researchers should define in advance in a preregistration to prevent p-hacking. For the majority of researchers, following such a checklist represents an increase in the explicit planning needed for a research project, where many of their decisions will be based on implicit assumptions. It takes additional time to explicitly articulate these plans and decisions, especially in a form that is understandable to people unfamiliar with the project. The suggestion that a preregistration should typically take only “30–60 minutes” (Aguinis et al., 2020) is likely to be an inaccurate generalization, depending on the type and complexity of the research and the experience of the researcher, although the time taken should decrease with practice (Nosek et al., 2019).

Data Sharing

Data sharing is an open research practice that takes significantly more effort compared to a traditional closed approach; insofar as a closed approach takes no effort at all, involving simply ignoring or rebuffing sharing requests (Gabelica et al., 2022). In contrast, done properly and in line with the principles of Findable, Accessible, Interoperable, Reusable (FAIR; Wilkinson et al., 2016), data sharing takes a considerable amount of time. First, data must be properly anonymized in order to be shared ethically, a process that is particularly difficult for qualitative data such as interviews (Saunders et al., 2015), legal documents (Csányi et al., 2021), and unstructured, high-dimensional data such as audio and video recordings (Weitzenboeck et al., 2022). Second, data should be findable, which means taking time to select and use an appropriate data repository, ensure settings comply with ethical restrictions, and provide sufficient meta-data. Finally, to be accessible, interoperable, and reusable, data must be organized and labelled to community standards and formatted and described in such a way as to be understandable to others who are not familiar with how it was collected or processed. These tasks may represent time-consuming departures from how a researcher typically organises data for their own use.

Analysis Code

Open code refers to sharing the analysis code used to produce the output found in the final report from the research data. This is an open practice that may be unfamiliar to some researchers, especially those who use graphical user interface (GUI) programs such as SPSS, where viewing and understanding the underlying code is not necessary to analyse data. Compiling and sharing analysis code may therefore require training and the acquisition of new skills if a researcher is not experienced in doing this. Some researchers argue that using proprietary programs such as SPSS is not ideal for open research, since it takes more effort for those without access to these programs to utilize and interpret the code (Obels et al., 2020). This may encourage researchers to adopt open-source alternatives, such as JASP or R, where sharing code is significantly easier, although this in itself involves the development of new software skills and workflows. As with data, shared code also needs to be written or annotated in such a way as to ensure usability for other researchers (Obels et al., 2020), which takes extra time compared to writing code for one’s own use, where idiosyncratic shorthand may suffice.

Openness Agreements & Administration

In complex projects, additional administration is often required to facilitate the use of open research practices. This is particularly true in research with a large number of collaborators. Here, legal documents may be required to certify sharing agreements for data, materials, or outputs, particularly in cases where different elements of a project have different levels of openness across different time frames. For example, in cases where separable elements of a piece of software may be “owned” by different parties in a collaboration (Levin & Leonelli, 2017) or where industry collaboration requires delaying the timing of release of results or materials to maintain a competitive edge (Fernández Pinto, 2020). With the increasing size of research teams and complexity of projects, such agreements become lengthier and take additional time to complete and get approval from all parties. Additional administration (compared to closed research) may also come in the form of recording contributions to research projects (e.g. CRediT taxonomy; Holcombe, 2019) or providing meta-data about open research practices in the form of transparency statements or checklists (Aczel et al., 2020). Administration has a particularly close relationship with workload creep, and below I explain how the theory of administrative burden can further illuminate how the process of integrating minor administrative tasks into existing workflows can exacerbate the time burden of open research.

Administrative Burden

Whilst preparing a large set of audio-visual data to FAIR sharing standards may be a significant technical undertaking, many open research practices may be viewed as essentially administrative tasks involving documenting information about a piece of research. This includes writing preregistrations, data sharing documentation, and statements about the openness (or not) of open practices for different elements of a project. The theory of “administrative burden” (Bozeman, 1993) can be used to explore how in many cases the additional time spent completing these tasks is unnecessary and unnoticed in workload estimates. Administrative burden theory acknowledges that all administration represents a time burden, but that in many cases it represents unnecessary “red tape” when it does not help to fulfil a regulation’s functional objectives. An example of unnecessary research administration might be an ethics form that asks a researcher whether they are using radioactive materials, despite the fact that due to their discipline (e.g. psychology), the answer should be obvious (Bozeman & Youtie, 2020). In the case of open research, red tape may involve requirements to write transparency statements or complete checklists about the availability of data or materials where none exist (e.g. review papers), or explain the (non)existence of preregistrations for research in which this practice is not required or its use contested (e.g. exploratory or qualitative research).

Red tape can also be seen in the case of “rule redundancy” (Bozeman & Jung, 2017) resulting from bureaucratic overlap, where administration such as explanations of data sharing arrangements is duplicated across platforms (e.g. for funding applications, ethics applications, and journal requirements), but often with different specifications. Another example of rule redundancy is with preregistrations, documents which may closely mirror elements of existing research administration, such as research protocols required for ethical review. Whilst the existence of a protocol may make completing a separate preregistration easier (or vice-versa), the duplication of information is functionally unnecessary if one document could potentially serve both purposes, yet it generates extra workload when the differing formats of the two documents require content to be adapted as information is moved between them.

Administrative burden is often exacerbated when technology is used to remotely facilitate administration and therefore lacks the nuance to accurately capture the reality of a particular context. Dialogue with the technology provider is then required to resolve discrepancies, inconspicuously increasing time burden. For example, platforms or infrastructure to facilitate open practices such as preregistration or sharing data or materials may be unclearly worded or not fit for purpose for particular kinds of research or data (e.g., Borgerud & Borglund, 2020; Rhys Evans et al., 2021). This is particularly the case in disciplines such as the humanities, where what constitutes “data” or even “research outputs” may be unusual (including physical objects). The widespread adoption of the ethical norms and terminology from positivist biomedical research in the ethical review process is inappropriate for much social science research and a historical example of creating extra administrative burden for certain groups of scholars (Schneider, 2015). This foreshadows the potential for open research practices such as preregistration, developed from a similarly narrow statistical perspective (Nosek et al., 2018), to also be misapplied to other areas of research if administered remotely. Increased administrative burden also occurs when existing technology and systems in the research ecosystem fail to keep pace with developments and trends in research resulting from greater openness. This issue can already be seen in archaic manuscript submission systems which do not accommodate the hundreds of authors found on “big team science” projects enabled by the use of open research practices, requiring significant additional time spent doing administration (Forscher et al., 2022). The scope for the multitude of open research requirements and applications to novel forms of research to outpace existing technology means that such examples are likely to become more common.

The growth of administrative burden has two facets that make it difficult to combat. The first is that administration can often be convincingly defended on the basis that it collects data that has potential utility or that it is necessary to assure compliance with regulations or mandates. However, whether all the data collected from research administration is strictly necessary or ever actually used is contested (O’Leary et al., 2013), and as many open research practices and associated administration are not yet widely adopted there is a lack of evidence on the actual benefits (e.g., of transparency statements) to consider against potential or actual time costs. There are strong arguments that existing research administration is already excessive and prohibitive, particularly for certain types of research such as clinical trials (D. J. Stewart et al., 2015). Some estimates put the amount of time allocated to research that is spent on administration at 42% (Rockwell, 2009) and there is evidence that researchers already employ “workarounds” or exhibit non-compliance to reduce this burden (Bozeman et al., 2021). More fundamentally, it has been argued that research administration is an ineffective way to ensure compliance with regulations (Schneider, 2015), as it can easily be falsified (e.g., claiming data is available when it is not; Gabelica et al., 2022).

The second issue is that administrative burden is a cumulative problem. The time cost of any one individual instance of administration can easily be dismissed as trivial: it might take only ten minutes to complete a transparency checklist. In the context of a workload model measuring time annually, this represents <.01% of a researcher’s time. However, cumulatively, such administration adds up. In addition to a checklist about methodological transparency (Aczel et al., 2020), a researcher may also be compelled to complete an ethics transparency checklist (Henry et al., 2018), a financial conflict of interest checklist (Rochon et al., 2010), a patient and public involvement checklist (Staniszewska et al., 2017), and/or a clinical practice guidelines checklist (Cruz Rivera et al., 2020). When considered in the wider context of an all-round academic job, potential sources of administration multiply even further. Administration is increasing in universities across all elements of an academic role (Hogan, 2011), all of which compete for importance and time. Minor administrative tasks are constantly introduced to collect data relating to pedagogy, supervision, equality and diversity, technology enhanced learning, financial auditing, health and safety, data protection compliance, employment law, and so on. The cumulative impact of ‘trivial’ pieces of administration has been described as “death by a thousand 10-minute tasks” (Bozeman et al., 2021). However, it is only the academic themself that sees the impact of this burden as it is they who need to devote time to completing all of these tasks. The full picture of administrative burden is therefore difficult to detect as it is only visible when considered holistically, a perspective that meta-research and other siloed analyses of individual aspects of academic work often overlook.
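The cumulative arithmetic can be made concrete with a brief, purely illustrative sketch. The 1,650-hour contractual year and the count of fifty tasks per year are assumptions for the example only:

```python
# Illustrative sketch of how 'trivial' ten-minute tasks accumulate.
# The 1,650-hour contractual year and the count of fifty tasks per year
# are assumptions for this example only.

ANNUAL_MINUTES = 1650 * 60  # assumed contractual minutes per year

task_minutes = 10
single_share = task_minutes / ANNUAL_MINUTES   # roughly 0.01% of the year
fifty_tasks_hours = 50 * task_minutes / 60     # 500 minutes in total

print(f"one task: {single_share:.4%} of the year")
print(f"fifty tasks: {fifty_tasks_hours:.1f} hours")
```

Viewed annually, any single task rounds to effectively nothing; fifty of them amount to more than a full working day that appears nowhere in the model.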

Fostering Open Research: Teaching and Supervision

A final way in which open research invisibly adds to workload is through the expectation that academics not only engage in open research themselves but teach and mentor open research practices to junior colleagues, and graduate and undergraduate students. This can be through direct reforms to teaching materials and curriculums, but also through supervision and informal mentoring. Understanding the why and how-to of open research reforms is a big task that necessarily requires knowledge of the philosophy, history, and sociology of science, as well as the practical data science and technological skills discussed previously (Crüwell et al., 2019). Therefore, instructing graduate and undergraduate students on such topics may require significant reform of existing teaching and supervision practices, which have been described as “largely outdated” (Azevedo et al., 2022). The issue of updating existing curricula with new knowledge is not unique to research methods, and workloads typically include time to update and rewrite teaching content (although this is often already underestimated). Resources have been developed and shared to reduce this burden (e.g., lesson plans, Pownall et al., 2021); however, the “revolutionary” changes that open research reforms represent (Spellman, 2015) still make this task considerable and time consuming. Major changes such as shifting to teaching reproducible analysis software like R may require significant investments in staff training (e.g., Barr et al., 2019), again a time sink that is rarely captured in workload models. These issues also apply to the informal mentoring of colleagues and PhD students, work that is typically already neglected in workload models and falls disproportionately on structurally disadvantaged staff (Gordon et al., 2022). Adding the mentoring of open research skills and knowledge to existing supervision of how to navigate academia and research methods again represents additional activities that are practically time consuming, but that are not reflected in workload models.


Discussion
The root of the issue with workload models is that, as models, they are by definition “simulations of measurement” of academic work rather than accurate records of real-world labour, yet they take precedence over reality in discussions and expectations of the work of academics (Papadopoulos, 2017). By erroneously “measuring” the time it takes to perform certain activities, or by ignoring others entirely, they make expectations of academic work unrealistic yet, from a managerial perspective, justified. Open research practices represent a novel type of academic labour with high potential to be mismeasured or made invisible by workload models, raising expectations to even more unrealistic levels.

The actual additional workload of open research is highly dependent on the type of research and the specific practices adopted. Whilst the time burden of tasks such as data sharing or teaching open research may be clear and significant, that of others, such as administration, checklists, or preregistration, may appear deceptively trivial. However, such trivial tasks can easily add up and multiply, and are thus much more likely to ‘creep’ into workloads undetected. Taken together, the additional workload required for openness could therefore easily consume any time “saved” by efficiency gains from open workflows (Lowndes et al., 2017).

The benefits of open research practices have been widely discussed (Munafò et al., 2017), and generally speaking researchers have positive attitudes towards adopting them, finding them worthwhile (Eynden et al., 2016; Lowndes et al., 2017) and recommending their use to others (Sarafoglou et al., 2022). However, high time cost is repeatedly identified as one of the main barriers to the adoption of open practices (Eynden et al., 2016; Gownaris et al., 2022; Tenopir et al., 2011), and time is precisely what researchers lack: workloads across the sector are at capacity, having already reached “untenable” levels (Long et al., 2020; Papadopoulos, 2017). Historically, institutions have responded to increases in workload and administration by implementing managerial practices such as workload models that aim to increase efficiency by raising expectations of what researchers should achieve in their existing available time. This has led to endemic levels of stress and mental health issues across academia, with unrealistic expectations and excessive workload cited as the primary causes (Nicholls et al., 2022; Urbina-Garcia, 2020).

Without intervention, there is currently no reason to expect that the additional workload required by open research practices will not follow the same pattern of being integrated into existing workloads without a sufficient increase in time available, and thus exacerbate the crisis. The theory of academic capitalism suggests that the responsibility for addressing potential discrepancies between modelled and actual workload will not be taken up voluntarily by university management, who may at best ignore such issues, or at worst tacitly approve of them as a form of capitalist efficiency (Lyons & Ingersoll, 2010). In other words, if university management can choose not to incorporate open research into workload models, then they won’t.

Acknowledging the implications of operating in a capitalist academic system presents a dilemma for open research advocates looking to improve the quality of research without exacerbating existing issues with working conditions. Fairly integrating expectations and incentives to conduct open research into a system which already exploits academic labour is a difficult task, and good intentions on a systematic level can have perverse individual outcomes. On a systematic level it is certainly a desirable outcome if open research is rewarded, thus helping to position responsible researchers in long-term careers and raising the quality of research across the board. But on an individual level it is not a desirable outcome for a researcher already working at maximum capacity and at risk of burnout to be expected to perform extra tasks in order to achieve or retain secure employment. Both of these outcomes can co-occur, and uncritical progress towards the former may inadvertently trigger the latter. This duality has implications for both the responsibility of individuals and the design of systems in promoting open research, which I explain below.

Implications for Individuals

First, proponents of open research reforms must acknowledge how the extra work of conducting open research may be practically accommodated in a researcher’s existing workload. Although some have attempted to do this (e.g., Robson et al., 2021), elsewhere the issue is neglected or downplayed. Suggestions that concerns about the workload of open research are an example of a “myth” (Bastiaansen, 2019) or a “misconception” that can be corrected by “positive advocacy” (Hagger, 2022) are unhelpful for having honest conversations about the practical negative consequences of conducting open research. Claims that open practices such as sharing resources reduce workload (Grahe et al., 2020) reflect only the perspective of those utilising shared resources, not of those doing the sharing. Efficiency gains from open research which nominally “save time” (Lowndes et al., 2017) may be inconsequential if open research also results in increased expectations of open outputs in the form of workload creep.

Second, open research advocates should not promote open research practices uncritically. Despite the benefits, all open research practices have an accompanying cost in time, which is a scarce and increasingly depleted resource. Even trivial administrative tasks can have a cumulative impact. Whilst it may not be possible to accurately predict potential time costs and benefits in advance of proposing or promoting a new open research initiative, rigorous meta-research should be planned and conducted to evaluate the actual benefits and costs of open research practices. For example, research has investigated the impact of preregistrations on researcher workflow (Sarafoglou et al., 2022). If a practice is shown to have minimal actual benefit (e.g., if transparency statements go largely unread, or preregistrations fail to prevent researcher bias) then its continued promotion should be re-evaluated or discontinued.

Third, open research advocates, particularly those with influence in universities, should engage more directly with issues of academic labour (Callard, 2022; Hostler, 2022). When promoting open research reforms in universities or conducting open practices themselves, they should advocate not only for investments in open research infrastructure and training but also for extra time in workload allocations to acknowledge the additional burden of open research. Advocates writing about issues of systems and incentives should familiarize themselves with the literature on academic capitalism (Jessop, 2018) and on workload modelling (Papadopoulos, 2017), and acknowledge and address the implications of policy suggestions for workload. They should listen to and support workers’ rights groups and trade unions in academia, take a broader interest in how changes to research infrastructure and practice can negatively affect labour conditions, and take a more holistic view of researchers as part of an increasingly troubled and discontented higher education sector.

Implications for Systems

One proposed solution to the issue of the extra work required for open research is a move to team-based research, and the use of specialists to support academics with open research requirements (A. J. Stewart et al., 2021). It is a sensible suggestion, and one to which institutions are likely to be amenable, as it dovetails with many aspects of the ‘post-academic’ re-organization of research in universities (Ziman, 2000). In several places it has already been implemented, with an increase in university research professionals and support services helping with open research practices (Carter et al., 2019), which can directly reduce workload for academics. However, it is a long-term solution, and the availability of such specialist support is not yet consistent across the sector (S. L. K. Stewart et al., 2022). There is also much work to be done to fairly embed such roles in the infrastructure of university research and to ensure such specialist labour is not itself exploited, and is appropriately funded and rewarded (Bennett et al., 2022).

Where institutions do implement policies and mandates for current academics to practice open research, these should be designed to minimize unnecessary “red tape” in the form of administration that does not aid the reform’s functional objectives. This requires a clear understanding and explanation of what the functional objective of the reform actually is, which itself requires evaluating the reform’s axiological position and benefits (Uygun Tunç et al., 2022). For many reforms (e.g., preregistration or data sharing for certain types or programs of research), such arguments may be contested (e.g., Szollosi et al., 2020), making suggestions that preregistration should be mandatory for publication (Aguinis et al., 2020) a clear example of potential ‘red tape’. Designing systems to minimize red tape is not an easy task, as both standardization (which does not accommodate non-standard research) and diversity (which can result in inconsistent and confusing nomenclature) can be sources of administrative burden. In addition, systems should aim to minimize rule redundancy, for example by replacing research protocols with preregistration documents during ethical review, to avoid unnecessary duplication of similar documents for different purposes. Policies and interventions should also carefully consider whether administration for the purpose of generating (meta)data about (open) research activities is justified: such data may be useful for a number of reasons, but may not be worth the cost of the added burden.

If institutions decide to devote funding and resources to support open research, then these should where possible be directed to activities and interventions which practically reduce the time-burden of conducting open research. This includes using funding to provide dedicated workload hours for open research practices, as well as re-evaluating research performance targets to acknowledge that conducting open research may take significantly longer. Any changes to workload or research expectations should be developed and implemented in consultation with academic staff, made fully transparent, and be flexible and under continuous review (Kenny & Fluck, 2022). Training and guidance for open research should be as targeted as possible (i.e., disciplinarily and methodologically specific) to reduce the time-burden on researchers of interpreting and applying such guidance to their own projects. Grassroots open science communities have an important role in developing and delivering such training and mentoring in an accessible way (Armeni et al., 2021), although those involved in fostering these communities should also be given appropriate workload allocations and resources by their institutions for these tasks.

The broader systems within which universities operate and compete should also be considered in the promotion of open research. Universities respond to market drivers; if a funder requires research data to be made open, then an astute university will invest in infrastructure to support researchers to do this to ensure continued access to the funding. If university rankings rewarded open research, a competitive university would try to ensure its research outputs performed well on these metrics. By this logic, if funders and rankers valued or mandated fair workloads and working conditions for academics as a condition of eligibility, universities would be inclined to adapt to remain competitive in this new environment.

Finally, it should be remembered that universities are complex, multifaceted organizations and not solely capitalist actors. They have many competing and sometimes conflicting interests, which can ebb and flow depending on the current climate and conditions. Nevertheless, they are nearly always strategic, and direct intervention is often required by those advocating for change to highlight the benefits of one particular priority (e.g., staff wellbeing) over the costs of another (time spent on open research). This has implications for the way in which initiatives relating to promoting open research are designed, described, and advocated for.


Conclusion
The uncritical promotion of expectations to conduct open research within a framework of academic capitalism may inadvertently increase workload for researchers at a time when demands on time are already excessive and academics are struggling to cope. It is neither the specific role of open research advocates nor within their capability to tackle the root causes of workload issues, but they must be aware of the potential implications of their calls for systemic changes in incentives for open research. Understanding how academic labour is organized and viewing universities through the lens of academic capitalism can help open research advocates to promote open research practices in responsible and sustainable ways.


Acknowledgements
My initial work on this topic was conducted whilst undertaking an MA in Higher Education with the University Teaching Academy at Manchester Metropolitan University and I would like to thank my supervisor Bernard Lisewski for his support during this. I would also like to thank Yael Benn for her comments on an initial draft of this manuscript.


References
Aczel, B., Szaszi, B., Sarafoglou, A., Kekecs, Z., Kucharský, Š., Benjamin, D., Chambers, C. D., Fisher, A., Gelman, A., Gernsbacher, M. A., Ioannidis, J. P., Johnson, E., Jonas, K., Kousta, S., Lilienfeld, S. O., Lindsay, D. S., Morey, C. C., Munafò, M., Newell, B. R., … Wagenmakers, E.-J. (2020). A consensus-based transparency checklist. Nature Human Behaviour, 4(1), 4–6.

Aguinis, H., Banks, G. C., Rogelberg, S. G., & Cascio, W. F. (2020). Actionable recommendations for narrowing the science-practice gap in open science. Organizational Behavior and Human Decision Processes, 158, 27–35.

Allen, C., & Mehler, D. M. A. (2019). Open science challenges, benefits and tips in early career and beyond. PLOS Biology, 17(5), Article e3000246.

Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2007). The perverse effects of competition on scientists’ work and relationships. Science and Engineering Ethics, 13(4), 437–461.

Armeni, K., Brinkman, L., Carlsson, R., Eerland, A., Fijten, R., Fondberg, R., Heininga, V. E., Heunis, S., Koh, W. Q., Masselink, M., Moran, N., Ó Baoill, A., Sarafoglou, A., Schettino, A., Schwamm, H., Sjoerds, Z., Teperek, M., van den Akker, O. R., van ’t Veer, A., & Zurita-Milla, R. (2021). Towards wide-scale adoption of open science practices: The role of open science communities. Science and Public Policy, 48(5), 605–611.

Azevedo, F., Liu, M., Pennington, C. R., Pownall, M., Evans, T. R., Parsons, S., Elsherif, M. M., Micheli, L., Westwood, S. J., & Framework for Open and Reproducible Research Training (FORRT). (2022). Towards a culture of open scholarship: The role of pedagogical communities. BMC Research Notes, 15(1), Article 75.

Bakker, M., Veldkamp, C. L. S., van Assen, M. A. L. M., Crompvoets, E. A. V., Ong, H. H., Nosek, B. A., Soderberg, C. K., Mellor, D., & Wicherts, J. M. (2020). Ensuring the quality and specificity of preregistrations. PLOS Biology, 18(12), Article e3000937.

Barr, D., Cleland Woods, H., DeBruine, L., Lai, R., McAleer, P., McNee, S., Nordmann, E., Paterson, H., & Stack, N. (2019). Redesigning methods curricula for reproducibility.

Bastiaansen, J. (2019). 10 open science myths – Open Science Community Groningen. Open Science Community Groningen.

Beatson, N. J., Tharapos, M., O’Connell, B. T., Lange, P., Carr, S., & Copeland, S. (2021). The gradual retreat from academic citizenship. Higher Education Quarterly, 76(4), 715–725.

Bennett, A., Garside, D., Gould van Pragg, C., Hostler, T. J., Kherroubi Garcia, I., Plomp, E., Schettino, A., Teplitzky, S., & Ye, H. (2022). A manifesto for rewarding and recognising Team Infrastructure Roles. Research Equals.

Boncori, I., Bizjak, D., & Sicca, L. M. (2020). Workload allocation models in academia: A panopticon of neoliberal control or tools for resistance? Tamara, 18(1), 51–69.

Borgerud, C., & Borglund, E. (2020). Open research data, an archival challenge? Archival Science, 20, 279–302.

Bozeman, B. (1993). A theory of government ‘red tape.’ Journal of Public Administration Research and Theory, 3(3), 273–303.

Bozeman, B., & Jung, J. (2017). Bureaucratization in academic research policy: What causes it? Annals of Science and Technology Policy, 1(2), 133–214.

Bozeman, B., & Youtie, J. (2020). Robotic bureaucracy: Administrative burden and red tape in university research. Public Administration Review, 80(1), 157–162.

Bozeman, B., Youtie, J., & Jung, J. (2021). Death by a thousand 10-minute tasks: Workarounds and noncompliance in university research administration. Administration & Society, 53(4), 527–568.

Callard, F. (2022). Replication and reproduction: Crises in psychology and academic labour. Review of General Psychology, 26(2), 199–211.

Carter, S., Carlson, S., Crockett, J., Falk-Krzesinski, H. J., Lewis, K., & Walker, B. E. (2019). The role of research development professionals in supporting team science. In K. L. Hall, A. L. Vogel, & R. T. Croyle (Eds.), Strategies for team science success (pp. 375–388). Springer International Publishing.

Chambers, C. D., & Tzavella, L. (2022). The past, present and future of Registered Reports. Nature Human Behaviour, 6(1), 29–42.

Collini, S. (2012). What are universities for? Penguin.

Crüwell, S., van Doorn, J., Etz, A., Makel, M. C., Moshontz, H., Niebaum, J. C., Orben, A., Parsons, S., & Schulte-Mecklenbeck, M. (2019). Seven easy steps to open science: An annotated reading list. Zeitschrift Für Psychologie, 227(4), 237–248.

Cruz Rivera, S., Liu, X., Chan, A.-W., Denniston, A. K., Calvert, M. J., SPIRIT AI, & CONSORT-AI Working Group. (2020). Guidelines for clinical trial protocols for interventions involving artificial intelligence: The SPIRIT-AI extension. The Lancet Digital Health, 2(10), 549–560.

Csányi, G. M., Nagy, D., Vági, R., Vadász, J. P., & Orosz, T. (2021). Challenges and open problems of legal document anonymization. Symmetry, 13(8), 1490.

Dollinger, M. (2020). The projectification of the university: Consequences and alternatives. Teaching in Higher Education, 25(6), 669–682.

Eronen, M. I., & Bringmann, L. F. (2021). The theory crisis in psychology: How to move forward. Perspectives on Psychological Science, 16(4), 779–788.

Eynden, V. V. D., Knight, G., Vlad, A., Radler, B., Tenopir, C., Leon, D., Manista, F., Whitworth, J., & Corti, L. (2016). Survey of Wellcome researchers and their attitudes to open research. Wellcome Trust.

Fecher, B., & Friesike, S. (2014). Open science: one term, five schools of thought. In S. Bartling & S. Friesike (Eds.), Opening science (pp. 17–47). Springer International Publishing.

Fernández Pinto, M. (2020). Open science for private interests? How the logic of open science contributes to the commercialization of research. Frontiers in Research Metrics and Analytics, 5, Article 588331.

Forscher, P. S., Wagenmakers, E.-J., Coles, N. A., Silan, M. A. A., Dutra, N. B., Basnight-Brown, D., & IJzerman, H. (2022). The benefits, barriers, and risks of big team science. PsyArXiv.

Gabelica, M., Bojčić, R., & Puljak, L. (2022). Many researchers were not compliant with their published data sharing statement: Mixed-methods study. Journal of Clinical Epidemiology, 150, 33–41.

Gärtner, A., Leising, D., & Schönbrodt, F. D. (2022). Responsible Research Assessment II: A specific proposal for hiring and promotion in psychology.

Gewin, V. (2022). Has the ‘great resignation’ hit academia? Nature, 606(7912), 211–213.

Gordon, H. R., Willink, K., & Hunter, K. (2022). Invisible labor and the associate professor: Identity and workload inequity. Journal of Diversity in Higher Education.

Gownaris, N. J., Vermeir, K., Bittner, M.-I., Gunawardena, L., Kaur-Ghumaan, S., Lepenies, R., Ntsefong, G. N., & Zakari, I. S. (2022). Barriers to full participation in the open science life cycle among early career researchers. Data Science Journal, 21(1), 2.

Grahe, J. E., Cuccolo, K., Leighton, D. C., & Cramblet Alvarez, L. D. (2020). Open science promotes diverse, just, and sustainable research and educational outcomes. Psychology Learning & Teaching, 19(1), 5–20.

Hagger, M. S. (2022). Developing an open science ‘mindset.’ Health Psychology and Behavioral Medicine, 10(1), 1–21.

Havard, M., Cho, M. K., & Magnus, D. (2012). Triggers for research ethics consultation. Science Translational Medicine, 4(118).

HEFCE, RCUK, Universities UK, & Wellcome Trust. (2016). Concordat on open research data.

Henry, B. M., Vikse, J., Pekala, P., Loukas, M., Tubbs, R. S., Walocha, J. A., Jones, D. G., & Tomaszewski, K. A. (2018). Consensus guidelines for the uniform reporting of study ethics in anatomical research within the framework of the anatomical quality assurance (AQUA) checklist: Framework of the AQUA Checklist. Clinical Anatomy, 31(4), 521–524.

Hogan, J. (2011). Is higher education spending more on administration and, if so, why? Perspectives: Policy and Practice in Higher Education, 15(1), 7–13.

Holcombe, A. (2019). Contributorship, not authorship: Use CRediT to indicate who did what. Publications, 7(3), 48.

Hostler, T. (2022). Open research reforms and the capitalist university’s priorities and practices: Areas of opposition and alignment. SocArXiv.

Jessop, B. (2018). On academic capitalism. Critical Policy Studies, 12(1), 104–109.

Kenny, J., & Fluck, A. E. (2019). Academic administration & service workloads in Australian Universities. Australian Universities Review, 61(2), 21–30.

Kenny, J., & Fluck, A. E. (2022). Emerging principles for the allocation of academic work in universities. Higher Education, 83(6), 1371–1388.

Kernohan, D. (2019). A beginner’s guide to academic workload modelling.

Kolsaker, A. (2008). Academic professionalism in the managerialist era: A study of English universities. Studies in Higher Education, 33(5), 513–525.

Lakens, D., & Evers, E. R. K. (2014). Sailing from the seas of chaos into the corridor of stability: Practical recommendations to increase the informational value of studies. Perspectives on Psychological Science, 9(3), 278–292.

Levin, N., & Leonelli, S. (2017). How does one “open” science? Questions of value in biological research. Science, Technology, & Human Values, 42(2), 280–305.

Long, D. W., Barnes, A. P. L., Northcote, P. M., & Williams, P. T. (2020). Accounting academic workloads: Balancing workload creep to avoid depreciation in the higher education sector. Education, Society and Human Studies, 1(2), 55.

Lowndes, J. S. S., Best, B. D., Scarborough, C., Afflerbach, J. C., Frazier, M. R., O’Hara, C. C., Jiang, N., & Halpern, B. S. (2017). Our path to better science in less time using open data science tools. Nature Ecology & Evolution, 1(6), Article 0160.

Lyons, M., & Ingersoll, L. (2010). Regulated autonomy or autonomous regulation? Collective bargaining and academic workloads in Australian universities. Journal of Higher Education Policy and Management, 32(2), 137–148.

Macfarlane, B. (2011). The morphing of academic practice: Unbundling and the rise of the para-academic. Higher Education Quarterly, 65(1), 59–73.

Miller, J. (2019). Where does the time go? An academic workload case study at an Australian university. Journal of Higher Education Policy and Management, 41(6), 633–645.

Munafò, M. (2019). Raising research quality will require collective action. Nature, 576(7786), 183–183.

Munafò, M., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C., Percie du Sert, N., Simonsohn, U., Wagenmakers, E.-J., Ware, J. J., & Ioannidis, J. P. A. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), Article 0021.

Münch, R. (2014). Academic capitalism: Universities in the global struggle for excellence. Routledge.

Nicholls, H., Nicholls, M., Tekin, S., Lamb, D., & Billings, J. (2022). The impact of working in academia on researchers’ mental health and well-being: A systematic review and qualitative meta-synthesis. PLOS ONE, 17(5), Article e0268890.

Nosek, B. A. (2019). Strategy for culture change. Centre for Open Science.

Nosek, B. A., Beck, E. D., Campbell, L., Flake, J. K., Hardwicke, T. E., Mellor, D. T., van ’t Veer, A. E., & Vazire, S. (2019). Preregistration is hard, and worthwhile. Trends in Cognitive Sciences, 23(10), 815–818.

Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606.

Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615–631.

Obels, P., Lakens, D., Coles, N. A., Gottfried, J., & Green, S. A. (2020). Analysis of open data and computational reproducibility in Registered Reports in psychology. Advances in Methods and Practices in Psychological Science, 3(2), 229–237.

O’Leary, E., Seow, H., Julian, J., Levine, M., & Pond, G. R. (2013). Data collection in cancer clinical trials: Too much of a good thing? Clinical Trials, 10(4), 624–632.

Pagliaro, M. (2021). Purposeful evaluation of scholarship in the open science era. Challenges, 12(1), Article 6.

Papadopoulos, A. (2017). The mismeasure of academic labour. Higher Education Research & Development, 36(3), 511–525.

Pownall, M., Azevedo, F., Aldoh, A., Elsherif, M., Vasilev, M., Pennington, C. R., Robertson, O., Tromp, M. V., Liu, M., Makel, M. C., Tonge, N., Moreau, D., Horry, R., Shaw, J., Tzavella, L., McGarrigle, R., Talbot, C., Parsons, S., & FORRT. (2021). Embedding open and reproducible science into teaching: A bank of lesson plans and resources. Scholarship of Teaching and Learning in Psychology.

Rees, T. (2015). Developing a research strategy at a research intensive university: A Pro Vice Chancellor’s perspective. In R. Dingwall & M. McDonnell (Eds.), The SAGE handbook of research management (pp. 565–580). SAGE Publications Ltd.

Reimer, T. (2014). Imperial College London submission to the RCUK review on open access.

Research Consulting. (2014). Counting the costs of open access.

Rhys Evans, T., Branney, P., Clements, A., & Hatton, E. (2021). Improving evidence-based practice through preregistration of applied research: Barriers and recommendations. Accountability in Research, 30(2), 88–108.

Robson, S. G., Baum, M. A., Beaudry, J. L., Beitner, J., Brohmer, H., Chin, J. M., Jasko, K., Kouros, C. D., Laukkonen, R. E., Moreau, D., Searston, R. A., Slagter, H. A., Steffens, N. K., Tangen, J. M., & Thomas, A. (2021). Promoting open science: A holistic approach to changing behaviour. Collabra: Psychology, 7(1), Article 30137.

Rochon, P. A., Hoey, J., Chan, A.-W., Ferris, L. E., Lexchin, J., Kalkar, S. R., Sekeres, M., Wu, W., Van Laethem, M., Gruneir, A., Maskalyk, J., Streiner, D. L., Gold, J., Taback, N., & Moher, D. (2010). Financial Conflicts of Interest Checklist 2010 for clinical research studies. Open Medicine: A Peer-Reviewed, Independent Open-Access Journal, 4(1), 69–91.

Rockwell, S. (2009). The FDP Faculty Burden Survey. Research Management Review, 16(2), 29–44.

Sarafoglou, A., Kovacs, M., Bakos, B., Wagenmakers, E.-J., & Aczel, B. (2022). A survey on how preregistration affects the research workflow: Better science but more work. Royal Society Open Science, 9(7), 211997.

Saunders, B., Kitzinger, J., & Kitzinger, C. (2015). Anonymising interview data: Challenges and compromise in practice. Qualitative Research, 15(5), 616–632.

Scheliga, K., & Friesike, S. (2014). Putting open science into practice: A social dilemma? First Monday.

Schneider, C. E. (2015). The censor’s hand: The misregulation of human-subject research. MIT Press.

Seidl, A., Wrzaczek, S., El Ouardighi, F., & Feichtinger, G. (2016). Optimal career strategies and brain drain in academia. Journal of Optimization Theory and Applications, 168(1), 268–295.

Somerville, L. H. (2021). Learn when—and how—to say no in your professional life. Science.

Spellman, B. A. (2015). A short (personal) future history of revolution 2.0. Perspectives on Psychological Science, 10(6), 886–899.

Staniszewska, S., Brett, J., Simera, I., Seers, K., Mockford, C., Goodlad, S., Altman, D. G., Moher, D., Barber, R., Denegri, S., Entwistle, A., Littlejohns, P., Morris, C., Suleman, R., Thomas, V., & Tysall, C. (2017). GRIPP2 reporting checklists: Tools to improve reporting of patient and public involvement in research. Research Involvement and Engagement, 3(1), 13.

Stewart, A. J., Farran, E. K., Grange, J. A., Macleod, M., Munafò, M., Newton, P., Shanks, D. R., & the UK Reproducibility Network (UKRN) Local Network Leads. (2021). Improving research quality: The view from the UK Reproducibility Network Institutional Leads for research improvement. BMC Research Notes, 14(1), 458.

Stewart, D. J., Batist, G., Kantarjian, H. M., Bradford, J.-P., Schiller, J. H., & Kurzrock, R. (2015). The urgent need for clinical research reform to permit faster, less expensive access to new therapies for lethal diseases. Clinical Cancer Research, 21(20), 4561–4568.

Stewart, P. M., Stears, A., Tomlinson, J. W., & Brown, M. J. (2008). Regulation - the real threat to clinical research. BMJ, 337, Article a1732.

Stewart, S. L. K., Pennington, C. R., da Silva, G. R., Ballou, N., Butler, J., Dienes, Z., Jay, C., Rossit, S., Samara, A., & UK Reproducibility Network (UKRN) Local Network Leads. (2022). Reforms to improve reproducibility and quality must be coordinated across the research ecosystem: The view from the UKRN Local Network Leads. BMC Research Notes, 15(1), Article 58.

Szollosi, A., Kellen, D., Navarro, D. J., Shiffrin, R., van Rooij, I., Van Zandt, T., & Donkin, C. (2020). Is preregistration worthwhile? Trends in Cognitive Sciences, 24(2), 94–95.

Tenopir, C., Allard, S., Douglass, K., Aydinoglu, A. U., Wu, L., Read, E., Manoff, M., & Frame, M. (2011). Data sharing by scientists: Practices and perceptions. PLoS ONE, 6(6), Article e21101.

Uher, J. (2023). What’s wrong with rating scales? Psychology’s replication and confidence crisis cannot be solved without transparency in data generation. Social and Personality Psychology Compass, Article e12740.

University & College Union. (2019). Counting the costs of casualisation in higher education.

University & College Union. (2022). Four fights dispute FAQs.

Urbina-Garcia, A. (2020). What do we know about university academics’ mental health? A systematic literature review. Stress and Health, 36(5), 563–585.

Uygun Tunç, D., Tunç, M. N., & Eper, Z. B. (2022). Is open science neoliberal? Perspectives on Psychological Science, 174569162211148.

Vican, S., Friedman, A., & Andreasen, R. (2020). Metrics, money, and managerialism: Faculty experiences of competing logics in higher education. The Journal of Higher Education, 91(1), 139–164.

Weitzenboeck, E. M., Lison, P., Cyndecka, M., & Langford, M. (2022). The GDPR and unstructured data: Is anonymization possible? International Data Privacy Law, ipac008, Article ipac008.

Wicherts, J. M., Veldkamp, C. L. S., Augusteijn, H. E. M., Bakker, M., van Aert, R. C. M., & van Assen, M. A. L. M. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7.

Wilkinson, M. D., Dumontier, M., Aalbersberg, Ij. J., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J.-W., da Silva Santos, L. B., Bourne, P. E., Bouwman, J., Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T., Finkers, R., … Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3(1), Article 160018.

Williams, H. (2022). So, if this is going to be done within my usual hours as part of my current role, what would you like me to stop doing and what reassurances can you offer that this won’t adversely affect my career prospects? [Tweet]

Ziman, J. M. (2000). Real science: What it is and what it means. Cambridge University Press.
