Gamified Inoculation Against Misinformation in India: A Randomized Control Trial

© Harjani et al. 2023

Although the spread of misinformation is a pervasive and disruptive global problem, extant research is skewed towards "WEIRD" countries, leaving unanswered the question of how to tackle misinformation in the developing world, where media and consumption patterns differ. We report the results of a game-based intervention against misinformation in India. The game is based on the mechanism of psychological inoculation: borrowed from the medical context, inoculation interventions aim to pre-emptively neutralize falsehoods and help audiences spot and resist misinformation strategies. Though the efficacy of these games has been repeatedly demonstrated in samples from Western countries, the present study conducted in north India (n = 757) did not replicate earlier findings. We found no significant impact of the intervention on the perceived reliability of messages containing misinformation, on confidence judgments, or on willingness to share information with others. Our experience presents a teachable moment regarding the unique challenges of complex cultural adaptation and fieldwork in rural areas. These results have significant ramifications for designing misinformation interventions in developing countries, where misinformation is largely spread via encrypted messaging applications such as WhatsApp. Our findings contribute to the small but growing body of work on adapting misinformation interventions to cross-cultural settings.

The spread of misinformation online is widely documented as a threat to democracies worldwide (van der Linden, Maibach, et al., 2017). In India, the world's largest democracy, the sharing of misinformation online has been linked to mob violence and even killings (Arun, 2019; Sundar et al., 2021; Vasudeva & Barkdull, 2020). While social media platforms such as Facebook or Twitter can flag misleading content or remove it from their platforms, mobile instant messenger services such as WhatsApp and Telegram are limited by their end-to-end encrypted nature (Banaji et al., 2019). Private conversations or groups form a closed network where misinformation can circulate freely without monitoring, and studies have shown that this takes place in India (Badrinathan, 2021), as well as Burundi (Mumo, 2021), Nigeria, Brazil, and Pakistan (Pasquetto et al., 2020). Furthermore, a significant proportion of the misinformation shared in India continues to circulate on WhatsApp even after being debunked by professional, third-party fact-checkers (Reis et al., 2020). This trend has created a breeding ground for unverified, misleading, or false information, some of which originates from political parties (Chibber & Verma, 2018). Despite WhatsApp's countermeasures, which include implementing digital literacy programs, placing restrictions on forwarding, and broadcasting awareness-raising adverts, misinformation on the platform is persistent and has been exacerbated by COVID-19 (Al-Zaman, 2021; Ferrara, 2020). Given the limitations of implementing algorithmic solutions on private messaging platforms (Reis et al., 2020), user-level solutions are an increasingly important avenue of research.
Take-home Message

This study found that gamified inoculation interventions, which have worked well in Western countries, did not confer psychological resistance against misinformation to participants in India. This null result (possibly due to lower digital literacy rates) calls for further investigation into bottom-up interventions tackling misinformation on messaging platforms in developing countries.

The overwhelming majority of individual-level misinformation interventions have been tested on populations from developed, Western countries. This is a feature of behavioral science in general, where non-WEIRD (Western, educated, industrialized, rich, and democratic) samples are underrepresented (Henrich et al., 2010; Rad et al., 2018). Several factors could impede the generalizability of findings to India specifically.
Since 2017, year-on-year internet penetration in India has grown by 13% in rural areas compared to 4% in urban neighborhoods (Bhattacharjee et al., 2021). While misinformation can be spread by both urban and rural residents, the latter are likely to access the internet via 2G networks with limited resources for fact-checking and a tendency to distribute WhatsApp messages with low reflexivity, as a mode of group participation or a strategy to avoid feelings of exclusion (Banaji et al., 2019). Given the collectivist culture in India (Kapoor et al., 2003; Verma & Triandis, 2020), even amongst youth samples (Rao et al., 2013), the importance of group identities is heightened. Political parties frequently capitalize on these divisions, often along religious lines (Vaishnav et al., 2019). Furthermore, the institutionalization of misinformation dissemination by political parties in India, whereby "IT cells" troll and spread automated content as part of campaigning strategy, is not uncommon (Banaji et al., 2019; Campbell-Smith & Bradshaw, 2019).
To counter the spread of misinformation, several strategies have been researched at the individual level, the most well-known of which include fact-checking and "debunking," or correcting false information after exposure (Ecker et al., 2022; van der Linden, 2022; Walter & Murphy, 2018). Studies examining the efficacy of such corrective measures have revealed mixed results. Although some have found that fact-checking can improve accuracy assessments (Clayton et al., 2020; Porter & Wood, 2021; Walter & Murphy, 2018), there are several drawbacks to correcting misinformation post-exposure. One major issue concerns the continued influence of misinformation, or the tendency for people to continue making inferences based on misinformation even when they acknowledge a correction (Ecker et al., 2022; Lewandowsky et al., 2012), which limits the correction's potential effectiveness. This is further compounded by the findings that (a) not all audiences are receptive to fact-checks (Walter et al., 2020), (b) repeated exposure to misinformation can increase its perceived accuracy (Pennycook et al., 2018; Swire et al., 2017), and (c) corrections do not scale, meaning they rarely reach the same number of people as the initial misinformation (Roozenbeek & van der Linden, 2019; van der Linden, 2022). Lastly, corrective strategies are also difficult to implement on private messaging platforms given the invisibility of information flow in this sphere (Reis et al., 2020).
Accordingly, studies which have evaluated fact-checking and literacy interventions in developing countries have revealed inconclusive results. For example, Guess et al. (2020) tested the effect of providing U.S. and Indian participants with tips on how to spot misinformation. They found a positive impact on people's ability to detect false information in the U.S. and in a highly educated online Indian sample, but not in a face-to-face sample obtained in rural Northern India. Similarly, Badrinathan (2021) tested the impact of an intensive one-hour in-person media literacy training during the 2019 national election and found no significant beneficial effects.
One study tested the impact of a debunking intervention via WhatsApp broadcast messaging in Zimbabwe, another country with high WhatsApp usage, finding that participants had increased knowledge about COVID-19 (Bowles et al., 2020). Pasquetto et al. (2020) further found that, while corrections in encrypted group chats reduced belief in misinformation in India and Pakistan, WhatsApp users report corrections as unusual and socially awkward. Given the known challenges surrounding debunking and fact-checking, a promising effort against misinformation has been to pre-emptively debunk (or "prebunk") falsehoods, allowing individuals to acquire skills to detect and resist misinformation in the future (Lewandowsky & van der Linden, 2021). This approach is based on the theory of psychological inoculation (McGuire, 1961).

Theoretical Background: Prebunking and Inoculation Theory
Inoculation theory was originally developed in the 1960s and is based on the biological process of immunization (McGuire, 1961, 1964): just as exposure to a weakened dose of a pathogen can confer immunity against future infection(s), pre-emptively exposing people to weakened doses of misinformation, along with strong refutations, can cultivate cognitive immunity to future manipulation attempts. Inoculation theory has two key components. Firstly, the inoculation must have a forewarning to evoke threat, or the motivation for people to defend themselves from a potential attack on their attitudes (Compton, 2012). Being aware of one's vulnerability to manipulation is important for kick-starting resistance to persuasion (Sagarin et al., 2002). Secondly, much like the injection of a weakened dose of a virus can build immunity through the production of antibodies, exposure to a weakened version of a persuasive argument along with a counterargument can lower vulnerability to misleading persuasion attempts (McGuire, 1961). A meta-analysis of inoculation theory has found that it is effective at building resistance against persuasion across issues (Banas & Rains, 2010).
In more recent years, the theory has informed the design of inoculation interventions aiming to endow attitudinal resistance against online misinformation specifically (for in-depth reviews see Compton et al., 2021; Lewandowsky & van der Linden, 2021; Roozenbeek & van der Linden, 2018; van der Linden, 2022). Some recent applications of inoculation theory include even potentially polarizing topics such as climate change (van der Linden, Leiserowitz, et al., 2017), conspiracy theories (Banas & Miller, 2013), or vaccinations (Jolley & Douglas, 2017). However, all these studies aimed to inoculate people against misinformation about a specific issue. As such, they do not necessarily imply that the inoculation would be effective as a "broad-spectrum vaccine" against misinformation (Roozenbeek & van der Linden, 2018). This prompted a shift away from narrow-spectrum inoculations to those that incorporate persuasion techniques common to misinformation more generally (Roozenbeek & van der Linden, 2019). In other words, familiarity with a weakened dose of the underlying techniques that are used to spread misinformation could impart an increased cognitive ability to detect manipulative information that makes use of such misinformation tactics. These tactics include emotionally manipulative language, group polarization, conspiratorial reasoning, trolling, and impersonations of fake experts, politicians, and celebrities (Roozenbeek & van der Linden, 2019).
This strategy has demonstrated fairly consistent success (Basol et al., 2020; Cook et al., 2017; Roozenbeek & van der Linden, 2019), including long-term efficacy, provided inoculated individuals are given short reminders or "booster shots" of the lessons learned. Yet no study to date has tested the effect of inoculation interventions on the Indian population, and inoculation researchers have noted a lack of generalizability of inoculation scholarship to non-WEIRD populations (Bonetto et al., 2018), demanding that interventions be adapted and evaluated.

Recent Applications: Inoculation Games
Recent applications of inoculation theory also depart from the traditional method of providing participants with ready-made counterarguments (so-called "passive inoculation") and instead use an "active" form of inoculation whereby participants themselves play an active role in generating resistance to manipulation (Roozenbeek & van der Linden, 2018). Gamified interventions have proven to be a fruitful vehicle for active inoculation. One example of such an inoculation intervention is the online game Bad News (www.getbadnews.com): in this game, players find themselves in an artificial social media environment designed to mimic the features of widely used online platforms (Basol et al., 2020; Maertens et al., 2021; Roozenbeek et al., 2021; Roozenbeek & van der Linden, 2019). Across six levels, players are warned about the dangers of fake news, and they develop an understanding of several widely used misinformation techniques through exposure to a weakened dose of these tactics alongside ways to spot them. Evidence for the relative benefits of "active" inoculation is emerging, particularly because it may strengthen associative memory networks, contributing towards higher resistance to persuasion (Pfau et al., 2005).
However, the Bad News game, as well as two others, Harmony Square and Go Viral! (Basol et al., 2021), all focus on misinformation on public social media platforms (such as Facebook and Twitter). This reduces the potential applicability of these games in countries where direct messaging apps are a more common means of communication than public social media platforms. To address this problem, we engaged in a novel real-world collaboration with WhatsApp, Inc. (Meta Platforms) and developed a new game, called Join this Group, that inoculates people against misinformation on direct messaging apps (English version: https://whatsapp.aboutbadnews.com). The Hindi version of the game was tested in this study (further details in the Method section). Its purpose is to inoculate participants against four manipulation techniques commonly present in misinformation on direct messaging apps: the impersonation of a fake expert (Goga et al., 2015; Jung, 2011; Reznik, 2013), the use of emotional language to frame content (Gross & Ambrosio, 2004; Konijn, 2012; Zollo et al., 2015), the polarization of narratives to create hostility towards the opposition (Groenendyk, 2018; S. Iyengar & Krupenkin, 2018), and the escalation of an issue such that misinformation triggers offline acts of aggression (BBC Monitoring, 2021; Robb, 2021).

The Present Research
This paper seeks to address two gaps in the literature on misinformation interventions. We first aim to understand whether inoculation against misinformation can improve people's ability to spot misinformation that is commonly shared in a private messaging context (such as on WhatsApp). Second, our sample is from India, an understudied population where the spread of misinformation via private messaging platforms has been linked to violence (McLaughlin, 2018). We ran a field experiment in India testing the efficacy of the inoculation game, Join this Group.
This paper therefore makes two unique advancements to the literature. This study is the first to test an inoculation intervention against misinformation shared in the context of private messaging. This domain of information exchange is markedly different from public platforms in that the burden of identifying, addressing, and correcting misinformation falls on the user(s) (Pasquetto et al., 2020). Moreover, we test the effectiveness of these modified interventions in India (n = 757), the largest market for WhatsApp globally (Findlay, 2019). Both studies were approved by the Cambridge Psychology Research Ethics Committee (REC-2018-19/19).

Method
We conducted a 2 (treatment vs. control) × 2 (pre vs. post) mixed-design randomized controlled trial on a sample collected from eight North Indian states (Bihar, Chhattisgarh, Haryana, Jharkhand, Madhya Pradesh, Rajasthan, Uttar Pradesh, and the National Capital Territory of Delhi). Participants were recruited as part of media literacy workshops administered to 1283 individuals. The experiment was conducted door-to-door, in person, with the assistance of iPads and smartphones through which participants could access the online intervention. After providing informed consent, participants were asked to indicate their frequency of WhatsApp usage in the last twelve months on a 5-point scale ranging from "Never" to "More than once a day". Participants were then shown 16 screenshots of WhatsApp conversations in a randomized order (see Figure 1) and, following Roozenbeek et al. (2021), were asked to make three assessments: (1) how reliable they found the post, (2) how confident they were in their reliability assessment, and (3) how likely they would be to share the message. All three assessments were rated on a 1-7 Likert scale (1 being "Not at all", 4 being "Neutral", and 7 being "Very much"). Of the 16 images, four were screenshots of authentic WhatsApp conversations, of which two were fake news and two contained accurate information.

Figure 1 WhatsApp messages containing emotional misinformation messaging. This image is an example of one used in the experimental pre-test and post-test measure. The screenshot reads: "Friends, be careful", "Attempts are being made to kidnap a child from our friend's area. 10 boys were kidnapping him with the promise of biscuits. People in the area have caught those 10 and 5 more people", "The police has announced that 400 people had come to steal the child in this area. Wait for our next video that will report this and watch over your children carefully."
The remaining 12 were screenshots containing misinformation designed to demonstrate four manipulation techniques (fake expert, emotion, polarization, and escalation). The four real (non-misinformation) items were sourced from fact-checking websites, and the manipulative items were created by one of the authors and validated by two other authors to ensure that the conversations made appropriate use of a misinformation technique. Figure 1 shows an example of fear being elicited through emotional language in misinformation messaging.
Participants were then randomly assigned to play either Join this Group (treatment) or Tetris (control), consistent with previous gamified inoculation experiments (Basol et al., 2020; Roozenbeek & van der Linden, 2020). Gameplay for Join this Group lasted approximately 15 minutes, while Tetris participants had to play for a minimum of nine minutes before proceeding. Participants who played Join this Group were required to input a password to validate their completion. Following the game, as part of the post-test measure, all participants were asked to assess the same 16 WhatsApp conversations again and answer some demographic questions, including district, state, gender, education level, age group, how frequently they check the news, how frequently they use social media platforms, their interest in politics, their political ideology, and attitude scales assessing left-right and libertarian-authoritarian views (Park et al., 2013). Participants were also asked to provide their first thoughts upon hearing the term "fake news."
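For concreteness, the item-level structure of this measure can be sketched in code. The mapping below is purely illustrative (the actual item order and tags are not reported here); it shows how a participant's 16 Likert ratings on one scale could be averaged into the per-technique subcategory scores analyzed later:

```python
from collections import defaultdict

# Hypothetical item-to-technique tags: 12 misinformation items across four
# techniques plus 4 authentic items (the real study's ordering may differ).
ITEM_TECHNIQUE = (
    ["fake_expert"] * 3 + ["emotion"] * 3 +
    ["polarization"] * 3 + ["escalation"] * 3 + ["real"] * 4
)

def mean_by_technique(ratings):
    """Average one participant's 16 ratings (1-7 Likert) per technique."""
    assert len(ratings) == len(ITEM_TECHNIQUE)
    sums, counts = defaultdict(float), defaultdict(int)
    for rating, tech in zip(ratings, ITEM_TECHNIQUE):
        sums[tech] += rating
        counts[tech] += 1
    return {tech: sums[tech] / counts[tech] for tech in sums}

scores = mean_by_technique([1, 2, 3] * 4 + [7, 7, 6, 6])
# e.g. scores["fake_expert"] == 2.0 and scores["real"] == 6.5
```

Each participant thus contributes a pre-test and a post-test set of such subcategory means for reliability, confidence, and sharing.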

Treatment Game: Join this Group
We created a Hindi translation of the Join this Group game in collaboration with a Delhi-based non-profit, the Digital Empowerment Foundation (DEF). One major challenge that arose during field implementation was that our novel inoculation approach did not fit conceptually into DEF's media literacy strategy. As a condition of administering the intervention in rural India, DEF therefore required that we adapt the intervention to be more in line with their own media literacy strategy. As a result, the key difference between the English and Hindi versions of the Join this Group game is that players take on more of a traditional fact-checking role by posing as an undercover detective fighting misinformation online. This is in stark contrast to active inoculation games such as Bad News, Go Viral!, and Harmony Square. In these games, participants generally take on the role of a misinformation spreader because this perspective-taking exercise helps elicit "motivational threat," or the motivation to defend oneself against misinformation, a key component of inoculation theory. However, DEF advised that such a perspective was not in line with their traditional media literacy training and might be confusing for their target audience in India, who generally have low digital literacy. Accordingly, we created a new version of the game in which the player steps into the shoes of a fake news "detective." In the Hindi version, players are introduced to the game with a messaging-interface screen reading "Hello detective! We need you." The game explains that a group called "Big News" is spreading propaganda on WhatsApp in the fictional nation of "Santhala." The game then explains that understanding the techniques of the "Big News" group will require going undercover, since messages are encrypted and untraceable. Figures 2 and 3 below display in-game screenshots. See Figures S4-S8¹ for more screenshots.

Figure 2 Landing page of the game. The text reads "Play the game and watch out for notifications! Attention: You will receive a password at the end of the game. In order to take part in the study, you'll need to input this password." Blue button reads "Let's start."
Players go through four levels, each one teaching and testing the application of techniques present in misinformation (fake experts, emotional language, polarization, escalation). See Table 1 for an overview of the four levels. In the first level, players are shown how sharing messages in a group unannounced can result in being reported, an issue that can be overcome by impersonating a fake expert to boost the credibility of spurious claims. Players are then able to go undercover by spreading rumors such as "Mangoes cause cancer" using their fake pseudonym (see Figure 3). Such impersonations are pervasive throughout social media (Adewole et al., 2017; Goga et al., 2015; Jung, 2011; Reznik, 2013). The second level shows players how the use of emotionally charged language can create an atmosphere of chaos, especially when combined with a visual prompt. Emotional framing and language have been shown to increase salience and social media engagement (Rathje et al., 2021), grab attention (Konijn, 2012), and evoke emotional reactions (Gross & Ambrosio, 2004). The third level continues in the same context: players now need to apply their detective skills to prevent election manipulation. They are shown how repeated false messaging that uses partisan misinformation can vilify and antagonize the opposition (such as a political party), exaggerate the perceived distance between identities, sow doubt, and increase support for a particular group (Groenendyk, 2018; S. Iyengar & Krupenkin, 2018; Melki & Pickering, 2014). Finally, in the fourth level, players are told that they need to report the partisan misinformation being shared. This arouses suspicion of a disloyal supporter in the political party's WhatsApp group and motivates a targeted offline attack on the mole, which intensifies into protests and riots. Throughout this level, the game explains how online encouragement can escalate into offline aggression (BBC Monitoring, 2021; Robb, 2021).

¹ All figures and tables starting with S are to be found in the supplementary materials.
At the end of each level, players are given a summary of the techniques they have been inoculated against. Points and sanctions are also counted throughout: if players send a message that does not reflect use of the techniques learned, they are penalized; conversely, exposing propaganda as an undercover detective increases points. In all scenarios, players also see WhatsApp group members' reactions to the misinformation. Overall, the game aims to demonstrate how fabricated content can not only evoke belief in misinformation but also create an atmosphere of fear and polarization and elicit violent offline behavior.
The study was thus designed to test the efficacy of Join this Group, measured by three forms of assessment. We therefore hypothesized that:

H1: Treatment group participants find manipulative WhatsApp messages significantly less reliable post-gameplay compared to the control group.

H2: Treatment group participants are significantly more confident at assessing the reliability of manipulative WhatsApp messages compared to the control group.

H3: Treatment group participants are significantly less likely to want to forward manipulative WhatsApp messages to others compared to the control group.

Figure 3 The first two messages after starting the game. The top message reads "Hello Detective! We need you." The bottom message reads "Our great country Santhala needs you. A group called 'Big News' is spreading propaganda on a very large scale" (left). In-game screenshot from the first level. The top message reads "Well done! Find the profile of a person who is a fake doctor." The bottom message reads "Dr. Saurav Agrawal" (right).

Sample
After providing informed consent, we collected n = 1283 observations, of which n = 757 were complete responses. Participants did not always complete the full survey; we saw some drop-off after the intervention, as many participants did not complete the post-test. To understand whether the data were missing at random (MAR), we ran further analyses using the pre-test scores, condition allocation, and WhatsApp usage data to assess missingness (see the supplementary materials for full details). We were not able to study the demographic predictors of incomplete data because demographics were collected at the end of the study. The analysis found that the data were not missing at random: higher baseline confidence in assessing the reliability of manipulative items decreased the odds of missingness (OR = 0.030, 95% CI [0.002, 0.431]), and being assigned to the treatment group increased the odds of missingness (OR = 2.171, 95% CI [1.589, 2.967]). Please see Table S1 for full results.
During the data quality check, we further observed responses in which participants provided the same scale point consistently throughout the pre-test, the post-test, or both (e.g., "4"). We therefore removed any responses with repeated answer patterns² throughout an entire section (pre-test or post-test), resulting in a final sample size of n = 725. Of the final sample, 55% identified as female, 40% as male, and 5% as other; 49% reported being 18-24 years old, and 42% reported having obtained at least a bachelor's degree. The sample was also heavily left-leaning (M = 2.14, SD = 0.78). Finally, 65% of participants came from the state of Madhya Pradesh (17% from Rajasthan, 6% from Chhattisgarh, 5% from Uttar Pradesh, 4% from Jharkhand, 3% from Bihar). See Table S2 for a full breakdown of the sample.
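The straight-lining exclusion described above can be sketched as a simple filter. This is an illustrative reconstruction, not the authors' actual cleaning script (which was written in R), and the data structure is hypothetical:

```python
def is_straight_lined(responses):
    """True if every rating in a test block is the same scale point."""
    return len(set(responses)) == 1

def filter_straight_liners(participants):
    """Drop participants who straight-lined the pre-test OR the post-test.

    `participants` maps an ID to {"pre": [16 ratings], "post": [16 ratings]}
    (an assumed structure for illustration only).
    """
    return {
        pid: blocks for pid, blocks in participants.items()
        if not (is_straight_lined(blocks["pre"])
                or is_straight_lined(blocks["post"]))
    }
```

Applied to the raw data, a filter of this kind reduces the sample from complete responses to the analyzed n = 725.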

Results
All data cleaning and analysis were conducted using RStudio; scripts are available via the Open Science Framework: https://osf.io/abjrg. For the main analyses, the following packages were used: stats (for ANCOVA), TOSTER (for tests of statistical equivalence), and BayesFactor (for Bayesian t-tests).
We conducted a one-way ANCOVA to test H1 on the average post-test reliability scores, controlling for the baseline.

Table 1 A summary of the game from the player's perspective at each of the four levels.

Level 1 (Fake Expert): As undercover detectives, players join a WhatsApp group called "Breaking News" in the town of "Santhala." They share a fake message but are kicked out of the group, upon which they are encouraged to use a fake expert to gain credibility and witness how this impersonation can garner belief.

Level 2 (Emotional Language): Players are told that certain users in the group "Big News" are picking fights. As an undercover detective, they are tasked with spreading content to contribute to the chaos. The game then prompts players to share a fear- or anger-inducing message. This level shows players how, especially when paired with an image, emotional language can manipulate opinions and exacerbate chaos in the group.

Level 3 (Polarization): At this stage, Santhala is facing an election that the group "Breaking News" is attempting to manipulate. Players are told they must go undercover in one of the political candidate groups to spread polarizing information (e.g., damaging information about the opposition). The game shows how this cycle causes wider rifts between supporters.

Level 4 (Escalation): Players report the partisan misinformation being shared, which arouses suspicion of a disloyal supporter in the group and motivates a targeted offline attack that intensifies into protests and riots, illustrating how online encouragement can escalate into offline aggression.

To test H2, we followed the same analysis: we conducted a one-way ANCOVA on the average post-test confidence in reliability judgment scores, controlling for the baseline. We find no significant difference between groups, F(1, 722) = 1.79, p = 0.18, or for the subcategories; fake expert: F(1, 722) = 1.56, p = 0.21; emotion: F(1, 722) = 1.05, p = 0.31; polarization: F(1, 722) = 1.18, p = 0.28; escalation: F(1, 722) = 1.17, p = 0.28. A TOST equivalence test confirmed equivalence to zero for the average post-test confidence scores (in assessing the reliability of misinformation items), t(721.43) = -2.34, p = 0.01. A Bayesian t-test provided strong evidence for the null hypothesis of H2, with a Bayes factor of BF10 = 0.04 (error % = 0.00).
To test H3, or whether there was a difference in post-test scores of intended willingness to share misinformation, another one-way ANCOVA was conducted on the average post-test scores, controlling for the baseline. Results were non-significant, F(1, 722) = 1.46, p = 0.23, including on the subcategories. A TOST analysis on the post-test likelihood-to-share scores for misinformation items could not confirm statistical equivalence to zero, t(719.73) = -0.64, p = 0.26. However, a Bayesian t-test suggested strong support for the null hypothesis of H3, with a Bayes factor of BF10 = 0.07 (error % = 0.00). See Table S6 for Bayesian t-tests. Figure 4 shows the distribution of mean scores (reliability, confidence, and sharing) for all misinformation items. Similarly, Figure 5 displays the distribution of mean reliability scores broken down by technique.
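The TOST procedure reported above can likewise be sketched in miniature. The study used the TOSTER R package; the version below is a simplified large-sample normal approximation of Welch's t-test, with illustrative equivalence bounds:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def tost_p(a, b, bound):
    """TOST equivalence p-value for the difference in means of two samples.

    Tests whether mean(a) - mean(b) lies within (-bound, +bound) using two
    one-sided z-tests; equivalence is concluded if the larger of the two
    p-values falls below alpha. Normal approximation for large samples.
    """
    diff = mean(a) - mean(b)
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = NormalDist()
    p_lower = 1 - z.cdf((diff + bound) / se)  # H0: diff <= -bound
    p_upper = z.cdf((diff - bound) / se)      # H0: diff >= +bound
    return max(p_lower, p_upper)
```

With samples whose means differ by less than the bound, the returned p-value falls below conventional alpha levels (equivalence confirmed); when the true difference exceeds the bound, it does not, mirroring the pattern of results above.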
Though not hypothesized, to test whether the intervention increased skepticism towards factual messages, we also conducted a one-way ANCOVA testing for significant differences in post-gameplay scores for real news items, controlling for baseline scores. Ratings of reliability, F(1, 722) = 0.09, p = 0.76; confidence in judgments, F(1, 722) = 1.10, p = 0.30; and likelihood to share, F(1, 722) = 1.39, p = 0.24, did not differ significantly across treatment and control groups. Similarly, we tested whether the intervention improved participants' assessments of the two genuine screenshots capturing fake news sharing on WhatsApp. Using one-way ANCOVAs, we found no significant differences in ratings of reliability, F(1, 712) = 0.99, p = 0.32; confidence, F(1, 711) = 1.68, p = 0.20; or likelihood to share, F(1, 702) = 0.12, p = 0.73.
We ran linear regressions to check for covariate effects on the differences in pre-post measures of reliability, confidence, and sharing. We only find that higher frequency of checking the news significantly predicts a larger difference between pre and post confidence scores of misinformation items (p = 0.03). See Tables S33-S35 for the full results.

Discussion
Through this study we find that playing Join this Group had no significant effect on the veracity evaluations of either real or misinformation items in our sample of North Indians. This contrasts with previous studies that have found promising results using gamified inoculation in Western populations, including versions translated into German, Greek, French, Polish, and Swedish (Roozenbeek & van der Linden, 2020). Direct replications of the Bad News game online have also shown positive effects in urban populations in India (A. Iyengar et al., 2022) and, importantly, randomized trial data⁴ from a representative sample of the UK population using the English version of Join this Group found that the game significantly improved people's ability to detect fake news and their confidence in their own judgments, and reduced their overall willingness to share misinformation with others (Basol et al., 2022).
There could be a myriad of explanations for the discrepant results observed; we therefore group them into two broad categories: (1) cross-cultural (an Indian sample, with the intervention translated into Hindi) and (2) perspective shift (the player assumed the role of a detective).
Firstly, we discuss possible cross-cultural explanations for our findings. While inoculation interventions demonstrate clear potential to be effective (Traberg et al., 2022), it is not surprising that applying an intervention to understudied, non-WEIRD cultures (Henrich et al., 2010; Rad et al., 2018) might require an iterative process. Indeed, previous interventions aiming to reduce belief in and sharing of misinformation in India have faced similar difficulty. WhatsApp's media literacy campaigns and adverts have been criticized for a lack of alignment with local contexts (Medeiros & Singh, 2021). In-person and online digital literacy interventions have either demonstrated no reduction in belief in misinformation (Badrinathan, 2021) or an effect size limited to a highly educated subset (Guess et al., 2020). Here, we tested the efficacy of an inoculation intervention, Join this Group, that was modified for the local context through a partnership with a local non-profit and that aimed to teach participants fundamental techniques commonly used in the presentation of misinformation. We expected that our local adaptation and use of inoculation would improve individual veracity discernment of manipulative news items, yet we did not find this in our study.
We hypothesize that the cultural context, local values, and social preferences may have played a role; in particular, the process of successful inoculation may differ in the Indian population. Threat has long been conceptualized as a key and necessary component for inoculation to take place (McGuire, 1964), with most recent scholars agreeing that a threshold level of threat is required for inoculation to be conferred (Compton, 2021), as it serves to highlight one's vulnerability, which in turn motivates the build-up of resistance. While no quantitative minimum level of threat is defined in inoculation theory, studies assessing inoculation have traditionally measured threat as apprehension (Ivanov et al., 2022; Wood, 2007) and, more recently, in a motivational form (Banas & Richards, 2017). Unfortunately, we did not include measures of apprehensive or motivational threat in our study. Moreover, given the paucity of literature on non-WEIRD samples in psychology in general, it is difficult to make claims about the efficacy of inoculation without an explicit measurement of threat. Future research should consider incorporating such measures, informed by cultural variation in emotional experience and motivations (Kwan, 2016; Lim, 2004; Matsumoto et al., 2008; Mesquita & Walker, 2003).
The cross-cultural adaptation also required numerous language and context changes. For example, the chosen fictional country of "Santhala" may have carried pre-conceived notions for some, given its close resemblance to the Santhal tribe (The Editors of Encyclopaedia Britannica, 2012). All 12 manipulative WhatsApp prompts were translated from English to Hindi, which may have resulted in a loss of meaning and measurement validity (see Figure S9 for an example). In addition, based on 2011 national census data, we estimate that our sample is 74% rural (Government of India, 2016), a figure calculated from the sample's distribution across states (see Table S39). Shahid et al. (2022) find that rural samples have a lower ability to detect misinformation than their urban counterparts, suggesting that interventions in rural samples may face additional challenges. Moreover, rural areas are estimated to have a digital literacy rate of 25%, compared to 61% in urban areas (Mothkoor & Mumtaz, 2021), suggesting that our sample has low digital literacy overall; note that classifying a household as digitally literate requires only one person, aged above 5 years, who can operate a computer and use the internet. As such, it is likely that our game-based intervention was conducted with participants who had minimal experience operating digital devices. This is compounded by the fact that the majority of our sample was female (55%), a group that typically has lower digital literacy in this region (Rowntree et al., 2020). This could have hindered the intervention's efficacy. Furthermore, data quality was poor: only 26% of individuals who played the inoculation game entered the password correctly. Further analysis, however, demonstrated that this did not change our results (see Tables S36-S38).

Original Purpose

This paper aims to address the paucity of empirical research investigating misinformation interventions in developing countries. One important difference in developing countries is the spread of misinformation through private, encrypted networks such as WhatsApp, which poses different challenges than the circulation of misinformation on open networks such as Twitter and Facebook. As such, this paper features a study testing the efficacy of an "inoculation" game in India. We hypothesized that previously reported effects of this inoculation game would be replicated: reduced perceived reliability of and intent to share misinformation, alongside increased confidence in people's own assessments.
Secondly, the game departed from previous game-based inoculation experiments in that it changed the player's perspective from troll to detective. Although this change preserved the critical element of 'active' inoculation that has been effective previously (Pfau et al., 2005; Roozenbeek & van der Linden, 2019), it is possible that the role of being not only a detective but also undercover added further layers of complexity that reduced goal salience and clarity for participants, thus reducing the intervention's effectiveness. Practitioners may also consider running naturalistic studies in developing countries by broadcasting interventions on WhatsApp through local organizations' subscription lists for increased data availability (Bowles et al., 2020), or even by artificially constructing a social network in the lab (Pogorelskiy & Shum, 2017).

Figure 6. Distribution of post-pre differences between control and treatment groups. Red line drawn at y = 0.
Our study may be taken as a lesson in conducting interventions in underexplored populations. In particular, the typical data quality, representativeness, and methodological best practices for running such online experiments in India, and in non-WEIRD countries in general, are poorly understood and can impede the experimental process. As Campbell-Smith and Bradshaw (2019) note, "having digital connectivity does not mean people are digitally equipped to use online surveys. They have issues in reading and writing, but not in talking." Although we partnered with a local NGO in India, one must also account for gaps in the field implementation of scientific experimental designs, particularly by non-academic partners, as such gaps can increase the possibility of unobserved extraneous variables. Additionally, we observed non-random missingness in the data: being assigned to the treatment group increased the odds of an incomplete or missing response, which may have introduced bias into the results. However, as we found null results, no further correction analysis was conducted. Future replications, particularly those that find significant results, should pay attention to any differential attrition.
Future studies may also benefit from stronger local relationships (Sircar & Chauchard, 2019) as well as greater accounting for diversity within countries such as India, which have notable heterogeneity beyond age, gender, and education level (Deshmukh, 2019). For example, the question on political ideology in this study effectively asked people how "free" their ideology is rather than measuring their political ideology on a left-right scale (measure detailed in the supplement). Although India has historically been classified as clientelist, and thus there is no established scale to capture political ideology, some evidence suggests voting behavior among certain groups is not clientelist (Chibber & Verma, 2018). Future research will need to account for this in survey design. In the context of misinformation, educational interventions have shown differing efficacy depending on political party support (Badrinathan, 2021), while polarizing content on the basis of religion and caste is often featured in misinformation circulated in India (Al-Zaman, 2021; Arun, 2019; Campbell-Smith & Bradshaw, 2019). For digital interventions, Indian samples may also vary in levels of digital literacy by caste and consumption levels (Mothkoor & Mumtaz, 2021). Therefore, additional measures, such as whether someone is part of a scheduled group (caste or tribe), religion, income level, and political party affiliation, can facilitate a richer understanding of intervention efficacy in subgroups shaped by heterogeneous local factors. To isolate the effect of culture, experiments may also aspire to reach a more digitally literate population within non-WEIRD cultures, given that middle-class, urban populations in non-WEIRD countries are more likely to resemble the typically studied WEIRD population (Ghai, 2021).

Conclusion
This study was motivated by the scarcity of studies examining non-WEIRD populations in general (Henrich et al., 2010), and by the lack of research testing the effectiveness of misinformation interventions in democracies such as India (Badrinathan, 2021) that are threatened by the prevalence of misinformation. We find null results of a game-based inoculation intervention, Join this Group, on ratings of reliability, reported intent to share, and confidence in judgments of misinformation messages. Previous similar game-based inoculation interventions have been demonstrably successful (Basol et al., 2020; Roozenbeek & van der Linden, 2018, 2020). We thus conclude that the results reported here are more likely to reflect an interplay of cultural and experimental design factors. Taken together, we interpret these findings as a call for further adaptation and testing of inoculation interventions in non-WEIRD populations. Modifications may include measuring conceptual mediators such as motivational threat to elucidate potential differences in cross-cultural mechanisms, partnering with local researchers and universities, measuring digital literacy, and assessing behavioral outcomes such as news sharing online.

Missing Data
A total of n = 1283 consenting individuals began the survey, of which n = 757 were complete and valid responses used in the analysis. As sample demographics were only collected after the post-test measures, it is not possible to characterize differences in individual characteristics between missing and complete responses. However, after filtering to those who answered at least one question in the pre-test (n = 1038), Little's MCAR test (run in R using the misty package) on all three dependent variables (reliability, confidence, and sharing) suggested that the data were not missing completely at random, χ²(5) = 70.59, p < 0.001. We therefore ran a standard logistic regression (using the glm function from the stats package in R) to investigate patterns of missing data as a function of pre-test responses, creating a dummy variable where 1 = missing observation and 0 = complete response. For the manipulative items, higher pre-test confidence scores slightly reduced the odds of missingness (OR = 0.030, 95% CI [0.002, 0.431]) and being assigned to the treatment group increased the odds of missingness (OR = 2.171, 95% CI [1.589, 2.967]). That is, higher baseline confidence in assessing the reliability of manipulative items decreased the likelihood of missingness, while assignment to the treatment group increased it. No other pre-test measure affected the odds of dropout. We were not able to assess whether the missing data were due to demographic factors, as these were collected at the end of the study.
Note. LL and UL represent the lower-limit and upper-limit of the partial η² confidence interval, respectively.
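The missingness check described above, regressing a dropout indicator on treatment assignment and reading the exponentiated coefficient as an odds ratio, can be sketched as follows. The counts and the function name are illustrative assumptions, not the study's data; the original analysis used R's glm.

```python
# Logistic regression of a missingness indicator (1 = dropped out) on a
# treatment dummy, fit by Newton-Raphson. Illustrative counts only:
# 20/100 missing in control, 40/100 missing in treatment.
import math

def fit_logistic(x, y, iters=25):
    """Fit P(y=1) = 1/(1+exp(-(b0 + b1*x))) for a single predictor x."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        # Accumulate the gradient and the (negative) Hessian entries.
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            w = p * (1.0 - p)
            g0 += yi - p
            g1 += (yi - p) * xi
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        # Newton step: beta += H^{-1} * gradient (2x2 inverse in closed form).
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Treatment dummy (0 = control, 1 = treatment) and missingness indicator.
x = [0] * 100 + [1] * 100
y = [1] * 20 + [0] * 80 + [1] * 40 + [0] * 60
b0, b1 = fit_logistic(x, y)
odds_ratio = math.exp(b1)
print(f"OR for treatment = {odds_ratio:.3f}")
```

With a single binary predictor, the fitted odds ratio reduces to the cross-product ratio of the 2x2 table, so the regression and a direct tabulation agree.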

Intent to Share Fake Messages

BF₁₀ (prior = 0.707): Share-Post = 0.073; Share-Pre = 9.425E-08

Figure S4
In-game Screenshot -First screen shown after starting the game, introducing the character and motive Translation: Green Bar (Left to Right): "Score" "Sanctions" White Box: "Hello, Detective! We need you"

Figure S5
In-game Screenshot -Second screen shown after starting the game, depicting an explanation of the propaganda spreading on WhatsApp. Translation: Green Bar (Left to Right): "Score" "Sanctions" White Box: "Our great country Santhala needs you. A group called "Big News" is spreading propaganda at a very large scale" Blue Text (Left to Right): "New mobile, who's this?" "For what?" Blue text: "Big News?" "Santhala?"

Figure S7
In-game screenshot -Showing how a Fake News technique (using a fake expert) is taught. Translation: Green Bar (Left to Right): "Score" "Sanctions" White Box: "Just sending a message all of a sudden isn't the right way, what do you think, how will the group spread this health-related misconception?" Blue text: "By creating a fake doctor profile" "By shouting loudly"

Figure S8
In-game screenshot showing how the Fake News technique is taught. Continuation of Figure S7. Translation: Green Bar (Left to Right): "Score" "Sanctions" Grey Box: "Well done! Find the profile of a person who is a fake doctor" White Box: "Dr Saurav Agrawal" Blue Text: "It looks suspicious…" "Next"

Figure S9
Example of a translated manipulative WhatsApp prompt (with English version from another study) intended to show the use of a fake expert. Screenshot reads: "Hello! Nowadays it's been very dry. Even in the rainy season, it does not rain", "Not sure what's happening with the weather these days. Maybe this is happening because of the climate change in the environment", "Do you think this is happening because of climate change?", "I'm not sure, it's difficult to say, farming has become very difficult", "Hello, I am a scientist, climate change is a big reason for whatever is happening in our environment. We have to save our environment.", "Right, interesting".