
Abstract

The proliferation of misinformation has stimulated research into games and videos to reduce susceptibility to misinformation via psychological inoculation. This research field applies several recent extensions of inoculation theory in novel ways, posing the question of whether such interventions are indeed producing inoculation effects. We conducted a systematic review (k = 72) to establish the strength of the links between inoculation theory and the outcomes of the tests. We found that the studies did not pose hypotheses relating to the core factors of inoculation theory: threat conferral and counterarguing. Moreover, empirical designs and analyses have introduced confounding factors. Therefore, links between psychological inoculation theory and the tests are weak, and the question of whether the interventions inoculate remains unbroached. We recommend that future research include theoretically relevant variables in improved empirical tests and that researchers exercise caution in interpreting the results of existing studies as representative of psychological inoculation effects.

Introduction

The widespread dissemination and acceptance of misinformation presents a pervasive societal challenge impacting important social, political, and public health outcomes (Ecker, 2023). Recent examples of real-world events associated with online misinformation include violent race-based riots throughout much of Britain and Northern Ireland (Thomas & Sardarizadeh, 2024), death threats against meteorologists and emergency response workers in the context of major hurricanes (McGinley, 2024), and significant negative health impacts at the global population level during the COVID-19 pandemic (Kisa & Kisa, 2024). Misinformation also threatens democratic processes (Lewandowsky et al., 2023). Recent instances include intimidation of election volunteers amidst false claims of election rigging (Wendling, 2024) and misinformation used to justify and build support for government actions, for example, the effective shuttering of USAID (Myers & Thompson, 2025).
Interventions to reduce belief in misinformation at the individual level may be categorized as reactive or pre-emptive (Ecker et al., 2022; van der Linden, 2022). Reactive approaches address misinformation after people have been exposed to it. For example, a common reactive method is fact-checking, by which existing misinformation is shown to lack a factual basis. Reactive approaches have been found effective for correcting belief in incorrect claims (see Chan et al., 2017, for a meta-analysis), but they are known to suffer from various limitations arising from pre-exposure to misinformation and its “stickiness” (e.g., the continued influence effect; Lewandowsky et al., 2012; for a discussion see Lewandowsky & van der Linden, 2021). The need to address such limitations has driven an increase in research attention towards interventions to pre-emptively refute or “prebunk” misinformation ahead of people’s exposure to it (Ecker et al., 2022). Psychological inoculation is one such prebunking approach that has recently been employed toward reducing susceptibility to persuasion by misinformation (Compton et al., 2021; van der Linden, 2024). The aim of the current work is to review the theoretical and empirical approaches of studies that test digital interventions to combat misinformation via technique-based psychological inoculation.

Psychological Inoculation Theory

Psychological inoculation theory (Compton, 2013; McGuire, 1964) is a theory of resistance to influence built upon an analogy to biological inoculation. Just as medical vaccines deliver a weak dose of pathogens to stimulate an immune response to produce antibodies that defend against future infection, psychological vaccines deliver weak persuasive attacks to stimulate a threat response to produce counterarguments that defend against future persuasion (Banas & Rains, 2010; Compton, 2013, 2024). The basic model of psychological inoculation (Compton, 2013) depicts a two-step motivational-cognitive process which culminates in resistance to counter-attitudinal information (see Figure 1; Compton & Pfau, 2005; McGuire, 1964).
Figure 1. The basic model of psychological inoculation.
The first step of the basic model stimulates a perception that one’s attitudinal position on an issue is under threat. This threat response is triggered by exposing participants to arguments that attack their position, mirroring the biological immune response triggered by exposure to foreign pathogens. An optional addition to this core method of threat conferral is to present an explicit warning about future persuasive attempts (Compton & Pfau, 2005). The perception of threat serves to motivate attitudinal resistance and drive attention to the intervention’s main content: a two-sided pre-emptive refutation or inoculation “prebunk.” The prebunk consists of the aforementioned threat-inducing attacks followed by their refutations. In keeping with the biological analogy on which the theory is built, the attack messages of the prebunk are relatively weak (e.g., an incomplete or poorly supported argument) so as to not “overwhelm the system” (i.e., convince), while the refuting counterarguments must be sufficiently strong. This second step is commonly theorized to promote a post-intervention process of counterarguing akin to the production of “cognitive antibodies” that promote resistance to persuasion from subsequent full-strength attacks. Over the years, research has informed various extensions to this model. For example, inoculation is now applied in the context of contested issues rather than constrained, as it originally was, to topics on which practically everyone in a society agrees (i.e., “cultural truisms”; Compton & Pfau, 2005). Additionally, factors beyond post-intervention counterarguing have been proposed as instrumental in pathways to resistance (Banas, 2020; Compton, 2013; Pryor & Steinfatt, 1978).
The threat response is an enduring core feature of psychological inoculation (Compton, 2024; Compton & Pfau, 2005; Pfau, 1997). The conferral of threat mirrors a defining mechanism of biological inoculation by which the immune system reacts to the challenge presented by foreign pathogens. As the psychological theory is built upon a tight analogy to biological inoculation, threat conferral has long been considered essential to psychological inoculation (Compton, 2013; McGuire, 1964; Pfau, 1997). In testing its practical necessity, though, some research has challenged this strict requirement. Most notable of these studies is Banas and Rains’ (2010) meta-analysis, which found that threat did not predict inoculation. However, the researchers noted their test was likely underpowered. Additionally, subsequent research has argued that the traditional measure of threat operationalized an affective response that is not well aligned with the theory and introduced a more theoretically relevant “motivational threat” scale for use in inoculation research (Banas & Richards, 2017; Richards & Banas, 2018). To date, it remains a key theoretical underpinning that perceived threat is requisite to psychological inoculation such that creating effects of inoculation without it is considered impossible (Compton, 2013, 2020, 2024; Compton & Pfau, 2005; McGuire, 1964; Pfau, 1997).
The other core factor of conventional psychological inoculation is counterarguing. Counterarguing occurs both through the presence of the refutational material of the prebunk and through the subsequent cognitive process of generating counterarguments after the intervention (i.e., post-intervention counterarguing; Compton, 2013). Counterarguments adhere closely to the biological inoculation analogy as being akin to antibodies (Compton, 2013). Although processes other than counterarguing may lead to inoculation effects (e.g., source derogation; Miller et al., 2013), counterarguing is still one of two core factors of psychological inoculation and is centrally important to “active inoculation” specifically (McGuire, 1964). The threshold for active involvement that separates passive and active inoculation was set by McGuire and Papageorgis (1961). In active inoculation, participants are required to produce prebunking counterarguments themselves instead of having them provided (e.g., construct and write them down; Compton & Pfau, 2005; McGuire, 1964). Effects of active inoculation are theorized to be mediated by an enhanced process of post-intervention counterarguing (Compton & Pfau, 2005; McGuire, 1961, 1964).
In recent years, inoculation research has incorporated technique-based inoculation (also termed logic-based; Banas & Miller, 2013; Compton et al., 2021; Cook et al., 2017; Lewandowsky & van der Linden, 2021). While conventional inoculation interventions are fact-based in that they prebunk specific claims, technique-based inoculation seeks to preemptively refute the use of fallacious rhetorical techniques of persuasion. Technique-based inoculation has most often been applied and tested in the context of Digital Technique-based Inoculation Interventions against Misinformation (DTIIMs). These scalable game- and video-based interventions have stimulated a new area of applied misinformation research tasked with addressing rhetorical techniques common to misinformation (Lewandowsky & van der Linden, 2021). This predominantly social-psychological research reimagines a theory that has, for several decades, chiefly been the domain of communication scholars (Banas, 2020; Pfau, 1997).

Digital Technique-Based Inoculation Interventions Against Misinformation (DTIIMs)

DTIIMs are digital games and videos that seek to use technique-based psychological inoculation to reduce susceptibility to misinformation by refuting the use of fallacious rhetorical techniques of persuasion common to it (Lewandowsky & van der Linden, 2021). While the topical focus and specific content may vary from intervention to intervention, DTIIMs commonly address techniques such as using emotive language, presenting fake experts, conspiratorial reasoning, polarization, and ad hominem attacks. Reviews of the initial studies to test these interventions conclude they reduce susceptibility to misinformation by psychologically inoculating against the use of rhetorical techniques of persuasion (Lewandowsky & van der Linden, 2021; van der Linden, 2022, 2024; van der Linden & Roozenbeek, 2020). DTIIMs have attracted notable endorsements, including an EU Commission recommendation for use in European classrooms (European Commission, 2022). Their digital nature makes them highly scalable, and many millions of people have engaged with them (van der Linden, 2024).
Lewandowsky and van der Linden (2021) identify a novel combination of three significant extensions of conventional inoculation that run through technique-based approaches such as that of DTIIMs: (1) the interventions address contested issues regardless of participants’ initial attitudes as a form of “therapeutic inoculation,” (2) the classic concept of “active inoculation” is revived by DTIIM games, and (3) the interventions create “broad-spectrum protection” against misinformation. These departures from conventional inoculation introduce uncertainty as to whether DTIIM studies are indeed reporting inoculation effects (Fransen et al., 2024; Modirrousta-Galian & Higham, 2023), a question with serious implications for ongoing inoculation scholarship and communication research. In addition to these combined extensions, the empirical paradigm favored by DTIIM studies constitutes a fourth significant departure from conventional psychological inoculation research.
DTIIM studies typically employ an empirical approach that is new to psychological inoculation research, and which also differs from that of studies of non-digital technique-based inoculation interventions (e.g., Banas & Miller, 2013; Cook et al., 2017). By the conventional psychological inoculation testing paradigm, participants’ opinions on a topic are assessed, then they are assigned to a treatment or comparator group. Post-intervention, participants are presented with a strong argument attacking their pre-held belief, then their attitude is reassessed. Many studies include a measure of perceived threat (Banas & Rains, 2010), with higher levels in the treatment group taken as evidence that the inoculation process achieved its intermediate goal. The main dependent variable is attitude change (or a related belief or intention etc.), with less change predicted in the inoculated group relative to comparators. In contrast, tests of DTIIMs tend to assess participant ratings of the credibility (or reliability or veracity etc.) of short social media posts or headlines that do or do not employ a rhetorical technique. These stimuli are taken as a proxy for misinformation and non-misinformation respectively. Participants are assigned to a treatment or comparator group, then, post-intervention, are asked to rate the stimuli. Susceptibility to misinformation is inferred from mean group ratings, with lower credibility ratings of the misinformation stimuli predicted in the treatment group relative to comparators.
The four theoretical and empirical differences outlined render DTIIM studies a distinct class of inoculation research, bound by shared assumptions and methods while being distinguished from tests of non-digital technique-based inoculation interventions against misinformation. They warrant focused attention because of their popularity and endorsements, and because the results of DTIIM studies are commonly interpreted toward new understandings of psychological inoculation theory (e.g., Appel et al., 2025; Leder et al., 2024; Maertens et al., 2021, 2025). To guide an analysis, we formulate the four extensions of the theory attributed to DTIIMs as relating to two theoretical topics and two that are primarily empirical.

Theoretical Topics

The first proposed extension of inoculation theory relevant to DTIIMs pertains to their classification as therapeutic psychological inoculation, which is drawn from their application to contested issues regardless of prior beliefs (Compton, 2020; Ivanov et al., 2022; Lewandowsky & van der Linden, 2021; van der Linden, 2024). To illustrate the connection, consider that participants sampled from a population will have been exposed to a range of positions on contested issues. As such, a spectrum of attitudinal stances on the issue will be represented within the sample, some of which will have been informed by misinformation. When inoculating in the context of contested issues, previous research has tended to assess and protect whichever attitude participants brought to the intervention (e.g., protect the “pro” stance in one group and the “against” stance in the other; Compton & Pfau, 2005). In contrast, DTIIM research presents and prebunks examples of misinformation regardless of initial attitudes such that some misinformation stimuli (i.e., attack messages) will support rather than threaten the position of participants. When this approach succeeds, it is analogous, in medicine, to the therapeutic inoculation of those previously “infected” rather than to protective biological inoculation (Compton, 2020). An enduring question regarding therapeutic psychological inoculation’s status as inoculation proper hinges on there being no apparent role for perceived threat where attack messages encountered in the intervention support rather than oppose participant beliefs (Compton, 2020; Ivanov et al., 2022).
The second proposed extension of inoculation theory associated with DTIIMs is the revival of active inoculation in its application to DTIIM games such as Bad News (Basol et al., 2021; Lewandowsky & van der Linden, 2021; Roozenbeek & van der Linden, 2019, 2020). Through active technique-based inoculation, a participant faced with a rhetorical technique would note its use and the fallaciousness of the approach. That is, they would produce their own counterarguments against its use to persuade. This should then lead them to discount their perception of the credibility of that claim and others that employ the technique. Active inoculation in DTIIM research is attributed to games which, to the best of our knowledge, neither compel people to show that they have generated their own refutations nor offer the functionality for them to do so. Whether DTIIM games stimulate a process of post-intervention counterarguing, the theorized mechanism of active inoculation, is therefore an open question with implications for the connection between the theory and tests.

Empirical Topics

The third extension of inoculation theory attributed to DTIIMs is that they are said to create testable effects of broad-spectrum protection against misinformation (Basol et al., 2020; Lewandowsky & van der Linden, 2021). The potential for broad-spectrum protection is due to DTIIMs’ higher order focus on techniques common to misinforming claims generally rather than specific claims. In testing this, it is important to show the effect is constrained to misinformation. Otherwise, it might be better described as generalized skepticism rather than inoculation against misinformation. This distinction can be understood in terms of Signal Detection Theory (SDT). Under an SDT approach, discernment is the ability to distinguish between misinformation and non-misinformation, while response bias is an independent effect pertaining to both classes of information combined (Batailler et al., 2022; Guay et al., 2023). Response bias can be liberal or conservative, with a conservative response bias being consistent with an effect of generalized skepticism (e.g., tending to rate stimuli in general as not credible). Modirrousta-Galian and Higham’s (2023) reanalysis and meta-analysis of the original DTIIM studies employed a statistically robust SDT approach to disentangle discernment and response bias: Receiver Operating Characteristic (ROC) curve analysis. Their findings showed that the interventions tended to promote more conservative responding, but not discernment. They recommended that future tests for broad-spectrum inoculation against misinformation show that effects are specific to misinformation (discernment) and not so broad as to be equivalent for non-misinformation (response bias without discernment). It is unclear if subsequent research has sought to differentiate these effects in establishing inoculation against misinformation by DTIIMs, and if so, by what means.
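The discernment/response-bias distinction can be made concrete with the standard equal-variance Gaussian SDT formulas. The counts below are invented for illustration only; the formulas, not the data, are the point:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: sensitivity d' and criterion c.

    Framing (as in misinformation SDT work): the response is "credible".
    A hit = rating a non-misinformation item as credible; a false alarm =
    rating a misinformation item as credible. c > 0 is a conservative bias,
    i.e., a general reluctance to call anything credible.
    """
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)      # discernment
    c = -0.5 * (z(hit_rate) + z(fa_rate))   # response bias
    return d_prime, c

# Hypothetical counts: the intervention lowers "credible" responses to BOTH classes.
pre = sdt_measures(hits=70, misses=30, false_alarms=40, correct_rejections=60)
post = sdt_measures(hits=55, misses=45, false_alarms=25, correct_rejections=75)
# Here d' is nearly identical pre vs. post, while c shifts conservative:
# a pattern of generalized skepticism rather than improved discernment.
```

With these invented counts, mean credibility ratings drop for misinformation post-intervention, yet d′ barely moves; only the bias term changes. This is exactly the pattern a mean-ratings analysis would misread as broad-spectrum protection.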
The final expansion of inoculation research linked to DTIIMs is related to the novel testing paradigm and the stimuli that constitute test-items within it. Firstly, there is evidence of item effects across multiple studies (Basol et al., 2020; Roozenbeek et al., 2021; Roozenbeek, Traberg, & van der Linden, 2022; see van der Linden, 2024 for a discussion). Item effects arise from stimuli-specific characteristics beyond those of empirical interest that nonetheless influence participant responses, thus confounding results (Clark, 1973; DeCarlo, 2011). Where these effects are present but unaccounted for in analyses, results may not be indicative of effects for misinformation as they will not be generalizable to the population of stimuli (Judd et al., 2012). Also, there are potential test-item issues for DTIIM research relating to item-scales in which (1) some stimuli were produced by researchers and other stimuli were selected from real-world examples (Pennycook et al., 2021), (2) ratios of misinforming and non-misinforming stimuli are skewed (Hameleers, 2023), and (3) subscales are too short, with implications for the scale’s psychometric properties (Boateng et al., 2018).

The Present Study

It is unclear whether the recent departures from conventional psychological inoculation combined in DTIIM research render DTIIM effects those of a new type of psychological inoculation against misinformation or of something meaningfully different. Clarifying the nature of these effects has significant implications for the health of this classic theory, as they are commonly used to inform new mechanisms and boundary conditions. If DTIIM effects differ in important ways from those of psychological inoculation, this could precipitate a crisis for the theory as unrelated factors and pathways are added to increasingly inaccurate models.
This systematic review will first provide a summary description and evaluation of the theoretical and empirical characteristics of quantitative research into the effects of DTIIMs to ascertain the extent to which the research has established what their effects are. That overview will then facilitate an assessment of the theoretical and empirical approaches of the field to answer research questions pertaining to the strength of inferences that can be made for (active) inoculation against misinformation by DTIIMs. We will then discuss the implications of the findings and provide recommendations for future research.

Theoretical Research Questions

Theoretical research questions center around the classification of DTIIM effects as those of psychological inoculation interventions, and those of DTIIM games as active inoculation. In each case, the research questions posed are:
1.
To what extent are key factors, mechanisms, and pathways of models of psychological inoculation theory considered?
2.
How strong are the inferential links between tests and psychological inoculation theory?
3.
What are the potential implications for interpretation of existing findings based on the strength of the inferential links?
To ascertain if primary studies have enquired as to whether DTIIMs create psychological inoculation effects by the conventional understanding, we will assess their inclusion of the requisite factor of perceived threat. To ascertain if primary studies have enquired into whether DTIIM games classified as active psychological inoculation create active inoculation effects, we will assess their inclusion of post-intervention counterarguing.

Empirical Research Questions

Empirical research questions will consider the ability of tests to infer broad-spectrum protection against susceptibility to misinformation, and the accuracy of empirically informed estimates of susceptibility. Questions will be addressed in the first instance via an enquiry into the characteristics of dependent variables representing susceptibility to misinformation, then the characteristics of test-items and item-scales and their treatment in empirical tests. The research questions posed are:
1.
How has susceptibility to misinformation been operationalized and assessed?
2.
How well do test-items, scales, and statistical analyses assess susceptibility to misinformation?
3.
What are the potential implications for the ability of primary studies to test inoculation against misinformation?

Methods

This study’s pre-registered aims, research questions, methods, and categories of information extraction, along with the Rayyan export file, information extraction codebook, and working spreadsheets are available on the Open Science Framework (https://osf.io/9e7gz/). In accordance with our pre-registration, we conducted a search of titles, abstracts, and keywords of all available literature, whether published in a peer-reviewed journal or not, across five databases (PsycINFO, MEDLINE, Web of Science, Scopus, and Scopus Preprints; search conducted June 26, 2024). Search terms represented psychological inoculation, digital interventions, and misinformation (see Figure 2). Results were filtered to include only works published in English.
Figure 2. Search syntax used for Web of Science.
Searches returned 747 records, which were reduced to 415 following manual checks of the automated removal of duplicates. Two authors then screened titles and abstracts by the pre-registered inclusion and exclusion criteria, removing reports that did not include a quantitative study, did not test an intervention, or were not concerned with susceptibility to misinformation. Of the 111 reports retained for full-text screening, all but one were retrieved in full.1 Both authors then screened the retrieved reports, retaining all that included a study that quantitatively tested a game or video designed to reduce susceptibility to misinformation via technique-based psychological inoculation. Any conflicts in screening were resolved via discussion between the two authors. See Appendix A for our working definitions of key terms and the complete inclusion and exclusion criteria.
We then sent emails to the 31 authors whose email address appeared in one of the 23 retained reports asking for any other potentially relevant literature. This led to 14 replies and 14 more reports for full-text screening. Also included were six further reports that were otherwise known to us. Of the total 20 additional reports, 17 were retained after full-text screening. The systematic search and screening process culminated in 40 reports which contained a total of 72 relevant studies for inclusion (see Figure 3; see Appendix B for the reference list of included reports).
Figure 3. PRISMA 2020 flow diagram for new systematic reviews with included searches of databases, registers and other sources (Page et al., 2021).
We developed a codebook to facilitate the extraction of information pertaining to the pre-registered variables of interest. Included were 44 distinct pre-registered variables spanning four domains of study characteristics: descriptives, inclusion of theoretically relevant elements, the operationalization of susceptibility to misinformation, and characteristics relating to test-items and item-scales. Descriptive characteristics were coded to inform overall classifications of the interventions and their tests. The coding in this domain pertained to information that included the name and type of DTIIM, features of the study design, and sample characteristics. The inclusion of theoretically relevant factors within studies was coded to inform understandings of the links between psychological inoculation theory and tests of DTIIMs (research questions A1–A3). Theoretical variables included perceived threat, post-intervention counterarguing, and any alternate factors that may appear in the studies. Study characteristics relating to the operationalization of susceptibility to misinformation were coded to inform empirical understandings (research questions B1–B3). Empirical categories included dependent variables, the analyses conducted, and the metric of the reported outcome (e.g., ratings pertaining to which type of item[s]), as well as characteristics of test-items (e.g., were they selected or produced) and item-scales (e.g., ratios of misinformation and non-misinformation). For information to be coded as reported within a study it had to either appear in the main body of the paper or be referenced in the main body as appearing in a specific section of a supplement. However, we habitually engaged with supplementary materials and noted relevant information when it was found. The codebook also directed the methods of coding, provided examples, and gave other notes to ensure the process followed the pre-registered plan. For example, for a scale to be coded as validated it had to meet pre-set criteria relating to content validation, scale construction, and scale evaluation based on Boateng et al. (2018).
Two authors applied the codebook to extract information from a random sample of 10% of the studies retained after the first round of full-text screening, as per the pre-registered plan (k = 5; n = 220 discrete datapoints). A third author assessed the proportion of agreement between raters at 94.3% overall, with inter-rater agreement of κ = 0.89 (Cohen, 1960; pre-registered minimum acceptable agreement was 80%). Information from the remaining studies was then extracted by the first author in two passes, the first adhering to the original codebook and the second with the addition of 13 further categories, as consistent with the pre-registered plan. For example, two categories added on the second pass were: (1) whether the intervention focused on misinformation on a specific topic (topic-focused Y/N) and (2) if Y, the name of the topic (e.g., vaccine misinformation). This culminated in information extraction across 57 variables in total (see the data extraction spreadsheets). Finally, eventualities unanticipated by the codebook were discussed amongst the authors. For example, two studies (Tandoc & Seet, 2023) measured perceived threat via questions pertaining to known instances of having been misled in the past rather than a perception of threat posed by future exposure to misinformation. These studies were not coded as including a theoretically relevant measure of threat.
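The agreement statistic reported above chance-corrects raw percentage agreement using each rater’s marginal code frequencies. A minimal sketch of Cohen’s (1960) kappa, with hypothetical rater codes (e.g., “threat measured? Y/N” decisions):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's (1960) kappa for two raters assigning nominal codes.

    kappa = (p_observed - p_chance) / (1 - p_chance), where p_chance is the
    agreement expected from the raters' marginal code frequencies alone.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    p_chance = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical coding decisions from two raters over ten datapoints.
a = ["Y", "Y", "N", "N", "N", "Y", "N", "N", "Y", "N"]
b = ["Y", "Y", "N", "N", "Y", "Y", "N", "N", "Y", "N"]
kappa = cohens_kappa(a, b)  # 90% raw agreement here yields kappa = 0.8
```

Note how raw agreement (90% in this toy example) exceeds kappa (0.8): skewed marginals make some agreement expected by chance, which is why the review reports both figures.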
We took our method of answering research questions pertaining to the strength of the inferential links between psychological inoculation theory and the outcomes of tests of DTIIMs from Oberauer and Lewandowsky (2019). Inferential links are defined as the relations between the premises of a theory and the results of tests, with strength being a function of (1) the number of auxiliary assumptions required to render the inference deductive and (2) the credibility of those assumptions. For example, if threat was not measured in a test, to conclude that a process of inoculation had caused the effects would require an assumption of threat conferral. The strength of that inference would then rest on the credibility of that assumption and any other assumptions so required.

Results

Descriptives

The 72 studies retained from 40 reports show a steady upward trend in research interest over time since 2019 (see Figure 4). Of the studies, 46 (64%) randomly allocated participants to conditions and thus constituted experiments. Of those, 39 (85% of experiments; 54% overall) were controlled trials. Most studies drew a convenience sample (k = 58; 81%), the location of which was most often the US or indeterminate (kUS = 27; kIndeterminate Location = 27; kGlobal South = 4). Most studies were conducted online (k = 61; 85%; kProlific = 29; kGame Website = 15) and approximately half were pre-registered (k = 35; 49%).
Figure 4. Number of reports and studies by year.
Note. To 6/2024 = peer-reviewed work published in 2024 up to the date of the search in June 2024. Gray Lit. = all gray literature up to June 2024.
The full set of studies tested 23 distinct DTIIMs, of which 14 were games and nine were videos. There was an approximately even split between interventions that addressed misinformation broadly and those that focused on a topic (k = 12). Of the topic-focused interventions, the majority were health-related (k = 9), usually concerning vaccination or COVID-19 misinformation (k = 7). Of the 72 studies, 59 (82%) tested a game-based DTIIM, 43 of which focused on a game described by designers as active inoculation. No studies described a video-based DTIIM as an active inoculation intervention. Approximately half of the tests of games and 40% of tests overall (k = 29) were of Bad News, a browser-based game that served as the template for other DTIIM games classified as active inoculation (e.g., Go Viral! and Bad Vaxx). The other stand-out intervention to have undergone repeated testing was the emotional language video featured in Roozenbeek, van der Linden, Goldberg, et al. (2022; k = 12).

Theoretical: Inclusion of Relevant Factors

Research question A1 focuses on the extent to which key factors, mechanisms, and pathways of psychological inoculation were considered by the studies. We first examined the measurement of perceived threat to inform an assessment of the link between tests of DTIIMs and a key premise of psychological inoculation theory that threat conferral is requisite (A2). In total, five studies (7%) measured perceived threat (Basol et al., 2021: Study 2; Capewell et al., 2024: Study 2; Maertens et al., 2025: Studies 2, 4, and 5). Some version of the “motivational threat” scale (Banas & Richards, 2017) was employed by four studies (Basol et al., 2021: Study 2; Maertens et al., 2025: Studies 2, 4, and 5), one of which (Basol et al., 2021: Study 2) also reported threat as measured by the “apprehensive threat” scale (Burgoon et al., 1978). The fifth study took a broad composite of those and other scales to operationalize perceived threat (Capewell et al., 2024: Study 2). Only the three studies from Maertens et al. (2025) included hypotheses relating to threat, all of which pertained to longitudinal effects over time.2 No studies included a hypothesis relating to threat perception during or immediately post-intervention.
We then considered the extent to which studies of DTIIM games classified by creators as active inoculation (e.g., Bad News and Bad Vaxx but not Fake News Detective or Spot the Troll) measured post-intervention counterarguing. We did this to ascertain the link between tests of active DTIIMs and a premise of psychological inoculation theory that producing counterarguments (i.e., active inoculation) will stimulate post-intervention counterarguing in inoculated participants (A2). No test of a DTIIM game described as active inoculation considered post-intervention counterarguing, though one test of a video-based DTIIM included a single-item measure of this construct (Hughes et al., 2024).
We also catalogued the inclusion of other theoretically relevant factors and mechanisms beyond those of the basic model. A total of nine studies assessed factors other than threat and counterarguing (kMemory = 4; kAnger = 3; kSelf-Efficacy = 2; kSource Credibility = 1; kIssue Involvement = 1). Memory for refutations over time, as a longitudinal outcome, was the most common alternate factor (Capewell et al., 2024: Study 2; Maertens et al., 2025: Studies 2, 4, and 5).

Empirical: Measures, Test-Items, and Item-Scales

To address our research question concerning how susceptibility to misinformation has been assessed (B1), we determined how many studies included susceptibility to misinformation as a dependent variable and the ratings upon which the reported effects were based. Of the 72 studies, 67 (93%) operationalized susceptibility to misinformation via ratings of test-stimuli such as simulated social media posts or headlines. Of the stimuli rating studies, 31 involved ratings of perceived reliability, 14 of veracity (or accuracy or truth), 11 of manipulativeness, and eight of trustworthiness. Additionally, 11 studies assessed technique recognition with multiple-choice questions, and 20 enquired into sharing intentions (or willingness or likelihood).
Research question B2 concerned how well susceptibility to misinformation was estimated by statistical analyses and operationalized by the scales of test-items employed in DTIIM research. To address this question in relation to the ability of tests to infer broad-spectrum protection against misinformation, we considered the inclusion of the dependent variables of discernment and response bias (or skepticism). Of the 67 stimuli rating studies, k = 39 (58%) reported analyses related to discernment and k = 10 (15%) reported analyses related to response bias or skepticism. Of those studies, 13 calculated discernment and eight calculated response bias via a signal detection theory (SDT) approach, all of which used receiver operating characteristic (ROC) curve analysis. Figure 5 shows that the trend over time has been to consider these outcomes more, especially discernment.
Figure 5. Percentage of studies by year including discernment and response bias or skepticism as dependent variables.
Note. To 6/2024 = peer-reviewed work published in 2024 up to the date of the search in June 2024. Gray Lit. = all gray literature up to June 2024.
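The distinction between discernment and response bias (or skepticism) can be made concrete with a minimal sketch. The following uses invented 7-point credibility ratings and is not a reconstruction of any reviewed study's analysis; the rank-based AUC computed below is the non-parametric quantity that ROC curve analysis estimates.

```python
# Illustrative signal-detection summary of simulated 7-point credibility
# ratings: higher AUC = better discernment between misinformation and
# non-misinformation; the overall mean rating serves as a crude index of
# response bias (lower = more general skepticism). All data are invented.

def rating_auc(misinfo, noninfo):
    """Rank-based AUC: probability that a randomly chosen non-misinformation
    item is rated more credible than a randomly chosen misinformation item
    (ties count as 0.5)."""
    pairs = [(n > m) + 0.5 * (n == m) for n in noninfo for m in misinfo]
    return sum(pairs) / len(pairs)

# Hypothetical post-intervention credibility ratings (1-7 scale).
misinfo_ratings = [2, 3, 1, 2, 4, 2]   # misinformation items
noninfo_ratings = [5, 6, 4, 5]         # non-misinformation items

auc = rating_auc(misinfo_ratings, noninfo_ratings)
bias = sum(misinfo_ratings + noninfo_ratings) / (
    len(misinfo_ratings) + len(noninfo_ratings)
)

print(f"discernment (AUC):        {auc:.2f}")
print(f"mean rating (bias index): {bias:.2f}")
```

An AUC of .5 indicates no discernment. A post-intervention drop in the mean rating across both stimulus classes with an unchanged AUC would indicate generalized skepticism rather than improved discernment, which is why both quantities are needed to infer broad-spectrum protection.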
Regarding the mitigation of potential confounds associated with item effects within DTIIM studies, we note two apparent approaches: item matching and treating items as a crossed random factor with participants in statistical analyses. Regarding item matching, 19 stimuli rating studies (28%) attempted some method of aligning misinformation and non-misinformation stimuli on extraneous characteristics such as word count and topic (Appel et al., 2025: Studies 1–3; Capewell et al., 2024: Studies 1 and 2; Maertens et al., 2025: Studies 3–5; Pennycook et al., 2024: Studies 1–5; Roozenbeek, van der Linden, Goldberg, et al., 2022: Studies 1–6). Regarding the statistical approach, four of the stimuli rating studies (6%) included an analysis that treated items as a crossed random factor with participants in a mixed-effects model. This constituted the primary analysis in two studies (Leder et al., 2024: Studies 1 and 2), while in two others it appeared as a robustness check (Lees et al., 2023: Study 1; Roozenbeek et al., 2020). We noticed a handful of other studies included such analyses in their Supplementary Materials but did not reference them in the main body of the study, so they were not coded as reported analyses (e.g., Appel et al., 2025: Studies 1–3).
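The rationale for treating items as a random rather than fixed factor can be illustrated with a toy simulation (all numbers invented): when a mean effect is carried by a few idiosyncratic items, an analysis that averages over a fixed item set overstates how well the effect generalizes to new stimuli.

```python
# Toy illustration of item effects: a few idiosyncratic items drive the
# apparent intervention effect. All values are invented for illustration.
import random
import statistics

random.seed(7)
n_items = 8
# Hypothetical per-item change in credibility ratings after an intervention:
# most items barely move; two items happen to show a large drop.
item_effects = [random.gauss(0.0, 0.1) for _ in range(n_items - 2)] + [-1.5, -1.2]

mean_effect = statistics.mean(item_effects)   # what a fixed-items analysis sees
sd_items = statistics.stdev(item_effects)     # between-item spread it ignores

# Resampling items mimics drawing a new item set from the same population;
# a wide interval warns that the effect may not replicate with new stimuli.
boot = sorted(
    statistics.mean(random.choices(item_effects, k=n_items))
    for _ in range(2000)
)
lo, hi = boot[50], boot[1949]  # approximate 95% interval

print(f"mean effect over items: {mean_effect:.2f}")
print(f"between-item SD:        {sd_items:.2f}")
print(f"bootstrap-over-items interval: [{lo:.2f}, {hi:.2f}]")
```

A mixed-effects model with crossed random intercepts for participants and items formalizes this intuition by estimating the between-item variance rather than ignoring it, which is why the statistical approach catalogued above matters.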
To help address our research question regarding the potential implications of item effects in DTIIM research (B3), we sought to identify stimuli that may have been repeatedly used in testing. This assessment was made because item effects pertain to characteristics of specific items such that when items are often recycled, confounding effects may be replicated across tests (DeBruine & Barr, 2021). To that end, we coded all 1,436 individual item-uses across the 67 stimuli rating studies. This revealed 21 specific items that stood out for having been repeatedly used to assess the effects of DTIIMs, and Bad News in particular (nRange Total = 10–25 uses per item; nBad News = 10–19 uses per item; kBad News = 29 tests in total). Of those items, three operationalized non-misinformation and 18 misinformation, with three misinformation items representing each of six rhetorical techniques featured in Bad News (see the online Supplemental Spreadsheets for details).
To answer our research question concerning the quality of estimates of susceptibility to misinformation with regard to the characteristics of item-scales (B2), we considered: the inclusion of produced and sampled test-items; the ratios of misinforming and non-misinforming items in scales; the length of scales and subscales; and reports of scale validity and reliability. We identified a total of 89 item-scale-uses within the 67 stimuli rating studies. Regarding the inclusion of items that were either produced by researchers or selected from real-world examples, n = 45 (51%) scales comprised solely produced stimuli, n = 26 (29%) comprised solely selected stimuli, n = 16 (18%) contained produced misinforming and selected non-misinforming stimuli, and n = 2 (2%) did not report this information. Ratios of misinforming to non-misinforming items in scales ranged between 1:1 (50% misinformation) and 1:0 (100% misinformation). Across all scales appearing in stimuli rating studies, the ratio of misinformation to non-misinformation was 2.3:1 (70% misinformation), with an average total length of 11.84 items containing 8.27 misinformation and 3.57 non-misinformation items (see Figure 6 for a graphical representation of scale and subscale length by year). Regarding the psychometric properties of item-scales, we found two studies (3%) used a scale that was validated by our pre-set criteria and 14 (21%) reported scale or subscale reliability (α = .14 to .88). Of the 46 individual reports of reliability appearing in those studies, n = 17 (37%) returned α < .60, 15 of which pertained to non-misinformation subscales.
Figure 6. Average number of items in scales and subscales by year.
Note. To 6/2024 = peer-reviewed work published in 2024 up to the date of the search in June 2024. Gray Lit. = all gray literature up to June 2024.

Discussion

This systematic review aimed to evaluate the theoretical and empirical approaches of the body of literature testing digital interventions to combat misinformation via technique-based psychological inoculation. Its purpose was to assess the ability of quantitative studies to accurately infer effects of inoculation against misinformation. The first set of research questions (A1–A3) focused on the inclusion of theoretically relevant factors in DTIIM tests to facilitate an assessment of the strength of inferential links between the tests and inoculation theory. The second set (B1–B3) comprised empirical questions concerning the tests and the interpretation of their results.

Theoretical Topics

Inclusions of Theoretically Relevant Factors, Pathways, and Mechanisms

Our first theoretical research question (A1) concerned the consideration of theoretically relevant factors, mechanisms, and pathways of inoculation in DTIIM studies. We found that the studies did not include core factors of inoculation, with the exception of five studies that measured perceived threat and one that considered post-intervention counterarguing. No studies hypothesized that threat would be conferred during the intervention, as is theoretically required for inoculation to occur (Compton, 2013, 2020, 2024; Compton & Pfau, 2005; McGuire, 1964; Pfau, 1997). This stands in contrast to 20 threat-measuring studies out of the 30 between 1990 and 2009 that were included in Banas and Rains’ (2010) meta-analysis of inoculation experiments. We do note, however, that multiple DTIIM studies acknowledged the importance of threat in producing inoculation effects and expounded upon the intervention’s method of forewarning to argue that threat will have been conferred (e.g., Appel et al., 2025; Basol et al., 2021; Leder et al., 2024; Maertens et al., 2021; Roozenbeek et al., 2021; van der Linden & Roozenbeek, 2020). This approach conflicts with Compton’s (2013) point that it is threat conferral rather than forewarning that is requisite to inoculation, and that threat is a psychological outcome that must be measured.

The Strength of the Inferential Links

Our second theoretical research question regarded the strength of the inferential link between inoculation theory and outcomes of DTIIM tests (A2). The strength of an inferential link depends on the number and credibility of any assumptions required to establish a deductive link between the outcomes of tests and the theory (Oberauer & Lewandowsky, 2019).
We find that the conferral of threat by DTIIMs must be assumed in order to deduce that their effects may be those of psychological inoculation. This is evidenced by the lack of hypotheses predicting threat conferral during the intervention, along with the small number of instances in which threat was measured. Several challenges exist to the credibility of the assumption. One issue negatively impacting credibility is the classification of DTIIMs as therapeutic inoculation interventions (Lewandowsky & van der Linden, 2021). Indeed, the question for inoculation theorists of “whether these effects are inoculation effects or something else” (Compton, 2020, p. 331, emphasis in the original) arose because therapeutic inoculation does not challenge participant beliefs, thus rendering the role of threat unclear. Other factors reducing the credibility of assumptions of threat conferral by DTIIMs apply where they lack an explicit forewarning and employ cartoon-based, humorous content, as at least some do. Moreover, our findings highlight that descriptions of the way a DTIIM forewarns do not necessarily increase the credibility of an assumption that it does. For example, the only study to measure perceived threat immediately after playing Bad News outlined the various methods by which the intervention confers threat, although testing found no such effect (Maertens et al., 2025: Study 2).
Post-intervention counterarguing is the other core theoretically relevant factor of the basic model of inoculation. This cognitive process of refuting the content of attack messages in post-testing is not essential to inoculation generally, but it is the theorized mediator of active inoculation effects (Compton & Pfau, 2005; McGuire, 1964). None of the studies of DTIIM games classified as active inoculation measured post-intervention counterarguing. Thus, any inference that they derive their effects by the theorized mechanism of active inoculation must rest on assumption. Negatively impacting the credibility of this assumption is the lack of a requirement for DTIIM players to produce their own refutational material and the absence of in-game functionality allowing them to demonstrate whether they had.
The apparent absence of the intervention features required for active inoculation suggests that the producers of active DTIIM games may have designed them with theoretical principles in mind that differ from those previously established for active inoculation. If so, this would further reduce the credibility of an assumption that the effects of DTIIM games are those of active inoculation. To provide an overview, we checked the literature for examples of how active inoculation is conceptualized in the field of inoculation against misinformation. We found research variously describes DTIIMs as active where they: (1) allow rather than require participants to generate counterarguments (akin to “implicit inoculation,” Basol et al., 2020, 2021; Compton, 2013, p. 231; Roozenbeek, van der Linden, Goldberg, et al., 2022); (2) are a form of experiential learning (Axelsson et al., 2024; Basol et al., 2020, 2021; Leder et al., 2024; Roozenbeek & van der Linden, 2019, 2020); (3) employ perspective taking (Droog et al., 2024; van der Linden, 2022); or (4) are either a game or a video (i.e., all DTIIMs but not non-digital interventions; Lu et al., 2023). None of these conceptualizations of active inoculation adheres to that established in the seminal inoculation literature (Compton & Pfau, 2005; McGuire, 1964; McGuire & Papageorgis, 1961). Where there is no consensus on what constitutes an active inoculation intervention, any inference that the outcome of a test was the product of active inoculation may be unconnected to psychological inoculation theory. From this we judge the inferential link between inoculation theory’s concept of active inoculation and tests of DTIIM games to be weak to nonexistent.

Implications of the Strength of Inferential Links

The final theoretical research question of this review (A3) considered the implications of interpreting the effects of DTIIM tests as those of psychological inoculation. The weakness of the inferential link between inoculation theory and DTIIM tests shows that such claims are only weakly supported. Consequently, the practice of using the results of DTIIM tests to construct new boundary conditions and models of inoculation could threaten accurate theoretical understandings, potentially causing a crisis for the theory. Further, the weak to nonexistent inferential link to active inoculation casts doubt on the appropriateness of using DTIIMs to compare active and passive inoculation processes. The findings also have implications for the further development of DTIIMs. Such work may produce suboptimal outcomes when manipulating a DTIIM’s use of the factors, mechanisms, and pathways of inoculation if the intervention’s original effects were not those of inoculation.

Empirical Topics

Measurement of (Broad-Spectrum) Susceptibility to Misinformation

To address our empirical research question regarding how susceptibility to misinformation has been operationalized and assessed in the field of DTIIM research (B1), we first quantified the dominance of the credibility rating empirical paradigm. We found stimuli ratings via Likert-type scales to be almost ubiquitous, with 67 of the 72 included studies employing them. We then determined these studies’ inclusion of misinformation and non-misinformation stimuli as test-items, and of the dependent variables credibility discernment (between misinformation and non-misinformation) and response bias (or skepticism). We made this enquiry because the concept of broad-spectrum inoculation against misinformation entails that effects should at least be greater for misinformation than for non-misinformation (i.e., discernment), lest they be better conceptualized as a blanket effect on any information (e.g., generalized skepticism). We found that around half of the stimuli rating studies in this review reported discernment, while considerably fewer (approximately one in seven) reported response bias or skepticism. We note that in some early DTIIM research, non-misinforming items were not originally intended for analysis, but rather, to ensure scales were not solely constructed from misinformation items (van der Linden, 2024). However, we hold that wherever non-misinformation is omitted from a stimuli rating test, results will be unable to establish discernment and, therefore, cannot speak to effects with the specificity expected of inoculation against misinformation.

Test-Stimuli and Accuracy of Estimates of Susceptibility to Misinformation

We address our research question concerning how accurately the test-items, item-scales, and statistical analyses employed by DTIIM studies assessed susceptibility to misinformation (B2) in reference to two main topics: the methods by which potential item effects were broached, and the characteristics of item-scales.
Broaching Item Effects
Past findings that different DTIIM test-items and item-scales are associated with meaningfully different test outcomes attest to the relevance of item effects (and thus of methods to address them) to DTIIM research (Judd et al., 2012; Roozenbeek et al., 2021; Roozenbeek, van der Linden, Goldberg, et al., 2022; van der Linden, 2024). We found that over a quarter of the stimuli rating DTIIM studies sought to match misinformation and non-misinformation stimuli on potentially confounding characteristics (e.g., message length and topic). Our inspection of these attempts revealed misinforming and non-misinforming items that were less than well-matched. For example, matching on word count sometimes returned non-misinforming items twice the length of their misinforming matches. Further, new potential confounds were introduced in some cases. For example, misinforming stimuli containing all-caps words (e.g., NEWS ALERT) were matched with non-misinforming stimuli that did not contain them, despite the use of all caps not being the rhetorical technique of interest and having known effects on credibility ratings (Pelau et al., 2023). Another method employed was the treatment of stimuli as a crossed random factor with participants in a mixed-models analysis, though only four studies included this analysis.
Characteristics of Item-Scales
Choices related to scale construction may also have affected determinations of susceptibility to misinformation. One such practice is the systematic inclusion of both fabricated and real-world test stimuli within different subscales. Producing or selecting test stimuli is associated with various psychometric trade-offs and valid arguments can be made for either approach. However, a problematic practice is to include fictional examples of one class of stimuli and real-world examples of another because each subscale would then differ on influential characteristics other than those of interest. Of the stimuli rating scales employed by the studies, nearly one-in-five combined misinformation items produced by researchers with non-misinformation items selected from real-world examples. This could have created confounds between misinformation and non-misinformation subscales impacting measures of discernment and skepticism and, therefore, inferences of inoculation against misinformation.
Other potential scale-related issues for stimuli rating tests of DTIIMs regard the proportions of items in scales operationalizing misinformation and non-misinformation, and the total numbers of items within scales and subscales. Overall, there were on average almost two-and-a-half times as many misinforming items in the scales, with the average length of a non-misinformation subscale at only three-and-a-half items. That some recent studies have endeavored to address this issue (e.g., Appel et al., 2025; Pennycook et al., 2024) highlights how heavily skewed the scales used in the original DTIIM studies were. Moreover, many reports of internal consistency for non-misinformation subscales returned unacceptable reliability, likely associated with subscale length (Boateng et al., 2018).

Implications of the Empirical Approaches

Our third empirical research question (B3) concerns the implications of the ability of studies to reveal the effects of DTIIMs. Regarding the trend toward including discernment as a dependent variable in DTIIM credibility rating studies, we consider that this has enhanced the conceptual validity of findings of broad-spectrum protection against misinformation. While there exist only a handful of studies that can speak to both the discernment and response bias (or skepticism) effects of a DTIIM, most studies today can base at least qualified inferences of broad-spectrum protection against misinformation on the result of a relevant test. However, given the test-items, item-scales, and statistical analyses employed, item effects may have impacted the validity of measures of susceptibility to misinformation in several ways. Notable in DTIIM research is the near universal treatment of test-stimuli as a fixed effect, a statistical approach which in the case of DTIIM research commits the fixed-effects fallacy (Clark, 1973). Because of this, where there is considerable inter-item variance in participant responses there will be a substantial risk of type I errors (i.e., finding an effect where one does not exist; for a discussion, see Judd et al., 2012). Further, where tests of this sort have repeatedly employed the same items, such as the 21 test-items often re-used in testing Bad News, these errors may be highly replicable, a problem that is exacerbated rather than reduced by increased statistical power (DeBruine & Barr, 2021).
Issues pertaining to characteristics of test-items and item-scales may have also interacted to negatively impact the validity and reliability of measures of susceptibility to misinformation. For example, if the ratio of a credibility rating scale is skewed toward misinforming stimuli, participants will be more likely to display a response bias toward rating all items as misinformation (Hameleers, 2023). This would be consistent with an effect of generalized skepticism. If the scale includes produced misinformation and selected non-misinformation, the real-world non-misinformation stimuli may be known to participants to be true such that the mooted effect of increased skepticism for both classes of stimuli would not likely affect the ratings of the non-misinformation stimuli (for a discussion, see Modirrousta-Galian & Higham, 2023). The combined effect would be increased discernment between misinformation and non-misinformation that may have been driven by empirical artifacts rather than participants’ enhanced abilities to discern (i.e., a type I error).
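A toy calculation with invented numbers illustrates how these two artifacts could combine to inflate discernment:

```python
# Toy illustration (all numbers invented) of a spurious discernment gain:
# a blanket skepticism shift plus non-misinformation items that are
# well-known real headlines whose ratings resist the shift.
pre_misinfo = [4.0] * 8    # mean pre-intervention ratings, 1-7 scale
pre_noninfo = [5.5] * 3    # short, real-world non-misinformation subscale
shift = -1.0               # intervention lowers all credibility ratings

post_misinfo = [r + shift for r in pre_misinfo]
# Familiar real items stay near their (high) baseline despite the shift.
post_noninfo = [max(r + shift, 5.0) for r in pre_noninfo]

def discernment(noninfo, misinfo):
    """Mean non-misinformation rating minus mean misinformation rating."""
    return sum(noninfo) / len(noninfo) - sum(misinfo) / len(misinfo)

print(f"pre  discernment: {discernment(pre_noninfo, pre_misinfo):.2f}")    # 1.50
print(f"post discernment: {discernment(post_noninfo, post_misinfo):.2f}")  # 2.00
```

Discernment appears to improve by half a scale point even though the only change modeled is a uniform downward shift (generalized skepticism) that the familiar real-world items could not follow.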
Empirical issues have implications also for the inferential links between the empirical findings of DTIIM studies and inoculation theory. In this case, the strength of links would depend upon the credibility of an assumption that the results of DTIIM tests reflect valid estimates of susceptibility to misinformation. We find there is a distinct possibility that item effects and the treatment of ratings data in analyses, along with the repeated use of compromised scales and stimuli associated with item effects, have combined to negatively impact the credibility of that assumption. We find this to be especially the case for tests of Bad News, implications of which include that this most popular and tested DTIIM may not create the effects upon which its popularity and endorsements depend.

Recommendations for Future DTIIM Research

The findings of this review support two broad recommendations to improve DTIIM research. First, we suggest an approach of theoretical pluralism be taken to researching DTIIMs. For example, theories of learning, motivation, and media literacy that differ meaningfully from psychological inoculation theory could provide legitimate frameworks with which to understand DTIIM effects. Second, we recommend that DTIIM research include variables of key relevance to psychological inoculation’s theorized mechanisms and pathways in improved empirical tests. Regarding perceived threat specifically, we reiterate earlier calls for inoculation researchers to habitually include tests for its influence in creating effects of resistance to persuasion (Banas & Rains, 2010; Banas & Richards, 2017). By so doing, the conferral of threat by a DTIIM may be confirmed or disconfirmed, and with it the possibility that its effects are those of inoculation. However, if there are strong theoretical grounds upon which to doubt the relevance of threat to inoculation by DTIIMs, we encourage the presentation of comprehensive theoretical arguments in the literature.
Recommendations regarding dependent variables in DTIIM tests are also derived from the findings. Where the results of a stimuli rating paradigm are intended to support inferences of broad-spectrum reductions in susceptibility to misinformation, we recommend that studies report and interpret a full range of relevant dependent variables. These would include outcomes associated with (1) misinformation, (2) non-misinformation, (3) discernment, (4) response bias or skepticism, and, where applicable, (5) technique recognition (e.g., by multiple-choice testing). By such design, a small effect of reduced acceptance of non-misinformation might be noted and interpreted as an acceptable treatment side effect so long as discernment between misinformation and non-misinformation was shown to be adequately improved. Alternatively, where there is no effect on discernment, a finding of inoculation against misinformation would be unsupported as it would lack the required specificity. DTIIM research might also consider a return to assessments of attitude change where relevant, for example, for interventions that focus on a particular topic (e.g., Bad Vaxx and vaccine misinformation). One benefit of considering attitudes is that attitudinal measures already incorporate the concept of discernment. That is, a rating in one direction (e.g., a positive attitude) is simultaneously the inverse of a rating in the other direction (a negative attitude) and thus a composite measure of attitudinal discernment between two possible positions (for/against).
We also recommend that statistical analyses used in stimuli rating studies of DTIIMs treat stimuli as a crossed random factor with participants in mixed-effects models (DeBruine & Barr, 2021; Judd et al., 2012; Pennycook et al., 2021). Further, because stimuli in the typical DTIIM testing paradigm are designed to include a rhetorical technique (signal) or not (noise), SDT approaches are well-suited (Batailler et al., 2022; Hautus et al., 2021). Specifically, we encourage the continued adoption of SDT approaches such as ROC curve analysis, which can accommodate multi-point ratings while avoiding a hard assumption of an equal-variance Gaussian distribution for the two classes of stimuli (Higham & Higham, 2018). In addition, we encourage the development of a synthesized SDT/mixed-effects analysis. SDT and multi-level approaches may be combined to provide a robust method by which to ascertain participants’ detection and appraisal of stimuli characteristics (e.g., Sultan et al., 2022).
Finally, regarding scale construction, we recommend that (1) scales do not include misinformation and non-misinformation subscales constructed from differently sourced stimuli (i.e., produced or selected), (2) ratios of misinforming and non-misinforming items should be at least even if not somewhat skewed toward non-misinformation (i.e., to better reflect most people’s online experience), and (3) subscale-length should be at least sufficient to support adequate internal consistency. Future DTIIM research will also ideally validate item-scales on their psychometric properties and curb the practice of repeatedly matching unvalidated scales to specific interventions.

Future Directions for Inoculation Theorists

The findings of this review suggest that DTIIM research raises fundamental theoretical questions that might best be addressed by inoculation scholars. First, theorists could clarify what justifies (non)classification of an intervention as psychological inoculation. Regarding classification from intervention content, this would be helpful as there is apparently no consensus that inoculation must involve a pre-emptive refutation (i.e., an inoculation prebunk). This is evidenced by an intervention included in this review that follows a forewarning with instructions on media-literate behavior (Vraga et al., 2022; see also Agley et al., 2021), and a recent review that classifies misinformation countermeasures constituting a warning only as a type of inoculation (Ziemer & Rothmund, 2024). Regarding classification of interventions from effects, where outcomes might plausibly be attributed to alternate theoretical explanations, inoculation theorists might consider whether research should first establish threat conferral and a predicted inoculation pathway before classifying a procedure as one of inoculation. An alternative would be that any intervention argued to fit an analogy to some sort of biological inoculation could be classified as psychological inoculation even where repeated testing does not result in the predicted outcomes (i.e., a perpetually unsuccessful inoculation intervention versus not an inoculation intervention). This echoes a deeper question for the field regarding how instructive the analogy should be when new medical vaccines (e.g., mRNA vaccines) employ qualitatively different mechanisms from conventional ones, considering that the biological inoculation analogy, like all analogies, must at some point break down (Banas, 2020).
Second, the importance of the relative persuasive strength of intervention- and test-materials for establishing psychological inoculation by DTIIMs is not clear. For example, Bad News and Go Viral! are argued to feature “severely weakened” persuasive content with testing carried out on “full dose” examples (van der Linden, 2024). However, the stimuli appear equivalent in style and persuasiveness and are apparently weakened only by the in-game context in which they appear. Much like the conferral of threat, whether the prebunk stimuli are persuasively weaker or not would be revealed by testing rather than by a rationale drawing on descriptions of an intervention’s features.
Third, we are not aware of studies that test conventional inoculation interventions on general skepticism within the traditional inoculation paradigm. This question has apparently never been broached, though it has arisen in the context of DTIIM research. We propose that as researchers continue to explore inoculation’s potential breadth of effects (e.g., Ivanov et al., 2022; Parker et al., 2012, 2016), the possibility that inoculation processes might sometimes confer resistance to any attack message should be considered.
Finally, we recommend that inoculation theorists exercise special caution in interpreting the results of the studies included in this review, especially where they are used to inform new boundary conditions and models of psychological inoculation. This posture should be favored at least until the inferential links between the tests and theory are shown to be adequately strong. We further recommend that theorists resist classifying the effects of DTIIM games as those of active inoculation or use them to explore differential active and passive inoculation effects. We suggest this approach should continue at least until the definition of active inoculation applied to DTIIMs is shown to reflect that of seminal understandings of active inoculation, and DTIIM games are shown to both meet that definition and create the theorized psychological outcomes. Ultimately, we recommend DTIIM effects not be assumed to be those of psychological inoculation until there is strong empirical evidence that they are.

Limitations

Like all empirical endeavors, this work has limitations. Importantly, it cannot determine whether a DTIIM tested within a study included in this review is well classified as a psychological inoculation intervention, nor whether its reported effects are those of inoculation or something else. The focus of this review has instead been to determine DTIIM research’s ability to address those questions. Future research might directly analyze the content of DTIIMs to determine if their features meet the minimum requirements of (active) psychological inoculation. Further, subsequent reviews with the aim of clarifying DTIIM effects by meta-analytic means should consider re-analyzing data by methods recommended here before synthesizing effects. For example, because the influence of item effects is known and can be reduced by multi-level reanalysis, best practice dictates that efforts be made to do so ahead of meta-analysis. Such analyses should also take care not to combine effects from contrasting empirical paradigms that consider different dependent variables (e.g., attitudes and credibility ratings; Loughnan et al., 2025).

Conclusion

The application of multiple extensions of psychological inoculation theory to digital interventions protecting against the persuasive influence of rhetorical techniques was a timely reaction to misinformation in the context of the 2016 U.S. election and Brexit campaigns. The apparent arrival of a post-truth world lent great urgency to this work, spurring in researchers the sort of creativity and focus on engagement and scalability that will be required to meaningfully reduce susceptibility to misinformation. DTIIMs are a striking example of this new and necessary approach. However, as evidenced by this systematic review, fundamental questions about what they do, how they do it, and the potential implications for psychological inoculation theorizing have not yet been broached. As such, and with misinformation research now under attack from powerful detractors, we urge researchers to circle back and clarify the effects of DTIIMs, to exercise caution in interpreting the outcomes of DTIIM studies, and to help address the theoretical and empirical issues raised here. Doing so will solidify inoculation and misinformation research and advance work toward a new and improved generation of scalable interventions for attenuating the persuasive effects of misinformation.

Acknowledgments

We would like to thank Jon Roozenbeek and Philipp Schmid for reading drafts of the manuscript and providing valuable feedback.

Ethical Considerations

Ethics approval was not required.

Declaration of Conflicting Interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was funded in full by the Behavioural Science Institute of Radboud University.

Footnotes

1. One report, a PhD dissertation, was not available in full and efforts to contact the author failed. The available first 25 pages, which included the list of contents, made no reference to a digital intervention.
2. We note Basol et al. (2021: Study 2) presented hypothesis 8 as an “exploratory hypothesis” predicting the conferral of threat by Go Viral!. The pre-registration of the study includes this as two non-predictive exploratory analyses of two conceptually distinct measures of threat. We therefore coded it as two exploratory analyses.

Data availability statement

The data, non-anonymized preregistration, and supporting materials are available on OSF: https://osf.io/9e7gz/

Appendix A: Exclusion and Inclusion Criteria and Working Definitions of Key Terms

Exclusion and Inclusion Criteria

Title and Abstract Screening

Exclusion
Not in English.
Does not include a quantitative study.
Does not test an intervention.
Is not on the topic of susceptibility to misinformation, which may include, for example, disinformation, fake news, political propaganda, and undue influence.
Inclusion
None of the exclusion criteria are met.
It is unclear if an exclusion criterion is met.

Full Text Screening (For Each Study Within Each Paper)

Exclusion
Not in English.
Not a quantitative study.
A pilot study.
The study is not substantially detailed in the main body of the paper.
Does not test a game- or video-based intervention.
The intervention is not designed to psychologically inoculate against misinformation.
The intervention does not pre-emptively refute or “prebunk” a rhetorical technique or techniques of persuasion.
Inclusion
Quantitatively tests a digital (game or video) inoculation intervention to reduce susceptibility to misinformation.
Is intended as a technique-based (also sometimes referred to as logic-based) inoculation intervention.

Working Definitions of Key Terms

Games

Games are defined as browser- or app-based online games that can be played in a single sitting.

Video-Based

Video-based refers to interventions that are delivered in their entirety by video. That is, a single video constitutes the complete intervention. Interventions that include a video-based component and other stimuli, such as text-based instructions or information, are not video-based for the purpose of this screening.

Addresses Susceptibility to Misinformation

To address susceptibility to misinformation, the study must include a dependent variable that relates specifically to misinformation, broadly defined for these purposes as information that is false, misleading and/or from a low-quality source, regardless of intent. This does not include techniques of recruitment to an extremist group or political ideology.

Technique

A technique is defined for these purposes as any characteristic feature of unreliable information, including unintended fallacious rhetorical features or indicators of information being from a low-quality source. Technique-based inoculation does not pertain to the refutation of specific claims only (i.e., fact-based inoculation).

Appendix B: Reference List of Included Studies

References

Appel R. E., Roozenbeek J., Rayburn-Reeves R., Basol M., Corbin J., Compton J., van der Linden S. (2025). Psychological inoculation improves resilience to and reduces willingness to share vaccine misinformation. Scientific Reports, 15(1), 29830. https://doi.org/10.1038/s41598-025-09462-5
Axelsson C-A. W., Nygren T., Roozenbeek J., van der Linden S. (2024). Bad News in the civics classroom: How serious gameplay fosters teenagers’ ability to discern misinformation techniques. Journal of Research on Technology in Education, 57(5), 992–1018. https://doi.org/10.1080/15391523.2024.2338451
Basol M., Roozenbeek J., Berriche M., Uenal F., McClanahan W. P., van der Linden S. (2021). Towards psychological herd immunity: Cross-cultural evidence for two prebunking interventions against COVID-19 misinformation. Big Data & Society, 8(1), 20539517211013868. https://doi.org/10.1177/20539517211013868
Basol M., Roozenbeek J., van der Linden S. (2020). Good news about Bad News: Gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition, 3(1), 1–9. https://doi.org/10.5334/joc.91
Capewell G., Maertens R., Remshard M., van der Linden S., Compton J., Lewandowsky S., Roozenbeek J. (2024). Misinformation interventions decay rapidly without an immediate posttest. Journal of Applied Social Psychology, 54(8), 441–454. https://doi.org/10.1111/jasp.13049
Cook J., Lepage C., Hopkins K. L., Cook W., Kolog E. A., Thomson A., Iddrisu I., Burnette S. (2024). Co-designing and pilot testing a digital game to improve vaccine attitudes and misinformation resistance in Ghana. Human Vaccines & Immunotherapeutics, 20(1), 2407204. https://doi.org/10.1080/21645515.2024.2407204
Cook J., Njomo D., Aura C., Wamalwa B., Abeyesekera S., Ssanyu J. N., Kabwijamu L., Ofire M., Lepage C., Thomson A., Waiswa P., Cook W., Knobler S. L., Hopkins K. L. (2025). Improving vaccine attitudes and misinformation resistance through gamification: A pilot study in Kenya and Uganda. PsyArXiv Preprints, 51(1), 41. https://osf.io/preprints/psyarxiv/hrv42_v1
Facciani M. J., Apriliawati D., Weninger T. (2024). Playing Gali Fakta inoculates Indonesian participants against false information. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-152
Graham M. E., Skov B., Gilson Z., Heise C., Fallow K. M., Mah E. Y., Lindsay D. S. (2023). Mixed news about the Bad News game. Journal of Cognition, 6(1), 1–14. https://doi.org/10.5334/joc.324
Harjani T., Basol M. S., Roozenbeek J., van der Linden S. (2023). Gamified inoculation against misinformation in India: A randomized control trial. Journal of Trial and Error, 3(1), 14–56. https://doi.org/10.36850/e12
Harrop I., Roozenbeek J., Madsen J. K., van der Linden S. (2023). The role of media in political polarization| Inoculation can reduce the perceived reliability of polarizing social media content. International Journal of Communication, 17, 5291–5315.
Hu B., Ju X. D., Liu H. H., Wu H. Q., Bi C., Lu C. (2023). Game-based inoculation versus graphic-based inoculation to combat misinformation: A randomized controlled trial. Cognitive Research: Principles and Implications, 8(1). https://doi.org/10.1186/s41235-023-00505-x
Hughes B., Jereza R., West J., Goldberg B. (2024). Inoculating against persuasion by white supremacist scientific racism propaganda: The moderating roles of propaganda form and subtlety. [Manuscript submitted for publication]. Polarization and Extremism Research and Innovation Lab (PERIL), American University.
Iyengar A., Gupta P., Priya N. (2023). Inoculation against conspiracy theories: A consumer side approach to India’s fake news problem. Applied Cognitive Psychology, 37(2), 290–303. https://doi.org/10.1002/acp.3995
King-Nyberg B., Lindsay D. (2024). Replication of Leder et al. (2024) [Unpublished manuscript]. University of Victoria. https://osf.io/mex84
Leder J., Schellinger L. V., Maertens R., van der Linden S., Chryst B., Roozenbeek J. (2024). Feedback exercises boost discernment of misinformation for gamified inoculation interventions. Journal of Experimental Psychology: General, 153(8), 2068–2087. https://doi.org/10.1037/xge0001603
Lees J., Banas J. A., Linvill D., Meirick P. C., Warren P. (2023). The Spot the Troll Quiz game increases accuracy in discerning between real and inauthentic social media accounts. PNAS Nexus, 2(4), 1–11. https://doi.org/10.1093/pnasnexus/pgad094
Lewandowsky S., Yesilada M. (2021). Inoculating against the spread of Islamophobic and radical-Islamist disinformation. Cognitive Research: Principles and Implications, 6, 1–15. https://doi.org/10.1186/s41235-021-00323-z
Maertens R., Götz F. M., Golino H. F., Roozenbeek J., Schneider C. R., Kyrychenko Y., Kerr J. R., Stieger S., McClanahan W. P., Drabot K., He J., van der Linden S. (2024). The Misinformation Susceptibility Test (MIST): A psychometrically validated measure of news veracity discernment. Behavior Research Methods, 56(3), 1863–1899. https://doi.org/10.3758/s13428-023-02124-2
Maertens R., Roozenbeek J., Basol M., van der Linden S. (2021). Long-term effectiveness of inoculation against misinformation: Three longitudinal experiments. Journal of Experimental Psychology: Applied, 27(1), 1–16. https://doi.org/10.1037/xap0000315
Maertens R., Roozenbeek J., Simons J., Lewandowsky S., Maturo V., Goldberg B., Xu R., van der Linden S. (2025). Psychological booster shots targeting memory increase long-term resistance against misinformation. Nature Communications, 16(1), 2062. https://doi.org/10.1038/s41467-025-57205-x
Modirrousta-Galian A., Higham P. A., Seabrooke T. (2023). Effects of inductive learning and gamification on news veracity discernment. Journal of Experimental Psychology: Applied, 29(3), 599–619. https://doi.org/10.1037/xap0000458
Neylan J., Biddlestone M., Roozenbeek J., van der Linden S. (2023). How to “inoculate” against multimodal misinformation: A conceptual replication of Roozenbeek and van der Linden (2020). Scientific Reports, 13(1), 18273. https://doi.org/10.1038/s41598-023-43885-2
Nygren T., Guath M., Axelsson C-A. W. (2021). Bad News game in a peer education intervention: Impact on attitudes but not on skills [Unpublished manuscript]. Uppsala University.
Pennycook G., Berinsky A. J., Bhargava P., Lin H., Cole R., Goldberg B., Lewandowsky S., Rand D. (2024). Technique-based inoculation and accuracy prompts must be combined to increase truth discernment online. PsyArXiv Preprints. https://osf.io/preprints/psyarxiv/5a9xq_v1
Piltch-Loeb R., Su M., Hughes B., Testa M., Goldberg B., Braddock K., Miller-Idriss C., Maturo V., Savoia E. (2022). Testing the efficacy of attitudinal inoculation videos to enhance Covid-19 vaccine acceptance: Quasi-experimental intervention trial. JMIR public health and surveillance, 8(6), e34615. https://doi.org/10.2196/34615
Rędzio A. M., Izydorczak K., Muniak P., Kulesza W., Doliński D. (2023). Is the COVID-19 fake news game good news? Testing whether creating and disseminating fake news about vaccines in a computer game reduces people’s belief in anti-vaccine arguments. Acta Psychologica, 236, 103930. https://doi.org/10.1016/j.actpsy.2023.103930
Roozenbeek J., Maertens R., McClanahan W., van der Linden S. (2021). Disentangling item and testing effects in inoculation research on online misinformation: Solomon revisited. Educational and Psychological Measurement, 81(2), 340–362. https://doi.org/10.1177/0013164420940378
Roozenbeek J., Traberg C. S., van der Linden S. (2022). Technique-based inoculation against real-world misinformation. Royal Society Open Science, 9(5), 211719. https://doi.org/10.1098/rsos.211719
Roozenbeek J., van der Linden S. (2019). Fake news game confers psychological resistance against online misinformation. Palgrave Communications, 5(1), 1–10. https://doi.org/10.1057/s41599-019-0279-9
Roozenbeek J., van der Linden S. (2020). Breaking harmony square: A game that “inoculates” against political misinformation. The Harvard Kennedy School Misinformation Review, 1(8), 1–26. https://doi.org/10.37016/mr-2020-47
Roozenbeek J., van der Linden S., Goldberg B., Rathje S., Lewandowsky S. (2022). Psychological inoculation improves resilience against misinformation on social media. Science Advances, 8(34), eabo6254. https://doi.org/10.1126/sciadv.abo6254
Roozenbeek J., van der Linden S., Nygren T. (2020). Prebunking interventions based on “inoculation” theory can reduce susceptibility to misinformation across cultures. The Harvard Kennedy School Misinformation Review, 1(2), 1–23. https://doi.org/10.37016/mr-2020-008
Rubio-Campillo X., Marín-Rubio K., Corral-Vázquez C. (2023). Using in-game analytics to explore learning dynamics of information literacy in a social media simulator. In Spil T., Bruinsma G., Collou L. (Eds.), Proceedings of the 17th European Conference on Games Based Learning, ECGBL 2023 (pp. 556–563). (Proceedings of the European Conference on Games-based Learning; Vol. 17, No. 1). Dechema e.V. https://doi.org/10.34190/ecgbl.17.1.1464
Seabrooke T., Modirrousta-Galian A., Higham P. A. (2024). Re-examining the efficacy of the Bad News game: No evidence of improved discrimination of Indian true and fake news headlines [Manuscript submitted for publication]. School of Psychology, University of Southampton.
Tandoc E. C., Seet S. (2023). Winning the game against fake news? Using games to inoculate adolescents and young adults in Singapore against fake news. Estudios sobre el Mensaje Periodistico, 29(4), 771–781. https://doi.org/10.5209/esmp.88599
Traberg C. S., Roozenbeek J., van der Linden S. (2024). Gamified inoculation reduces susceptibility to misinformation from political ingroups. Harvard Kennedy School Misinformation Review, 5(2), 1–17. https://doi.org/10.37016/mr-2020-141
Vraga E. K., Bode L., Tully M. (2022). The effects of a news literacy video and real-time corrections to video misinformation related to sunscreen and skin cancer. Health Communication, 37(13), 1622–1630. https://doi.org/10.1080/10410236.2021.1910165
Walsh S. (2021). Broad-spectrum inoculation confers resistance to radical Islamic and Islamophobic online misinformation in US sample [Unpublished doctoral dissertation]. University of Bristol.
Wong C. M. L., Wu Y. (2023). Limits to inoculating against the risk of fake news: A replication study in Singapore during COVID-19. Journal of Risk Research, 26(10), 1037–1052. https://doi.org/10.1080/13669877.2023.2249909

References

Agley J., Xiao Y., Thompson E. E., Chen X., Golzarri-Arroyo L. (2021). Intervening on trust in science to reduce belief in COVID-19 misinformation and increase COVID-19 preventive behavioral intentions: Randomized controlled trial. Journal of Medical Internet Research, 23(10), e32425. https://doi.org/10.2196/32425
Appel R. E., Roozenbeek J., Rayburn-Reeves R., Basol M., Corbin J., Compton J., van der Linden S. (2025). Psychological inoculation improves resilience to and reduces willingness to share vaccine misinformation. Scientific Reports, 15(1), 29830.
Axelsson C-A. W., Nygren T., Roozenbeek J., van der Linden S. (2024). Bad News in the civics classroom: How serious gameplay fosters teenagers’ ability to discern misinformation techniques. Journal of Research on Technology in Education, 57(5), 992–1018. https://doi.org/10.1080/15391523.2024.2338451
Banas J. A. (2020). Inoculation theory. In Bulck J. (Ed.), The international encyclopedia of media psychology (1st ed., pp. 1–8). Wiley. https://doi.org/10.1002/9781119011071.iemp0285
Banas J. A., Miller G. (2013). Inducing resistance to conspiracy theory propaganda: Testing inoculation and metainoculation strategies. Human Communication Research, 39(2), 184–207. https://doi.org/10.1111/hcre.12000
Banas J. A., Rains S. A. (2010). A meta-analysis of research on inoculation theory. Communication Monographs, 77(3), 281–311. https://doi.org/10.1080/03637751003758193
Banas J. A., Richards A. S. (2017). Apprehension or motivation to defend attitudes? Exploring the underlying threat mechanism in inoculation-induced resistance to persuasion. Communication Monographs, 84(2), 164–178. https://doi.org/10.1080/03637751.2017.1307999
Basol M., Roozenbeek J., Berriche M., Uenal F., McClanahan W. P., van der Linden S. (2021). Towards psychological herd immunity: Cross-cultural evidence for two prebunking interventions against COVID-19 misinformation. Big Data & Society, 8(1), 1–17. https://doi.org/10.1177/20539517211013868
Basol M., Roozenbeek J., van der Linden S. (2020). Good news about bad news: Gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition, 3(1), 91. https://doi.org/10.5334/joc.91
Batailler C., Brannon S. M., Teas P. E., Gawronski B. (2022). A signal detection approach to understanding the identification of fake news. Perspectives on Psychological Science, 17(1), 78–98. https://doi.org/10.1177/1745691620986135
Boateng G. O., Neilands T. B., Frongillo E. A., Melgar-Quiñonez H. R., Young S. L. (2018). Best practices for developing and validating scales for health, social, and behavioral research: A primer. Frontiers in Public Health, 6, e00149. https://doi.org/10.3389/fpubh.2018.00149
Burgoon M., Miller M. D., Cohen M., Montgomery C. L. (1978). An empirical test of a model of resistance to persuasion. Human Communication Research, 5(1), 27–39. https://doi.org/10.1111/j.1468-2958.1978.tb00620.x
Capewell G., Maertens R., Remshard M., van der Linden S., Compton J., Lewandowsky S., Roozenbeek J. (2024). Misinformation interventions decay rapidly without an immediate posttest. Journal of Applied Social Psychology, 54(8), 441–454. https://doi.org/10.1111/jasp.13049
Chan M. S., Jones C. R., Hall Jamieson K., Albarracín D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, 28(11), 1531–1546. https://doi.org/10.1177/0956797617714579
Clark H. H. (1973). The language-as-fixed-effect fallacy: A critique of language statistics in psychological research. Journal of Verbal Learning and Verbal Behavior, 12(4), 335–359. https://doi.org/10.1016/S0022-5371(73)80014-3
Cohen J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46.
Compton J. (2013). Inoculation theory. In Dillard J., Shen L. (Eds.), The SAGE handbook of persuasion: Developments in theory and practice (pp. 220–236). Sage. https://doi.org/10.4135/9781452218410.n14
Compton J. (2020). Prophylactic versus therapeutic inoculation treatments for resistance to influence. Communication Theory, 30(3), 330–343. https://doi.org/10.1093/ct/qtz004
Compton J. (2024). Inoculation theory. Review of Communication. Advance online publication. https://doi.org/10.1080/15358593.2024.2370373
Compton J. A., Pfau M. (2005). Inoculation theory of resistance to influence at maturity: Recent progress in theory development and application and suggestions for future research. Annals of the International Communication Association, 29(1), 97–146.
Compton J., van der Linden S., Cook J., Basol M. (2021). Inoculation theory in the post-truth era: Extant findings and new frontiers for contested science, misinformation, and conspiracy theories. Social and Personality Psychology Compass, 15(6), 12602. https://doi.org/10.1111/spc3.12602
Cook J., Lewandowsky S., Ecker U. K. H. (2017). Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS ONE, 12(5), e0175799. https://doi.org/10.1371/journal.pone.0175799
DeBruine L. M., Barr D. J. (2021). Understanding mixed-effects models through data simulation. Advances in Methods and Practices in Psychological Science, 4(1), 1–15. https://doi.org/10.1177/2515245920965119
DeCarlo L. T. (2011). Signal detection theory with item effects. Journal of Mathematical Psychology, 55(3), 229–239. https://doi.org/10.1016/j.jmp.2011.01.002
Droog E., Vermeulen I., van Huijstee D., Harutyunyan D., Tejedor S., Pulido C. (2024). Combatting the misinformation crisis: A systematic review of the literature on characteristics and effectiveness of media literacy interventions. Communication Research. Advance online publication. https://doi.org/10.1177/00936502251363705
Ecker U. K. H. (2023). Psychological research on misinformation: Current issues and future directions. European Psychologist, 28(3), 135–138. https://doi.org/10.1027/1016-9040/a000499
Ecker U. K. H., Lewandowsky S., Cook J., Schmid P., Fazio L. K., Brashier N., Kendeou P., Vraga E. K., Amazeen M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13–29. https://doi.org/10.1038/s44159-021-00006-y
European Commission. (2022). Final report of the commission expert group on tackling disinformation and promoting digital literacy through education and training (Technical Report No. NC-03-22-016-EN-N). Directorate-General for Education, Youth, Sport and Culture. https://op.europa.eu/en/publication-detail/-/publication/72421f53-4458-11ed-92ed-01aa75ed71a1/language-en
Fransen M. L., Mollen S., Rains S. A., Das E., Vermeulen I. (2024). Sixty years later: A replication study of McGuire’s first inoculation experiment. Journal of Media Psychology, 36(1), 69–78. https://doi.org/10.1027/1864-1105/a000396
Guay B., Berinsky A. J., Pennycook G., Rand D. (2023). How to think about whether misinformation interventions work. Nature Human Behaviour, 7(8), 1231–1233. https://doi.org/10.1038/s41562-023-01667-w
Hameleers M. (2023). The (un)intended consequences of emphasizing the threats of mis- and disinformation. Media and Communication, 11(2), 5–14. https://doi.org/10.17645/mac.v11i2.6301
Hautus M. J., Macmillan N. A., Creelman C. D. (2021). Detection theory: A user’s guide (3rd ed.). Routledge. https://doi.org/10.4324/9781003203636
Higham P. A., Higham D. P. (2018). New improved gamma: Enhancing the accuracy of Goodman–Kruskal’s gamma using ROC curves. Behavior Research Methods, 51(1), 108–125. https://doi.org/10.3758/s13428-018-1125-5
Hughes B., Jereza R., West J., Goldberg B. (2024). Inoculating against persuasion by white supremacist scientific racism propaganda: The moderating roles of propaganda form and subtlety [Manuscript submitted for publication]. Polarization and Extremism Research and Innovation Lab (PERIL), American University.
Ivanov B., Rains S. A., Dillingham L. L., Parker K. A., Geegan S. A., Barbati J. L. (2022). The role of threat and counterarguing in therapeutic inoculation. Southern Communication Journal, 87(1), 15–27. https://doi.org/10.1080/1041794X.2021.1983012
Judd C. M., Westfall J., Kenny D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103(1), 54–69. https://doi.org/10.1037/a0028347
Kisa S., Kisa A. (2024). A comprehensive analysis of COVID-19 misinformation, public health impacts, and communication strategies: A scoping review. Journal of Medical Internet Research, 26, e56931. https://doi.org/10.2196/56931
Leder J., Schellinger L. V., Maertens R., van der Linden S., Chryst B., Roozenbeek J. (2024). Feedback exercises boost discernment of misinformation for gamified inoculation interventions. Journal of Experimental Psychology: General, 153(8), 2068. https://doi.org/10.1037/xge0001603
Lees J., Banas J. A., Linvill D., Meirick P. C., Warren P. (2023). The spot the troll quiz game increases accuracy in discerning between real and inauthentic social media accounts. PNAS Nexus, 2(4), 1–11. https://doi.org/10.1093/pnasnexus/pgad094
Lewandowsky S., Ecker U. K. H., Cook J., Van Der Linden S., Roozenbeek J., Oreskes N. (2023). Misinformation and the epistemic integrity of democracy. Current Opinion in Psychology, 54, 101711. https://doi.org/10.1016/j.copsyc.2023.101711
Lewandowsky S., Ecker U. K. H., Seifert C. M., Schwarz N., Cook J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. https://doi.org/10.1177/1529100612451018
Lewandowsky S., van der Linden S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology, 32(2), 348–384. https://doi.org/10.1080/10463283.2021.1876983
Loughnan D., van Stekelenburg A., Pouwels J. L., Kleemans M. (2025). Concerns regarding the methodology of a psychological inoculation meta-analysis on misinformation. Journal of Medical Internet Research, 27, e64430. https://doi.org/10.2196/64430
Lu C., Hu B., Li Q., Bi C., Ju X.-D. (2023). Psychological inoculation for credibility assessment, sharing intention, and discernment of misinformation: Systematic review and meta-analysis. Journal of Medical Internet Research, 25, e49255. https://doi.org/10.2196/49255
Maertens R., Roozenbeek J., Basol M., van der Linden S. (2021). Long-term effectiveness of inoculation against misinformation: Three longitudinal experiments. Journal of Experimental Psychology: Applied, 27(1), 1–16. https://doi.org/10.1037/xap0000315
Maertens R., Roozenbeek J., Simons J., Lewandowsky S., Maturo V., Goldberg B., Xu R., van der Linden S. (2025). Psychological booster shots targeting memory increase long-term resistance against misinformation. Nature Communications, 16, 2062. https://doi.org/10.1038/s41467-025-57205-x
McGinley T. (2024, October 14). Armed man arrested after reportedly threatening FEMA workers. The New York Times.
McGuire W. J. (1961). Resistance to persuasion conferred by active and passive prior refutation of the same and alternative counterarguments. The Journal of Abnormal and Social Psychology, 63(2), 326–332. https://doi.org/10.1037/h0048344
McGuire W. J. (1964). Some contemporary approaches. In Berkowitz L. (Ed.), Advances in experimental social psychology (Vol. 1, pp. 191–229). Academic Press. https://doi.org/10.1016/S0065-2601(08)60052-0
McGuire W. J., Papageorgis D. (1961). The relative efficacy of various types of prior belief-defense in producing immunity against persuasion. The Journal of Abnormal and Social Psychology, 62(2), 327–337. https://doi.org/10.1037/h0042026
Miller C. H., Ivanov B., Sims J. D., Compton J., Harrison K. J., Parker K. A., Parker J. L., Averbeck J. M. (2013). Boosting the potency of resistance: Combining the motivational forces of inoculation and psychological reactance. Human Communication Research, 39(1), 127–155. https://doi.org/10.1111/j.1468-2958.2012.01438.x
Modirrousta-Galian A., Higham P. (2023). Gamified inoculation interventions do not improve discrimination between true and fake news: Reanalyzing existing research with receiver operating characteristic analysis. Journal of Experimental Psychology: General, 152(9), 2411–2437. https://doi.org/10.1037/xge0001395
Myers S. L., Thompson S. A. (2025, February 7). Falsehoods fuel the right-wing crusade against U.S.A.I.D. The New York Times. https://www.nytimes.com/2025/02/07/business/usaid-conspiracy-theories-disinformation.html?searchResultPosition=1
Oberauer K., Lewandowsky S. (2019). Addressing the theory crisis in psychology. Psychonomic Bulletin & Review, 26(5), 1596–1618. https://doi.org/10.3758/s13423-019-01645-2
Page M. J., McKenzie J. E., Bossuyt P. M., Boutron I., Hoffmann T. C., Mulrow C. D., Shamseer L., Tetzlaff J. M., Akl E. A., Brennan S. E., Chou R., Moher D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. British Medical Journal, 372, n71. https://doi.org/10.1136/bmj.n71
Parker K. A., Ivanov B., Compton J. (2012). Inoculation’s efficacy with young adults’ risky behaviors: Can inoculation confer cross-protection over related but untreated issues? Health Communication, 27(3), 223–233. https://doi.org/10.1080/10410236.2011.575541
Parker K. A., Rains S. A., Ivanov B. (2016). Examining the “blanket of protection” conferred by inoculation: The effects of inoculation messages on the cross-protection of related attitudes. Communication Monographs, 83(1), 49–68. https://doi.org/10.1080/03637751.2015.1030681
Pelau C., Pop M.-I., Stanescu M., Sanda G. (2023). The breaking news effect and its impact on the credibility and trust in information posted on social media. Electronics, 12(2), 423. https://doi.org/10.3390/electronics12020423
Pennycook G., Berinsky A. J., Bhargava P., Lin H., Cole R., Goldberg B., Lewandowsky S., Rand D. (2024). Technique-based inoculation and accuracy prompts must be combined to increase truth discernment online. PsyArXiv Preprints. https://osf.io/preprints/psyarxiv/5a9xq_v1
Pennycook G., Binnendyk J., Newton C., Rand D. G. (2021). A practical guide to doing behavioral research on fake news and misinformation. Collabra: Psychology, 7(1), 25293. https://doi.org/10.1525/collabra.25293
Pfau M. (1997). Inoculation model of resistance to influence. In Barnett G. A., Boster F. J. (Eds.), Progress in communication sciences: Advances in persuasion (Vol. 13, pp. 133–171). Ablex.
Pryor B., Steinfatt T. M. (1978). The effects of initial belief level on inoculation theory and its proposed mechanisms. Human Communication Research, 4, 217–230.
Richards A. S., Banas J. A. (2018). The opposing mediational effects of apprehensive threat and motivational threat when inoculating against reactance to health promotion. Southern Communication Journal, 83(4), 245–255.
Roozenbeek J., Maertens R., McClanahan W., van der Linden S. (2021). Disentangling item and testing effects in inoculation research on online misinformation: Solomon revisited. Educational and Psychological Measurement, 81(2), 340–362.
Roozenbeek J., van der Linden S. (2019). Fake news game confers psychological resistance against online misinformation. Palgrave Communications, 5(1), 1–10. https://doi.org/10.1057/s41599-019-0279-9
Roozenbeek J., van der Linden S. (2020). Breaking Harmony Square: A game that “inoculates” against political misinformation. The Harvard Kennedy School Misinformation Review, 1(8), 1–26. https://doi.org/10.37016/mr-2020-47
Roozenbeek J., van der Linden S., Goldberg B., Rathje S., Lewandowsky S. (2022). Psychological inoculation improves resilience against misinformation on social media. Science Advances, 8(34), eabo6254. https://doi.org/10.1126/sciadv.abo6254
Sultan M., Tump A. N., Geers M., Lorenz-Spreen P., Herzog S. M., Kurvers R. H. J. M. (2022). Time pressure reduces misinformation discrimination ability but does not alter response bias. Scientific Reports, 12(1), 22416.
Tandoc E. C., Seet S. (2023). Winning the game against fake news? Using games to inoculate adolescents and young adults in Singapore against fake news. Estudios sobre el Mensaje Periodistico, 29(4), 771–781. https://doi.org/10.5209/esmp.88599
Thomas E., Sardarizadeh S. (2024, October 25). How a deleted LinkedIn post was weaponised and seen by millions before the Southport riot. BBC. https://www.bbc.com/news/articles/c99v90813j5o
van der Linden S. (2022). Misinformation: Susceptibility, spread, and interventions to immunize the public. Nature Medicine, 28(3), 460–467.
van der Linden S. (2024). Countering misinformation through psychological inoculation. Advances in Experimental Social Psychology, 69, 1–58. https://doi.org/10.1016/bs.aesp.2023.11.001
van der Linden S., Roozenbeek J. (2020). Psychological inoculation against fake news. In Greifeneder R., Jaffé M., Newman E., Schwarz N. (Eds.), The psychology of fake news: Accepting, sharing, and correcting misinformation (pp. 147–169). Routledge. https://doi.org/10.4324/9780429295379-11
Vraga E. K., Bode L., Tully M. (2022). The effects of a news literacy video and real-time corrections to video misinformation related to sunscreen and skin cancer. Health Communication, 37(13), 1622–1630.
Wendling M. (2024, October 17). ‘You stole the election’: Nervous volunteers on the front line of conspiracies. BBC. https://www.bbc.com/news/articles/cn0e71yp1e1o
Ziemer C., Rothmund T. (2024). Psychological underpinnings of misinformation countermeasures. Journal of Media Psychology: Theories, Methods, and Applications. Advance online publication. https://doi.org/10.1027/1864-1105/a000407

Biographies

Daniel Loughnan is a Ph.D. candidate in the Communication and Media research group of the Behavioral Science Institute at Radboud University. He holds an M.Sc. in Behavioral and Social Sciences from the University of Groningen with a specialization in Social Psychology.
Aart van Stekelenburg is an assistant professor in the Communication and Media research group of the Behavioral Science Institute at Radboud University. He obtained his Ph.D. from Radboud University (communication science).
J. Loes Pouwels is an assistant professor in the Developmental Psychology research group of the Behavioral Science Institute at Radboud University. She obtained her Ph.D. from Radboud University.
Marieke L. Fransen is a professor in the Communication and Media research group of the Behavioral Science Institute at Radboud University. She was previously an associate and assistant professor at the University of Amsterdam. She has a background in social psychology and obtained her Ph.D. from the University of Twente (communication science).
Mariska Kleemans is a professor in the Communication and Media research group of the Behavioral Science Institute at Radboud University. She obtained her Ph.D. from Radboud University (communication science).