Boosting cognitive competences in online environments
Examples

Lateral reading
What is the boost?
Lateral reading is a simple heuristic for online fact-checking: Open multiple tabs in your browser and search the Web to verify the credibility of the information.
Which challenges does the boost tackle?
False and misleading information.
How does it work?
When a user views information from an unfamiliar source, they leave the page and verify the author/organization and the claims elsewhere (e.g., using search engines, Wikipedia).
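To make the mechanics concrete, here is a playful Python sketch (our illustration, not part of the published intervention) that mimics lateral reading by opening new browser tabs with searches about the source itself; the search engine and query phrasings are assumptions:

```python
# Illustrative sketch only: mimic lateral reading by opening new browser tabs
# that search the web for information *about* the source itself.
# The search engine and query phrasings are assumptions for illustration.
import webbrowser
from urllib.parse import quote_plus

def read_laterally(source_name: str) -> None:
    """Open one tab per verification query about the given source."""
    queries = [
        f"{source_name} credibility",            # who is behind this source?
        f"{source_name} funding",                # who pays for it?
        f"{source_name} site:en.wikipedia.org",  # what does Wikipedia say?
    ]
    for query in queries:
        webbrowser.open_new_tab("https://www.google.com/search?q=" + quote_plus(query))

read_laterally("Example News Network")
```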
Which competences does the boost foster?
Verifying online information and a source’s trustworthiness.
What is the evidence behind it?
Wineburg and McGrew (2017, 2019) conducted a study with Stanford undergraduates, university professors, and professional fact-checkers to determine the most effective strategies for evaluating the credibility of information online. Whereas undergraduates and professors stayed on the web page and read vertically, fact-checkers, when landing on an unfamiliar website, opened new tabs and read “laterally,” that is, they verified the source’s credibility elsewhere on the web. Lateral reading was also incorporated into a school curriculum (the Civic Online Reasoning curriculum): Students in the treatment group, which was taught the lateral reading strategy, were more likely to accurately judge a website’s credibility than a control group (McGrew et al., 2019; McGrew, 2020). In a recent field experiment, Wineburg et al. (2022) demonstrated that students in treatment classrooms (n = 271) improved significantly in their ability to judge the credibility of digital content compared to students in control classrooms (n = 228).
Key references
- Brodsky, J. E., Brooks, P. J., Scimeca, D., Todorova, R., Galati, P., Batson, M., Grosso, R., Matthews, M., Miller, V., & Caulfield, M. (2021). Improving college students’ fact-checking strategies through lateral reading instruction in a general education civics course. Cognitive Research: Principles and Implications, 6, 1–18. https://doi.org/10.1186/s41235-021-00291-4
- McGrew, S., Smith, M., Breakstone, J., Ortega, T., & Wineburg, S. (2019). Improving university students’ web savvy: An intervention study. British Journal of Educational Psychology, 89, 485–500. https://doi.org/10.1111/bjep.12279
- McGrew, S. (2020). Learning to evaluate: An intervention in civic online reasoning. Computers & Education, 145, 103711. https://doi.org/10.1016/j.compedu.2019.103711
- Wineburg, S., Breakstone, J., McGrew, S., Smith, M. D., & Ortega, T. (2022). Lateral reading on the open Internet: A district-wide field study in high school government classes. Journal of Educational Psychology. Advance online publication. https://doi.org/10.1037/edu0000740
- Wineburg, S., & McGrew, S. (2017). Lateral reading: Reading less and learning more when evaluating digital information. Stanford History Education Group Working Paper No. 2017-A1. http://dx.doi.org/10.2139/ssrn.3048994
- Wineburg, S., & McGrew, S. (2019). Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information. Teachers College Record, 121, 1–40. https://eric.ed.gov/?id=EJ1262001
This short video from the Stanford History Education Group explains how to use lateral reading and outlines the research behind it. Source: Stanford History Education Group (2020).
Simple decision trees to judge the trustworthiness of information online
What is the boost?
The fast-and-frugal decision tree (FFT) “Can you trust this information?” (see figure below) is an example of a simple tool for deciding whether or not to trust a piece of information encountered online. It uses three key questions identified by Breakstone et al. (2018) and Wineburg and McGrew (2019) as cues: (a) “Who is behind this information?” (b) “What is the evidence?” and (c) “What do other sources say?”
Which challenges does the boost tackle?
False and misleading information.
How does the boost work?
To quickly decide whether or not a piece of information can be trusted, a user goes through the cues in the decision tree sequentially. Cues are framed as questions, starting with the most important one: “Who is behind the information?” If, after lateral reading, the user deems the source untrustworthy, they exit the tree with the decision not to trust the information. If lateral reading shows the source to be trustworthy, the user moves on to the next cue, and so on. The tree thus offers a possible exit decision at each cue.
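To make the sequential exit structure concrete, here is a minimal Python sketch of such a tree. The cue questions follow the tree described above; the function name and the interactive yes/no interface are illustrative assumptions, not part of any published tool:

```python
# Minimal sketch of the fast-and-frugal tree "Can you trust this information?"
# The three cue questions follow the tree described above; the interactive
# yes/no interface is an illustrative assumption, not the published tool.

CUES = [
    "Who is behind the information? Is the source trustworthy?",
    "What is the evidence? Does it support the claims?",
    "What do other sources say? Do they corroborate the information?",
]

def can_you_trust_this_information() -> bool:
    """Work through the cues in order; any 'no' exits the tree immediately."""
    for question in CUES:
        answer = input(f"{question} [y/n] ").strip().lower()
        if answer != "y":
            return False  # exit the tree: do not trust the information
    return True  # all cues passed: the information can be trusted

if __name__ == "__main__":
    if can_you_trust_this_information():
        print("Decision: the information can be trusted.")
    else:
        print("Decision: do not trust the information.")
```

Note how the structure mirrors the boost: each cue offers an exit, and the most important question is asked first, so a clearly untrustworthy source is rejected without further effort.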
Which competences does the boost foster?
Verifying online information and judging the trustworthiness (credibility) of the source.
What is the evidence behind the boost?
Evidence for the three questions used as cues in the fast-and-frugal decision tree (see figure below) comes from research by the Stanford History Education Group. For instance, McGrew et al. (2019) found that after two 75-minute lessons on evaluating the credibility of online sources (an extended version of the three questions outlined above), students in the treatment condition (n = 29) were more than twice as likely to score higher on an online reasoning assessment at posttest than at pretest, whereas students in the control condition (n = 38) were equally likely to score higher or lower at posttest, indicating that the intervention was successful.
Evidence that fast-and-frugal decision trees are effective decision aids in general comes from basic research in computer science and user studies (Banerjee et al., 2017), as well as from their applied use in many different domains (e.g., medicine). For reviews of fast-and-frugal decision trees (and simple decision aids more generally), see Katsikopoulos et al. (2021) and Hafenbrädl et al. (2016).
The effectiveness of this particular boost (the FFT “Can you trust this information?”; see figure below) has not yet been directly tested.
Key references
Breakstone, J., McGrew, S., Smith, M., Ortega, T., & Wineburg, S. (2018). Teaching students to navigate the online landscape. Social Education, 82(4), 219–221. https://www.ingentaconnect.com/content/ncss/se/2018/00000082/00000004/art00010
McGrew, S., Smith, M., Breakstone, J., Ortega, T., & Wineburg, S. (2019). Improving university students’ web savvy: An intervention study. British Journal of Educational Psychology, 89, 485–500. https://doi.org/10.1111/bjep.12279
Inoculation against false and misleading information and manipulation online
What is the boost?
Inoculation (also known as prebunking) is a preemptive intervention that boosts people’s resilience to false and misleading information and manipulation online. Inoculation involves exposure to a weakened form of common disinformation and manipulation strategies.
Which challenges does the boost tackle?
False and misleading information; manipulation online.
How does the boost work?
People learn about common strategies used to manipulate and mislead the public (e.g., to cast doubt on climate change or spread conspiracy theories). Interventions can be implemented as a game or as warning messages on social networks (see below for examples, and Lewandowsky et al., 2020, for a hands-on guide to inoculation and debunking techniques).
Which competences does the boost foster?
Cognitive resilience to manipulation.
What is the evidence behind the boost?
Inoculation or prebunking interventions have been tested across a variety of contexts, applications, and topics (Roozenbeek et al., 2022; Lewandowsky & van der Linden, 2021; van der Linden et al., 2017). Recent evidence suggests that the effect of inoculation can persist for at least three months after the intervention (Maertens et al., 2021).
Key references
- Lewandowsky, S., Cook, J., Ecker, U. K. H., Albarracín, D., Amazeen, M. A., Kendeou, P., Lombardi, D., Newman, E. J., Pennycook, G., Porter, E., Rand, D. G., Rapp, D. N., Reifler, J., Roozenbeek, J., Schmid, P., Seifert, C. M., Sinatra, G. M., Swire-Thompson, B., van der Linden, S., Vraga, E. K., Wood, T. J., & Zaragoza, M. S. (2020). The debunking handbook 2020. Available at https://sks.to/db2020. https://doi.org/10.17910/b7.1182
- Lewandowsky, S., & van der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology, 32, 348–384. https://doi.org/10.1080/10463283.2021.1876983
- Roozenbeek, J., Traberg, C. S., & van der Linden, S. (2022). Technique-based inoculation against real-world misinformation. Royal Society Open Science, 9(5), 211719. https://doi.org/10.1098/rsos.211719
- van der Linden, S., Maibach, E., Cook, J., Leiserowitz, A., & Lewandowsky, S. (2017). Inoculating against misinformation. Science, 358, 1141–1142. https://doi.org/10.1126/science.aar4533
Note
See inoculation.science for “research and resources on inoculation theory applied to misinformation” (maintained by the Cambridge Social Decision-Making Lab).
Overview of inoculation interventions
There are two components to inoculation (Cook et al., 2017):
- An explicit warning about a potential threat of disinformation or manipulation—for example, a warning about the statements of a panel of unqualified “experts” casting doubt on climate change.
- A refutation of an anticipated argument, which exposes the disinformation strategy.
In some cases, only the first component is used (see example below).
Inoculation 1.0: Prebunking messages
![Example of a prebunking message on Twitter](/digital-cognitive/twitter_inoculation_hu1596dd3c7c2d69752701f3ea78bbdc66_538874_2000x2000_fit_lanczos_2.png)
Key references
- Cook, J., Lewandowsky, S., & Ecker, U. K. H. (2017). Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS ONE, 12, Article e0175799. https://doi.org/10.1371/journal.pone.0175799
- van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E. (2017). Inoculating the public against misinformation about climate change. Global Challenges, 1, Article 1600008. https://doi.org/10.1002/gch2.201600008
Inoculation 2.0: Gamified interventions
Bad News game
![Screenshot of the Bad News game](/digital-cognitive/badnews_hu6095237e5e3fc7def0623f739deb466d_91031_2000x2000_fit_lanczos_2.png)
Bad News (getbadnews.com) is a game that aims to develop a “broad-spectrum vaccine” against disinformation. It focuses on the tactics commonly used to produce disinformation, rather than on the content of a specific disinformation campaign. By playing Bad News, participants learn six common strategies for spreading disinformation (according to the NATO Strategic Communications Centre of Excellence, 2017):
- impersonating people or famous sources online
- producing provocative emotional content
- amplifying group polarization
- floating conspiracy theories
- discrediting opponents
- trolling
The underlying idea of the game is that players train to become expert manipulators by applying disinformation techniques—thereby developing a competence to detect manipulation that they can use whenever they are online. The game is set in a weakened form of an environment where people are apt to encounter false information: social media.
Key references
- Roozenbeek, J., & van der Linden, S. (2019). Fake news game confers psychological resistance against online misinformation. Palgrave Communications, 5, Article 65. https://doi.org/10.1057/s41599-019-0279-9
- Basol, M., Roozenbeek, J., & van der Linden, S. (2020). Good news about bad news: Gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition, 3, Article 2. https://doi.org/10.5334/joc.91
- Maertens, R., Roozenbeek, J., Basol, M., & van der Linden, S. (2021). Long-term effectiveness of inoculation against misinformation: Three longitudinal experiments. Journal of Experimental Psychology: Applied, 27, 1–16. https://doi.org/10.1037/xap0000315
Cranky Uncle game
Cranky Uncle is a game that uses cartoons, humor, and critical thinking to expose the misleading techniques of science denial and build public resilience against misinformation. The app was developed by Monash University scientist John Cook, in collaboration with creative agency Autonomy.
Key reference
Cook, J. (2020). Cranky Uncle vs. climate change: How to understand and respond to climate science deniers. Citadel Press. https://crankyuncle.com/book/
Radicalise app: Inoculation against extremist persuasion techniques

Radicalise is a game that aims to combat the effectiveness of online recruitment strategies used by extremist organizations. It inoculates players by simulating the key techniques and methods used to recruit and radicalize individuals via social media platforms: identifying vulnerable individuals, gaining their trust, isolating them from their community, and pressuring them into committing a criminal act in the name of the extremist organization.
Key reference
Saleh, N. F., Roozenbeek, J., Makki, F. A., McClanahan, W. P., & van der Linden, S. (2021). Active inoculation boosts attitudinal resistance against extremist persuasion techniques: A novel approach towards the prevention of violent extremism. Behavioural Public Policy, 1–24. https://doi.org/10.1017/bpp.2020.60
Simple self-reflection exercises to inoculate against online manipulation
What is the boost?
Taking its cue from inoculation and learning from experience, self-reflection helps people see through manipulation strategies deployed online. The boost thereby increases people’s autonomy online, where precisely targeted advertising can be subtle and manipulative.
Which challenges does the boost tackle?
There is a knowledge gap between users and advertisers online. Platforms such as social media sites collect behavioral data about their users, which advertisers can exploit by tailoring images and messages to inferred vulnerabilities (e.g., extraverts’ preference for images of large groups of people). In the worst case, these techniques are manipulative and endanger people’s autonomy online. The usual transparency measures, such as the “Why am I seeing this?” information on Facebook ads, are not effective in mitigating this danger.
How does the boost work?
People are prompted to actively reflect on their own personality (e.g., whether they are introverted or extraverted), for instance by filling out a simple personality test (even one without feedback).
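To illustrate what such a prompt could look like, here is a minimal sketch based on the two extraversion items of the public-domain Ten-Item Personality Inventory (TIPI; Gosling et al., 2003); the actual questionnaire used in the study may differ:

```python
# Minimal sketch of a self-reflection prompt on extraversion, modeled on the
# two extraversion items of the public-domain Ten-Item Personality Inventory
# (TIPI; Gosling et al., 2003). The instrument used in the study may differ.
ITEMS = [
    ("I see myself as: extraverted, enthusiastic.", False),
    ("I see myself as: reserved, quiet.", True),  # reverse-scored item
]

def extraversion_self_rating() -> float:
    """Average of 1-7 agreement ratings, reversing the reverse-keyed item."""
    ratings = []
    for text, reverse in ITEMS:
        rating = int(input(f"{text} (1 = disagree strongly ... 7 = agree strongly): "))
        ratings.append(8 - rating if reverse else rating)
    return sum(ratings) / len(ITEMS)

print(f"Your extraversion self-rating: {extraversion_self_rating():.1f} out of 7")
```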
Which competence(s) does the boost foster?
The ability to detect the strategies behind manipulation attempts online (e.g., microtargeted advertising).
What is the evidence behind the boost?
In one experiment, participants were asked to identify the advertisements that had been targeted at them (Lorenz-Spreen et al., 2021). Participants who had first completed the self-reflection intervention detected up to 26 percentage points more targeted advertisements than the control group.
Key reference
Lorenz-Spreen, P., Geers, M., Pachur, T., Hertwig, R., Lewandowsky, S., & Herzog, S. M. (2021). Boosting people’s ability to detect microtargeted advertising. Scientific Reports, 11, Article 15541. https://doi.org/10.1038/s41598-021-94796-z
Self-nudging online
What is the boost?
Self-nudging online refers to self-imposed interventions in one’s proximal digital choice architecture aimed at enhancing self-governance and lowering distractions. Learn more about self-nudging.
Which challenges does the boost tackle?
Distracting information environments.
How does the boost work?
People act as their own choice architects by applying the psychological principles behind nudges to their own proximal digital environments. For instance, just as one can remove tempting junk food from view, one can also hide addictive social media apps.
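As one concrete illustration (ours, not from the source), the sketch below implements a digital self-nudge in Python by adding friction: it redirects distracting sites to localhost in the operating system’s hosts file. The site list and file path are assumptions, and editing the file requires administrator rights:

```python
# Illustrative self-nudge: add friction by redirecting distracting sites to
# localhost in the operating system's hosts file (requires admin rights).
# The site list and file path are assumptions for illustration.
HOSTS_PATH = "/etc/hosts"  # on Windows: C:\Windows\System32\drivers\etc\hosts
DISTRACTING_SITES = ["facebook.com", "www.facebook.com", "tiktok.com", "www.tiktok.com"]

def apply_self_nudge() -> None:
    """Append hosts entries so the listed sites no longer resolve."""
    with open(HOSTS_PATH, "a") as hosts:
        hosts.write("\n# self-nudge: hide distracting social media sites\n")
        for site in DISTRACTING_SITES:
            hosts.write(f"127.0.0.1 {site}\n")

if __name__ == "__main__":
    apply_self_nudge()
```

The point of the design is that the user imposes the friction on themselves and can remove it at any time; the environment changes, not the user’s options.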
Which competences does the boost foster?
Self-governance.
What is the evidence behind the boost?
The boost draws on related evidence behind nudging interventions and on research on situational control and habit formation (see the review in Kozyreva et al., 2020).
Key references
- Hertwig, R., & Reijula, S. (2020). Creating citizen choice architects. Behavioral Scientist. https://behavioralscientist.org/creating-citizen-choice-architects/
- Kozyreva, A., Lewandowsky, S., & Hertwig, R. (2020). Citizens versus the internet: Confronting digital challenges with cognitive tools. Psychological Science in the Public Interest, 21(3), 103–156. https://doi.org/10.1177/1529100620946707 [see the section “Self-nudging: Boosting control over one’s digital environment,” pp. 132–135]
- Reijula, S., & Hertwig, R. (2022). Self-nudging and the citizen choice architect. Behavioural Public Policy, 6(1), 119–149. https://doi.org/10.1017/bpp.2020.5
![Examples of self-nudges in digital environments](/digital-cognitive/self-nudge_fig_10_hub3acfda17108d7c071973bb753fa5ae9_223886_2000x2000_fit_lanczos_2.png)
Source: [Kozyreva et al. (2020)](https://doi.org/10.1177/1529100620946707).