Individual Misinformation Tagging Reinforces Echo Chambers; Collective Tagging Does Not

Fears of online misinformation motivate people to fact-check one another. Here, the authors show that individual misinformation tagging (via direct replies) pushes people into echo chambers but find no such effect for collective tagging (via Community Notes).

Misinformation spreads rapidly on social media, but our efforts to combat it sometimes backfire. Our research journey began with a puzzle: despite growing efforts to fact-check misinformation online, we see increasing echo chambers where people predominantly engage with like-minded sources. We wondered whether fact-checking itself might unintentionally contribute to this pattern. Does fact-checking reinforce echo chambers rather than reduce them? Does fact-checking cause criticized posters to become defensive and retreat into information bubbles?

To investigate this, we analyzed over 700,000 tweets from users who received fact-checking through two different approaches: individual tagging (where users directly reply with links to fact-checking websites, such as PolitiFact) and collective tagging (through Twitter's Community Notes system, where fact-checks must be approved by a diverse group of users before being shown; see Fig. 1a).

We measured two key variables before and after users received fact-checks: political diversity and content diversity. Political diversity captured whether users engaged with sources holding opposing political stances, such as when a typically conservative user shared content from liberal sources. Content diversity measured whether users explored unfamiliar topics, which we quantified by comparing each tweet's content against the user's historical posts.
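The content-diversity measure (comparing each tweet against the author's historical posts) can be illustrated with a toy bag-of-words sketch. This is an assumption-laden simplification, not the paper's actual operationalization: the whitespace tokenization and the `content_diversity` helper below are hypothetical.

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def content_diversity(new_tweet: str, history: list[str]) -> float:
    """Score a tweet's novelty as 1 minus its similarity to the author's
    pooled historical tweets: 0 = entirely familiar, 1 = entirely new."""
    new_vec = Counter(new_tweet.lower().split())
    hist_vec = Counter(" ".join(history).lower().split())
    return 1.0 - cosine_similarity(new_vec, hist_vec)
```

A tweet that repeats the author's usual vocabulary scores near 0, while a tweet sharing no words with the author's history scores 1; real pipelines would use embeddings or topic models rather than raw word counts, but the logic is the same.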

Fig. 1: Misinformation Tagging and Outcomes Measurement. a Individual misinformation tagging in which individuals cite PolitiFact fact-checking articles. Collective misinformation tagging through the Community Notes platform, which selectively exposes verified misinformation tags that receive diverse votes as helpful. b Operationalization of tweet political and content diversity. Political diversity captures whether a poster cites a source with opposing political stance (binary 0/1), assessed from the aggregate stances of referenced sources. Content diversity captures whether a post discusses topics unfamiliar to the author’s historical tweets (continuous).

Our most striking finding was that users were more likely to receive fact-checks when they began exploring diverse political sources and topics. Consider a Twitter user who typically shares only conservative news sources suddenly engaging with liberal media, or someone who typically discusses COVID-19 vaccine skepticism starting to explore topics related to medicine and health. When users venture into new or opposing territory, they become more visible in the social media feeds of fact-checkers who hold different viewpoints, increasing their likelihood of being singled out and criticized.

This led to the study’s crucial finding: users’ responses to misinformation tagging vary significantly based on the method used. When receiving individual tags through replies with links to fact-checking websites like PolitiFact, users show an immediate decrease in their engagement with both politically and topically diverse content as they retreat into their information bubbles. In contrast, when receiving collective tags through the Community Notes system, we found no evidence of such retreat. Instead, users showed a short-term increase in content diversity, though this effect diminished over time.

Fig. 2: Political and content diversity change with the intervention of individual and collective misinformation tagging. a The x-axis denotes the timeline of tweets posted before and after tagging, with negative values representing the number of weeks before posting tagged tweets and positive values representing the number of weeks after. The y-axis represents political and content diversity, with dots indicating the diversity score for each corresponding week, and error bars showing 95% confidence intervals. Solid lines connect the dots revealing trends of political and content diversity before and after tagging, with gray dotted lines tracing the counterfactual trend if fact-checks had not occurred. b Illustration of political and content diversity dynamics before and after tagging. Before individual and collective tagging, posters exhibit increased political and content diversity, which increases the likelihood of encountering a fact-checker. After individual tagging, posters retreat into information bubbles; after collective tagging, they venture further beyond them.

Why the differences?

We found several key distinctions between individual and collective misinformation tags:

  1. Individual tags tend to be shorter, more toxic, and more emotionally charged 
  2. Collective tags are typically longer, more neutral in tone, and more carefully deliberated 
  3. Individual tags appear quickly but are more reactive, while collective tags take longer to release but undergo peer review

These differences matter because they affect how users receive and process corrections to their misconceptions. When users feel attacked or dismissed through toxic individual tags, they tend to defensively withdraw. But when presented with carefully constructed, collectively verified corrections, they're more likely to remain open to new information.

Intriguingly, the gap between individual and collective tagging persists even when we control for these linguistic differences and speed. This suggests that the power of collective tagging stems from something deeper than better and kinder writing alone: perhaps from the legitimacy that comes from diverse validation, or from the way collective systems structurally encourage cross-verification of information.

Fig. 3: Linguistic characteristics of fact-checking messages. a Univariate kernel density function for toxicity. b Univariate kernel density function for sentiment. c Univariate kernel density function for length (characters). d Histogram for reading ease. e Univariate kernel density function for delay (log-transformed days). The purple line (or bar) represents the distribution within individual misinformation tags, while the yellow line (or bar) represents the distribution within collective tags.

The Bigger Picture

Our findings suggest a paradox in how misinformation spreads and gets moderated. Users are most likely to post misinformation that draws fact-checks not when they're deeply entrenched in their echo chambers, but when they're actively trying to engage with new and unfamiliar information. This makes sense: people are more likely to make mistakes when exploring unfamiliar territory.

This insight has important implications for misinformation moderation strategies. While rapid, “vigilante” tagging might seem more efficient at addressing misinformation, it risks driving users back into their echo chambers. Collective systems like Community Notes might be slower, but appear more effective at maintaining an open and diverse information ecosystem.

Our research points to an underexplored aspect of misinformation moderation systems: whether they encourage or discourage information exploration. While individual tagging will likely remain an important tool, platforms should consider ways to make it more constructive, perhaps by encouraging more neutral, deliberative responses or by implementing collective verification systems.

The challenge ahead lies in balancing the need for quick responses to misinformation with the importance of maintaining an environment where users feel motivated to explore and actively engage with diverse viewpoints. As social media platforms continue to evolve, understanding these dynamics will be crucial for building healthier online information ecosystems that welcome deliberation by a broader swath of voices.

Methodological Note

We employed two causal inference approaches: interrupted time series (ITS) analysis and delayed feedback (DF) analysis. The ITS analysis examined trends in users’ behavior before and after receiving misinformation tags, while the DF analysis compared users who received tags with similar users who had not yet received them but would later. Under their respective identification assumptions, these methods allowed us to distinguish between natural behavioral changes and those specifically caused by misinformation tagging. To ensure robustness, we conducted additional analyses controlling for potential confounders, including automated accounts (bots), engagement with low-credibility sources, negative sentiment in posts, and direct replies to taggers. The results remained consistent across these specifications.
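The ITS design can be illustrated with a standard segmented regression on weekly diversity scores: a pre-intervention trend, plus terms for the level change and slope change at the week of tagging. This is a minimal sketch under textbook assumptions, not the paper's actual model specification; the function name and encoding below are illustrative.

```python
import numpy as np

def interrupted_time_series(y: np.ndarray, intervention_week: int) -> np.ndarray:
    """Fit y ~ b0 + b1*t + b2*post + b3*(t - t0)*post via ordinary least
    squares. b2 estimates the immediate level change at the intervention,
    b3 the change in trend afterwards."""
    t = np.arange(len(y), dtype=float)
    post = (t >= intervention_week).astype(float)
    X = np.column_stack([
        np.ones_like(t),                  # intercept (baseline diversity)
        t,                                # pre-intervention trend
        post,                             # immediate level change at tagging
        (t - intervention_week) * post,   # slope change after tagging
    ])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs  # [baseline, trend, level_change, slope_change]
```

On simulated data where diversity drops by a fixed amount at the tagging week while its trend is unchanged, the fitted `level_change` recovers that drop; the identification assumption is that, absent tagging, the pre-intervention trend would have continued (the gray counterfactual lines in Fig. 2).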
