We all have opinions, but no one wants to hear all of them. (Credit: Tero Vesalainen on Shutterstock)
A small group of prolific ‘posters’ is ruining social media for the rest of us.
In A Nutshell
• Americans vastly overestimate online toxicity. Only 3.1% of Reddit users have posted severely toxic comments, but the average American believes 43% have done so—a 13-fold overestimation.
• A tiny group creates most harmful content. The 3.1% of Reddit users who post toxic comments generate 33% of all content on the platform, but people wrongly assume this volume of toxicity comes from widespread participation rather than a small, highly active group.
• Misperceptions fuel unnecessary pessimism. When Americans believe nearly half of social media users post harmful content, they feel more negative emotions and perceive greater moral decline in society than actually exists.
• Simple education can fix the problem. Learning the truth about how few users create toxic content made study participants feel more positive, believe less strongly in moral decline, and better understand that most people share their desire for less harmful online content.
Is social media more akin to a garbage dump than a town square at this point? Many believe the internet is dominated by trolls, but research suggests that notion may be overblown.
Just 3% of Reddit users have posted severely toxic comments, but Americans believe that number is 43%. This dramatic misperception reveals how a tiny minority of prolific accounts shapes the public’s view of social media and society itself.
Researchers from Stanford University surveyed over 1,000 Americans and discovered they consistently overestimated the prevalence of harmful online behavior by substantial margins. The findings, published in PNAS Nexus, show most people vastly miscalculate how many of their fellow citizens post hateful content and share false news on social platforms.
Study participants believed nearly half of all Reddit accounts had contributed severely hateful, aggressive, or disrespectful comments. Platform data told a different story. Only 3.1% of active accounts posted such content over an 18-month period. That gap is a 13-fold overestimation.
The pattern wasn’t limited to toxic language. Americans guessed that 47% of Facebook users had shared false news stories when the actual figure sits at 8.5%. Notably, participants estimated one-third of Facebook users were “super-sharers” who posted 10 or more fake news articles. The reality? Less than half of one percent, a roughly 100-fold overestimation.
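For readers who want to check the arithmetic, the quoted ratios fall out of simple division. A minimal sketch follows; note that the super-sharer base rate used below is an assumption, since the article reports only "less than half of one percent":

```python
# Back-of-the-envelope check of the Facebook overestimation ratios.
# ACTUAL_SUPER is an assumed value: the article says only "less than half
# of one percent," and roughly a third of a percent is consistent with
# the quoted "100-fold" gap.
PERCEIVED_SHARERS = 0.47   # Americans' guess: share of users who shared false news
ACTUAL_SHARERS = 0.085     # actual share, per the study
PERCEIVED_SUPER = 1 / 3    # guessed share of "super-sharers" (10+ fake articles)
ACTUAL_SUPER = 0.0033      # assumption; the article reports only "< 0.5%"

print(f"False-news sharing: {PERCEIVED_SHARERS / ACTUAL_SHARERS:.1f}-fold overestimate")
print(f"Super-sharers: {PERCEIVED_SUPER / ACTUAL_SUPER:.0f}-fold overestimate")
```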
Lead researcher Angela Lee and her colleagues framed the central question: when Americans go on social media, how many of their fellow citizens do they expect to post harmful content? The answer shows a troubling disconnect between perception and reality that may be fueling unnecessary cynicism about society.
Small Group of Toxic Users Creates Most Harmful Content
The research team analyzed actual platform data from multiple sources to compare against public perception. For Reddit, they examined accounts that posted content classified as “hateful, aggressive, and disrespectful” by Google’s Perspective API. This artificial intelligence tool flags content as toxic when it estimates that 90% of people would view it as hateful, aggressive, or disrespectful.
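For the technically curious, here is a minimal sketch of what such a toxicity check can look like. The endpoint and request format come from Google's public Perspective API documentation; the API key is a placeholder, and the study's exact classification pipeline is not described in this article:

```python
# Minimal sketch: score a comment with Google's Perspective API and apply
# the 0.9 threshold described above. API_KEY is a placeholder; the study's
# actual pipeline may differ.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: obtain one from Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def is_severely_toxic(text: str, threshold: float = 0.9) -> bool:
    """Return True if Perspective estimates that at least `threshold`
    of readers would find the comment toxic."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score >= threshold

# Example: a polite comment should fall well below the 0.9 cutoff.
print(is_severely_toxic("Thanks for sharing, this was really helpful!"))
```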
Across three separate studies, Americans consistently misjudged the scope of the problem. The first study involved 295 participants who estimated that over a third of all active Reddit accounts had posted toxic content at least once. Even the median guess landed at 30%, still 10 times higher than the 3% reality.
Study participants accurately estimated how much total content these problematic accounts produce. They just vastly overestimated how many accounts were creating it.
Platform data shows the 3.1% of Reddit users posting toxic content generated 33% of all comments on the site, amounting to over 559 million posts. Participants guessed these toxic accounts produced 38% of content, closely tracking reality. But they thought 38% of all users fell into this category rather than just 3.1%.
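Those figures imply a striking concentration, which a quick calculation makes vivid. The implied total volume and the output ratio below are derived from the article's numbers rather than reported by the study itself:

```python
# Back-of-the-envelope check of the concentration figures quoted above.
# The inputs come from the article; the roughly 15x ratio is a derived
# illustration, not a number the study reports.
toxic_share_of_users = 0.031     # 3.1% of active accounts...
toxic_share_of_posts = 0.33      # ...produce 33% of all comments
toxic_post_count = 559_000_000   # over 559 million posts

# Implied total volume of comments in the 18-month window.
total_posts = toxic_post_count / toxic_share_of_posts
print(f"Implied total comments: {total_posts:,.0f}")  # ~1.69 billion

# How much more prolific is the average toxic-posting account than
# the average non-toxic one?
per_toxic_user = toxic_share_of_posts / toxic_share_of_users
per_other_user = (1 - toxic_share_of_posts) / (1 - toxic_share_of_users)
print(f"Relative output: {per_toxic_user / per_other_user:.1f}x")  # ~15.4x
```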
People encounter roughly the expected amount of toxic content but attribute it to far more widespread participation than actually occurs. Rather than recognizing a small group of highly active accounts, people imagine toxic behavior as broadly distributed across the user base.
Researchers tested whether participants simply misunderstood how “toxic” was defined. In a second study with 185 Americans, participants viewed 20 actual comments from Reddit and identified which ones met the study’s criteria for toxicity. They proved highly accurate, correctly distinguishing toxic from non-toxic content.
Yet they still overestimated prevalence. After demonstrating they understood what qualified as toxic, these same participants guessed 45% of Reddit users had posted such content. Only eight out of 185 participants did not overestimate the number. People grasp what harmful content looks like but badly misjudge how many users create it.

Believing Social Media Is More Toxic Changes How People Feel
These warped perceptions carry real consequences for how Americans view their country. In a third experiment with 611 participants, researchers randomly assigned people to either learn the true statistics about online toxicity or read neutral information about Reddit’s history.
Those who discovered that their estimates were substantially inflated, and that a tiny minority creates most harmful content, reported feeling more positive emotions afterward.
They also perceived less decline in the moral character of their fellow Americans. Beliefs about moral decline persist stubbornly across cultures, with previous research documenting this pessimism in at least 60 countries. Yet a brief educational intervention about social media toxicity was enough to shift these perceptions.
The correction also reduced what researchers call “pluralistic ignorance” around social media content. This occurs when people underestimate how much others share their values. Americans in the study underestimated how strongly their fellow citizens want to see less harmful content online. Learning that toxic content comes from a tiny minority helped correct this misperception.
The misperceptions remained consistent regardless of how much time people spent on Reddit or social media generally. Researchers found no significant link between platform usage and estimation accuracy, meaning even regular users fail to grasp the concentrated nature of harmful content production.
Why People Overestimate Toxic Social Media Users
Several factors may explain why Americans so badly misjudge online toxicity. People generally overestimate the size of small demographic groups, which could extend to the minority of harmful users. The sheer volume of toxic content people encounter might lead them to assume many different people are creating it without tracking who actually posts it.
Social media algorithms amplify negative content, which could make its producers appear more numerous than they are. Research shows people pay more attention to negative information and remember it better than positive content. Combined with the ability to post anonymously or pseudonymously online, these factors create an environment where a few prolific accounts can seem like broad public opinion.
The study focused specifically on toxic language and false news sharing, two distinct forms of harmful content with different motivations behind them. However, both are produced by small groups of highly active accounts and are perceived as degrading online discourse.
Teaching People the Truth About Social Media Toxicity
Researchers noted their educational intervention was brief and specific, yet it still moved the needle on participants’ beliefs about moral decline: a short, two-paragraph explanation that most people never share toxic content was enough to shift perceptions. Targeted education about how social media works could help counter the excessive cynicism these misperceptions create.
The study did not find that correcting misperceptions changed participants’ overall cynicism or generalized trust in human nature, indicating limits to how far people generalize from learning about social media dynamics. However, the documented effects on emotions, perceived moral decline, and pluralistic ignorance show meaningful impacts from a simple correction.
The researchers argue their findings show how social media interactions may undermine social cohesion through a specific perceptual mechanism. When people believe many of their fellow citizens are posting harmful content, they develop more negative views of society and perceive greater moral decline than actually exists.
“If social media platforms are to remain a part of modern society, people should recognize that the opinions they see are not representative of public opinion,” the researchers write.
The findings suggest that some pessimism about society may stem from a simple misunderstanding about who creates harmful online content. Mistaking a vocal minority for a majority leaves Americans feeling worse about their country and fellow citizens than the facts warrant. Teaching people that most social media users never post harmful content could help restore faith in fellow citizens and reduce unwarranted pessimism about the state of society.
Paper Notes
Limitations
The researchers acknowledge several limitations in their work. The study focused on two specific types of harmful content (toxic language and false news sharing) and may not capture all forms of problematic online behavior. Future research should examine whether similar misperceptions exist for other types of harmful content or for offline harmful behavior. Additionally, while the study showed that correcting misperceptions improved how people felt and reduced perceived moral decline, it did not significantly change participants’ overall cynicism or generalized trust in human nature. More research is needed to understand the specific behaviors that result from these misperceptions and whether certain platform features like anonymous or pseudonymous posting amplify the effects.
Funding and Disclosures
Angela Lee is supported by the Mark & Mary Stevens Stanford Interdisciplinary Graduate Research Fellowship and the Stanford Social Impact Labs. No other funding was received for this research. The authors declare no competing interests.
Publication Details
Authors: Angela Y. Lee, Eric Neumann, Jamil Zaki, and Jeffrey Hancock. Affiliations: Department of Communication, Stanford University, Palo Alto, CA (Lee, Hancock); Department of Psychology, Stanford University, Palo Alto, CA (Neumann, Zaki). The study was published in PNAS Nexus, Volume 4, Issue 12, December 2025. DOI: 10.1093/pnasnexus/pgaf310. The research involved three studies conducted from June 2023 to April 2024 with a total of 1,090 American adults recruited via CloudResearch Connect and matched to national quotas for age, gender, race, and ethnicity from the 2020 United States Census. All procedures were approved by the Stanford University Institutional Review Board.