(Image by ProStock Studio on Shutterstock)
Cornell study reveals the seven ways Americans try to shut each other down in social media news comments
In A Nutshell
- Nearly half of objections in online news comment sections are personal attacks, not reasoned arguments.
- Cornell researchers identified seven tactics people use to shut down comments, from insults to moral appeals to threats.
- Cross-talk between opposing political groups is common, challenging the idea that social media is only an echo chamber.
- Personal attacks may feel like silencing tactics, but they usually escalate conflict and deepen divides.
ITHACA, N.Y. — If you post a comment under a news story online, there is a good chance that any pushback you get will not focus on your point. Instead, it will go after you as a person. Cornell University researchers studied thousands of replies to trending news videos on YouTube and Twitter and found that nearly half of all objections took the form of personal attacks. These so-called ad hominem comments aim to discredit the speaker rather than engage with the argument.
The findings, published in PLOS ONE, show that news comment sections often bring people with opposing views into direct conflict. The trouble is that these confrontations usually escalate rather than lead to meaningful debate.
Examples were blunt. One reply read: “Quacks like a RACIST republican, you get called a RACIST republican.” In this study, attacks like these appeared far more often than careful corrections or reasoned disagreement.
The Seven Ways People Try to Shut Down Others
Led by Ph.D. candidate Ashley Shea, the Cornell team identified seven recurring tactics that commenters use when they decide a post should not stand. Ad hominem attacks were the most common, appearing in 42 to 45 percent of all objections. But name-calling was not the only strategy.
One was what the researchers called moral corruption. These replies told people their comment was wrong because it violated some moral principle. For example: “If you are a veteran then you know you can’t say that. Be proud and a soldier.” This tactic appealed to shared values and urged a supposedly better way to phrase a point.
Another common tactic was logical disqualification. These replies rejected a comment as factually wrong or illogical. One response read: “NO. Raising the debt ceiling pays bills mostly spent during the Trump years. You should study up before posting what you don’t understand.” Roughly 16 to 23 percent of objections fell into this category.
The study also uncovered darker interventions. About 3 percent of objections included physical threats. One reply said: “Anybody who attempts to ban AR-15s deserves death by the most painful means possible.” While rare, these comments marked a sharp escalation of hostility.
Three other tactics rounded out the typology. Some users practiced self control by announcing they were leaving the conversation altogether, often with angry farewells like “I’m not talking to you anymore. Everything you said is a lie. We’re done.” Others engaged in space control, telling their opponent to exit the conversation: “You make no sense. Go back to the kiddie table and let the adults talk.” The last tactic, content threats, dismissed the material itself without explanation, using blunt declarations such as “FAKE NEWS.”
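For readers who want to keep the seven tactics straight, the typology can be sketched as a simple lookup table. This is a hypothetical illustration built only from the descriptions in this article, not the researchers' actual coding instrument:

```python
# Hypothetical sketch of the seven-tactic typology as a lookup table.
# Tactic names and definitions are paraphrased from the article above;
# this is an illustration, not the study's actual codebook.
TACTICS = {
    "ad_hominem": "attacks the speaker rather than the argument",
    "moral_corruption": "claims the comment violates a moral principle",
    "logical_disqualification": "rejects the comment as factually wrong or illogical",
    "physical_threat": "threatens the commenter with violence",
    "self_control": "announces withdrawal from the conversation",
    "space_control": "tells the opponent to leave the conversation",
    "content_threat": "dismisses the material itself without explanation",
}

# The typology has exactly seven categories.
assert len(TACTICS) == 7
for name, definition in TACTICS.items():
    print(f"{name}: {definition}")
```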
Testing the Trolling Typology
To check whether others could reliably spot these tactics, the researchers trained 371 workers from Amazon’s Mechanical Turk platform. Participants were introduced to one tactic at a time through short tutorials, then tested. Pass rates on the initial quiz ranged from about 32 to 82 percent depending on the tactic, showing that some were harder to learn than others. Fewer than 35 percent of workers, for example, managed to identify content threats or logical disqualification correctly at first.
However, among those who passed training, accuracy jumped to 86 to 94 percent when tested on fresh examples. This suggested that while the classification system worked in principle, scaling it up with casual coders would be difficult. Nearly 40 percent of recruits dropped out during training, a costly obstacle to broader use.
Beyond the Online News Echo Chamber
The study also sheds light on how people from different political camps actually meet in the same comment sections. Past research shows that 69 percent of active YouTube users post comments on both left-leaning and right-leaning channels. That means news comment spaces are not closed bubbles. Instead, they are arenas of cross-talk where diverse ideological groups collide.
This helps explain why trending videos attract heated exchanges. For example, stories about mass shootings often produced more moral corruption responses. In those moments, many commenters reached for shared values to condemn what was said, rather than simply attacking the person. Other topics, such as celebrity news, drew large numbers of comments but still followed the same overall pattern: objections tended to be more confrontational than constructive.
When Users Become Digital Vigilantes
The researchers placed these behaviors within a larger idea: digital vigilantism. Because social media platforms cannot catch every offensive or false comment, users often take matters into their own hands. They see themselves as responsible for stepping in, a practice the researchers described as “expressive citizenship.” By attacking others, people try to silence content they view as harmful, playing the role of informal moderators.
This self-appointed policing can sometimes uphold community norms, but the reliance on personal attacks shows a troubling drift. Instead of fostering open exchange, comment sections risk becoming stages for character assassination. That weakens democratic discourse, which depends on reasoned argument and mutual respect.
What is especially concerning is that many users treat personal attacks as a practical way to shut others down. Yet research suggests the opposite: such tactics rarely silence opponents for long. Instead, they fuel more hostility, pushing people deeper into their own camps. When disagreement defaults to insults, the chance for common ground shrinks.
Why It Matters
Social media remains one of the main spaces where Americans talk about the news. That makes the quality of these conversations more than a matter of online etiquette. They shape how people understand politics, society, and each other. Cornell’s research highlights a sobering truth: objection in these forums is less about facts and more about fighting for dominance.
If nearly half of objections turn into personal attacks, then the health of online public discourse is at risk. Users step into comment sections not to exchange views but to discredit, dismiss, or drive others out. This transforms what should be a civic forum into a battlefield where the loudest insult wins.
Paper Summary
Methodology
Researchers analyzed two samples of replies to trending news videos. The first sample covered 7,500 replies from 14 YouTube and Twitter videos between August and October 2021, spanning topics from the Kabul airport evacuation to COVID treatments. The second sample added 2,004 replies from CNN’s top YouTube news videos in August 2022. A team of graduate coders classified objections, and their typology was tested with Mechanical Turk workers.
Results
Across 9,504 replies, 723 contained objections using at least one of the seven tactics. Ad hominem attacks dominated, making up 42 to 45 percent of objections. Logical disqualification (16 to 23 percent) and moral corruption (8 to 20 percent) followed. Physical threats accounted for about 3 percent, with content threats, self control, and space control filling out the rest. Roughly 8 percent of all replies across both samples contained objections.
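As a quick sanity check, the counts reported above can be combined in a few lines. This sketch uses only the figures stated in this summary:

```python
# Arithmetic check of the figures reported in the paper summary.
sample_1 = 7500   # YouTube/Twitter replies, Aug-Oct 2021
sample_2 = 2004   # CNN YouTube replies, Aug 2022
total_replies = sample_1 + sample_2          # 9,504 replies in all
objections = 723                             # replies using at least one tactic

objection_rate = objections / total_replies  # share of replies with objections
print(f"total replies: {total_replies}")
print(f"objection rate: {objection_rate:.1%}")  # ~7.6%, i.e. "roughly 8 percent"
```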
Limitations
The study focused on U.S. news content and English-language comments. The second sample was restricted to CNN videos. Training crowd workers revealed challenges, as many could not reliably classify certain tactics. The study also examined only direct replies, not entire conversation threads.
Funding and Publication
The research was supported by the National Science Foundation. The study was published in PLOS ONE on August 20, 2025 by Ashley L. Shea and colleagues from Cornell University and the University of Missouri. It is open access and available through the Open Science Framework.