You come away from an hour of doomscrolling convinced the country is splintering. Outside, a neighbor you disagree with politically helps you jumpstart a car. The dissonance is not a glitch. It is how today’s internet works.
A recent analysis published in PNAS Nexus argues that a small, hyperactive slice of users produces most of the hostile content online, creating a funhouse mirror in which the rest of us appear meaner than we are. That intuition will feel familiar to anyone who has watched a handful of accounts yank an entire comment thread into the ditch. It also fits with the old participation rule of the web: most people lurk, a minority contributes a little, and a tiny group posts all day.
On X/Twitter, the most active 10% of U.S. adult users produce about 80% of tweets from that group, according to the Pew Research Center. Concentration of speech is the norm, not the exception.
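For readers who want to see the arithmetic, here is a minimal Python sketch of how that kind of concentration can be measured from per-user post counts. The numbers below are synthetic, drawn from a heavy-tailed distribution rather than Pew’s data; only the top-decile calculation is the point.

```python
# Synthetic illustration (not Pew's data): how much of all posting comes
# from the most active 10% of users when activity is heavy-tailed?
import random

random.seed(0)

# Hypothetical per-user post counts: most people post a little,
# a few post constantly (approximated here with a Pareto draw).
post_counts = [int(random.paretovariate(1.2)) for _ in range(10_000)]

post_counts.sort(reverse=True)
top_decile = post_counts[: len(post_counts) // 10]

share = sum(top_decile) / sum(post_counts)
print(f"Top 10% of users produce {share:.0%} of all posts")
```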
That concentration is not just a quirk of social dynamics. It shapes what platforms recommend, how news cycles form, and what we mistakenly assume about “what people think.” When a few prolific accounts are unusually hostile, their outsized output can masquerade as a social norm. Network scientists even have a name for this: the *majority illusion*, in which the behavior of well-connected power users appears common to everyone else. You can find the mathematics in a careful analysis by Lerman and colleagues (PLOS ONE), but the lived experience is simpler: a few loud voices fill the room.
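A toy example makes the illusion concrete. In the sketch below (the graph and labels are invented), only one account in ten is hostile, but because that account is the hub everyone follows, nearly every user sees hostility from half of the accounts in their feed.

```python
# Toy "majority illusion": 1 hostile account out of 10, but it is the
# hub everyone follows, so it dominates almost every user's view.
# Graph and labels are invented for illustration.

# follows[u] = accounts whose posts user u sees in their feed
follows = {0: list(range(1, 10))}        # the hub follows everyone
for i in range(1, 10):
    follows[i] = [0, 1 + (i % 9)]        # ordinary users see the hub plus one peer

hostile = {0}                             # only 10% of accounts are hostile

for user, feed in sorted(follows.items()):
    visible = sum(account in hostile for account in feed) / len(feed)
    print(f"user {user}: {visible:.0%} of accounts in feed are hostile")
```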
Why the minority feels like a majority
Two forces amplify the noise from that minority. First, our brains. Humans are tuned to notice threats and moral violations. Negative and morally charged content is sticky, a pattern documented across platforms in research on moral-emotional language and virality (see Brady et al. in PNAS). Second, the machines. Engagement-driven ranking systems often reward content that triggers strong reactions because it keeps us on the screen.
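To illustrate the second force, here is a deliberately simplified, engagement-only ranker in Python. It is not any platform’s real algorithm, just a sketch of the incentive: when every reaction counts toward the score, a post that provokes an angry pile-on outranks a calmer post that most readers quietly approved of.

```python
# Deliberately simplified engagement-only ranking (not any real platform's
# algorithm): reactions of every kind raise the score, so a post that
# provokes an angry pile-on can outrank a calmer, better-liked one.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    replies: int
    reshares: int

def engagement_score(p: Post) -> float:
    # Replies and reshares keep people on the screen longest, so this toy
    # ranker weights them most heavily, regardless of tone.
    return 1.0 * p.likes + 3.0 * p.replies + 2.0 * p.reshares

posts = [
    Post("Local library extends weekend hours", likes=120, replies=8, reshares=15),
    Post("You will not BELIEVE what they did now", likes=40, replies=300, reshares=90),
]

for p in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):7.0f}  {p.text}")
```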
As one landmark study put it, falsehood “diffused significantly farther, faster, deeper, and more broadly than the truth” on Twitter, and it was humans, not bots, who did most of the spreading (Science, 2018).
Now layer automation on top. Bot networks boost and brigade. Reputable security firms, including Imperva, report that automated traffic is rising globally and can dominate in hosting-heavy markets like Ireland, warping what trends and who appears popular (Imperva 2024 Bad Bot Report).
Bad bot traffic now accounts for roughly a third of global web traffic, and in some countries the bad-bot share climbs above 70%, Imperva reports. That scale is more than a nuisance; it is narrative infrastructure.
The result is a tight loop: a small group of heavy posters, occasionally augmented by automation, produces antagonistic content; algorithms surface it; our attention locks onto it; and the perceived norm shifts. The average person, who is neither posting nor piling on, becomes invisible.
The measurement trap—and why it matters
Quantifying toxicity is harder than it sounds. The PNAS Nexus study relied on automated tools to flag abusive language at scale. That is sensible, but it comes with caveats worth underlining. The popular Perspective API and similar models evaluate text snippets, which means they can miss dog whistles, coded language and sarcasm. Researchers have also documented bias, where identity terms inflate “toxicity” scores absent true hostility (see the original Jigsaw analysis on unintended bias).
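A crude example shows why snippet-level scoring goes wrong. The scorer below is a deliberately naive keyword counter, a stand-in for illustration and not the Perspective API, but it reproduces the two failure modes in miniature: identity terms that co-occurred with abuse in training data inflate the score of a perfectly neutral sentence, while sarcastic hostility scores zero.

```python
# A deliberately naive keyword-based "toxicity" scorer (illustrative
# stand-in, NOT the Perspective API). It shows two documented failure
# modes of snippet-level scoring: identity terms inflate scores for
# neutral sentences, while sarcasm and coded language slip through.

# Mimics a model that learned identity terms as "toxicity" signals
# from skewed training data (the bias Jigsaw documented).
FLAGGED_TERMS = {"stupid", "idiot", "gay", "muslim"}

def naive_toxicity(text: str) -> float:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & FLAGGED_TERMS) / max(len(words), 1)

examples = [
    "I am a proud gay Muslim software engineer.",      # neutral, but scores high
    "Oh sure, a REAL genius move, pal. Brilliant.",    # sarcastic hostility, scores zero
]

for text in examples:
    print(f"{naive_toxicity(text):.2f}  {text}")
```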
None of this invalidates the core finding that hostility is concentrated among heavy users. It does mean that “how toxic is the internet?” is the wrong question. A better one is: what do ordinary users perceive as the norm, and how does that perception shift behavior? The social-psychology terms here, *pluralistic ignorance* and *false consensus*, name two distortions: people privately reject a norm but go along with it because they believe “everyone else” accepts it, and people overestimate how widely their own views are shared. Both beliefs are often born online.
This perception gap has consequences. It fuels cynicism, distorts voting expectations, and deters decent people from participating in public life. It also gives cover to truly antisocial actors. When you think “everyone is awful,” the bar for your own conduct sinks. In that sense, as one commenter put it, moderating the worst participants is not a courtesy, it is an obligation if a platform aims to be anything more than a sociopath’s playground.
So what actually helps?
There is no silver bullet, but there are pragmatic levers—in design, policy and daily habits—that narrow the gap between what feels normal online and what most people actually believe.
- Throttle hyperactivity, surface representativeness. Rate-limit serial posters, de-boost repeat bad actors, and show readers how many unique users, not just total comments, support a view (a minimal sketch of this idea follows the list). The point is to highlight breadth, not volume.
- Add friction where it matters. Delayed posting in heated threads, prompts that encourage users to read before replying, and reminders about community norms have been shown to reduce knee-jerk toxicity. Small speed bumps change outcomes.
- Invest in real moderation. Community guidelines mean little without consistent enforcement and enough human moderators. The EU’s Digital Services Act pushes large platforms toward risk assessments and independent audits. Users should expect similar accountability elsewhere.
- Label automation and origin signals carefully. Verified account provenance, bot labeling, and limits on coordinated inauthentic behavior are compatible with privacy when done right. Transparency beats guesswork about shadowy agitators.
- Recalibrate your own feed. Mute and block liberally. Switch away from “most engaging” sorts. Seek communities with active, accountable moderation. And touch grass. Engagement is not the same as endorsement, and your brain needs evidence from the physical world to recalibrate.
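On the first lever, here is a minimal sketch of what “breadth, not volume” could look like in code. The field names and data are hypothetical; the idea is simply to count distinct users per viewpoint instead of raw comments, so four posts from one serial poster no longer read as four people agreeing.

```python
# Minimal sketch of "breadth, not volume" (hypothetical field names):
# count distinct users per stance instead of raw comment totals.
from collections import defaultdict

comments = [
    {"user": "alice",  "stance": "support"},
    {"user": "bob",    "stance": "oppose"},
    {"user": "bob",    "stance": "oppose"},
    {"user": "bob",    "stance": "oppose"},
    {"user": "bob",    "stance": "oppose"},
    {"user": "carol",  "stance": "support"},
    {"user": "deepak", "stance": "support"},
]

totals = defaultdict(int)          # raw comment volume per stance
unique_users = defaultdict(set)    # distinct commenters per stance

for c in comments:
    totals[c["stance"]] += 1
    unique_users[c["stance"]].add(c["user"])

for stance in totals:
    print(f"{stance}: {totals[stance]} comments from {len(unique_users[stance])} unique users")
```

Raw counts make “oppose” look like the louder camp; the unique-user tally shows it is one person posting four times.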
Platforms will protest that these steps reduce time-on-site and ad revenue. They probably do in the short run. The longer game is healthier communities that people trust enough to participate in, which is also a business model. A platform drowning in ragebait and brigading drives away the very users advertisers value.
There is a fair counterargument worth hearing: if large numbers of citizens vote for harmful policies, the problem is not just an online mirage. True. The point here is not to launder reality but to measure it. If most neighbors are kind in person but your timeline insists they are enemies, that mismatch is an actionable signal about your information diet and the platforms shaping it.
We should also resist comforting myths about blame. Not all hostility is foreign; plenty of it is homegrown. And not all of it is bots; humans still drive much of the worst content because outrage performs. The fix begins with recognizing the mechanics, then choosing differently.
Imagine a public square where the loudest one percent cannot commandeer the microphone. You would still encounter disagreement, even sharp criticism. But the background hum would sound more like daily life. Neighbors jumpstart cars. Strangers hold doors. Online, we can design for that reality to be visible again.
