The Algorithm Knows What You Hate: Inside the Outrage Machine

Editorial · 16 min read

Modern platforms didn’t invent hatred. They industrialized its distribution.

The foundational distortion in the information environment comes from a simple economic fact: in advertising-funded systems, the scarce resource is attention, and the most reliable way to capture attention is to trigger high-arousal emotion—especially moral outrage. In that incentive structure, “harmful” and “engaging” can become complements, not tradeoffs. The system doesn’t need ideological coherence to thrive. It needs compulsion, repetition, and retention.

Attention Is the Commodity

Herbert Simon warned decades ago that an abundance of information creates a scarcity of attention. In the feed era, that scarcity is monetized. Tim Wu calls the platforms "attention merchants"; Shoshana Zuboff describes "surveillance capitalism," in which behavioral data becomes prediction products sold in behavioral futures markets. Either way: your engagement is the product.

"Engagement" became the universal proxy: time spent, likes, shares, comments, watch-through. Algorithms optimize for what keeps people on-platform. The outcome is predictable:

  • high arousal beats low arousal
  • conflict beats calm
  • identity beats nuance
  • repetition beats revision
  • tribal signaling beats deliberation

Algorithmic audits now confirm what this logic produces. Milli et al.'s 2023 analysis found that 62% of political tweets selected by Twitter's algorithm expressed anger, versus 52% in chronological feeds, and that 46% contained out-group animosity, versus a 38% chronological baseline. The algorithm isn't neutral—it systematically amplifies the content that generates the strongest reactions, which tends to be the most divisive.

In this environment, the algorithm doesn't ask, "Is this true?" It asks, "Will this hold attention?"
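
To make that objective concrete, here is a minimal Python sketch of a pure engagement-ranking score. The weights and field names are invented for illustration, and production rankers are far more elaborate, but the shape of the objective is the point: nothing in it asks whether the content is true.

```python
# Minimal sketch of a pure engagement-ranking objective.
# Weights and field names are hypothetical, chosen for illustration only.
from dataclasses import dataclass

@dataclass
class Post:
    p_like: float          # predicted probability the viewer likes the post
    p_share: float         # predicted probability of a share
    p_comment: float       # predicted probability of a comment
    exp_watch_sec: float   # expected watch time, in seconds

def engagement_score(post: Post) -> float:
    """Score a post purely on predicted engagement.

    Note what is absent: there is no term for accuracy, civility, or
    downstream harm. Whatever correlates with these signals -- outrage
    included -- gets amplified.
    """
    return (1.0 * post.p_like
            + 4.0 * post.p_share       # shares drive distribution, so weight them up
            + 2.0 * post.p_comment     # comments often signal conflict, which retains
            + 0.05 * post.exp_watch_sec)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first: "will this hold attention?"
    return sorted(posts, key=engagement_score, reverse=True)
```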

Outrage Optimization: Why Anger Outcompetes Nuance

Outrage spreads because multiple mechanisms stack on top of each other—and the research now quantifies the effects:

  • High-arousal emotions are more shareable than low-arousal content. Berger and Milkman's foundational research found physiological arousal, not emotional valence, drives virality: anger and anxiety increase sharing; sadness decreases it.
  • Moral-emotional language increases diffusion in politicized discourse. Brady et al.'s analysis of 563,312 social media messages found each moral-emotional word increased diffusion by 20%—what they term "moral contagion."
  • Positive feedback trains behavior: Brady's preregistered experiments showed that when outrage gets likes/shares, future outrage expression becomes more likely—consistent with reinforcement learning dynamics.
  • Out-group hostility is engagement-gold: Rathje, Van Bavel, and van der Linden analyzed 2.73 million posts and found content referencing the political out-group was shared twice as often as in-group content. Each out-group term increased sharing odds by 67%—4.8 times stronger than negative affect, 6.7 times stronger than moral-emotional language alone.
  • Emotional contagion transmits across networks: exposure to emotionally valenced content can influence users' own posting behavior, even without explicit persuasion.
  • Falsehood diffuses faster: Vosoughi, Roy, and Aral's landmark Science study of ~126,000 news stories found false information spread "significantly farther, faster, deeper, and more broadly than truth in all categories." The top 1% of false cascades reached 1,000–100,000 people; truth rarely exceeded 1,000. Crucially, humans—not bots—drove this asymmetry.

Put simply: platforms don't just host outrage. They can reinforce it. And the reinforcement is measurable.
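
A toy model makes the reinforcement loop concrete. This is my own illustration, not Brady et al.'s model, and the reward values are assumed: outrage posts are simply given about twice the feedback of neutral ones. Under that assumption, a basic feedback-driven update pushes the propensity to post outrage steadily upward.

```python
# Toy model of the feedback loop described above (illustrative only):
# a creator's propensity to post outrage rises when outrage earns more feedback.
import random

def simulate_creator(steps: int = 200,
                     outrage_reward: float = 2.0,   # assumed: outrage earns ~2x the feedback
                     neutral_reward: float = 1.0,
                     learning_rate: float = 0.02,
                     seed: int = 0) -> float:
    rng = random.Random(seed)
    p_outrage = 0.2                      # initial propensity to post outrage
    for _ in range(steps):
        posts_outrage = rng.random() < p_outrage
        reward = outrage_reward if posts_outrage else neutral_reward
        # Reinforcement update: shift the propensity toward whichever choice
        # was just made, in proportion to the feedback it received.
        target = 1.0 if posts_outrage else 0.0
        p_outrage += learning_rate * reward * (target - p_outrage)
    return p_outrage

print(simulate_creator())  # typically climbs well above the 0.2 starting point
```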

The Drift Pattern: How You Move Without Noticing

The most reliable radicalization pathway isn't a single "rabbit hole." It's gradual drift inside a feedback loop:

  1. You engage with something emotionally activating.
  2. The recommender system treats that as preference.
  3. Your feed shifts toward the emotional register that performs.
  4. Your sense of "normal" adapts.
  5. Creators respond to what performs, escalating tone and framing.
  6. Communities reward conformity and punish deviation.

Over time, the loop can shift a person's positions—sometimes radically—without ever demanding a conscious update to their moral self-concept. People often experience the drift as: "I didn't change. The world got worse."
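
The same logic can be written as a small simulation. The parameters below are purely illustrative, not estimates from any real platform: the feed tracks what engagement seems to reveal about preference, and the user's sense of "normal" tracks the feed.

```python
# Toy simulation of the drift loop above (illustrative parameters only):
# the recommender nudges the feed toward whatever register the user engaged
# with, and the user's baseline adapts toward the feed.
def simulate_drift(steps: int = 50,
                   engagement_bias: float = 0.3,   # assumed: extra pull of high-arousal content
                   feed_adaptation: float = 0.5,   # how fast the feed tracks inferred preference
                   norm_adaptation: float = 0.1):  # how fast the user's baseline tracks the feed
    feed_intensity = 0.2   # emotional intensity of the feed, on a 0..1 scale
    user_normal = 0.2      # what the user currently experiences as a "normal" register
    for _ in range(steps):
        # Steps 1-2: engagement skews toward the more intense end of the feed,
        # and the recommender reads that as revealed preference.
        inferred_pref = min(1.0, feed_intensity + engagement_bias * (1 - feed_intensity))
        # Step 3: the feed shifts toward the register that performed.
        feed_intensity += feed_adaptation * (inferred_pref - feed_intensity)
        # Step 4: the user's sense of "normal" adapts to the new feed.
        user_normal += norm_adaptation * (feed_intensity - user_normal)
    return feed_intensity, user_normal

print(simulate_drift())  # both values climb steadily, with no single dramatic jump
```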

Why Smart People Aren't Immune

Dan Kahan's Cultural Cognition Project at Yale has documented a disturbing finding: "The members of the public most adept at avoiding misconceptions of science are nevertheless the most culturally polarized." Higher cognitive proficiency and scientific literacy produce more polarization, not less, because these skills enable more effective motivated reasoning—the selective crediting of evidence that confirms group beliefs and dismissing of evidence that contradicts them.

Kahan calls this "identity-protective cognition": people process information to protect their status within valued groups, not to update toward accuracy. The psychological costs of holding beliefs that conflict with one's cultural community outweigh abstract benefits of being right.

Geoffrey Cohen's landmark experiments demonstrated this starkly: when told their party endorsed a policy, partisans supported it—even when the policy contradicted their stated ideological commitments. Liberals backed stringent welfare cuts when told Democrats endorsed them; conservatives supported generous welfare expansion when told Republicans backed it. Most tellingly, participants denied being influenced while assuming their opponents would be.

That's ideological incoherence as a system output: stable identity + moving issue positions.

The Commitment Ratchet

Classic psychological mechanisms reinforce the drift:

  • Foot-in-the-door: Freedman and Fraser's 1966 study found that small initial commitments change self-perception. Homeowners who agreed to display a small "Be a Safe Driver" sign showed 76% compliance with a later request for a large, ugly sign—versus only 17% when asked directly. "I am the kind of person who supports this cause."
  • Cognitive dissonance: Having publicly defended a position, people experience psychological discomfort when confronted with contradictory evidence. The path of least resistance is attitude change to match behavior, not behavior change to match evidence.
  • Sunk cost: The more someone has publicly defended a position, the more psychologically costly abandoning it becomes. Prior mistakes increase rather than decrease future commitment.

Not a Conspiracy—An Emergent Outcome

No one needs to coordinate a plan to radicalize the public for the effect to emerge.

Profit optimization acting on predictable human psychology produces “selection pressure” for:

  • content that outrages
  • content that humiliates the out-group
  • content that compresses reality into binary moral theater
  • content that turns politics into sport

Creators learn quickly. Audiences reward them. Platforms multiply them.

The result can look like a coordinated propaganda machine because it behaves like one—even when it's mostly incentive-driven.

Case Study: When Principles Collide with Tribes

The Alex Pretti incident of January 2026 illustrates ideological incoherence in action.

On January 24, federal agents (ICE) shot and killed Alex Jeffrey Pretti, a 37-year-old registered nurse, during a protest in Minneapolis. Pretti was legally armed with a holstered pistol—a right he possessed as a licensed gun owner—and was filming the agents. Under a consistent libertarian or conservative framework, Pretti would seem to check every box for a cause célèbre: Second Amendment rights, skepticism of federal overreach, concern about militarized enforcement.

Yet the reaction from many self-identified gun-rights supporters and "don't tread on me" libertarians inverted. Instead of rallying to Pretti's defense, many justified the shooting. Rhetoric shifted to a "law and order" frame: Pretti was blamed for "provoking" agents, for bringing a gun to a volatile situation, for being associated with "rioters."

This reaction is intelligible only through negative partisanship: because the protest was directed against ICE—an agency coded as "ours" by the populist right—Pretti was categorized as out-group. Once identified as "enemy," his rights as a gun owner were nullified. The "Back the Blue" identity overrode the "Second Amendment" identity. The same people who would decry federal overreach in other contexts found themselves defending it when it targeted people they opposed.

Not everyone followed this pattern. State-level Libertarian Party chapters condemned the shooting as authoritarian overreach. Gun-rights organizations publicly criticized the administration's framing. The divergence confirms that the pipeline is probabilistic, not deterministic. But the incident reveals how negative partisanship can override decades of stated principle in a matter of days—when the tribe demands it.

The Filter Bubble Myth (and the Real Problem)

The popular "filter bubble" story—"algorithms trap you where you never see the other side"—is oversimplified. Eli Pariser's 2011 thesis became cultural shorthand for algorithmic isolation, but empirical research consistently finds smaller effects than assumed.

Gentzkow and Shapiro's Quarterly Journal of Economics study found that ideological segregation in online news consumption was significantly lower than in face-to-face interactions with neighbors, coworkers, or family. Bakshy, Messing, and Adamic's Science study of 10.1 million Facebook users found that algorithmic ranking reduced cross-cutting content by only 5–8%, while users' own click choices reduced it by 70%. The Reuters Institute's 2022 literature review concluded that politically partisan echo chambers are "generally small—much smaller than often assumed."

The deeper problem isn't what you don't see. It's how you see what you do see.

Philosopher C. Thi Nguyen distinguishes between epistemic bubbles and echo chambers. An epistemic bubble is a structure where contrary voices are simply missing—by omission, not by design. These are relatively easy to pop: expose someone to different perspectives. An echo chamber is a structure that actively discredits outside sources, making communities resilient to correction. Distrust is part of the structure.

The modern information environment creates the worst of both worlds—an "anti-bubble":

  • you see outside arguments, but
  • they're framed for mockery, delegitimation, and contempt, so
  • exposure functions as attack, not information.

The outside world is visible, but only as hostile caricature. A tweet from an opposing politician isn't hidden; it's quote-tweeted for ridicule. In that context, more exposure can increase polarization rather than reduce it. Bail et al.'s PNAS study found that exposure to opposing views sometimes makes strong partisans more polarized—the manner of exposure matters as much as the fact of it.

“Rabbit Holes” and the Myth of the Passive Victim

Recommendation systems can facilitate pathways to more extreme content, but the strongest evidence supports a mixed picture:

  • for most users, effects look more like soft narrowing than “turning normies into extremists”
  • for vulnerable users actively seeking extreme content, the system can reduce search costs dramatically
  • creator supply, community dynamics, and external networks matter as much as algorithmic ranking

So the “pipeline” is real in the sense of probabilistic selection pressures, not deterministic conversion.

Gateway Influencers and Parasocial Trust

A key bridge between mainstream and fringe is the gateway influencer: credible-seeming figures who brand themselves as skeptics “just asking questions.”

Long-form podcasts and livestreams create parasocial bonds. Those bonds transfer trust from person → frame → ideology. Once trust is relational, counter-evidence feels like an attack on a friend. That’s not a knowledge problem. It’s an identity problem.

Audience Capture: When Creators Become Caricatures

Creators don't simply shape audiences. Audiences shape creators.

When engagement is livelihood, creators track what performs. If the most "tribally satisfying" takes produce a spike, escalation becomes rational. Center for Democracy and Technology research found political content from influencers had 50–70% higher engagement than non-political content. The engagement premium drives the business model.

Audience capture creates a ratchet effect:

  • the audience rewards purity and hostility
  • the creator adapts
  • the audience's baseline shifts
  • nuance becomes punishable
  • extremity becomes the new normal

Writer Gurwinder Bhogal documented multiple cases: Maajid Nawaz evolved from careful counter-terrorism expert to conspiracy theorist writing about a "shadowy New World Order"; Dave Rubin shifted from progressive Young Turks host to Blaze TV personality. The 2024 DOJ indictment of Tenet Media revealed that the company, whose paid roster included Tim Pool, Dave Rubin, Benny Johnson, and others, received nearly $10 million from Russian state media—demonstrating how ideological drift can align with external incentive structures even when creators claim to be unaware.

The creator may feel like they're "telling the truth more boldly." Often they're converging on the content that gets rewarded—what Bhogal calls "the gradual and unwitting replacement of a person's identity with one custom-made for the audience."

Cohen and Holbert's 2021 research found that parasocial relationships were "a powerful predictor of Trump-Support, outperforming all other predictors including past voting behavior." These parasocial bonds insulate followers from counter-evidence and create audience expectations that further constrain creator behavior. The relationship becomes a mutually reinforcing engine of radicalization.

Cable News: The Attention Economy Before the Feed

It's tempting to blame everything on social media. But evidence suggests cable news can have larger polarization effects than social platforms.

Hosseinmardi et al.'s Stanford/Penn/Microsoft study found up to 23% of Americans were polarized via TV at peak (November 2016), with left-leaning TV audiences 10 times more likely to remain segregated than online audiences. Martin and Yurukoglu's American Economic Review study found Fox News increased Republican vote shares by 0.3 percentage points among viewers induced to watch 2.5 additional minutes weekly—with effects growing over time due to both increasing viewership and increasingly conservative slant.

Cable opinion programming became the feed with a remote control: constant urgency, adversarial framing, and the emotional rhythm of "breaking news" as entertainment. The economic logic, documented by Harvard/MIT researchers: when cable news covers culture-war issues, it gains audience from entertainment viewers; when it covers economics, viewers switch channels. Outrage entertainment is simply more profitable than informative journalism.

Global Patterns: When Platforms Become Infrastructure for Violence

The dynamics observed in the US are mirrored—often with more lethal consequences—in the Global South. These cases reveal how specific platform features interact with local contexts to produce unique forms of radicalization.

Myanmar: Facebook as Infrastructure for Genocide

For many citizens of Myanmar, Facebook was the internet. "Zero-rating" programs allowed free access to Facebook while charging for other websites, trapping users in a single, manipulated feed. Military officials and nationalist monks used the platform to systematically dehumanize the Rohingya Muslim minority, referring to them as "fleas" or "dogs." A 2016 internal Facebook study found 64% of all extremist group joins were due to recommendation tools. The UN reported that Facebook played a "significant role" in the genocide that followed, facilitating the organization of pogroms and the expulsion of hundreds of thousands.

Brazil: WhatsApp's Dark Social

In Brazil, the primary vector for radicalization was WhatsApp—"dark social" where content flows through private, encrypted groups invisible to moderators. During the 2018 election, supporters of Jair Bolsonaro utilized a sophisticated pyramid structure: "public" groups disseminated memes and conspiracy theories; highly engaged users were then invited into elite "private" groups where radicalization deepened. The "forward" feature allowed disinformation to spread exponentially before journalists were even aware of its existence. Research found 86% of false content during the election benefited Bolsonaro.

India: Rumor Cascades and Mob Violence

In India, WhatsApp's "relational" nature—where messages arrive from friends, family, and neighbors—bypasses critical filters. Viral rumors alleging gangs of child kidnappers were roaming villages led to dozens of lynchings of innocent strangers, often accompanied by manipulated videos and urgent calls to "protect the community." Political parties industrialized this through "IT Cells"—vast networks of paid and volunteer operatives who coordinate simultaneous dissemination of sectarian narratives, turning digital misinformation into kinetic violence within hours.

Historical Parallels: From Radio to Reddit

While the speed of digital radicalization is new, the dynamic has precedent. New media technologies destabilize democratic norms before society develops the "antibodies" to manage them.

Father Charles Coughlin, the "Radio Priest" of the 1930s, commanded an audience of 30 million Americans—25% of the population. His career presaged the modern influencer model: he bypassed newspaper gatekeepers using new technology, built an intensely loyal following through parasocial bonds, began with populist economic critiques before drifting into virulent antisemitism, and attacked the institutions of journalism themselves when criticized. Tianyi Wang's American Economic Review study found that a one standard deviation increase in exposure to Coughlin's anti-FDR broadcast reduced Roosevelt's vote share by approximately two percentage points in 1936.

Nazi Germany provides a darker parallel. Adena et al.'s quantitative study found that after the Nazis seized control of radio in January 1933, their propaganda produced a 1.2 percentage point increase in Nazi vote share. Goebbels proclaimed radio the "eighth great power": "It would not have been possible for us to take power or to use it in the ways we have without the radio."

The lesson: radio enabled but did not determine fascism's rise. Institutional responses and countervailing forces mattered. The question is whether contemporary democracies possess the will to reform information ecosystems that profit powerful interests—or whether we are still in the "Coughlin phase" of the internet, where technology has empowered demagogues and our regulatory and social immune systems have not yet caught up.

What Helps (and What Doesn't)

There's no single fix because the problem is joint: platform design + business model + human psychology.

Interventions with evidence:

  • Accuracy prompts: Prompting users to think about accuracy before sharing reduces misinformation spread—though McLoughlin et al.'s finding that users share outrage-evoking content without reading suggests such interventions have limits when emotional arousal is high.
  • True user agency: Under the EU's Digital Services Act, very large platforms face obligations around recommender transparency and must offer options not based on profiling. The theory: if engagement-ranked feeds amplify divisive content, genuine non-nudged alternatives (chronological or non-profiled feeds) can reduce amplification pressure.
  • Bridging algorithms: Current recommender systems are "blind" to social impact—they optimize only for engagement. Bridging algorithms introduce a new metric: "cross-partisan appeal." Instead of amplifying posts loved by one side and hated by the other, they amplify posts that receive positive engagement from both sides (a minimal sketch follows this list). Experiments with systems like Polis in Taiwan and YourView in Australia show this can surface consensus and reduce affective polarization. Piccardi et al.'s 2025 Science study provided causal evidence: altering exposure to hostile content on Twitter changed affective polarization by approximately 2 degrees on feeling thermometers—equivalent to roughly 3 years of natural polarization change.
  • Inoculation / pre-bunking: Works better than endless debunking in high-noise environments. Teaching people to recognize manipulation techniques before encountering them builds resistance.
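
For the bridging idea, here is a minimal sketch of the scoring rule. It is an illustrative formula, not any platform's actual implementation: instead of summing approval wherever it comes from, let the lower of the two groups' approval rates cap the score.

```python
# Minimal sketch of a bridging score versus an engagement-only score.
# Illustrative formula and field names; not a production ranking system.
from dataclasses import dataclass

@dataclass
class PostStats:
    approval_left: float    # fraction of left-leaning viewers reacting positively
    approval_right: float   # fraction of right-leaning viewers reacting positively

def engagement_only_score(p: PostStats) -> float:
    # Classic objective: total positive engagement, regardless of who it comes from.
    return p.approval_left + p.approval_right

def bridging_score(p: PostStats) -> float:
    # Bridging objective: the weaker side's approval caps the score, so content
    # loved by one side and loathed by the other ranks poorly.
    return min(p.approval_left, p.approval_right)

divisive  = PostStats(approval_left=0.9, approval_right=0.1)
consensus = PostStats(approval_left=0.5, approval_right=0.5)

print(engagement_only_score(divisive), engagement_only_score(consensus))  # the two posts tie
print(bridging_score(divisive), bridging_score(consensus))                # the consensus post wins
```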

What doesn't work as well as hoped:

  • Generic "media literacy" can backfire if it becomes weaponized skepticism—people learn to "question sources" only when the claim threatens their tribe. Kahan's research suggests higher information processing capacity can increase rather than decrease polarization by enabling more effective motivated reasoning. The more promising literacy looks like emotional skepticism: noticing when outrage is being manufactured to steer attention.
  • More exposure to opposing views: Bail et al.'s study found this sometimes makes strong partisans more polarized. The manner of exposure matters—ridicule and contempt-framing don't build bridges.
  • Deplatforming: Evidence is mixed. Father Coughlin's removal from radio in 1939 ended his influence; modern deplatforming sometimes drives users to less-moderated spaces where radicalization can accelerate.

The Bottom Line

The algorithm doesn’t need you to believe any particular ideology.

It needs you engaged, emotionally activated, and returning. In an attention market, outrage is one of the most reliable currencies. The result is not just misinformation. It’s the slow deformation of what feels normal, what feels credible, and what feels morally required.


This is the fourth article in a series examining democratic decline. The next article explores the collapse of local news and the nationalization of American politics—how the disappearance of local reporting turned every school board meeting into a proxy war.

Topics

media · algorithms · radicalization