Re-platforming Hatred
How a new class of digital demagogues found their way back into the mainstream’s bloodstream
(Photo: Nick Fuentes, NYT)
Between 2012 and the summer of 2018, Facebook revealed what it had become: not only a breeding ground for extremism, but a platform unwilling or unable to contain it until the damage was unmistakable. The clearest case came from Myanmar, where Buddhist nationalist groups (yes, they are real) used Facebook to spread dehumanizing propaganda against the Rohingya, a Muslim minority. By 2017 the crisis had erupted into mass violence, and investigators began tracing how military-linked networks had turned Facebook groups into engines of ethnic cleansing. In March 2018 a United Nations fact-finding mission delivered its conclusion with unusual bluntness: Facebook had played a “determining role” in the genocide.
In that same period Facebook began to understand the scale of the problem and started taking action against individual figures. The most visible test case came later that year, when the company removed Alex Jones and the Infowars network. Facebook did not present it as a political gesture. It presented it as a necessary response to a pattern of targeted harassment and radicalization that the platform had been slow to confront. For the first time the company was willing to say that some personalities were using the architecture of the site to push audiences toward extremism. That decision, hesitant and overdue, marked the beginning of the company’s era of high-profile deplatforming.
In May 2019 Facebook widened the net and removed several other high-profile figures whose work revolved around different strains of extremism. Louis Farrakhan, the longtime leader of the Nation of Islam, had spent years circulating conspiratorial rhetoric about Jews and other minorities. Paul Joseph Watson, a prominent Infowars personality, built an audience by blending anti-immigrant panic with a steady stream of culture-war agitation. Laura Loomer used confrontational activism and fabricated narratives to fuel an online presence built almost entirely on provocation.
Their removal signaled that the company was no longer focusing on isolated violations. It was beginning to treat these personalities as part of a larger ecosystem that thrived on harassment, conspiracy, and the deliberate radicalization of young audiences. It was the first attempt to draw a boundary between political expression and the kind of content that corrodes the public space the moment it reaches scale.
Now comes the part that moves from history into interpretation. It is difficult to imagine that Mark Zuckerberg, coding in a Harvard dorm room, understood the social and political power his platform would eventually wield. His priority was scale in a world where the user was the product and growth was the only real metric. I do not doubt that he was disturbed when he realized the platform was amplifying hatred. Yet during the same period Facebook was pulled into two separate storms: foreign interference in the 2016 presidential election, and the Cambridge Analytica scandal, which broke into public view in March 2018. Zuckerberg struggled to manage these crises in public. His testimony and press appearances made him look uneasy, and nothing in his delivery helped people trust that he understood the consequences of what he had built. From that point forward the Facebook brand carried the weight of those failures, and the company has never fully shaken the sense that it lost control of its own architecture.
(Photo by Brendan Smialowski / AFP / Getty)
The First Trump Presidency and the Pandemic Years
On the political front, we all remember the allegations that Trump’s campaign had colluded with Russia. Between late 2015 and the fall of 2016 I began noticing a new kind of content circulating in conservative-leaning Facebook groups. One example was the Prokhorenko story, the dramatic tale of a Russian special forces officer calling in an airstrike on his own position to take out the ISIS fighters closing in on him. It was presented as proof that Moscow was taking the fight to ISIS while the Obama administration hesitated. At the time it looked like sentimental battlefield lore. In hindsight it feels like an early influence product, tailored for American audiences who were already skeptical of Obama’s foreign policy. The narrative was clean and persuasive: Russia as decisive and moral, the United States as timid. This was the moment when a foreign government learned how to use Facebook’s architecture to slip disinformation into the habits of ordinary voters. By 2017 and 2018, Facebook was being criticized for its inability to detect or contain this kind of election interference, long after the content had already shaped political sentiment.
Around the same time another pattern took shape on Facebook that exposed the platform’s vulnerability to conspiracy influencers. After the Las Vegas mass shooting in October 2017, misinformation began circulating within hours. Pages tied to Jones, Loomer, and similar figures filled the vacuum with speculation that blended political grievance with invented motives. It was one of the first major tests of Facebook’s ability to contain disinformation during a crisis. The platform failed. The story moved faster than the moderators, and the most sensational claims were shared millions of times before any fact-checking reached the same audience. It showed how quickly conspiracy narratives could harden into belief, and how easily extremist personalities could dominate the information space long before Facebook reacted.
This was also the period when QAnon began to migrate from anonymous forums into mainstream social platforms. The first Q posts appeared on 4chan in late 2017, then moved to 8chan, and by 2018 the theory had found fertile ground inside Facebook’s large and unmoderated groups. Parenting communities, local town pages, and veteran groups became early collection points for the movement. By 2019 the theory had pulled in millions of users who did not realize they were stepping into a closed ideological world. Facebook did not ban QAnon content until late 2020, long after the movement had developed its own internal media ecosystem. The rise of QAnon on Facebook showed that the platform had become a delivery system for conspiracy thinking that could reach people far outside the fringe. It also revealed the scale of Meta’s problem: extremist narratives were no longer confined to anonymous message boards. They had entered the daily routines of ordinary users, and the company did not understand how to stop it.
The arrival of COVID-19 became the largest stress test of the digital information ecosystem since the dawn of social media. For the first time the entire world faced the same crisis in real time, and the platforms were expected to manage a public health emergency while moderating the political speech of hundreds of millions of users. That burden collided with a presidential election year marked by mistrust, federal mismanagement, and partisan exhaustion. The first major confrontation came in May 2020 with the removal of “Plandemic,” a conspiracy film that accused public health officials of engineering the virus. It reached millions of viewers in a matter of days. Facebook, YouTube, and Twitter responded by deleting every copy they could find. This was their first coordinated act of large-scale content removal during the pandemic and it marked a shift from passive moderation to active suppression.
The same pressures shaped the early debate around the Wuhan lab leak theory. Several US intelligence agencies privately regarded a laboratory accident as plausible, even if unproven, yet Meta and other platforms downranked or removed posts that raised the possibility. The decision was rooted in the domestic political climate rather than any scientific consensus. Many Democratic officials dismissed the theory as a xenophobic distraction. Republicans insisted it deserved open debate. At the same time reports of harassment against Asian Americans were rising across the country, and “Stop Asian Hate” began to surface on social media as a counter-movement to what many saw as growing anti-Asian sentiment. Within that environment Meta chose to treat the lab leak idea as inherently inflammatory. The company was trying to contain prejudice as much as misinformation, and it did so at a moment when the intelligence community itself had not closed the question. In later interviews Zuckerberg said the Biden administration urged Facebook to act more aggressively on pandemic misinformation. European regulators were applying their own pressure as they advanced tighter controls on digital platforms.
The result was a moderation system that collapsed uncertainty into certitude and removed discussions that fell well within the bounds of legitimate inquiry. The approach did not calm the political environment. It accelerated the sense that the platforms had become political actors and that their judgment was shaped by fear of public backlash rather than careful evaluation. It left behind a vacuum of trust that would later be exploited by figures who understood how to turn resentment into influence.
The COVID-19 period primed the world for a renewed openness to conspiracy rhetoric. Millions of people lost their livelihoods, routines collapsed, and the long months of isolation pushed the disgruntled into online spaces where frustration could harden into ideology. The conditions were perfect for a familiar narrative to return: that global elites were engineering something larger, perhaps a plan to control society or usher in a New World Order. Public officials did not always help. Messaging about “The New Normal,” repeated on billboards and in press briefings, fed into the belief that governments were preparing citizens for a permanent change in the social contract. In that environment many people looked for someone to blame, and the blame often followed the oldest route available. Conspiracies that begin with vague accusations of elite manipulation tend to end in the same place, with Jews cast as the architects of a worldwide plot. The pandemic created an audience that was stressed, anxious, and primed for stories that offered simple explanations. It did not take long for those stories to find them.
(Photo: Alex Kent)
The Post–January 6 Vacuum and the New Digital Landscape
By early 2021 the United States was already saturated with mistrust. COVID misinformation, unresolved debates over the Wuhan lab leak, and the experience of an election carried out during a pandemic left millions of people convinced the institutions around them could not be trusted. The riot at the Capitol was the political expression of that mood. A crowd acting on the belief that the election had been stolen forced its way into Congress while the sitting president encouraged them to hold their ground. When the violence ended the platforms found themselves in a position they had never been in before. They removed Donald Trump from Facebook, Twitter, and YouTube within days. It was the most consequential takedown of an individual in the history of the internet. To some Americans it looked like overdue accountability. To others it looked like a coordinated act of political censorship. Both interpretations hardened the belief that the platforms were no longer neutral custodians of speech.
The months that followed did not calm the public landscape. Instead a new constellation of personalities emerged to fill the vacuum. Tucker Carlson became the central figure of the mainstream right. His nightly program blended cultural grievance, distrust of institutions, and a foreign policy posture that increasingly echoed Russian narratives about Western decline. Steve Bannon provided the movement with a permanent home on his podcast. He framed January 6 as the opening chapter of a larger struggle and supplied the conspiratorial architecture that held together a fragmented audience.
Alongside this political realignment another current was forming in the online world. Andrew Tate became the defining voice of a generation of young men who felt alienated and resentful. His message relied on the language of dominance and contempt. It carried an emotional posture rather than a political program, but it contributed to a worldview in which cruelty was treated as strength and empathy as weakness. His influence multiplied through short clips circulating on TikTok and Instagram. By the time he was deplatformed in 2022, millions had already absorbed his vocabulary of grievance and superiority.
Figures like Nick Fuentes pushed that worldview into explicitly ideological territory. Fuentes drew on the posture of Tate, the anti-institutional rhetoric that had hardened during the pandemic, and the conspiratorial style that Bannon normalized. He framed white identity politics as a rebellion against cultural decay and cast democracy as a system designed to suppress the people he claimed to represent. His movement grew not because of formal power but because it offered a simple story to a public that felt disoriented and angry. Carlson, for his part, gave that story oxygen by treating extremist narratives as legitimate cultural critiques. What had once been fringe began to feel familiar.
The broader platform environment made this possible. YouTube was struggling to police radicalization pipelines. TikTok rewarded engagement without understanding context. Telegram provided a safe harbor for influencers who needed no moderation at all. By late 2022 these systems overlapped. American audiences were absorbing Russian narratives about Ukraine through accounts that had no awareness of their source. Propaganda, outrage content, and domestic grievance had fused into a single stream.
Elon Musk’s purchase of Twitter in October 2022 accelerated the process. He dismissed much of the trust and safety staff, reversed long-standing policies, and reinstated accounts that had been banned for coordinated harassment or incitement. He framed the changes as a restoration of free speech. In practice the platform lost whatever stability it had left. The loudest and most transgressive voices regained control of the conversation. Many who had felt censored or marginalized interpreted the shift as vindication. Fuentes, Tate’s supporters, and countless lesser influencers treated Musk’s takeover as a signal that the old rules were gone. The new environment rewarded spectacle and punishment. It also rewarded the kinds of narratives that foreign influence campaigns had already learned to exploit.
By the end of 2022 the United States was no longer in a slow- or medium-tempo information conflict with Russia. The tempo had accelerated. Kremlin narratives about Western weakness, Ukrainian corruption, and American hypocrisy spread with unprecedented ease. Domestic influencers repeated these messages as if they were organic observations. The boundaries between foreign propaganda and homegrown resentment dissolved. This is the landscape that carried the public into 2023. The conspiracies that had grown during the pandemic did not fade. They evolved. They adapted to each crisis. And, as they often do, they began to converge on the oldest target available.
That convergence reached its full expression after October 7, when the world witnessed a new wave of antisemitism unfold in real time. That is where the next section begins.
(Photo: Al-Jazeera)
The Digital Convergence After October 7
The events of October 7 did not invent antisemitism, but they revealed how quickly it could cross boundaries that once kept different political communities apart. The first wave of reaction online began as commentary on the violence. It soon became something much broader. Anti-Zionist posts, reaction clips, and protest footage turned into meeting points where users from entirely different worlds arrived at the same time. The far right was present, as expected, but the scale of participation went well beyond that. Pro-Palestinian activists, Arab nationalists, young progressives, disaffected libertarians, and countless anonymous accounts entered the same comment spaces and encountered rhetoric that had previously been confined to extremist forums.
The shift was visible in the language itself. Comment threads under otherwise conventional political reels filled with lines like “We were wrong about the Austrian painter” or “He understood the problem,” phrases that once signaled participation in white nationalist subcultures. They now appeared under videos shared by people who had no prior contact with those movements. The platforms did not create this convergence. They rewarded it. Anti-Zionist content acted as a mixing chamber where political critique, ethnic hostility, and shock-value provocation fused into one stream. Users who arrived to condemn Israeli policy found themselves engaging with overtly antisemitic tropes. Others showed up to express solidarity with Palestinians and ended up repeating memes that originated in neo-Nazi channels. Engagement was the only metric the system cared about, so the most incendiary material stayed in circulation longest.
The convergence did not stop at slurs or casual bigotry. It fed into the broader ecosystem of digital suspicion. The refrain “distrust the official narrative” became a unifying principle across communities that otherwise shared nothing. Historical revisionism surged, the kind that flourishes when institutions lose credibility and when events are reframed as conspiracies rather than conflicts. It followed the pattern described by analysts who have chronicled how revisionist punditry weakens the West by eroding its sense of history and purpose. The same dynamic was now playing out in real time. People who entered the conversation with geopolitical concerns were suddenly debating whether atrocities had happened at all, or whether they were staged to manipulate Western opinion. The effect was not merely ideological fragmentation. It was the dissolution of shared reality.
Foreign actors recognized the opportunity immediately. Russian and Iranian channels amplified divisive narratives, exploiting the emotional intensity that followed October 7. Russia benefited from anything that weakened Western cohesion and anything that undermined support for Ukraine. An anti-Israel worldview fit neatly into that strategy because it positioned the West as hypocritical and morally compromised. China benefited in a different way. A distracted and divided West is less capable of sustaining long-term strategic pressure. The platforms became ideal terrain for these operations. They did not need to create new narratives. They only needed to amplify the ones that were already destabilizing the public sphere.
Figures like Nick Fuentes understood this environment instinctively. He inserted himself into these feeds with familiar claims about Jewish control and global manipulation. The audience was no longer limited to his ideological base. It now included young people who had wandered into the conversation through anti-Zionist posts, users motivated by anger rather than doctrine, and others who had been primed by years of conspiratorial thinking. Fuentes did not build this audience. The platforms delivered it to him.
By late October a pattern had emerged. Antisemitism was no longer contained within predictable political factions. Xenophobia, anti-Zionism, conspiracy culture, and classical antisemitism had merged into a single, volatile discourse. The result was not coordination. It was collision. It was the moment when different resentments, different ideologies, and different foreign interests aligned just enough to turn a crisis into an opportunity.
This is the landscape that set the stage for what came next.
(Photo: YouTube screenshot)
The Recalibration of 2025
Trump’s inauguration in January 2025 signaled a shift that much of the world had already anticipated. The cultural pendulum swung hard against what his supporters called woke excess, and the administration treated the victory as a mandate to dismantle the norms of the previous decade. This shift extended into the platforms that shape public life. Meta had already begun recalibrating its moderation systems, announcing in early January that it would narrow its definition of hate speech and adjust the thresholds that once triggered removals; it spent the following months putting those changes into practice. The company presented the changes as an expansion of free expression. The immediate outcome was a surge of content that had been waiting for precisely this moment. Extremist communities had built a coded vocabulary designed to exist just outside the reach of detection systems. Once enforcement loosened, they moved into the open.
The coded language arrived quickly. The “juice box” emoji became shorthand for Jews. The nose emoji served the same function. The Austrian flag paired with a paint palette circulated as a reference to Hitler. Fried chicken emojis appeared in openly racist contexts. None of this evolved spontaneously. These symbols had been tested for years in fringe spaces. What changed was that enforcement had weakened and the coded hostility moved into mainstream reels and group chats. Meta’s trust and safety teams recognized the shift, but they lacked both a policy framework and the political support to confront it.
Celebrity culture added another layer of volatility. Kanye West released a song titled “Heil Hitler,” and clips of it circulated widely across platforms that once would have limited its reach. The track was not satire and not coded. It was explicit. Its circulation gave extremist communities a sense of cultural validation. It showed younger users that explicit Nazi references were now treated as rebellious rather than disqualifying. It also provided an anthem for the broader movement, a shared reference point that allowed fringe rhetoric to blend effortlessly with mainstream pop culture. The platforms did not contain the song. They amplified it.
Signals from Meta’s leadership reinforced the perception that strong enforcement was no longer a priority. Dana White joined the board with a public persona aligned with Trump and a worldview shaped by tribal politics. Zuckerberg continued appearing on shows like Joe Rogan’s program, a space that often amplified misinformation and anti-establishment narratives. None of this caused the moderation collapse, but it strengthened the belief that the company had repositioned itself culturally and politically. American politics fueled the division. Foreign adversaries recognized the opportunity. Russian and Chinese information networks used the platform’s new permissiveness to magnify narratives designed to fracture Western cohesion.
This was the environment in which Tucker Carlson decided to platform Nick Fuentes. Carlson had spent years dismissing extremism as a media invention. Now he presented Fuentes as a thinker worth hearing. Whatever bans Fuentes faced on Meta no longer mattered. His audience clipped the interviews, distributed them as reels, and flooded the platform with repackaged content that avoided direct slurs while carrying the same ideological message. The moderation tools were designed to detect explicit language. They were not designed to recognize ideology laundered through jokes, symbols, and thousands of coordinated small accounts.
Alternative platforms accelerated the trend. Discord servers became operational hubs outside any meaningful oversight. Telegram channels handled real-time distribution of propaganda, conspiracy content, and harassment campaigns. These spaces bled into mainstream feeds through reposts and curated outrage. Fuentes did not need official accounts. His followers did the work for him. The companies targeted individuals, but individuals were no longer the center of gravity. The momentum belonged to networks.
By 2025 the pattern was unmistakable. The platforms had lost the capacity to shape the flow of extremist content in any meaningful way. The political environment rewarded transgression. Algorithms rewarded conflict. Foreign actors rewarded chaos. Each force strengthened the others. No corrective course was visible. Into that space stepped voices eager to turn resentment into identity and identity into ideology.
(Photo: Donald J. Trump official Facebook account)
My Conclusions
At a bare minimum Meta needs to retune its system for identifying egregious hate speech. The technology exists. The precedent exists. When Stormfront’s domain registrar suspended the site in 2017, it showed that determined actors can disrupt extremist ecosystems when they choose to act. Meta is far larger than any registrar and has deeper insight into the behavior of its users. It has simply chosen not to use that power consistently. The company has become an institution that is, in some ways, too big to fail, yet not disciplined enough to meet the moment.
I have seen what happens when divisive narratives seeded by foreign governments rip a society apart. I watched Russian influence operations destabilize Ukraine long before the full-scale invasion. The goal was always the same. Fragment the public. Turn neighbor against neighbor. Destroy any belief in shared facts. The same strategy is now being directed at the United States. The people pushing it are not improvising. They know exactly how fragile a polarized democracy can become. Every coded slur, every manufactured controversy, every algorithmic dogwhistle serves a purpose. Our adversaries benefit from a balkanized America that is too busy fighting itself to defend its interests.
Other countries, especially authoritarian regimes, have responded to this threat in ways that reveal their priorities. China now requires certain professional qualifications before citizens can make broad public assertions on scientific or technical subjects. The state presents this as a protective measure. It is also censorship, but it reflects a recognition that uncontrolled misinformation weakens a country from within. The United States has taken the opposite approach. We have built an information environment that allows hostile actors to move freely and rewarded the companies that refuse to take responsibility for the damage.
Meta is not unaware of any of this. It is simply choosing a path that preserves its political leverage. By easing its enforcement rules and positioning itself as a champion of free expression, the company gains favor with the current administration and avoids another cycle of political scrutiny. The posture is framed as principle, but the incentives behind it are commercial and strategic. In doing so Meta has allowed its platforms to become accelerants for the political and cultural divisions that foreign adversaries exploit every day.
We cannot regulate ourselves into national unity, but we can demand that companies with global influence act with a minimum level of responsibility. The tools to curb hate speech exist. The tools to impede foreign propaganda exist. The question is whether Meta will use them, or whether it will continue to present disengagement as neutrality while its platforms deepen the fractures running through American society. The stakes are no longer theoretical. They are geopolitical. If we fail to recognize that, someone else will write the ending for us.