The machinery of modern propaganda: Renée DiResta on invisible rulers and bespoke realities
The disinformation researcher explains how small communities of propagandists have transformed the rules of influence and public opinion.
Renée DiResta's path from tech startup co-founder to one of the country's leading disinformation researchers began with a parental concern about vaccination rates in California schools. A decade later, she has become an expert on everything from state-sponsored influence operations to the role algorithms play in manufacturing viral moments.
Now an associate research professor at Georgetown University's McCourt School of Public Policy, DiResta has investigated subjects ranging from Russia’s interference in U.S. elections to QAnon. That work ultimately landed her at Stanford's Internet Observatory, where she served as technical research manager until the program was effectively shuttered in 2024 amid intense political pressure.
Her 2024 book, Invisible Rulers: The People Who Turn Lies Into Reality, exposes how small communities of propagandists increasingly shape public opinion through what she calls "bespoke realities" — custom-made versions of truth for different audiences. But DiResta's work studying the mechanics of online manipulation took a surreal turn when she became a character in the very conspiracy theories she was researching.
Conservative activists and lawmakers targeted her through lawsuits and congressional inquiries, painting her as part of a fictional "censorship industrial complex" — a campaign that contributed to Stanford's decision to wind down the Internet Observatory.
In this edition of Depth Perception, we speak with DiResta about the simple maxim driving modern propaganda (“if you make it trend, you make it true”) and why she believes transparency is the only counter to narrative laundering. —Parker Molloy
What drew you to studying online manipulation and disinformation? Was there a particular moment or experience that made you realize this was the work you needed to be doing?
I got interested in this, not as an academic, but through activism. There was a measles outbreak at Disneyland when my first kid was a year old, and someone with measles was on [San Francisco public transit] during morning rush hour. I’d just done the preschool application process and had gone down a rabbit hole looking at the rise of vaccine opt-outs. As a New York transplant, it was crazy to me that in California you could just say, “Yeah, I'm not going to vax my kindergartener against polio” in public school. I was a co-founder of a tech startup at the time, but I called my state representative and asked if anything could be done and that’s how I wound up getting involved in the fight to pass a pro-vaccine bill in 2015.
I opened Invisible Rulers with the story of the fight over that bill because it was transformative. I was doing network and messaging analysis of the opposition, trying to understand why anti-vax rhetoric was so influential so that we could counter it. We were trying to grow a pro-vaccine page on social media without a lot to work with; Facebook’s ad targeting tool proactively suggested anti-vaccine keywords, and its [recommendation] engine was promoting anti-vax groups. The anti-vax movement was years ahead in terms of investing in social [media]. Meanwhile, on the other side, people vaccinated their kids and went on with their lives. They didn’t make content about it.
We won that legislative fight, but I felt like I’d seen the future while the CDC and others were not taking what had happened online seriously enough. So I started writing about why this was actually a very big deal: this was going to be the future of public opinion shaping, and institutions were behind.
The title of your book, Invisible Rulers, suggests these actors wield real power despite operating in the shadows. Can you explain what makes someone an "invisible ruler" versus a cultural influencer?
The title comes from Edward Bernays’ 1928 book, Propaganda, where he writes, “there are invisible rulers who control the destinies of millions.” He meant behind-the-scenes operators, like PR experts, who shape public opinion without the public realizing it. At the time, the term “propaganda” wasn’t yet a dirty word. Bernays talks a lot in the book about creating demand by appealing to group identity….
That ambiguity between ethical persuasion and manipulation is still with us; the demarcation of “just a cultural influencer” is not always as neat as we might like. Andrew Breitbart realized this. So in the chapter on influencers, I highlight some of the communication styles that we all see online today: the creator who presents as an entertainer, versus your bestie, versus the one who positions themselves as a guru.
Algorithmic curation also acts as an invisible force that heavily steers group identity and norms without the target — us — being aware of it. But I spend the second half of the book going deep on influencer-propagandists in particular … the ones who were part of spreading the Big Lie, who focus on shaping political realities.
Long Shadow podcast: Renée DiResta on the twisted logic of social media’s algorithms
In the latest episode of Long Shadow: Breaking the Internet, Renée DiResta illuminates how social media pulls users into rabbit holes flooded with conspiracy theories. It began with the dawn of the News Feed, Facebook’s algorithmic content engine that allowed the social network to conduct a mass experiment on the human psyche — what we like and hate, what makes us happy and angry. As the episode shows, over the course of a decade, Facebook’s algorithm drove the world to like, comment, and eventually, kill.
In 2015, DiResta began joining anti-vax groups on the social network as a way to investigate disinformation. But soon after, the site began suggesting that she join other unrelated groups that shared a similar conspiratorial worldview — groups that explored the flat-Earth theory, Pizzagate, and QAnon. “It was suggesting things that I had not ever proactively typed out,” she tells host Garrett Graff. But it was also doing something that Sir Timothy Berners-Lee, the inventor of the World Wide Web, predicted all the way back in 1997: It was cordoning DiResta and others in her Facebook groups off into bespoke little realities.
Long Shadow: Breaking the Internet retraces 30 years of web history — a tangle of GIFs, blogs, apps, and hashtags — to answer the bewildering question many ask when they go online today: “How did we get here?” Episode four is out now. Listen and subscribe wherever you get your podcasts.
You write that modern propagandists operate by a simple rule: "If you make it trend, you make it true." How does that actually work?
Click, contribute, commit: It’s a process of capturing attention, inspiring participation, creating a sense of investment in the moment or issue, and entrenching the person more deeply in the belief they’ve expressed. A post from an emerging trend hits your feed because some algorithm decides you’re likely to pay attention. You engage, reposting it or creating your own content. Now you’ve participated, so you’re more invested. Your take hits your network’s feeds, reaching people who trust you. You’ve helped fuel the moment, and you’ve also publicly aligned yourself with a position.
I see this as the extremely online manifestation of something Jacques Ellul was writing about in the 1960s: Propaganda is most effective, not when it tries to transform a belief, but when it invites participation.
People are often pulled into participating in trending moments, drawn in by the emotion of what’s in front of them. This isn’t malicious; it’s human nature, particularly if it’s a rumor that can’t yet be verified. But it’s a dynamic that propagandists can exploit. Virality functions as a proxy for legitimacy.
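A toy simulation can make the loop DiResta describes concrete. The sketch below is purely illustrative: every number, post name, and formula is invented rather than drawn from any platform's real ranking system. It shows how a feed that ranks purely on predicted engagement lets an emotionally charged post compound its reach with every repost, while a measured correction stalls.

```python
# Toy model of the "make it trend, make it true" loop: a feed ranked purely by
# predicted engagement lets high-arousal posts recruit reposts, and each repost
# expands reach further. Illustrative only -- not any platform's real ranker.
import random

random.seed(7)

posts = [
    {"id": "measured-correction", "arousal": 0.2, "reach": 100, "reposts": 0},
    {"id": "outrage-rumor",       "arousal": 0.9, "reach": 100, "reposts": 0},
]

def predicted_engagement(post):
    # Engagement-optimized ranking rewards emotional arousal plus existing momentum.
    return post["arousal"] * (1 + post["reposts"])

for _ in range(5):
    for post in sorted(posts, key=predicted_engagement, reverse=True):
        # "Click, contribute": a slice of viewers repost, in proportion to arousal.
        new_reposts = int(post["reach"] * post["arousal"] * 0.05)
        post["reposts"] += new_reposts
        # "Commit": each repost puts the post in front of that person's followers.
        post["reach"] += new_reposts * random.randint(20, 50)

for post in posts:
    print(f'{post["id"]}: reach={post["reach"]:,}, reposts={post["reposts"]:,}')
```

Run it and the outrage post ends up with vastly more reach than the correction, which is the virality-as-legitimacy dynamic in miniature.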
What's the most important thing mainstream media gets wrong when covering disinformation campaigns?
The media often uses “disinformation” as a catch-all for misleading or manipulative content, when “propaganda” would be the more appropriate term. I appreciate that you used the word “campaign,” because a campaign implies an objective and intent, and that’s what distinguishes disinformation. We should reserve the term for deliberate, coordinated efforts to deceive. When the media tells these stories, the focus should be on the campaign — on the power play, not the pieces of content. Who is doing this? Why now? What effect did it have?
I think the Active Measures Working Group of the 1980s had the right idea. They countered Soviet disinformation campaigns by exposing the mechanics … not litigating the individual claims. This is also useful because the content in these campaigns often isn’t explicitly false. It’s strategically framed or twisted to shift perception, erode trust, or create a false impression of majority opinion.
You write about "bespoke realities" — custom-made versions of truth for different audiences. What's the most striking example you've encountered of how these separate realities operate?
Well, before 2023, I would have said QAnon, or perhaps January 6. But now, we’re watching U.S. government capacity be dismantled in response to online conspiracy theories about the 2020 election, and the anti-vax Secretary of Health and Human Services is promoting discredited theories about autism. The examples are piling up.
The throughline is this: When influencers, algorithms, and crowds reinforce each other in some of these conspiratorial spaces, they aren’t just shaping opinions — they’re creating belief systems. Some of these come with a sense of grievance, and people feel called to act.
Ironically, some of the MAGA leaders who worked the hardest to create pervasive distrust in institutions now run those institutions — and are facing their own angry crowds. FBI Deputy Director Dan Bongino telling his old audience that Epstein did, in fact, kill himself was quite the moment. But where we are feels very precarious, because some of the people who now run the government are willing to continue echoing theories that help them maintain power. There is also the growing capacity of generative AI to manufacture unreality, or to create plausible deniability in actual reality.
“The media often uses ‘disinformation’ as a catch-all for misleading or manipulative content, when ‘propaganda’ would be the more appropriate term.” —Renée DiResta
Stanford effectively shut down the Internet Observatory amid the legal and political pressure. How do you continue this work when institutions are afraid to support it?
Stanford was a canary in the coal mine, unfortunately. I think the last few months have hopefully made clear that intimidation and capitulation don’t work. Democracies need resilient institutions, but they also need resilient people, and resilient people can continue the work elsewhere if necessary. When formal structures pull back or are compromised, work can decentralize and disperse. Some people might go independent, some might join smaller labs, some might collaborate across borders. Legal entities can step up and support those targeted by lawfare. The reason disinformation researchers were targeted is because they help people understand the tactics and objectives of those who wish to manipulate the public. That work needs to continue.
You’ve argued that we need to change the incentive structure of social media — rewarding accuracy and civility rather than engagement. What would that actually look like in practice?
It means shifting platform incentives to reward bridging over baiting. Instead of amplifying the most extreme, emotionally charged takes, platforms could prioritize content that brings different communities into conversation — people who disagree but engage in good faith. That’s how we rebuild some shared reality: not by forcing consensus, but by making it easier to encounter nuance and harder to go viral for outrage. On the accuracy front, I think Community Notes is a worthwhile addition — though I believe platforms should fund incentives for those who participate. We have revenue sharing for creators; why not for this kind of program? Meta can take some of the money it’s pulling from fact-checking and pioneer that kind of redirection.
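As a rough illustration of the "rewarding bridging" idea DiResta describes, here is a minimal sketch. The rater names, clusters, and scoring rule are all invented for this example; the real Community Notes system uses matrix factorization over full rating histories rather than anything this simple. The underlying principle is the same, though: content endorsed by people who usually disagree with each other outranks content endorsed by only one camp.

```python
# Simplified "bridging-based" scoring: a note or post ranks higher when the
# people endorsing it come from clusters that usually disagree. This is a
# stand-in for the concept, not X's actual Community Notes algorithm.
from itertools import combinations

# Hypothetical raters, each tagged with the cluster they usually vote with.
ratings = {
    "one_camp_note": {"alice": "cluster_a", "bob": "cluster_a", "cara": "cluster_a"},
    "bridging_note": {"alice": "cluster_a", "dave": "cluster_b", "erin": "cluster_b"},
}

def bridging_score(raters: dict) -> float:
    """Fraction of endorsement pairs that cross cluster lines."""
    pairs = list(combinations(raters.values(), 2))
    if not pairs:
        return 0.0
    cross_cluster = sum(1 for a, b in pairs if a != b)
    return cross_cluster / len(pairs)

for note, raters in ratings.items():
    print(note, round(bridging_score(raters), 2))
# one_camp_note 0.0   -> endorsed only within a single camp, so it is not boosted
# bridging_note 0.67  -> endorsed across camps, so it surfaces
```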
Are there any examples of platforms or countries that have found better approaches to balancing free expression with limiting manipulation?
I like Reddit’s approach. Federated moderation: some central rules, and then a lot of control devolved down to subreddit [moderators] who can set their own norms. No posting cat pics in the dog subreddit. I know that Bluesky is currently thought of as “Lib Twitter,” but it has amazing technological potential for real agency as well, with its instance moderation combined with community labelers and custom feeds.
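To make the devolved-control model DiResta favors concrete, here is a minimal sketch of composable moderation under invented names. The labelers, labels, and post IDs are hypothetical, and this is not Reddit's or Bluesky's actual API; it only shows the shape of the design, in which users choose which community labelers they trust and their feed is filtered accordingly.

```python
# Sketch of composable moderation: third-party "labelers" publish labels for
# posts, and each user's feed applies only the labelers that user subscribes to.
# All names here are invented; this is the concept, not a real platform API.
from dataclasses import dataclass, field

@dataclass
class Post:
    id: str
    text: str
    labels: set = field(default_factory=set)

# Labels published by hypothetical community-run labelers.
LABELER_FEEDS = {
    "spam-watch":   {"p2": {"spam"}},
    "cat-pic-cops": {"p3": {"off-topic"}},
}

def render_feed(posts, subscribed_labelers, hidden_labels):
    """Apply only the labelers this user trusts, then hide what they opted out of."""
    for post in posts:
        for labeler in subscribed_labelers:
            post.labels |= LABELER_FEEDS.get(labeler, {}).get(post.id, set())
        if not (post.labels & hidden_labels):
            yield post

posts = [Post("p1", "useful thread"), Post("p2", "buy followers now"), Post("p3", "cat pic in the dog feed")]
for p in render_feed(posts, subscribed_labelers={"spam-watch"}, hidden_labels={"spam"}):
    print(p.id, p.text)
# Prints p1 and p3: this user trusts spam-watch but hasn't subscribed to cat-pic-cops.
```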
The web's "original sin": Garrett Graff on tech's deadly embrace of the algorithm
A quarter century ago, as the world held its breath waiting to see if Y2K would bring a digital apocalypse or just another New Year's hangover, President Bill Clinton celebrated how technology had created a moment of shared global experience.
What gives you hope about the future of our information environment? What keeps you up at night?
What gives me hope is that more people now recognize that propaganda matters, that the information environment is infrastructure, and that building better systems is critical. I see platforms experimenting with user agency. I see the public paying attention, and actively seeking alternatives to manipulation.
What keeps me up at night is the fracturing of the public. We don’t just disagree on values — we increasingly live in separate realities, and political elites who benefit from this are not stepping up and leading us back together. They’re accelerating the split. When shared facts collapse, so does the foundation for democratic debate. When accountability collapses, people begin to feel that violence is justified. And the people who benefit from that fragmentation — the ones profiting from outrage and manufacturing division — are very good at what they do.
For journalists covering disinformation, what's the most important principle to keep in mind to avoid amplifying the very narratives they're trying to debunk?
I was talking to a journalist friend the other day, discussing the issue of debunking AI-generated content and how it’s become hard to offer tips at this point … and she said something I appreciated in this particular context: Focus not on the ‘what,’ but on the ‘so what.’ Overfocusing on the one AI image of the burning car isn’t particularly helpful. The bigger story is the overall campaign. There is not actually a mass riot in city X, but the reason this faked car image is circulating is that someone wants you to believe there is.
Further reading from Renée DiResta:
“Elon Musk’s Soap Operas for Conspiracy Buffs” (The Atlantic, March 23, 2025)
Invisible Rulers: The People Who Turn Lies Into Reality (PublicAffairs, 2024)
“Rumors on X Are Becoming the Right’s New Reality” (The Atlantic, Oct. 11, 2024)
“My Encounter With the Fantasy-Industrial Complex” (The Atlantic, June 15, 2024)
“How Online Mobs Act Like Flocks Of Birds” (Noēma, Nov. 3, 2022)