Debunked by AI: The future of misinformation on social
Ethan Mollick, professor of management at the Wharton School of the University of Pennsylvania, has a simple benchmark for tracking the progress of AI’s image generation capabilities: “Otter on a plane using wifi.”
Mollick uses that prompt to create images of … an otter using Wi-Fi on an airplane. Here are his results from a generative AI image tool around November 2022.
And here is his result in August 2024.
AI image and video creation have come a long way in a short time. With access to the right tools and resources, you can manufacture a video in hours (or even minutes) that would’ve otherwise taken days with a creative team. AI can help almost anybody create polished visual content that feels real — even if it isn’t.
Of course, AI is only a tool. And like any tool, it reflects the intent of the person wielding it.
For every aerial otter enthusiast, there’s someone else creating deepfakes of presidential candidates. And it’s not only visuals: Models can generate persuasive articles in bulk, clone human voices, and create entire fake social media accounts. Misinformation at scale used to take serious operations, time, and money. Now, anyone with a decent internet connection can manufacture the truth.
In a world where AI can quickly generate polished content at scale, social media becomes the perfect delivery system. And AI’s impact on social media can’t be ignored.
Misinformation is no longer just about low-effort memes lost in the dark corners of the web. Slick, personalized, emotionally charged AI content is misinformation’s future. To understand the implications, let’s dive deeper into social media misinformation and AI’s role on both sides of the misinformation fence.
Social Media Misinformation Today
What is misinformation?
Before I begin, I should note how I’ll discuss the term “misinformation.” Technically speaking, this issue has a few different flavors:
- Misinformation is false information shared without the intent to deceive. It’s usually spread accidentally because people believe it’s true. When your uncle shares a fake news story on Facebook, that’s misinformation.
- Disinformation is false information shared deliberately to mislead, manipulate, or harm a person or persons. Its purpose is often to create political, social, or financial gain. Think bad state actors or troll farms meant to deceive intentionally.
- Malinformation is when someone shares true information intending to cause harm, often by taking it out of context. It’s a real story used maliciously. For example, someone leaking private emails to smear a public figure is malinformation.
For our purposes, I’ll focus on misinformation as much as possible and will call out disinformation or malinformation when the distinction matters.
Social Media Misinformation: A Brief History
The fact that we need distinctions hints at the scope and scale of social media misinformation today. False or inaccurate printed content has existed since the Gutenberg printing press.
The advent of newspapers also brought “fake news” and hoaxes — one of my favorites being The Great Moon Hoax of 1835, a series of fake articles in the New York Sun covering the “discovery” of life on the Moon.
Misinformation has followed every medium — newsprint, radio, television. But the internet? Two-way communication on the World Wide Web has helped misinformation like “fake news” proliferate.
Once users could create content online — not just consume it — the door opened to an almost limitless supply of misinformation. And as social media platforms became dominant, that supply didn’t just grow; it became incentivized.
News on Social Media
Today, 86% of Americans get their news from digital devices; information sits in their palms, awaiting engagement. Ironically, the more accessible information becomes, the less we seem to trust it — especially our news.
Social media has only exacerbated these challenges. Firstly, social media platforms have become primary news sources. The 2024 Digital News Report from Reuters & Oxford found:
- News use has fragmented, with six networks reaching significant global populations.
- YouTube is still the most popular, followed by WhatsApp, TikTok, and X/Twitter.
- Short news videos are increasingly popular, with 66% of respondents watching them each week — and 72% of consumption happens on-platform.
- More people worry about what is real or fake online: 59% of global respondents are worried, including 72% of Americans.
- TikTok and X/Twitter are cited for the highest levels of distrust, with misinformation and conspiracy theories proliferating more often on these platforms.
The more we rely on social media platforms for news, the more their algorithms prioritize engagement over accuracy in the race to keep us scrolling. Creators on these platforms are then incentivized to produce content that captures attention, engagement, and dollars.
And if the goal is engagement, not accuracy, why limit yourself to real news? When “outrage is the key to virality,” as social psychologist Jonathan Haidt says, and virality leads to rewards, you do whatever it takes to go viral.
And it works, as the data shows. MIT research found that false news reached people about six times faster than true stories on X/Twitter. A story need not be true to be interesting, and in an attention economy, interesting wins.
Mind you, misinformation is often unintentional. And the reward systems these platforms offer to users encourage sharing interesting content regardless of veracity. Your uncle may not know if an article is true, but if sharing it gets him twice as much engagement on Facebook, there’s a good chance he pushes that button.
But now, it’s not just humans spreading falsehoods. Generative AI’s ascendance is fueling the fire, revving up a powerful misinformation engine and making it harder than ever to tell what’s real.
AI Can Create Misinformation, Too
Generative AI tools, broadly accessible and driven by simple prompts, put creative power in the hands of nearly anybody with a fast enough internet connection.
So far, the ability to manufacture fake images and videos is AI’s greatest contribution to the spread of misinformation. Common offenders include “deepfakes,” AI-generated multimedia used to impersonate someone or represent a fictitious event. Some can be funny; others, damaging.
For example:
- The “swagged-out Pope,” with images of Pope Francis in a puffy jacket.
- Russian state-sponsored fake news sites mimicking The Washington Post and Fox News to disseminate AI-generated misinformation.
- Drake’s “Taylor Made Freestyle,” which used deepfakes of Tupac Shakur and Snoop Dogg. Drake removed the song from his social media after the Shakur estate sent a cease-and-desist letter.
- A campaign robocall to New Hampshire residents using a deepfake of President Biden. The consultant behind the robocall was assessed a $6 million fine by the FCC and was indicted on criminal charges.
Organizations can also use AI copywriters to mass produce thousands of fake articles. AI bots can share those articles and simulate engagement at scale. This includes auto-liking posts, generating fake comments, and amplifying the content to trick algorithms into prioritizing it.
One often-cited prediction suggests that by 2026, up to 90% of online content could be “synthetically generated,” meaning created or heavily shaped by AI. I suspect that number is inflated, but the trend line is real: content creation is becoming faster, cheaper, and less human-driven.
That said, I’ve also found that some fears over AI misinformation’s real-world effects could be overblown. Ahead of the 2024 U.S. presidential election, four out of five Americans expressed some level of concern about AI spreading misinformation.
Yet despite efforts from foreign actors and deepfakes like the New Hampshire robocall, AI’s impact on the election ended up muted. Technological advances could change that in future elections, but the result shows the limits of AI-driven misinformation in the current climate.
And from a brand safety perspective, marketers aren’t panicking either — at least not when using established social media platforms. Our own research found that marketers felt most comfortable with Facebook, YouTube, and Instagram as safe environments for their brands. While AI-generated misinformation makes noise in political and academic circles, many marketing teams remain somewhat confident.
So if AI-driven misinformation isn’t swaying elections or bothering marketers (yet), where does that leave us? These AI tools are evolving, as are the tactics. Which raises the question: Can AI fight the fire it helped light?
But … AI Can Also Be the Solution
For years, search engines like Google have tried to fend off the spread of misinformation, and many news sources put misinformation management front and center. Google News, for example, has a “Fact Check” section highlighting erroneous information. But even with automation and bots helping, the effort faces an uphill battle in the Age of AI.
What AI unlocks is scale. The same generative models that create misinformation can power systems that detect, flag, and remove that content just as quickly. As AI-generated content becomes more realistic and harder for humans to spot, scalable AI countermeasures become essential, both for protecting public trust and for protecting brand reputation.
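To make that idea concrete, here’s a minimal sketch of what one AI-assisted flagging step could look like, using an off-the-shelf zero-shot classifier from the open-source transformers library to score posts for possibly misleading claims. It’s a simplified illustration under my own assumptions (the labels and threshold are invented), not how any of the vendors below actually work.

```python
# A minimal sketch: scoring social posts for "possibly misleading" content
# with an off-the-shelf zero-shot classifier. Illustration only; real
# moderation systems combine many signals (network behavior, provenance,
# human review), not a single text score.
from transformers import pipeline

# facebook/bart-large-mnli is a publicly available NLI model commonly used
# for zero-shot classification.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

posts = [
    "BREAKING: Scientists confirm the moon landing was staged in a studio.",
    "The city council meets Thursday at 7 p.m. to vote on the new bike lanes.",
]

# Candidate labels are assumptions for this sketch, not an industry standard.
labels = ["misleading or false claim", "ordinary factual update", "opinion"]

for post, result in zip(posts, classifier(posts, candidate_labels=labels)):
    top_label, top_score = result["labels"][0], result["scores"][0]
    # Flag for human review rather than auto-removing; the 0.6 cutoff is arbitrary.
    if top_label == "misleading or false claim" and top_score > 0.6:
        print(f"FLAG for review ({top_score:.2f}): {post}")
    else:
        print(f"OK ({top_label}, {top_score:.2f}): {post}")
```

A text score alone is a weak signal; in practice it would be paired with network-level context (who shared the post, how fast, from which accounts), which is roughly where the tools below focus.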
Marketers are caught in the middle of this AI arms race. They’re trying to use AI in their business branding to help them do their jobs faster and better, yet AI-powered misinformation can damage brand credibility, platform visibility, and consumer loyalty. In short, marketers need help.
Here are some organizations on the front lines of that fight, using AI to rein in misinformation.
Cyabra
Cyabra focuses on detecting fake accounts, deepfakes, and coordinated disinformation campaigns. Cyabra’s AI analyzes details like content authenticity and network patterns and behaviors across platforms to flag false narratives early.
Fake profiles can pop up and push misleading online narratives with breathtaking speed. If your brand is monitoring online risk and sentiment, a tool like Cyabra can keep pace with the spread of misinformation.
Logically
Logically pairs AI with human fact-checkers to monitor, analyze, and debunk misinformation. Its Logically Intelligent (LI) platform helps governments, nonprofits, and media outlets track misinformation’s origins and spread across social media.
For marketers and communicators, Logically can offer an early warning detection system for false narratives around their brand, industry, or audience.
Reality Defender
Reality Defender uses machine learning to scan digital media for signs of manipulation, like synthetic voice or video content or AI-generated faces. I haven’t found many tools offering this kind of proactive detection, which lets you catch deepfakes before they go viral.
This kind of early detection can help brands protect their campaigns, spokespeople, or public-facing content from synthetic manipulation.
Debunk.org
Debunk.org blends AI-driven web monitoring with human analysis to detect disinformation across over 2,500 online domains in over 25 languages. It tracks trending narratives and misleading headlines, then publishes reports countering emerging falsehoods.
Global brands will find Debunk.org especially helpful, given the tool’s multilingual coverage. You can navigate international markets and regional misinformation spikes more intelligently.
Consumers are also getting AI-powered support. For example, TikTok now automatically labels AI-generated content thanks to a partnership with The Coalition for Content Provenance and Authenticity (C2PA) and its metadata tools.
And with Google investing heavily in its Search Generative Experience, the company includes an “About this result” panel in Search to help users assess the credibility of the results it surfaces.
As AI advances, so too will the tactics used to deceive, and the tools designed to stop them. What’s around the AI river bend? Let’s look at where misinformation could head in the Age of AI — and what experts are already seeing.
What We Can Expect: Misinformation in the Age of AI
Emotional Manipulation and “Fake Influencers”
According to Paul DeMott, CTO of Helium SEO, the most dangerous misinformation tactics may be the ones that don’t feel like misinformation.
“As AI gets better, some subtle ways misinformation spreads are slipping under the radar. It’s not always about fake news articles; AI can create believable fake profiles on social media that slowly push biased info,” he said. “Researchers might not be paying enough attention to how these fake accounts work to influence people over time.”
DeMott sees the issue extending beyond fake people into the message’s emotional design.
“One thing that could make it harder to spot misinformation is how AI can target specific emotions. AI can create messages that prey on people’s fears or desires, making them less likely to question what they are seeing,” he said.
He believes the next wave of misinformation solutions must match AI’s budding emotional awareness with detection systems ready for subtext.
“To counter this, we might need to look at AI solutions that can detect these subtle emotional cues in misinformation. We can use AI to analyze patterns in how misinformation spreads and identify accounts that are likely to be involved,” said DeMott.
“It’s a constant cat-and-mouse game, but by staying ahead of these evolving tactics, we have a shot at keeping the information landscape a bit cleaner.”
Hyper-Personalization and Psychological Biases
Kristie Tse, a licensed psychotherapist and founder of Uncover Mental Health Counseling, sees the danger not only in the tech but also in the psychology behind why misinformation works.
“One emerging misinformation tactic that’s being underestimated is leveraging highly personalized, AI-generated content to manipulate beliefs or opinions,” she said.
“With AI becoming increasingly sophisticated, these tailored messages can feel authentic and resonate deeply with individuals, making them more effective at spreading falsehoods.”
Tse explains how misinformation hijacks humans’ emotional wiring, leading to challenges like the speed of spread.
“The speed at which misinformation spreads is often faster than our ability to fact-check and correct it, partly because it taps into strong emotional responses — like fear or outrage — that bypass critical thinking,” she said. “Psychological factors, such as confirmation bias, play a significant role. People are more likely to believe and share misinformation that aligns with their existing beliefs, making it harder to counteract.”
But AI could help us if we build the right tools.
“On the solution side, we might be overlooking the potential for AI to create tools that proactively detect and counter misinformation in real-time before it goes viral,” said Tse.
“For example, AI could flag manipulated content, suggest reliable sources, or even simulate a debate to highlight contradictory evidence. However, these solutions need to be user-friendly and widely accessible to truly make an impact.”
AI Ecosystems That Reinforce Biases
James Francis, CEO of Artificial Integrity, warns we’re focusing too much on content moderation and not enough on context manipulation.
“We’re not just dealing with fake articles or deepfakes anymore. We’re dealing with entire ecosystems of influence built on machine-generated content that feels real, speaks directly to our emotions, and reinforces what we already believe,” he said.
Francis notes that people usually fall for lies because the content feels emotionally right.
“What worries me most isn’t the technology — it’s the psychology behind it. People don’t fall for lies because they’re gullible. They fall for them because the content feels familiar, comfortable, and emotionally satisfying,” he said. “AI can now mimic that familiarity with incredible precision.”
With such an ecosystem in play, he believes the real challenge isn’t removing falsehoods but empowering people to stop and think.
“If we want to push back, we need more than just filters and fact-checkers. We need to build systems that encourage digital self-awareness,” he said. “Tools that don’t just say ‘this is false,’ but that nudge users to pause, to question, to think. I believe AI can help there, too — if we design it with intention. The truth doesn’t need to shout. It just needs a fair shot at being heard.”
Synthetic Echo Chambers
Rob Gold, VP of marketing communications at Intermedia, raises the alarm on one of AI’s more insidious abilities: creating networks of fake credibility.
“It’s not just a fake or misinformed article, but the potential for AI to manufacture the illusion of academic or expert consensus by building large networks of interconnected fake sources,” he said.
Gold shares that AI could mimic credibility by creating articles, studies, posts — even Reddit threads — fooling users and search engines.
“It wouldn’t be hard at all to build a strong, fake echo chamber supporting a false story. It tricks us because we tend to trust information that seems backed up by many sources, and AI makes scaling that creation simple,” he said.
“Imagine trying to disprove a fake claim about, say, security flaws in cloud communications when there are half a dozen fake ‘studies’ that all agree and cite one another.”
To fight this, he says we need smarter tools that can detect citation loops and sudden explosions of new, interlinked sources.
“These tools should flag strange patterns, like lots of new sources appearing quickly, sources that heavily cite each other but have no history, or sources that don’t link back to any established, trusted information,” Gold said.
“Ironically, seeing too many of these tightly linked, brand-new sources pointing only to each other might become the warning sign itself.”
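As a rough illustration of what Gold describes, here’s a minimal sketch that models sources as a directed citation graph and flags tight clusters of brand-new domains that cite only one another. The domains, the “trusted” allowlist, and the thresholds are invented for the example; a real system would also weigh registration dates, traffic, and content similarity.

```python
# A minimal sketch of "citation loop" detection: flag clusters of sources
# that cite one another but never cite established outlets. All domains and
# thresholds here are invented for illustration.
import networkx as nx

TRUSTED = {"example-news.org", "well-known-journal.com"}  # assumed allowlist

# Directed edges: (citing_domain, cited_domain)
citations = [
    ("fresh-study-site.net", "new-research-blog.net"),
    ("new-research-blog.net", "cloudsec-alert.net"),
    ("cloudsec-alert.net", "fresh-study-site.net"),
    ("legit-analysis.com", "well-known-journal.com"),
]

G = nx.DiGraph(citations)

# Strongly connected components of size >= 3 are candidate citation loops.
for component in nx.strongly_connected_components(G):
    if len(component) < 3:
        continue
    # Does anything inside the cluster ever cite an established source?
    cites_trusted = any(
        target in TRUSTED
        for node in component
        for target in G.successors(node)
    )
    if not cites_trusted:
        print("Possible synthetic echo chamber:", sorted(component))
```

A graph view like this pairs naturally with the other warning signs Gold lists, such as how recently the domains appeared and whether they have any history at all.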
Confusion Attacks Against the Fact-Checkers
Will Yang, head of growth and marketing at Instrumentl, sees an even deeper problem simmering: AI content designed not only to trick humans but also to confuse other AIs.
“Neural Network Confusion Attacks are a sneaky new tactic emerging as AI technology advances. These attacks involve creating AI-generated content designed to confuse AI fact-checkers, tricking them into misidentifying genuine news as false,” he said.
These attacks fool AI systems, of course. But they also erode public trust in all moderation efforts.
“Researchers might underestimate the psychological impact this has, as users begin to question the reliability of trusted sources,” he said. “This erosion of trust can have real-world consequences, influencing public opinion and behavior.”
Yang suggests the solution is for AI systems to get smarter at both detection and identifying manipulative intent.
“Training these systems not only on typical data patterns but also on detecting subtle manipulation within AI-generated text can help,” he said.
“This means enhancing AI models to recognize inconsistencies often overlooked by conventional systems and focusing on anomaly detection. Expanding datasets used for AI training to include diverse scenarios could also reduce the success of these confusion attacks.”
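To ground the anomaly-detection idea Yang mentions, here’s a minimal sketch that turns posts into character-level TF-IDF vectors and uses an isolation forest to surface statistical outliers for human review. This is my own simplified illustration of the general technique, not a description of any production fact-checking system; the sample posts and the contamination setting are assumptions.

```python
# A minimal sketch of anomaly detection over text: vectorize posts and flag
# statistical outliers for human review. Illustration of the general idea only.
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "City council approves budget for road repairs next spring.",
    "Local library extends weekend hours starting in June.",
    "School board schedules public meeting on new curriculum.",
    "SHOCKING!!! Secret study PROVES officials hid the REAL budget numbers!!!",
]

# Character n-grams pick up stylistic quirks (odd punctuation, casing) that
# word-level features can miss.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X = vectorizer.fit_transform(posts).toarray()

# contamination encodes an assumption about how much of the stream is anomalous.
detector = IsolationForest(contamination=0.25, random_state=0)
labels = detector.fit_predict(X)  # -1 marks an outlier

for post, label in zip(posts, labels):
    print("REVIEW:" if label == -1 else "ok:", post)
```

Flagged items would go to a human reviewer, not straight to removal; the point is to narrow what reviewers have to look at, not to decide truth automatically.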
Social Media Misinformation Is Getting Smarter. So Must We.
Ethan Mollick posted another otter video in January 2025. Watch it, and you might mistake it for cinema.
Otters on planes are all fun and games. But this same technology can whip up fake videos or audio of celebrities and politicians. It can tailor emotionally precise content that slips easily into a family member’s Facebook feed. And it can create an ocean of fake articles or fictional studies to manufacture expertise overnight, leaving users none the wiser.
I work with AI in marketing regularly, but writing this piece reminded me how fast this space is moving. The truth may not need to shout, but amid louder AI-generated noise, it needs help to be heard.
Whether you’re scrolling social media feeds as a marketer or an everyday user:
- Stay aware.
- Ask questions.
- Understand how AI systems work.
Thankfully, AI isn’t only amplifying misinformation; it’s also helping us detect and manage it. We can’t outsource the truth to machines. But we can make them part of our solution.