
Deepfake-tracking nonprofits warn that generative disinformation is real; its targets just aren't who you'd expect


As we navigate the complexities of the 2024 election, one concern has been at the forefront of many minds: the potential for AI-generated disinformation to influence voter opinions and sway the outcome. While some feared that the 2024 election would be decided by AI-generated fake news, the reality is far more nuanced.

A Misconception: The Targeted Approach to Disinformation

Many people assume that deepfakes, AI-generated videos or audio recordings designed to deceive, are primarily targeted at specific individuals or groups. However, this assumption is misplaced. In an interview with TechCrunch, Oren Etzioni, a renowned AI researcher and founder of the nonprofit TrueMedia, shed light on the complexity of disinformation campaigns.

"There’s a diversity of deepfakes," Etzioni explained. "Each one serves its own purpose, and some we’re more aware of than others. Let me put it this way: For every thing that you actually hear about, there are a hundred that are not targeted at you. Maybe a thousand. It’s really only the very tip of the iceberg that makes it to the mainstream press."

This is not to say that disinformation campaigns are harmless or lacking in impact; the opposite is true. Etzioni pointed out that most people assume their own experience is representative of everyone else's, and that assumption rarely holds.

The Reality: America as a Hard Target

In the context of disinformation campaigns, America is actually considered a hard target due to its relatively well-informed populace, readily available factual information, and trusted press. This may seem counterintuitive, given the current state of media polarization and misinformation. However, it’s precisely this combination of factors that makes it challenging for disinformation campaigns to gain traction.

We tend to think of deepfakes as something like a video of Taylor Swift doing or saying something she wouldn’t. But the most insidious deepfakes are those that manipulate situations and people in ways that can’t be easily identified or countered.

The Variety of Disinformation Campaigns

Etzioni emphasized the importance of understanding the variety of disinformation campaigns. "You don’t see it because you’re not on the Telegram channel, or in certain WhatsApp groups — but millions are," he noted, citing an example of a deepfake that manipulates people into believing something that is patently false.

This is just one aspect of the complex landscape of AI-generated disinformation. TrueMedia’s work focuses on detecting and mitigating these threats, which can take many forms, including:

  • Audio and video manipulation: AI algorithms can be used to create convincing fake audio or video recordings.
  • Deepfakes: Advanced deep learning techniques can generate highly realistic fake videos or images.
  • Language generation: AI models can produce coherent, context-specific text that is designed to deceive.
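As a rough illustration only (TrueMedia's actual pipeline is not public, and every function name here is hypothetical), a detection service covering the forms above might triage each incoming item by modality and route it to a dedicated scoring model. The `score_*` functions are stand-in stubs for real trained detectors:

```python
from dataclasses import dataclass

# Hypothetical per-modality detectors. A real system would invoke trained
# models here; each returns a probability that the item is AI-generated.
def score_audio(item: bytes) -> float:
    return 0.5  # stub value for illustration

def score_video(item: bytes) -> float:
    return 0.5  # stub value for illustration

def score_text(item: str) -> float:
    return 0.5  # stub value for illustration

@dataclass
class Verdict:
    modality: str
    score: float
    flagged: bool

def triage(modality: str, item, threshold: float = 0.8) -> Verdict:
    """Route a media item to the detector for its modality and flag it
    when the generated-content score meets the threshold."""
    detectors = {"audio": score_audio, "video": score_video, "text": score_text}
    if modality not in detectors:
        raise ValueError(f"unsupported modality: {modality}")
    score = detectors[modality](item)
    return Verdict(modality=modality, score=score, flagged=score >= threshold)
```

The threshold is the operational knob such a service would tune: too low and legitimate media gets flagged, too high and convincing fakes slip through unreviewed.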

The Challenge of Detection

While TrueMedia and other organizations are working to develop effective countermeasures against AI-generated disinformation, the challenge remains significant. Etzioni noted that "the purveyors of generative disinfo didn’t feel it necessary to take part" in the 2024 election, which may sound like a silver lining.

However, that absence says less about the strength of our defenses than about the stakes. If the stakes are high enough, malicious actors will find ways to exploit these vulnerabilities, regardless of the difficulty or complexity involved. The only way to stay ahead is to keep developing effective detection and mitigation strategies, and to educate the public about the risks and consequences of AI-generated disinformation.

The Future: Navigating a Complex Landscape

As we move forward into an increasingly complex landscape of AI-generated disinformation, it’s essential that we remain vigilant. By understanding the various forms of disinformation, their impact on society, and the challenges associated with detection and mitigation, we can work towards developing effective countermeasures.

In this journey, collaboration between researchers, policymakers, and industry leaders will be crucial in shaping a safer, more informed public discourse.