That's the premise behind "Disinformation" - with award-winning Evergreen host Paul Brandus. Get ready for amazing stories - war, espionage, corruption, elections, and assorted trickery showing how false information is turning our world inside out - and what we can do about it. A co-production of Evergreen and Emergent Risk International.
The Future of Media Literacy: How to Combat Disinformation in the Age of AI
| S:2 E:1
In season two of the Disinformation podcast, Paul Brandus discusses the emergence of avatars and false media outlets created by artificial intelligence, blurring the lines between fact and fiction; truth and untruth. He and Meredith Wilson, CEO of Emergent Risk International, discuss how the rapid growth of AI accelerates the possibility of false narratives spreading faster than ever before, and how these false narratives are forms of disinformation. Paul also interviews Jack Stubbs, VP of Intelligence at Graphika, who discusses the discovery of a fake media outlet called Wolf News, which is part of a Chinese state-aligned political influence operation.
Got questions, comments or ideas or an example of disinformation you'd like us to check out? Send them to [email protected]. Thanks to Jack Stubbs of Graphika, our sound designer and editor Noah Foutz, audio engineer Nathan Corson, and executive producers Michael DeAloia and Gerardo Orlando. Thanks so much for listening.
00:00(Clip audio) Hello, everyone. This is Wolf News. I'm Alex.
00:11Paul Brandus Alex is a good-looking young
man, well-dressed, nice haircut, but he's not real. He's an avatar
created by artificial intelligence. And Wolf News? That's fake, too.
It's a newscast, somewhat realistic in appearance, with content critical
of the United States and supportive of, in fact created by, Communist
China. The emergence of avatars masquerading as humans can be seen as a
further blurring of the line between fact and fiction; between truth
and untruth. The rapid growth and scaling up of artificial intelligence
is accelerating the possibility of false narratives spreading further
and faster than ever before. Such false narratives have another name,
of course - Disinformation.
I'm Paul Brandus, and that's the name of
this series. It's called simply Disinformation.
01:09Meredith Wilson And I'm Meredith Wilson, founder and CEO of Emergent Risk International, and I'll be providing analysis throughout each episode.
01:18Paul Brandus And welcome to season two of this series. We also have a
brand new newsletter devoted to disinformation. It's called
Disinformation Monitor. I'll tell you how to get that at the end of the
show. As I mentioned, Wolf News is a Chinese creation. It was
discovered by Graphika, a New York-based network analysis firm that,
among other things, tracks online networks. Jack Stubbs is
Graphika's VP of Intelligence. He joined me from London.
01:53Jack Stubbs Wolf
News is a fake media outlet that is operating as part of a Chinese
state-aligned political influence operation that Graphika discovered in
2019. The way this thing kind of predominantly operates is whoever is
behind it creates a whole series of fake persona accounts. They use
those to seed these very distinctive political video clips, and then they
use more fake accounts to amplify those to a wider range of audiences.
02:25Paul Brandus And the videos are really
distinctive. Graphika has coined a term for this Chinese product, part
spam, part camouflage, thus the term spamouflage. Get it?
02:36Jack Stubbs The narratives are typically
very supportive of Beijing and the CCP, very critical of the West,
obviously, typically criticizing the United States for its various
perceived shortcomings. But towards the end of last year, we saw a set
of videos from the spamouflage operation that actually was kind of new.
There was some stuff in there we hadn't seen before, which is that
they were using generative AI avatars in their videos. This is where
Wolf News comes in. They created this fake media outlet that purported
to be an independent news site called Wolf News, and then they had news
presenters talking to the camera as they were recording the videos.
And what we found out was that these news presenters, they aren't real.
03:19Paul Brandus They're entirely fictitious
people, and they're actually portrayed on the screen using generative
AI. Like many manufacturers of disinformation, Wolf News mixes in things
that are factual, for example, by mentioning the fact that the United
States has a big problem with mass shootings. By mentioning things
that are true, it makes it easier to slip in other things, other
messages, that might not be.
03:44Jack Stubbs What this is about is using
covert behaviors, deceptive online behaviors, to try and exert
political influence by a Chinese state-aligned actor. And in terms of
how effective it's been, again, it's really hard to measure the impact
of this stuff. Often that also hinges on actor intent, right? If
their intent was to get a bunch of views, then it seems like they've
done that. If their intent was, you know, to actually change the
way people are thinking about things, that's very hard to assess. But
what we see typically with Chinese state-aligned influence operations is
they seem to pursue a tactic of kind of flooding the zone, and they're
basically trying to get as much of their own stuff into the conversation as possible.
04:25Paul Brandus This observation that the Chinese are trying
to flood the zone, overwhelm their targets with information, that's a
very common Russian tactic as well, as we've discussed in prior
episodes. Meanwhile, keep this in mind, this so-called generative
technology, the manufacturing of fake audio and video, can still be
rather clunky, but what happens when the technology gets better?
05:12Paul Brandus In
2017, a renowned Scottish dictionary, Collins, declared fake news to be
the word of the year, conveying the impression that fake news was
something new. In fact, evidence of it dates back thousands of years.
That certainly includes, by the way, the relatively short quarter of a
millennium of our own country's history. That alone is worthy of a
future episode. So this stuff is hardly novel. There have always been
bad actors, those with malicious intent, but what is new is the push
button speed, the ubiquity, and the low barrier to entry. That
combination alone is powerful and unprecedented, and when you layer on
top of that the remarkable potential of AI, well, it hardly seems like
hyperbole to say we're in a new world. Meredith Wilson is chief
executive officer of Emergent Risk International.
06:20Meredith Wilson It's unfortunately probably a
wave of the future, though maybe not one that you would want to be on
if you were a journalist. I think the danger there, there's several
things, and this kind of goes to the broader issue with AI right now,
which is that anybody really can create these things with a very limited
skill set. You really just need to be able to give directions to the
right AI tool, and you can do this. The problem, of course, is that
comes with a lack of accountability. So you create this fake news
channel, you put out this fake news, it goes viral, people believe it.
Who are we going to hold accountable for that? Right? So at what point
can we hold the Chinese government accountable for that? It wouldn't
be like a Fox News versus Dominion Voting Systems issue, right? There
really is no accountability attached to something like that, and
we're going to see this at scale across the internet, whether it's the
fake news avatar, whether it's fake content that's created and put into
a fake newspaper, a problem we already have. But there are
very few people behind it at the end of the day, and we may not know who
those people are. So holding them accountable, ergo, finding a way to stop it, will be very, very difficult.
07:40Paul Brandus I
was hoping for a reassuring answer, and I didn't get it. This is really...
07:47Meredith Wilson See, old journalists, right?
07:52Paul Brandus Yeah, this is really...
As for the explosive growth of generative AI and its
use in influence operations, Stubbs' firm Graphika has broken it down
into three areas which make it easier to understand. It's as simple, he says, as ABC.
Jack Stubbs We use a framework we came up with at Graphika, called the ABC
framework, to kind of think about and talk about the different
activities we encounter in our work. It's really simple. A is for
actors, B is for behaviors, and C is for content. What we're seeing
with generative AI is that it's having a pretty profound effect across
all three of those areas. So in terms of actors, it's lowered the barrier to
entry quite significantly, right? The really important thing about these
generative AI tools is they're all commercially available and they're very
easy to access. So anyone with a credit card now has access to capabilities
that previously were quite advanced. For example, being able to produce
convincing, fluent written content in multiple languages. Previously,
you would actually need to speak the language yourself. Now you can
have ChatGPT do that for you. Second one is behaviors. And the real
advantage here is about basically the ability to scale. So the Wolf
News incident is a nice example of that as well. In the past, we've
seen a completely separate influence operation in Pakistan that did
something quite similar. They produced this kind of like fake media
outlet with quote unquote news presenters talking to the camera. But
they actually hired real people to do that. So they went online, they
found commercial script readers, they paid them a couple of hundred
bucks to record a video, got the video back, edited it, uploaded it. And
now what you can see with the likes of spamouflage and Wolf News is
they don't need to do that, they can just go to a commercial website,
they can type in a script, click a button, and this generative AI person
says it all for you. And then the last one is content. Generative AI
is really good at making incredibly convincing, fake or deceptive
content. And maybe your listeners have seen this. There was an amazing
fake photo a couple of months ago of the Pope allegedly wearing this
very kind of glitzy puffer jacket and it went viral online. That was a real moment for a lot of people, I think, including myself, who kind of pride themselves on being able to spot this kind of thing and actually were like, yeah, why not?
10:04Paul Brandus But understanding generative AI, how it works and how it can
fuel disinformation is one thing. Thwarting it or more realistically diluting its impact, that's another story.
10:21Jack Stubbs I think the
simple but quite boring answer is it's about critical thinking. And
this isn't a new problem, right? It's just that there's going to be
more deceptive content produced at a higher scale by a greater range of
people. We need to teach ourselves to interrogate our sources. As I
said earlier, I'm a former reporter and this is the thing that's kind of
really important to me. When you see something on the Internet,
whether it's a picture of the Pope in a puffer jacket or a purported
news video from Wolf News, you need to be asking yourself, like, is this
real? Where did it come from? Who created it? If they created it, how
and why? And we basically need to be interrogating our sources of all
the information that we receive.
Paul Brandus Is it human nature for people to do that?
11:07Jack Stubbs No, I don't think so. Particularly on the Internet, it's a very
attention-poor environment. People are very inclined to scroll past and accept things at face value.
11:16Paul Brandus Yeah, that's the part that
worries me. I mean, how do you, and I hear this over and over again
when I talk to other experts in this field, they say, yes, critical
thinking, media literacy, better education. It all makes sense, but to
get people to actually question what they're seeing and reading and
hearing, boy, it just doesn't strike me as being compatible with human
nature as we know it. And I think that's the part that scares me. What do you think?
11:50Jack Stubbs Right. And
it scares me as well. But it's not that this is a new challenge, right?
I think that's really important to bear in mind the context of this. I
mean, Stalin was artificially editing people out of photographs in the
1930s. Most people now kind of understand that seeing a photo of
something doesn't necessarily mean that it actually happened. People
are very familiar with the concept of Photoshop and the creation of
misleading still images. Hollywood film studios have been able to
create highly convincing fakes for well over a decade,
right? We just see them as special effects in the movies. The point is
that that kind of capability is now increasingly accessible, so more
people are going to do it. And it's going to lead to a significantly
higher volume of false, misleading, or deceptive content. So the "interrogate your sources" instinct needs to become ingrained in people.
12:44Paul Brandus When these skills,
critical thinking and media literacy, are lacking, it pushes us closer
to what futurists have called a post-fact world, a world where people
don't know what to believe or don't believe anything at all. This
brings up a different sort of challenge, a challenge that companies with
reputations, market share, customers, and shareholders to think of
must adapt to, and are trying to. But Meredith Wilson says with artificial intelligence and disinformation, there may be limits to what can be done.
13:18Meredith Wilson There is a strong limit to your ability to control
what goes viral and what happens out there. It is really different than
doing corporate PR work 30, 40 years ago, right? There was a playbook,
you know, this is how we manage a reputation crisis. This is how we
manage it. When it comes to disinformation and social media, it's much
harder to know exactly how to manage that. And now we add a layer on
top of that of AI and it gets that much harder because we don't
necessarily know where it's coming from. But
there are a lot of companies that are, you know, are really looking at
tools to better understand disinformation, to better understand what's
being said as early as possible. So whether that is monitoring social
media, whether that is, you know, employing different types of tools
that sift through information, all of those types of things are in
place in a lot of companies. And then it comes down to maybe we can't
control what's happening, but we can control how we react to it and our
messaging as a result, right?
14:37(Clip audio) Hello, everyone. This is Wolf News. I'm Alex.
14:42Paul Brandus So perhaps we can't do much for
now, at least, to disrupt fake TV channels and AI-created avatars. But
perhaps this isn't even our biggest problem - perhaps the real
challenge lies in the fact that the tools used to create such fakery,
such potential division and maliciousness are in the hands of all of us.
What could possibly go wrong?