That's the premise behind "Disinformation" - with award-winning Evergreen host Paul Brandus. Get ready for amazing stories - war, espionage, corruption, elections, and assorted trickery showing how false information is turning our world inside out - and what we can do about it. A co-production of Evergreen and Emergent Risk International.
Filtering Through The Fake: Raising the Bar for Information Consumption
| S:2 E:11
"I can filter out four or five star quality hotels and restaurants today. We can't do that with our personal information."
In this episode of the award-winning podcast series "Disinformation," host Paul Brandus discusses the need for safety standards in the information we consume. While there are regulations in place for the safety of our food, water, and air, there are none for the content we take in just as regularly. Meredith Wilson, CEO of Emergent Risk International, delves into the challenge of discerning what is real in an era of misinformation. Tune in to this thought-provoking conversation on making the content we engage with safer to consume.
[00:01:54] Synthetic media and deepfakes.
[00:05:20] Regulating information and free speech.
[00:08:15] Social media and disinformation.
[00:15:07] Trust and information integrity.
[00:16:33] Information quality standards.
Got questions, comments or ideas or an example of disinformation you'd like us to check out? Send them to [email protected]. Subscribe wherever you get your podcasts. Special thanks to our guest Meredith Wilson, our sound designer and editor Noah Foutz, audio engineer Nathan Corson, and executive producers Michael DeAloia and Gerardo Orlando. Thanks so much for listening.
00:07 Paul Brandus: The familiar sound of hamburgers on the grill. But long before that meat is purchased by you or a restaurant, it is inspected to ensure that it is safe to consume. For this, we can thank meat inspectors from the USDA, the U.S. Department of Agriculture. There are similar standards with regard to the air we breathe, the water we drink, and so forth. But when it comes to something else we consume on a daily basis, there are few, if any, such regulations. That something else is information. If we have, and we accept, minimum safety standards for some things we ingest, again, food, water, and air, should there be similar safety standards for the content we consume? Content that is false, after all, can be damaging. There is another word for this false content, of course: disinformation. I'm Paul Brandus, and that's the name of this award-winning podcast series, Disinformation. As usual, I'll be joined by Meredith Wilson, Chief Executive Officer of Emergent Risk International, a global risk advisory firm. She'll offer her insights into this crucial topic. Question: which one of these voices is real?
01:30 clip audio: Hey, I'm Sander, and
today I'm really excited to explore with you the possibilities of synthetic... No, you're not Sander. I don't think you're going to be exploring any opportunities here.
01:39 Paul Brandus: Can't tell? Let's listen a bit more.
01:42 clip audio: Well, I look
like you. I can speak like you and I can probably even write better than
you. So maybe just let me take this one. No, you're not going to be
taking this one. I'm going to be taking this one, but maybe next time.
Okay.
01:54 Paul Brandus: But there was also a video, and frankly it was impossible to discern the real Sander from the artificial one. We are in a new and dangerous era. Content that is generated artificially, like that, is often called synthetic media. That's a fancy phrase; let's just cut to the chase here and use more relatable terms: fake, phony, made up. Deepfakes is another word that has entered the lexicon. Well, fasten your seatbelts, because this kind of content, some say, is about to swamp the Internet. A recent study by the European law enforcement agency Europol says that by 2026, just two years from now, as much as 90% of online content could be artificially generated. 90%! Let that sink in for a second. Let me
provide some context here. Deepfake technology, powerful, cheaper, and
easier to use by the day, can be used to produce content that looks
real, sounds real, and can show people saying or doing things that they
never said or did, or even create people that never existed in the first
place. It's hardly difficult to imagine how this could impact every stratum of our society: politics, economics, law enforcement, everything up to and including war and peace. Now, to paraphrase John F. Kennedy,
this is a man-made problem, therefore it can be solved by man, or can
it? Meredith Wilson, the CEO of Emergent Risk International, says this
analogy of safety regulations around things we consume, food, air, and
water, is reasonable but limited. She says that's because information is
far more subjective and therefore far more difficult to regulate.
03:54 Meredith Wilson: I think it is a reasonable analogy. I think it's also an almost intractable problem. It's the same reason that guns are an intractable problem. It's the same reason that there are four or five of these sorts of gaps, if you will, across our regulatory landscape. When you think about information, you think about freedom of speech, and you think about the Constitution, and you think about democratic openness principles. And so what happens is, at first, you think, OK, we can control the bad speech.
We can control the hate speech. But then you realize that that person
over there thinks that what you think is perfectly fine content is hate
speech. Maybe they disagree with you on abortion, or maybe they disagree
with you on guns, or maybe they disagree with you about the way that
you talk about things. Maybe you use more extreme language, and you
don't see that as a problem, but they do. And suddenly, you have the
slippery slope of controlling what people are saying. So I think where
that comes into play with regulating information is who gets to be that
person, who gets to be the arbiter of what is true and what's not true.
05:20 Paul Brandus: That's the central
problem right there, the seemingly intractable problem. In a country of
330 million people, how do you agree on standards upon which to make
content safe or safer to consume? Perhaps a story about a famous Supreme
Court case might be illustrative of this dilemma. The case was
Jacobellis v. Ohio in 1964. The issue: whether an Ohio movie theater could show a film that state authorities considered obscene. It went all the way to the High Court, which ruled 7 to 2 that the movie in question was not obscene and was thus constitutionally protected. But
even among the seven justices in the majority, there were four different
opinions as to what constituted obscenity, including the utterly
memorable line from Justice Potter Stewart, who said that offering an
intelligent description of obscenity was probably beyond him, but he
added, quote, but I know it when I see it. That long-ago story is illustrative of the challenge we face today: forming some sort of consensus on what is or is not true, what is or is not disinformation,
is a towering problem. To paraphrase Justice Stewart, you may think you
know it when you see it, but someone else may have a very different
point of view. Again, more from Meredith Wilson.
06:49 Meredith Wilson: What is the set of
facts that we're going to agree on? This wasn't always a problem in
America, but with the advent of the Internet, this has become a problem
because people have decided that they have different views of what is
right and wrong, what is factual and what is not factual. And that puts
us in a really difficult place, but it also makes it really hard to
regulate.
07:13 Paul Brandus: That being said, some of the most important voices in the country see the value in making at least some sort of effort.
07:21 Elon Musk (clip): It's important for us to have a referee, just as you have a referee in a sports game, or all sports games, and the games are better for it, to ensure that the players obey the rules, play fairly. I think it is important for
similar reasons to have a regulator, which you can think of as a
referee, to ensure that companies take actions that are safe and in the
interest of the general public.
07:46 Paul Brandus: That, of course, was Elon Musk, the owner of X, formerly known as Twitter. He testified recently on Capitol Hill about the need for federal regulators to, as he said, referee disruptive technologies like artificial intelligence, which, as I've mentioned, are increasingly driving what we see on the internet.
08:06 Elon Musk (clip): While regulators are not perfect, there's no regulatory agency that I'm aware of, at the federal level at least, that I think we should delete.
08:15 Paul Brandus: That being said, Musk,
who you just heard say that technology refs are needed, has taken a dim
view of some who are attempting to monitor the social media space where
his company X operates. He has called fact checkers, quote, some of the
biggest liars and called for one such company, NewsGuard, to be
disbanded after it criticized X and its policies for enabling what
NewsGuard calls disinformation about the war between Israel and Hamas.
And just last week, the news agency Reuters reported that social media
researchers have canceled, suspended, or changed scores of studies of X
because of what it called actions taken by Musk that limit access by
those researchers to X's user data. In other words, it is now harder for
independent researchers to study tweets, their origin, and so forth. Of
course, X is hardly the only social media platform to come under
increasing scrutiny over the war. Take TikTok for example. TikTok calls
itself a joyful place where all your daydreams come true. That benign
image, however, is not shared by some. A California venture capitalist,
Jeff Morris, thinks that bots are driving content about the war in a
lopsided way. At a Republican presidential debate last week, the
platform was ripped by candidates, many of whom say TikTok should be
banned in the United States. A top lobbyist for TikTok here in
Washington is Trent Lott, the former U.S. Senate Majority Leader. He
tells me he disagrees with Morris' claim of a lopsided 10-to-1 hashtag ratio and adds that TikTok has, quote, bent over backwards to be fair. It's worth
noting, incidentally, that platforms like X and TikTok are now major
sources of news for millions of Americans. A Pew Research study says
that 1 in 7 adults use X as a news source, while 1 in 10 use TikTok, a figure that Pew says has tripled in just three years. The top social
media platforms for news, by the way, are Facebook and YouTube at 31%
and 25% respectively. Speaking of venture capital, by the way, it seems a
lot of money is being plowed into startups that are focused on disrupting mis- and disinformation. We'll talk with one of them after
this short break.
10:56 ad read: This series on
disinformation is a co-production of Evergreen Podcasts and Emergent
Risk International, a global risk advisory firm. Emergent Risk
International. We build intelligent solutions that find opportunities in
a world of risk.
11:21 Paul Brandus: Welcome back. Here's a familiar voice.
11:24 Fake Gayle King (clip): Ladies, honestly, I didn't expect my weight loss to spark so many questions.
11:28 Paul Brandus: Sounds like Gayle King of CBS News. Just one problem: it's really not her. It's from a synthetic video, fake, phony, of her peddling a weight loss program on Instagram. It sure does look and sound like Ms. King. Social media companies that are often associated, for better or worse, with mis- and disinformation were once upon a time funded by angel and venture capital investors. Instagram, for example, got started with a quarter-million-dollar check back in 2010. It's a wonderful app, but as the phony Gayle King video shows, Instagram, which is now owned by Meta, has a problem. So here's a question: could angel and VC investors also fund startups to somehow thwart disinformation? Matt Abrams thinks so. He's an Oregon-based venture capitalist who frames his investment thesis in terms of what he calls IQT: information integrity, quality, and transparency. In other words, like I said at the top of this episode, making your content safer to consume.
12:37 Matt Abrams (clip): I think that capital is like water and electricity. It goes to the lowest and easiest path of least resistance. And the easiest path of least resistance over the last 20 years has been the attention economy, the surveillance economy, those incentive structures and models. And this is where the capital ecosystem has a tremendous opportunity and responsibility to actually be investing now in solutions which I frame in terms of this IQT, solutions which are strengthening our information integrity, our quality, our transparency, where they will provide, in my mind, both better long-term economic returns, as well as short-term returns, and better economic resiliency.
13:26 Paul Brandus: Investors in startups,
or any asset class really, often have a herd mentality. They see where
others are putting their money and follow suit.
13:36 Matt Abrams (clip): So investors go where the herd goes. You would think that the venture market in particular would be much more tolerant of risk, but it has a herd mentality perspective. And this is where we need to change that trajectory, because there are tremendous opportunities to make a lot of profit for doing things in the right ways and for the right reasons, by investing in solutions which are strengthening our information ecosystem.
14:07 Paul Brandus: Other investors sniff opportunity as well. The tech site Crunchbase reports that hundreds of millions of dollars have poured into startups over the past few years. One of the biggest is New York-based Flashpoint, which says it, quote, combines data, insights, and automation to identify risks and stop threats for cyber, fraud, and physical security teams. Another is San Francisco-based Primer. Both companies have secured what's called Series D funding, which in layman's terms means they're attracting big bucks and are nearing the point where they could go public. This all sounds interesting, but as Abrams points out, one of the things that has allowed mis- and disinformation to flourish is something that could be beyond the realm of technology. And that is the very basic matter of
trust. Trust in government is low. Trust in media is low. What do you do
about that?
15:07 Matt Abrams (clip): This is where I want to emphasize trust, and this is historically what we've had to contend with previously, so this isn't anything new. We're seeing a fraying of trust right now, and AI is simply accelerating that, in the sense that people don't have an understanding of the provenance of the information: where does it originate from, how is it manipulated across the information supply chain, who paid for it, what's the money behind it. So this is where there's a tremendous opportunity to be able to say, hey, I want to invest in the solutions that enable the future that we want to see, whether it's on the consumer side or on the enterprise side. On the consumer side, I want to be able, personally, to have default filters, if you will, that just show me the quality information. I can filter out four or five star quality hotels and restaurants today. We can't do that with our personal information. And likewise, on the enterprise side, it's a similar challenge. Where does this information originate from? How is it manipulated? And what is the quality and integrity of the information on which we make business decisions or government decisions, et cetera?
16:23 Paul Brandus: Let me play again one of the key things Matt said.
16:27 Matt Abrams (clip): I can filter out four or five star quality hotels and restaurants today. We can't do that with our personal information.
16:33 Paul Brandus: It's certainly an interesting idea and, again, not unlike that hamburger on the grill, we have quality standards, safety standards, around things we consume: the food we eat, the air we breathe, the water we drink. Is it possible, is it feasible, to do the same with the information we consume? Thanks to venture capitalist Matt Abrams, our sound designer and editor Noah Foutz, audio engineer Nathan Corson, and executive producers Michael DeAloia and Gerardo Orlando. And on behalf of Meredith Wilson, I'm Paul Brandus. Thanks so much for listening.