Deepfakes: Is This Video Even Real? | NYT Opinion


Hello. Today I’m going to be talking to you about a new technology that’s affecting famous people. Remember when Obama called Trump a dipshit? “Complete dipshit.” Or the time Kim Kardashian rapped, “Because I’m always half naked”? Or when Arnold Schwarzenegger impersonated himself? “Get out of there! There’s a bomb in there! Get out!” Deepfake. Deepfake. Deepfake. This is a deepfake, too. I’m not Adele. But I am an expert in online manipulation.

So “deepfakes” is a term used to describe video or audio files that have been created using artificial intelligence. My favorite is probably Lisa Vanderpump. It started as a very basic face-swapping technology, and now it’s turned into film-level C.G.I. There’s been this huge explosion of, “Oh my goodness, we can’t trust anything.” Yes, deepfakes are eerily dystopian, and they’re only going to get more realistic and cheaper to make. But the panic around them is overblown. In fact, the alarmist hype is possibly more dangerous than the technology itself. Let me break this down.
First, what everyone is freaking out about is actually not new. It’s a much older phenomenon that I like to call the weaponization of context, or shallowfakes, made with Photoshop and video-editing software. There are so many of these. How about the time Nancy Pelosi appeared to be drunk while giving a speech? “But you never know with this president of the United States.” Turns out that video was just slowed down to 75 percent of its original speed. “It was very, very, very, very, very strange.” You can have a really simplistic piece of misleading content that can do huge damage.
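To give a sense of how little skill that kind of shallowfake takes, here is a minimal sketch of a 75-percent slowdown in Python with OpenCV. The filenames are hypothetical, and unlike the real edit, which also slowed the audio to produce the slurred effect, this video-only sketch leaves audio aside:

```python
import cv2  # pip install opencv-python

src = cv2.VideoCapture("speech.mp4")  # hypothetical input clip
fps = src.get(cv2.CAP_PROP_FPS)
width = int(src.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(src.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Re-encode the same frames at 75% of the original frame rate:
# playback stretches out and speech drags. No A.I. required.
out = cv2.VideoWriter(
    "speech_slowed.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    fps * 0.75,
    (width, height),
)

while True:
    ok, frame = src.read()
    if not ok:
        break
    out.write(frame)

src.release()
out.release()
```

That is the entire “manipulation”: not a single pixel is altered, only the timing changes.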
For example, in the lead-up to the midterms, we saw lots of imagery around this caravan of people who were moving towards the U.S. This photo was shared with captions demonizing the so-called migrant caravan at the U.S.-Mexico border in 2018. But a reverse image search showed it was actually Pakistani refugees in Greece. You don’t need the A.I. technology behind deepfakes to manipulate emotions or to spread misinformation.
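The reverse image search services themselves (Google Images, TinEye) are proprietary, but one core idea behind matching a viral image to its original is simple: reduce each image to a compact perceptual fingerprint and compare fingerprints. Here is a sketch of that idea in Python using the imagehash library; the filenames are hypothetical stand-ins for the viral post and an archived original:

```python
from PIL import Image   # pip install pillow
import imagehash        # pip install imagehash

# Hypothetical files: the image from the viral post and a
# candidate original found in a news-photo archive.
viral = imagehash.phash(Image.open("caravan_post.jpg"))
archive = imagehash.phash(Image.open("greece_2015_archive.jpg"))

# Subtracting two hashes gives the Hamming distance between
# their 64-bit perceptual fingerprints. A small distance means
# the images almost certainly show the same photo, even after
# resizing or recompression. A threshold of 8 is a common
# rule of thumb, not a hard standard.
distance = viral - archive
print(f"distance = {distance} -> {'match' if distance <= 8 else 'different'}")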
This brings me to my second point: what we should be really worried about is the liar’s dividend, the lies and actions people will get away with by exploiting widespread skepticism to their own advantage. So, remember the “Access Hollywood” tape that emerged a few weeks before the 2016 election? “When you’re a star, they let you do it. You can do anything.” “Whatever you want.” Around that time, Trump apologized, but more recently he’s actually said, “I’m not sure if I actually said that.” When anything can be fake, it becomes much easier for the guilty to dismiss the truth as fake.
What really keeps me awake at night is less the technology. It’s how we as a society respond to the idea that we can’t trust what we see or what we hear. So if we are fearmongering, if we are hyperbolic, if we are waving our hands in the air, that itself can be part of the problem. You can see where this road leads. As public trust in institutions like the media, education and elections dwindles, democracy itself becomes unsustainable.
The way that we respond to this serious issue is critical. Partly it’s the platforms thinking very seriously about what they do with this type of content and how they label it. Partly it’s the public recognizing their own responsibility. And if you don’t know 100 percent, hand on heart, “This is true,” please don’t share, because it’s not worth the risk.