May 29, 2019
With global leaders, the implications are potentially severe
Someone released a video recently that seemed to show Nancy Pelosi slurring and mangling her speech. The video spread virally in right-leaning circles, but it soon turned out to be fake.
I commented on this in my most recent newsletter, saying:
What this shows us is that it’s not the machine learning that makes Deepfakes dangerous; it’s the willingness of a massive percentage of the US population to believe total garbage without an ounce of scrutiny.
Unsupervised Learning, No. 179
But a reader on Twitter named David Scrobonia had an even more interesting point about this.
This is a really interesting point about deepfakes.
Seeing them can detonate in your brain and affect your emotional view of the subject, even if your logical brain learns/knows it’s false.
That’s the same mechanism as advertising, i.e., target the emotions, not the logic. https://t.co/VrPlV7Defa
Type-1 vs. Type-2 deepfake attacks
What struck me most about David’s comment is that a deepfake is an emotional attack designed specifically to bypass one’s reason, and that it still works even if you know or learn that it’s false. This immediately made me think of advertising, and then malicious advertising.
Ads also work in both modes.
Just like ads, deepfakes can be used in two independently useful ways, which—until I’m made aware of existing nomenclature—I’ll call Type-1 and Type-2 influence attacks:
Type-1: You create a piece of media that is designed to convince people that something false is true, e.g., this pill will reverse your biological age.
Type-2: You create a piece of media that’s still effective even if people know it’s untrue, e.g., Axe body spray will attract women like insects to lanterns.
What used to cost millions can now be done for pennies
What’s fascinating is that you can create a single campaign that works for both attacks, and the Pelosi video is a perfect example. If you’re a Fox News regular and you see the video on Facebook, you’re likely to believe it’s real, and it will greatly lower your opinion of her based on “evidence”.
But even if you’re shown by your liberal friend that the video was fake, the emotional impact of that negative impression remains mostly intact. How much remains likely depends on many factors.
Nobody is immune
Consumerism and advertising were arguably weapons released on the American public, with tremendous effect.
The other remarkable thing about this type of influence campaign is that it works on virtually everyone. With advertising, it often works best on the people who think they’re immune. Yes, of course ads don’t work on you: I’m sure the reason you own zero generic brands is purely due to quality concerns.
For some, reputation is their most valuable asset
Another similarity with advertising is that the best defense against these dual-impact (Type-1 and Type-2) influence campaigns is not exposing yourself to them in the first place. Ads work on people who see them, and we may be dealing with the same potency with deepfakes.
We can only hope that the platforms, the public, and yes—the government—realize that they’re dealing with more than just a Type-1 threat here. It’s not enough to point out that the content isn’t real.
If something is truly damaging—like a smoking ad that targets children—the answer isn’t a label on the commercial playing during cartoons. The solution is to ban it.
GAN generation of realistic video from a single image
But where does that leave us in a free and open society? I can see no faster ramp to a First Amendment collision, since it’s 100% okay to have and voice negative opinions about a person, or to engage in satire.
So where’s the line between that and malicious influence campaigns designed to harm reputations and shape public opinion? And how does that line shift given the combination of provenly potent techniques used to sell products and new technologies that can be used to falsely mimic real and trusted people?
I don’t know the answer, but I do know we’d better figure it out quickly.