Real Talk on Fake People
Check Twitter these days and you’ll see, in realistic-seeming faces or funny texts of fake speeches, the potency of powerful computing tools and the ease with which they allow the generation of complete fabrications. Though these media manipulation tools have so far been used largely for gimmicks, their distinctive verbiage ascending (descending?) to meme-hood, the novelty has papered over a dark reality, of which the 2018 epidemic of deepfakes was a stark reminder. The complicated future of media in the age of neural networks and machine learning has arrived, bringing with it manipulation tools of ever-growing sophistication and widespread availability. All this is happening at a time when the public’s faith in gatekeepers is low and keeps falling lower – so what could possibly go wrong?
“A lie can travel halfway around the world before the truth can get its boots on” is a fairly common phrase that you, if you’re like me, have spent most of your life believing originated with Mark Twain. Fittingly, however, even that origin story isn’t accurate. Benign inaccuracies like this and more sinister mistruths have always had an easy job getting around, while the truth and those after it have to labor for hours, days, and weeks getting their proverbial boots on. Thanks to the aforementioned advances in certain technologies and their ease of use, however, the job of a lie is only getting easier. For this, too, we can thank technology companies.
Take NVIDIA, for example. The giant known largely for the graphics cards it puts in your laptop has displayed astonishing capabilities with its image processing tools. In papers that look like unremarkable academic work, it has demonstrated the ability to simulate different weather conditions in the same picture and to automatically repair damaged digital images. Audio hasn’t been safe either, thanks to tools such as Adobe’s Voco, which elicited concern from experts well before any commercial release, prompting Adobe to consider a form of watermark-based detection in order to prevent its abuse. Though neither Voco nor its competitor WaveNet (a Google product) is yet commercially available, the Canadian company Lyrebird has already made its product accessible to users in an online demo, paying little heed to how many bad actors it’s handing its weapons-grade technology.
There are also academics pushing the bounds of what can be accomplished, like the researchers at the University of Washington who outlined in a paper how they were able to synthesize realistic video of former US President Barack Obama lip-syncing to audio of his older speeches and public appearances. The researchers made it a point to only synthesize video with words the former President had already said, with researcher Steven Seitz stating, “We very consciously decided against going down the path of putting other people’s words into someone’s mouth.” If these assurances come off as comforting, prepare to be disappointed. Seitz had said in July of 2017, “You can’t just take anyone’s voice and turn it into an Obama video.” Less than a year later, BuzzFeed had done just that.
In one of the most chilling cases of the last year, there was the deepfake problem. Out of nowhere, it seemed, hordes of online users, through apps such as FakeApp, were creating disturbingly realistic, sexually explicit videos of everyone from popular celebrities to unsuspecting former romantic partners. Recognizing the massive violations of privacy involved and startled by the pace at which deepfakes were being generated, prominent websites rushed to ban such content. Over the ensuing days, as the issue spread among the broader public, the practice was met with widespread rebuke and revulsion. Unfortunately, the Internet is forever, and so both the technology and its products remain far too accessible.
What does this mean for the media we consume? When anything can be warped and reshaped, is there any truth that can be trusted? Were even one of the main aggravating factors – sophistication, availability, eroding public trust – to be meaningfully dealt with, the situation would not seem so dire. Unfortunately, it does not seem like we can close Pandora’s box on the technology, and our failing trust in each other is a problem no one has yet been able to solve. When these issues all bear down on us at once, we must ask as Aviv Ovadya did: “What happens when anyone can make it appear as if anything has happened, regardless of whether it did or not?” If that sounds a little too Bradbury for you, he’s not the only one concerned.
Alex Champandard, AI expert and co-founder of CreativeAI, says there is an urgent need for “identifying and building trust in relationships — including with journalists and press outlets, publishers, or any content creator for that matter.” Among experts the consensus is quickly becoming clear: these tools pose a significant threat to the public’s ability to trust its eyes and ears, and the society-wide danger of not knowing what’s real only deepens as media manipulation becomes more sophisticated and widely available. As the integrity of information – its inherent value in describing the world as it is – fades, the breakdown of this integral piece of the social fabric will have wide-ranging repercussions.
The rule of law could stall as it becomes increasingly difficult for juries and judges to discern the real from the manufactured. Law enforcement and national defense may spend countless resources chasing falsified but convincing information. Social rituals, from the most granular level of individual interaction to the macro level of international diplomacy, will be compromised by the threat of malicious actors. The public at large will be unable to decide what has meaning and what doesn’t – conditions that could prove at best detrimental, and at worst fatal, to the democratic system. The likeliest outcome in the years ahead is not one where fact becomes indistinguishable from fiction and where the ordinary person cannot tell real from not, but one where the idea of what is true and what isn’t is made irrelevant, and the ordinary person simply ceases to care.
Feature Image by Zebestrian
I kid you not, I literally woke up two hours ago feeling this at a subconscious level: we have the choice to partake in dumping toxic waste into the world and pollute calming waters, or we can be mindful of what we say and how we say it, and realize those who want to anger us in drive-by posts are feeding an addicted beast. Great post, and I completely agree.
Glad it spoke to what you were feeling! There’s definitely a side to this where the onus falls on us to be careful what we put out there.
Yes. I’m definitely picking up on this more and more. Great read!👍🏻
Thank you!
Uh, we’re all clear that the fake speech linked in the first sentence was not produced by a computer, right? That was part of the joke.
Yes! Sorry – I should’ve been clearer. I wanted to get a recent instance of the meme to illustrate the point. You can find the real thing and the reality of training neural nets on data here: https://twitter.com/JanelleCShane/status/1007061610005794817
Yes, this will be a real problem.
No, we’re not sure how we’ll deal with it.
No, trust isn’t the answer… it will make things worse… this is basically trust betraying tech.
I’m not sure what kicks off the Butlerian jihad, but we’re experimenting with possible causes.
As long as Linda Hamilton survives 1984 we’ll be fine.
Here’s a canary in a coal mine for you: Look for groups clamoring for AIs to be categorized as “White Males” or extensions thereof.
If this doesn’t happen, no problem.
If this does happen, holy cow, we’re on the brink of something.
How is it a violation of a person’s privacy to create a fake video of him? It’s reprehensible, and I can feel in my gut that it’s an offense against that person, but I can’t articulate a reason why.
Yeah, it seems more akin to slander or libel to me.
I’m of course not a legal scholar, but the way I see it, your face and your likeness is intrinsically part of you. The kind of deepfakes that were out there were mainly pornographic, and sex is for most people a private thing. To mix those two things to me really feels like a violation – though, like you, I can’t exactly articulate why!
Well, slander/libel would be saying somebody did something like porn when in fact they had not done so. It’s basically a lie that damages that person. This is the same thing but with faked video/audio.
True – could someone insulate themselves from that charge by tacking on a cursory note about how the content they created was fake, even if it then propagated over the internet without that context?
and I can feel in my gut that it’s an offense against that person, but I can’t articulate a reason why.
“The right of the people to be secure in their persons.”
There we go! Founders thought of everything.
There also seems to me an issue in copyright law and fair use. No one seems to be getting terribly upset about using the likenesses of celebrities, at least if it’s obviously meant as a parody. The intent seems relevant.
I believe there was also a smaller strain of “revenge porn”, where people were creating these with pictures of their exes or people they knew in real life.
The “deep fakes” thing quickly ran into problems because it’s one thing to make a video of putting Nic Cage into a Superman movie…
Quite another to put your favorite actress’s face onto your favorite pr0n star’s body.
(serious) I would guess that if realistic but fake revenge porn became a thing, then a lot of people would be able to claim that authentic embarrassing videos were really fakes. It might be a relief.
(silly) Maybe Nicolas Cage died 20 years ago, and some psycho has been cutting and pasting him into increasingly crazy movies.
(serious) This is probably true, and maybe a minor upside?
(silly) I refuse to believe that National Treasure is not Nic Cage at his truest.
One does not look directly into Nic Cage at his truest. It is best viewed through a hole in a piece of cardboard, like an eclipse.
It’s obviously wrong to fake someone’s image, for the simple reason that this will mislead people into thinking person X did Y, when in fact they did not. It’s dishonest, disrespectful, and undermines a person’s dignity.
“Face” is a real thing, which might be defined as social personhood. To attack a person’s “face” is to attack them in a fundamental way, inasmuch as we are social animals and our role in a community is a critical part of our life. To attack a person’s “face” is always an aggressive act. These “deep fakes” are a particularly egregious attack on “face.”
Many states have torts analogous to California’s Midler tort, which arose out of a case in which an auto company hired a Midler “sound-alike” to do a commercial. The same thing happened in Waits v. Frito-Lay: that was not actually Tom Waits singing about potato chips. I’ve handled cases involving these rights myself.
It goes further in European and some Asian countries whose law involves a doctrine called “moral right” which extends control of how a piece of art can be used even farther than US copyright law.
My main woman @veronica-d has the core argument of it above: one’s image is an extension of one’s identity, of one’s self. If my image and voice were appropriated and altered, without my consent, to advocate the consumption of Coors Light, the voting-for of Donald Trump, or abandoning one’s grocery cart in the middle of the parking lot, I would feel as though I had been violated, trespassed upon, STOLEN FROM. Mutatis mutandis for you and your own deeply-felt convictions.
I feel the same way: I can’t make a convincing legal argument (I am not a lawyer; the closest I get to law is teaching about environmental regulation), but it is something that on a very, very deep level repulses me, the thought that someone could (for example) take my sister-in-law’s face and use it to make a porno or something.
“Who steals my purse steals trash; ’tis something, nothing;
‘Twas mine, ’tis his, and has been slave to thousands;
But he that filches from me my good name
Robs me of that which not enriches him,
And makes me poor indeed.”
Though I guess some of the deepfakes do in some way enrich their dastardly perpetrator… It does seem akin to slander or libel.
Remember when you first saw CGI? Thirty feet away and on a 20-foot screen, it was awesome. Now, at home, it looks cheap and tawdry. Is this because you have gotten better at spotting it, or because the skill of the average practitioner is not very good? This is how I see deepfakes ending up: we will become better at spotting them, while at the same time growing less credulous of everything we see.
Exactly! That’s a great point, and the last thing you said is exactly what makes me nervous. Getting to the point where we become immediately skeptical of whatever we see and hear (though at some level that’s probably wise) could be crippling for aspects of our lives that require a certain level of societal trust and good faith.
The evolution of video games has been similar.
The earliest upright video games I remember are Space Invaders and Asteroids. I couldn’t believe that we could make machines do this sort of thing.
And *THEN* we had games with color! Centipede! Pac-Man! Galaga!
And buttons! Defender! Stargate! STREET FIGHTER!!!!
And with each new generation of video games, we couldn’t believe how awesome the graphics were.
Now we’ve got Red Dead Redemption 2. Spider-Man.
And it’s hard to believe that, in a mere decade, we’ll see these games as “pretty but the graphics are really dated.”
Remember Dragon Slayer from the ’80s? I think that was what it was called… At the time it was nice graphically, but playing it was super chunky.
Going back and trying to replay stuff like Bard’s Tale or the original Tomb Raider is always an interesting experience.
“I can’t believe I spent hours playing this, entranced! It’s downright unplayable now!”
And I think this is how the whole thing will play out. If it is your bag, say the Pr0n thing you mention or Trump/AOC bloviating, then you are motivated to believe it. If those aren’t your thing, then spotting it will be much easier. For the same reason I wasn’t entranced by Dragon Slayer, but my friend Mark was.
But, maybe I am being too Pippi Longstocking-ish here.
Technological advance trips me out. An episode of ST TOS had Spock doctoring video like this. The assumption was that you would need ST level computing tech and a Vulcan computer specialist to pull it off.Report
Yes – and that assumption being short-circuited here is what’s chilling, right? Imagine the same effect, but with widely available tech that requires little to no specialized training to pull off. (Also, the sheer number of things Star Trek was ahead of the curve on…)
Both ahead of the curve, and far too pessimistic about the time horizon.
But only in computer tech. The rest of it not so much. Of course, “the rest of it” may be/is probably physically impossible.
“The likeliest outcome in the years ahead is not one where fact becomes indistinguishable from fiction and where the ordinary person cannot tell real from not, but one where the idea of what is true and what isn’t is made irrelevant, and the ordinary person simply ceases to care.”
In other words, soon we will be living in a Philip K. Dick novel.
And Bradbury too! Poor Orwell may have missed the mark a bit.