In the long ago, early days of the internet, when dinosaurs roamed free on the plains of MySpace, it used to be a truism that pornography drove technological progress. Some pundits claim that the internet, invented by the US military, would never have attracted the sizeable user base it did without avid consumers of pornography. The porn industry went on to lead the way in the development of streaming video, webcams, data tracking and online payments, as well as being a major driver of the demand for bandwidth.
A brief flit around the web, and you’ll find a host of examples of the productive threesome that is porn, technology and innovation. As far back as the 90s, self-proclaimed “geek with big breasts” and pin-up star Danni Ashe launched Danni’s Hard Drive, an e-commerce site that racked up a million hits in its first week. “She and other pornographers pioneered the e-commerce and security solutions that paved the way for PayPal, eBay, Amazon, and the commercialisation of the Internet”, according to Patchen Barss, author of The Erotic Engine.
Before we get too laudatory about the technological innovation brought to us by the adult industry, we should remind ourselves that its development is predicated on buckets of money predominantly made out of exploitation. There are other examples of scientific progress that come from ethically compromised places, such as the host of developments that come out of Nazi research. The BBC says that “nerve agents such as Tabun and Sarin (which would fuel the development of new insecticides as well as weapons of mass destruction), the antimalarial chloroquine, methadone and methamphetamines, as well as medical research into hypothermia, hypoxia, dehydration and more, were all generated on the back of human experiments in concentration camps.” They also mention that Fanta was created in Nazi Germany, but I’m not sure that counts as a scientific improvement.
Which brings us to today, and the issue of deep fakes and the role the porn industry is playing in their development. Before we get into that, it’s probably worth spending some time defining what deep fakes are. This is important because, unlike with the relatively meaningless term fake news, we have a chance to define “deep fake” before it becomes a weaponised catchphrase for politicians, propagandists and trolls everywhere, tossed out with careless abandon by those who benefit from leeching meaning from the world. It’s also important that we don’t think of deep fakes as a wholly new threat, but rather as an evolution of existing threats. Deep fakes are going to exacerbate the firehose of falsehood that deluges fact-checking organisations, and that contributes to the growing lack of trust in media and other institutions.
Crudely put, deep fakes are video content that is altered so that it presents something that didn’t in fact occur. To illustrate just how new the word is, it was only in July this year that the Merriam-Webster dictionary put “deepfake” on its “Words We’re Watching” list, which is their list of words that they are increasingly seeing in use, but that haven’t yet met their criteria for being included in the dictionary. They say that the term only started appearing in major newspapers and magazines in early 2018.
The term deep fake (or the word deepfake – opinions are divided on how we’re going to spell it), and the technology around it, have come so recently into our popular lexicon that the Webster definition of deep fake uses another term that isn’t even in the dictionary yet. “The fake in deepfake is transparent: deepfakes are not real; they’re fake. The deep in deepfake is a bit murkier. Deep has long-established use to describe what is difficult, complicated, intense, etc., and that use applies here, but deep in deepfake is also likely related to its use in deep learning, another term that has yet to meet our criteria for entry.” Collins Dictionary is a little ahead of the game, including deepfake as one of its words of 2019.
So how does “deep learning” relate to deep fakes? The best explanation comes from Witness, an international organisation (founded in 1992 by musician Peter Gabriel, oddly enough) that trains and supports people using video in their fight for human rights. “The development of new forms of image and audio synthesis is related to the growth of the subfield of machine learning known as deep learning, which includes using architectures for artificial intelligence similar to human neural networks. Generative Adversarial Networks are the technology used in deepfakes. Two neural networks compete to produce and discern high quality faked images. One is the “generator” (which creates images that look like an original image) and the other is the “discriminator” (which tries to figure out if an image is real or simulated). They compete in a cat-and-mouse game to make better and better images.”
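The generator-versus-discriminator contest described above can be sketched in a few dozen lines of code. The toy example below (all names, hyperparameters and the one-dimensional setup are my own illustrative assumptions, not anything from Witness) trains a tiny “generator” to mimic data drawn from a Gaussian centred at 3, while a “discriminator” tries to tell real samples from generated ones – the cat-and-mouse game in miniature, using plain NumPy rather than the deep neural networks real deep fakes require.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def train_toy_gan(steps=2000, batch=64, lr=0.01, seed=0):
    """Minimal 1-D GAN sketch: linear generator vs logistic discriminator."""
    rng = np.random.default_rng(seed)
    w, b = 0.1, 0.0   # generator: G(z) = w*z + b, maps noise to a sample
    a, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(a*x + c)
    for _ in range(steps):
        real = rng.normal(3.0, 1.0, batch)   # samples from the "true" data
        z = rng.normal(0.0, 1.0, batch)      # noise fed to the generator
        fake = w * z + b
        # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
        gr = sigmoid(a * real + c) - 1.0     # gradient of -log D(real) wrt logit
        gf = sigmoid(a * fake + c)           # gradient of -log(1 - D(fake)) wrt logit
        a -= lr * np.mean(gr * real + gf * fake)
        c -= lr * np.mean(gr + gf)
        # Generator step: push D(fake) towards 1 (non-saturating loss).
        gx = (sigmoid(a * fake + c) - 1.0) * a
        w -= lr * np.mean(gx * z)
        b -= lr * np.mean(gx)
    return w, b

w, b = train_toy_gan()
samples = w * np.random.default_rng(1).normal(size=1000) + b
print(float(np.mean(samples)))  # the generated mean drifts towards the real mean of 3
```

Real deep fake systems replace these one-parameter players with deep convolutional networks operating on images, but the adversarial training loop – discriminator update, then generator update, repeated – is the same in structure.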
The results are becoming more and more compelling, although no deep fake has yet had a massive mainstream effect. And pornography, bless its disturbing cotton socks, is still an important player in tech evolution. You’ll be unsurprised to learn that 96% of the deep fake videos created to date are pornographic, and only 4% non-pornographic. An app called DeepNude was actually released in June this year, which used the generative adversarial networks referred to above to remove clothing from images of women. It’s since been taken down, but not for lack of customers.
A lot of deep fakes could be categorised under the problematic rubric of revenge porn, and here’s one reason we should be worried about the steady improvement in deep fake technology. A lot of harm is already being done to people through cyber-bullying and gender-based violence. It is almost always the case that new technology is used against marginalised groups before it becomes a mainstream threat. A representative story is that of Indian journalist Rana Ayyub, who had her face morphed onto a porn video in retaliation for speaking out against the gang rape in India of an eight-year-old Kashmiri girl. A similar strategy was employed by the Guptabots to discredit South African journalist Ferial Haffajee, who has written about the debilitating effect this had on her.
So what’s to be done, to preemptively deal with the deep fake problems coming down the line? China, as usual way ahead of the rest of the world when it comes to human rights (note to sub-editor: please use sarcasm font here), has released a new government policy designed to stop the spread of deep fakes. The new rule, which goes into effect on January 1, 2020, bans the publishing of deep fakes online without proper disclosure that they were created with artificial intelligence or virtual reality technology. And in October, California became the first US state to criminalise the use of deep fakes in political advertising. The social media platforms are also stepping up, as much as they ever do. Twitter is busy drafting a deep fake policy, and Facebook has begun developing technology solutions to detect deep fakes. Google has released an open-source database containing 3,000 deep fake videos, to accelerate the development of deep fake detection tools.
There are several obvious ethical and legal conundrums here, of course. How do we differentiate, for example, between satire, parody, and content intended to deceive for nefarious, rather than artistic, ends? And, is it possible to use deepfakes for good?
A website like Generated Photos is a free resource of 100,000 faces produced entirely by artificial intelligence. It’s an eerie idea: you can pick and choose the face you need as a stock photo, and never have to worry about copyright, because none of the people are real. If that isn’t hard enough to wrap your (real) head around, consider this. Legislation currently under discussion in Germany would allow undercover investigators to produce deep fake child pornography to catch pedophile predators on the dark web. Many of the child porn darknet websites ask users to upload pictures or videos to gain access to the secure area, which makes it difficult for undercover investigators to enter the criminal forums. But is putting more child pornography out into the world legally and morally defensible?
As with most things on the internet, time, very swiftly, will tell. Or at least will lead us to a place where most of the decisions have already been made for us by tech evangelists and hungry corporates eager to be first to market. But there’s still a brief interregnum where we have a chance to define how we, as consumers and media, can gear up to deal with deep fakes. An essential first step is to avoid the term being defined for us by the shrill edgelords and postulating politicians, whose main aim is always to introduce defensible noise rather than sensible meaning.
(An edited version of this was published in the Financial Mail issue of December 5.)