Filters have existed in some form since the invention of the camera. Then came Photoshop. Now, as the technology grows ever more advanced, it is becoming harder to tell truth from fiction.
The internet is awash with fakes and forgeries. Face-swapping tools, vocal and image filters and facial-recognition AI have all accelerated a trend of mass disinformation, which looks set to usurp fake news as the latest buzzword as the battle for online transparency rages on.
Some of these fakes are highly advanced, built with the latest tools to be as believable as possible; others are far simpler, yet achieve the same effect.
Recently, a popular livestreamer named Qiaobiluo Dianxia sat at the top of the pile. She had over one hundred thousand followers, was constantly showered with gifts and received abundant praise from her viewers.
The livestreamer, known by her adoring fans as “Your Highness Qiaobiluo”, regularly streamed gaming content on DouYu, China’s answer to Twitch.
Though she never revealed her face, her viewers would log on and watch her broadcast computer games as she narrated them in her softly spoken, almost childlike voice. The images of herself she had previously uploaded fit the narrative: a youthful, fair complexion with big eyes and slim cheeks.
That evening, she had received tens of thousands of yuan in donations while streaming with her co-host, Qingzi. Shortly after, Qiaobiluo’s software began to glitch, revealing a 58-year-old woman distinctly different from her digitally altered persona.
The reveal caused a mass exodus: followers left in droves, her viewership collapsed, and a few days later DouYu banned her for “causing adverse social impact.”
Many viewers hit out at Qiaobiluo, feeling duped. Others were quick to defend her: what she looks like is irrelevant, they argued; she still entertained her viewers, so who cares if she’s middle-aged?
Qiaobiluo’s tale is a cautionary one. With the simplest of technologies, she was able to mask her identity and dupe tens of thousands. Consider the implications of more sophisticated technologies, deployed on a global scale.
Deepfakes are the new fake news
There is growing concern over online privacy and transparency, especially with the prevalence of AI-powered deepfakes, which use deep-learning software to superimpose a digital composite of one person’s face onto another person’s body.
In an age of fake news, filters and social media curation, altered images of celebrities and online personalities are on the rise.
Recently, DeepNude, an app readily available online, came under fire. It allowed users to “unclothe” virtually anyone by superimposing a user-submitted image onto a naked body, with eerily realistic results.
Perhaps even more worrying, using it required little to no understanding of deep learning or AI. The groundwork had already been done; all the user had to do was choose an image. The app has since been removed amid a growing online outcry for regulation.
But deepfakes go beyond pornography: they have also been used for political purposes and could feature heavily in the 2020 US election. Last time out, in 2016, “fake news” was the term bandied about; this time it looks set to be “deepfakes.”
Deepfakes are becoming the new weapon of disinformation. Where biased news outlets once reigned supreme, deepfakes go a step further than the disinformation of years gone by: they are developed for the specific purpose of spreading disinformation or masking the truth, and because they use advanced technologies, these forgeries are harder to detect and more believable.
How do deepfakes work?
Deepfakes are created using generative adversarial networks (GANs). They become so believable because a GAN pits two neural networks against each other: one produces forgeries, while the other judges their authenticity.
The images are created by the generator in the framework; the discriminator (or adversary) then looks for inconsistencies and inaccuracies. The generator learns what the discriminator rejects and updates itself through machine learning to create a better forgery.
Eventually, after enough back and forth, the forgery is convincing enough to fool the discriminator and pass the test.
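That back-and-forth can be sketched in a few lines of code. The toy example below is our own illustration (not code from any of the apps mentioned): instead of images, a one-dimensional generator learns to mimic samples from a bell curve centred at 4 by fooling a simple logistic-regression discriminator. All names, numbers and learning rates here are illustrative assumptions, but the alternating generator/discriminator updates are the same idea a real GAN uses.

```python
# Toy GAN sketch: generator G(z) = a*z + b tries to mimic real data
# drawn from N(4, 1.25); discriminator D(x) = sigmoid(w*x + c) tries
# to tell real samples from forgeries. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0          # generator parameters (starts far from the data)
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.02, 64

for step in range(3000):
    # --- discriminator turn: look for what gives the forgeries away ---
    real = rng.normal(4.0, 1.25, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    # gradient of -log D(real) - log(1 - D(fake)) w.r.t. w and c
    grad_w = (-(1 - d_real) * real + d_fake * fake).mean()
    grad_c = (-(1 - d_real) + d_fake).mean()
    w -= lr * grad_w
    c -= lr * grad_c

    # --- generator turn: adapt so the next forgery fools D a bit more ---
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    # gradient of the non-saturating loss -log D(fake) w.r.t. a and b
    grad_a = (-(1 - d_fake) * w * z).mean()
    grad_b = (-(1 - d_fake) * w).mean()
    a -= lr * grad_a
    b -= lr * grad_b

samples = a * rng.normal(0.0, 1.0, 10_000) + b
print(samples.mean())  # the generator's output drifts toward the real mean of 4
```

A real deepfake replaces these one-number "networks" with deep convolutional ones and the bell curve with a dataset of face images, but the loop is the same: the discriminator hunts for flaws, the generator patches them, and the two improve each other until the forgery passes the test.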
Where do we go from here?
Just last week, the US House of Representatives asked Facebook and Google what they plan to do to tackle deepfakes during next year’s election.
It’s a question that needs answering. As the technology develops and becomes more readily available, regulators will need to step in, and the tech giants have an important role to play.
The future of deepfakes holds promise and peril in equal measure, but without the cooperation of the tech giants, deepfakes designed for mass disinformation will slip through undetected, causing untold damage.