Faced with a social media epidemic that has seemed impossible to counter, Twitter is set to employ deep learning algorithms to cut down on the spread of fake news.
Fake news is a relatively new term for an ancient problem. With social media, it’s even easier to spread lies. Now, Twitter hopes deep learning could end it.
It’s apt that Twitter is now implementing tools against fake news. Most of us first read the term on the site, probably in relation to something Donald Trump tweeted. However, the scourge of fake news extends far further than your Twitter timeline.
The history of online journalism is a complex one. In 2019, we're at a point where newspapers regret ever putting their content online for free, and print is struggling to cope with the fact that anyone can share information free of charge online. Paywalls are commonplace. Subscriptions are popular. In a saturated market, standing out online is harder than ever.
So how do you make sure people visit your site and read your article? For some, the answer lies in blatant fabrication. Others are a little more subtly misleading. A few outlets post satire, and in a turbulent political climate this can be misconstrued as fake news.
Twitter is looking to make things a little clearer.
With the acquisition of London-based Fabula AI, the social networking giant is bringing on board a startup that has been developing technology to spot disinformation online. Twitter hopes that this new addition will make its platform more reliable. It’s easy for rumours to catch fire: think of Fabula AI as the extinguisher.
Deep learning targets the spread of news, rather than the content
Fabula AI is going to help build on Twitter's existing machine learning capabilities. It's thought that Twitter will employ natural language processing (NLP), recommendation systems and reinforcement learning.
NLP is particularly useful for searching through written text. It's already employed by the likes of Spotify to trawl the web for blogs written about musicians, and it is improving smart devices and chatbots. In the past, NLP relied on specific words and phrases: chatbots, for example, would only respond to particular words in the right order. The technology is becoming more intuitive, and is getting much better at recognising related terms.
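As a rough illustration of that shift, compare exact keyword matching with matching against related terms. The synonym lists and the blog-post text below are invented for the example; real NLP systems learn these relationships from data (for instance via word embeddings) rather than from a hand-written thesaurus.

```python
# Toy illustration of related-term matching. The "thesaurus" is invented;
# a real system would learn term relatedness from data.
SYNONYMS = {
    "musician": {"artist", "band", "singer", "performer"},
    "album": {"record", "release", "lp"},
}

def related_terms(term):
    """Expand a query term to the set of related words (toy thesaurus)."""
    return {term} | SYNONYMS.get(term, set())

def matches(text, query_terms):
    """True if the text mentions some related form of every query term."""
    words = set(text.lower().split())
    return all(words & {t.lower() for t in related_terms(q)} for q in query_terms)

# Exact keyword matching would miss this post ("musician" and "album"
# never appear), but related-term matching finds it.
post = "The singer dropped a surprise record last night"
print(matches(post, ["musician", "album"]))  # → True
```

The design point is the same one the paragraph makes: matching on meaning-adjacent words, not exact phrases, is what lets modern systems find relevant text.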
Often, there are tell-tale signs of whether a source is true or not. According to Claire Wardle of First Draft News, there are seven kinds of fake news. From false connection – when the headline doesn't quite match the content: think of click-bait style "You won't believe what happened next" articles – to imposter content – where sources are deliberately faked – there are plenty of ways to hoodwink an audience.
Fake news is a complex business. With subtle differences between a manipulated story and genuinely good content, deep learning has the power to dig a little deeper. Fabula AI looks not at the content, but at how it is spread. The difference between how real news is shared organically with those eager for updates and how lies are pumped out is far greater than any difference in the content itself.
Through reinforcement learning, an algorithm can improve as more information arrives. The AI that Twitter hopes to implement on its platform will take in more and more data, analysing constantly to build a better idea of what is or isn't fake news.
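The article names reinforcement learning, but the core idea – a model that improves as each new piece of information arrives – can be sketched with a much simpler online learner. Everything below (the features, labels and perceptron-style update) is a toy invented for illustration, not Twitter's actual system.

```python
# Toy online learner: the model is updated one example at a time,
# so it keeps improving as more labelled data streams in.
def train_step(weights, features, label, lr=0.1):
    """One online update: nudge the weights toward the observed label."""
    score = sum(w * x for w, x in zip(weights, features))
    predicted = 1 if score > 0 else 0
    error = label - predicted  # 0 when the model already gets it right
    return [w + lr * error * x for w, x in zip(weights, features)]

# Invented stream of (features, label) pairs; label 1 = "real", 0 = "fake".
weights = [0.0, 0.0]
stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 0.2], 1)]
for features, label in stream:
    weights = train_step(weights, features, label)
```

The point is the update loop: unlike a model trained once and frozen, each new labelled example refines the weights a little further, which is what "improving with more information" means in practice.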
Fabula likens fake news to a “disease” spreading through a network and has patented technology to combat it. The algorithms analyse not so much the articles being spread as Twitter itself: how users share the information, and what that pattern of spread reveals about whether a story is fake.
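A minimal sketch of the spread-not-content idea: summarise how a story travels through the network and flag cascades that look unnatural. The cascade format, features and thresholds here are all invented for illustration; Fabula AI's patented approach is a far more sophisticated graph-based deep learning model over the social network itself.

```python
# Toy spread-pattern detector: we never look at the article's text,
# only at how it was reshared. All numbers here are invented.
def cascade_features(shares):
    """shares: list of (time_in_seconds, depth_in_reshare_tree) tuples."""
    times = [t for t, _ in shares]
    depths = [d for _, d in shares]
    duration = max(times) - min(times) or 1
    return {
        "shares_per_minute": 60 * len(shares) / duration,
        "max_depth": max(depths),  # long reshare chains vs. broad shallow fans
    }

def looks_suspicious(shares):
    """Toy rule: very fast, very deep cascades get flagged for review."""
    f = cascade_features(shares)
    return f["shares_per_minute"] > 50 and f["max_depth"] > 5

organic = [(0, 0), (300, 1), (900, 1), (1800, 2)]   # slow, shallow spread
viral_fake = [(i, i % 8) for i in range(60)]        # fast, deep spread
print(looks_suspicious(organic), looks_suspicious(viral_fake))  # → False True
```

Even this crude version shows why the approach is attractive: the liar controls the article's wording, but has far less control over the shape of its spread.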
Could deep learning catch on in journalism?
In January 2018, Aaron Edell posted an intriguing article about a fake news detector he built from scratch.
Edell claims he “almost went crazy”. He ran into the same problem Twitter will no doubt face: the subtle differences between different kinds of lies. Defining fake news became so difficult that at one point Edell had to rip up his template and start again, until he reached an epiphany: it's easier to categorise real news than fake news.
Looking for quality rather than sifting out the bad content is perhaps the best method. When it comes to applying deep learning to online journalism, it may well be that we use AI to find the best as well as the worst. As mentioned, it's an extremely saturated market.
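Edell's insight – model what good journalism looks like and flag whatever scores poorly, rather than trying to enumerate every kind of fake – can be sketched very crudely: score a new piece by how much of its vocabulary appears in known-good reporting. The corpus and the scoring rule below are toys invented for the example; a real system would use a trained classifier over far richer features.

```python
# Toy "quality" scorer: characterise real news, then score new text
# against it. The corpus is invented for illustration.
GOOD_CORPUS = [
    "officials confirmed the report after reviewing the documents",
    "the study published in the journal found a modest effect",
]

def good_vocab(corpus):
    """Collect the vocabulary of known-good reporting."""
    return set(word for doc in corpus for word in doc.split())

def quality_score(text, vocab):
    """Fraction of the text's words also seen in known-good reporting."""
    words = text.lower().split()
    return sum(w in vocab for w in words) / len(words)

vocab = good_vocab(GOOD_CORPUS)
print(quality_score("officials confirmed the study found the documents", vocab))
print(quality_score("you wont believe what happened next", vocab))
```

The design choice mirrors the paragraph: the positive class ("real news") is far more consistent and easier to characterise than the endlessly varied negative class.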
Tools such as Feedly surface the news you're interested in. Future deep learning algorithms could analyse a piece for how closely it aligns with your political views, or even how many jokes it contains. Deep learning wouldn't even have to look through the content to check: it's more than possible to gauge a piece from the reactions it gets and from who shared it where.
Fake news may well be the scourge of social media. Short of fitting all users with a lie detector, there’s no way to get rid of it completely.
However, fake news in itself isn’t the issue. It’s the deceit that’s the problem. If social media users are more aware of what’s real and what isn’t, they can know in advance whether to take content with a pinch of salt. Artificial intelligence can’t act out of moral obligation to shield us from party propaganda but it can tip us off that the source may be a little untruthful.