Recently, Laura van Beers interviewed Professor Francesca Toni of Imperial College London about how AI could be used to understand human arguments. Towards the end of the interview, Professor Toni mentioned that her work might one day help to tackle a topical problem: 'fake news'.
The term 'fake news' originally referred to links on social media designed to look as if they led to genuine news stories, tricking people into clicking on them. Each click brought in advertising money for the faker. More recently, the term has come to mean any news item the reader disagrees with – but whatever the definition, fake news has been causing problems for Facebook because of claims that the platform can be used to manipulate its users.
Mark Zuckerberg, Facebook's founder and chief executive, has identified three types of fake news he wants to tackle[1]: spammers who create fake stories to reap advertising money from every click; state actors – "the Russian interference effort", as Zuckerberg described it – who attempt to spread propaganda, manipulate opinion or simply cause confusion; and finally, stories from genuine media outlets that are not true.
To what extent could AI help with this effort? And, if it can help, should it?
It appears that Facebook is already using AI to some extent. Zuckerberg says the network uses panels of hundreds of thousands of people to teach its algorithms to weed out undesirable content: anything users say they would like to see less of gets downgraded in news feeds, and since most users don't want to see fake stories, that includes fake news.
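To make the technique concrete, here is a minimal sketch of feedback-driven downranking – not Facebook's actual system, which is not public, but the general approach the paragraph describes. The example posts, labels and choice of classifier are all invented for illustration.

```python
# A toy sketch of feedback-driven downranking (not Facebook's real
# pipeline): panellists label posts they would like to see less of,
# a text classifier generalises from those labels, and flagged posts
# sink down the feed rather than being removed outright.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented panel feedback: 1 = "show me less of this", 0 = fine.
posts = [
    "Miracle cure doctors don't want you to know about",
    "Local council approves new cycle lane",
    "You won't BELIEVE what this celebrity did next",
    "Central bank holds interest rates steady",
]
labels = [1, 0, 1, 0]

vectoriser = TfidfVectorizer()
model = LogisticRegression().fit(vectoriser.fit_transform(posts), labels)

def rank_feed(candidates):
    """Order candidate posts so the least objectionable appear first."""
    scores = model.predict_proba(vectoriser.transform(candidates))[:, 1]
    return [post for _, post in sorted(zip(scores, candidates))]
```

Note that this downgrades rather than deletes: a post the model is unsure about still appears, just lower down – which is exactly why the quality of the panel's labels matters so much.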
To avoid the problem of people simply downgrading news they disagree with, Facebook must recruit a panel large enough to represent a sufficient range of views. If that is done well, Facebook can tackle fake news without having to target it specifically.
Tackling the "Russian interference" is harder. Many of the profiles used to spread misinformation had been on Facebook for years, either as 'sleeper' accounts operated by Russia or as genuine accounts that Russia later took over. AI is of little use here because there is so little data on which to train the algorithms – and a panel of users would not necessarily see anything wrong with these accounts anyway.
The problem is harder still for stories from genuine media outlets that are not true. AI could check whether a story really comes from the outlet it claims to, whether that outlet is 'trusted', or even trace which outlet published a story first. However, assessing whether a story is actually true requires a level of fact-checking and judgement beyond the capabilities of modern AI; no AI can phone a source to verify a quote, for example. And even if one could, doing so at scale would be extraordinarily complex and expensive.
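The provenance checks are the tractable part. Here is a minimal sketch of what they might look like, assuming a hypothetical allow-list of trusted outlets and an invented example URL; note that these checks say nothing about whether the story itself is true.

```python
# A minimal sketch of the provenance checks described above. The
# allow-list and the matching rule are assumptions for illustration;
# this verifies where a story comes from, not whether it is true.
from urllib.parse import urlparse

TRUSTED_OUTLETS = {"bbc.co.uk", "reuters.com", "apnews.com"}  # assumed

def source_checks(claimed_outlet: str, story_url: str) -> dict:
    domain = urlparse(story_url).netloc.lower().removeprefix("www.")
    return {
        # Does the link actually point at the outlet the post names?
        "matches_claim": domain.endswith(claimed_outlet.lower()),
        # Is the outlet on the allow-list at all?
        "outlet_trusted": any(domain.endswith(t) for t in TRUSTED_OUTLETS),
    }

# A post claiming to be from the BBC but linking to a lookalike domain:
print(source_checks("bbc.co.uk", "https://bbc-news.example.com/story"))
# -> {'matches_claim': False, 'outlet_trusted': False}
```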
How does Professor Toni's research fit into this? It's too early to say. Her work is at the forefront of AI research and we can't yet model the way humans argue. Once we can, could an AI remove bots from Facebook by identifying clues that they are not real humans?
This would be something like filtering spam email, but trickier. As with spam, some genuine accounts would be caught by mistake and some fakes would slip through.
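A toy example makes that trade-off concrete. Suppose a hypothetical bot-detector gives every account a score between 0 and 1; wherever the removal threshold is set, both kinds of error remain in play (the accounts and scores below are entirely invented):

```python
# Invented accounts: (name, really_a_bot, detector_score).
accounts = [
    ("alice",  False, 0.10),
    ("bot_42", True,  0.95),
    ("bob",    False, 0.55),  # a genuine user who looks suspicious
    ("bot_77", True,  0.45),  # a bot that looks genuine
]

def outcomes(threshold: float):
    """Who gets wrongly removed, and which bots survive, at a threshold."""
    removed = [(name, is_bot) for name, is_bot, s in accounts if s >= threshold]
    false_positives = [name for name, is_bot in removed if not is_bot]
    missed_bots = [name for name, is_bot, s in accounts if is_bot and s < threshold]
    return false_positives, missed_bots

print(outcomes(0.5))  # (['bob'], ['bot_77']): both error types at once
print(outcomes(0.9))  # ([], ['bot_77']): genuine users safe, more bots missed
```

Raising the threshold protects genuine users but lets more bots through; lowering it does the reverse. No setting eliminates both errors, which is why who tunes the threshold matters.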
The big question is who would decide what content should be removed. There are laws requiring certain kinds of content to be taken down – incitements to racial hatred or violence, for example – but otherwise social networks are places where people can express any idea.
Removing malicious content from Facebook without shutting out legitimate users will be far from easy.
_______________________________________________________________________________________
[1] https://www.vox.com/technology/2018/4/3/17188332/zuckerberg-kinds-of-fake-news-facebook-making-progress