Anything can be automated. At least, that seems to be the industry perspective. Humans are rash, impulsive, and tainted by personal bias, so why not just replace them with machines? Apparently, that is what Facebook thought when it fired its entire team of news editors over political bias and replaced them with an algorithm. That did not go well. Facebook had spent months on the receiving end of criticism over its Trending News team, the group in charge of deciding which topics were trending on the site, which had been accused of suppressing conservative news stories and articles.
While the tech giant initially denied the allegations, it made a complete U-turn later that year, summarily firing the entire Trending News team and replacing it with an algorithm that ranks articles by popularity. Without a set of human eyes checking the algorithm's handiwork, the immediate result was a trending story titled: "BREAKING: Fox News Exposes Traitor Megyn Kelly, Kicks Her Out for Backing Hillary". Not only was the story inaccurate and completely fabricated, it also undercut Facebook's attempt to escape accusations of political bias. Because that is what happens when you underappreciate the value of human judgment. Guy Sheetrit, founder of Over The Top SEO, has some interesting thoughts on this.
Facebook has created a great platform for the world to break news, share information, and reconnect from wherever in the world people may be. In doing that, it has also created an avenue for income creation, which is the sole driver of the hundreds of fake news sites.
Lately, media outlets have been painting a rather unrealistic picture of artificial intelligence. AI can do this. AI can do that. AI can take over the world. What no one really talks about is what AI cannot do, yet: think, act, and behave like an actual human being. How do you decide whether a piece of news is real or fake? Simple. You do your own research. You dig into facts and figures, but you also weigh emotional context. Based on that research, which is far from a scripted logical pattern, you decide whether the piece of news in front of you is real or fake. That, however, is not how artificial intelligence distinguishes right from wrong.
To begin with, there are three types of artificial intelligence. First, there is artificial narrow intelligence (ANI), also known as weak AI: artificial intelligence that specializes in a single area only. Think of the chess bot that can beat human players, or the autonomous car that has mastered the rules of safe driving. ANI represents the lowest rung of artificial intelligence development and includes all of the AI built to date. The higher rungs are artificial general intelligence (AGI), a form of artificial intelligence as smart as an average human being, and artificial superintelligence (ASI), the ultimate form of artificial intelligence, smarter than all of humanity combined. Given that AI in its current form is still a novice, despite all its accomplishments, it has limits. It is one thing to drive a car or beat a human at chess; both situations can be reduced to a set of probable scenarios and programmed into a computer. The same cannot be said for fake news.
Surprisingly, despite AI's breadth of impact, the types being deployed are still extremely limited. Almost all of AI's recent progress comes from one type, in which some input data (A) is used to quickly generate some simple response (B).
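In software terms, that A-to-B pattern is just supervised classification. As a rough illustration, and nothing more than a toy sketch, here is a tiny naive Bayes classifier that learns to map a headline (A) to a real/fake label (B) from labeled examples. The training headlines are invented for this example; a real detector would need thousands of labeled stories and far richer features.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (headline, label) pairs. Returns per-label word counts."""
    counts = defaultdict(Counter)
    labels = Counter()
    for text, label in examples:
        labels[label] += 1
        counts[label].update(tokenize(text))
    return counts, labels

def predict(counts, labels, text):
    """Pick the label with the highest log prior + smoothed log likelihood."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label in labels:
        score = math.log(labels[label] / sum(labels.values()))
        total = sum(counts[label].values())
        for w in tokenize(text):
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy data -- purely illustrative.
examples = [
    ("shocking secret doctors don't want you to know", "fake"),
    ("you won't believe this miracle cure", "fake"),
    ("senate passes budget bill after long debate", "real"),
    ("city council approves new transit plan", "real"),
]
counts, labels = train(examples)
print(predict(counts, labels, "shocking miracle cure revealed"))  # prints "fake"
```

The limitation the article describes falls straight out of this sketch: the model only compares word statistics between A and B. It has no notion of whether the claim in the headline is actually true.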
The very nature of fake news is fickle and unpredictable. Sometimes even real humans have trouble telling whether a piece of information is real or fake. Fake news calls for assessment and interpretation on a very human level, something AI clearly cannot do, yet. Take, for example, the fake news story that Donald Trump, not Hillary Clinton, won the popular vote. This can be a very confusing situation: Donald Trump did indeed win the presidential election, but the popular vote went to Hillary Clinton. A human editor may know and understand that distinction; an artificial intelligence may not. Fake news is intended to mislead. It is intended to get people to read the story and compulsively click the underlying ads. It is deceptive at its very core. While artificial intelligence can be very good at plotting out a list of possible scenarios and acting on them, it is not yet advanced enough to handle deception at that level.
Let's talk about Dean Pomerleau. In 1989, back when you were still trying to navigate the new interweb thingy and I wasn't even born, Dean designed his very first self-driving car. Though functional, Dean's prototype was not ready for street deployment, given the infrastructure of the time. He did, however, land a very interesting piece in Byte magazine thanks to his work. Now an adjunct professor at Carnegie Mellon, Dean has put up an open-ended bet worth $1,000 for anyone who can develop an artificial intelligence powerful enough to drive fake news out of outlets like Facebook, Twitter, and Google. He isn't, however, very optimistic. As one of the pioneers of neural networks, Dean Pomerleau is well aware of the limitations of artificial intelligence and knows all too well that AI on its own cannot combat fake news. It can, however, help.
Post-facto fake news refers to news items or claims that are already known to be false, either through the work of organizations like Snopes and Factcheck or through the general public on social media.
I am no Dean Pomerleau, but I get where he is coming from. AI by itself cannot put an end to fake news; the subject is too complex for it to handle alone. It can, however, greatly speed up the process by flagging all potentially suspicious news and letting a human editor decide what is fake and what is real. Among the hundreds of stories that trend on Facebook each day, it takes a human editor hours to sort through and flag potentially fake items. But if someone developed software that could sort through this huge pile of news in minutes and flag all suspicious content for review, it would considerably boost a human editor's ability to track down and combat fake news. That is what Dean is getting at.
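The division of labor described above, machine triage followed by a human verdict, can be sketched in a few lines. The scoring signals below (clickbait phrases, exclamation marks, all-caps words) are invented placeholders for illustration; a real system would score headlines with a trained model rather than hand-written rules.

```python
import re

# Invented heuristic signals -- stand-ins for a real trained model.
CLICKBAIT = ("breaking", "exposed", "shocking", "you won't believe", "traitor")

def suspicion_score(headline):
    """Crude score in [0, 1]: higher means more worth a human editor's attention."""
    text = headline.lower()
    score = 0.0
    score += 0.3 * sum(phrase in text for phrase in CLICKBAIT)
    score += 0.2 * headline.count("!")
    # Words in ALL CAPS (3+ letters) are a common sensationalism tell.
    score += 0.2 * len(re.findall(r"\b[A-Z]{3,}\b", headline))
    return min(score, 1.0)

def triage(headlines, threshold=0.4):
    """The machine sorts; a human editor reviews only the flagged queue."""
    scored = [(h, suspicion_score(h)) for h in headlines]
    flagged = [item for item in scored if item[1] >= threshold]
    return sorted(flagged, key=lambda item: item[1], reverse=True)

queue = triage([
    "BREAKING: Fox News Exposes Traitor Megyn Kelly!",
    "Senate passes budget bill after long debate",
])
for headline, score in queue:
    print(f"{score:.2f}  {headline}")
```

Notice that the sketch never decides what is fake; it only ranks what a human should look at first, which is exactly the assistive role the article argues AI can play today.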
Artificial intelligence is still a long way from perfection. An automated world may be efficient, but it is still a fantasy. For now, if AI is to perform complex tasks such as battling fake news, it needs human assistance. Artificial intelligence, in its current state, is not a substitute for human intelligence; it is a powerful addition to it. What are your thoughts on the matter?