A couple of months ago, I got a few messages from friends and acquaintances checking in on me and asking how I was doing. As it turned out, a string of articles published in April had been circulating misinformation about my passing—which, if you are reading this article right now, you'll recognize is clearly unfounded and untrue.
I assure you, I am a real person in the flesh behind my computer, composing these very sentences you are making your way through.
And if something as personal as a false report of an individual's death could create such a stir, imagine the damage that similarly misconstrued coverage of larger global issues could cause.
Fake news has become a real problem in our digital age. But there are ways to combat it with technology.
While propaganda is not a new concept and has been consistently weaponized throughout the history of civilization, we live in a reality that’s all the more influenced by the hyperconnectivity and constant interactions we now experience in the real and digital worlds alike. The rise of fake news in the last decade can be largely attributed to that exact factor—the ease with which digital content can be created, distributed and consumed through a variety of digital platforms, from social media to websites and private messengers.
Due in part to larger shifts in day-to-day life—through broader digital transformation and exacerbated by Covid-19—the rate of global content consumption has skyrocketed, particularly in the last five years. According to a study by Nielsen, home-bound consumers drove a 60% increase in the amount of video content watched globally. Meanwhile, according to Statista, more than 40% of consumers spent more time on messaging services and social media during the pandemic in 2020.
Two things are clear. One, there are more ways than ever to consume a variety of media content, from streaming videos to reading niche newsletters and blogs. Two, people worldwide have grown accustomed to consuming content across borders and from a variety of sources. But the scary part is that people are naturally prone to take the content they consume at face value rather than stepping back and evaluating the source before diving in.
With this global shift towards a firehose of information online, the battle for consumers' attention has also intensified. Amid the endless scrolling people engage in across all the feeds on which they are active, being scroll-stopping and attention-grabbing has become the ultimate goal.
No longer is conveying truthful information the North Star metric for those creating content. In fact, quite the opposite: The more clickbaity, exaggerated and catchy the content sounds, the better. All this for the hottest commodity in circulation today—user attention.
While this issue has proliferated due to the technological advancements that the digital transformation has spurred, it also means that technological advancements will be the solution to this increasingly dangerous issue.
Artificial intelligence is among the best tools we have to supplement our efforts as humans in the fight for the truth and against misinformation. With its ability to train on data samples of various sizes and quickly learn to discern patterns, identify anomalies and predict future events, artificial intelligence is perfectly positioned to become an ideal tool for fighting propaganda and minimizing the risk of fake information going viral.
One example of an application of such technology is fact-checking. In its simplest form, this means creating software that learns from the work of human fact-checkers in order to verify the information and sources within an article—a powerful starting point in the line of defense against fake news.
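To make the idea concrete, here is a minimal sketch of one small piece of such a pipeline: matching a claim extracted from an article against a store of already-verified statements. The mini-database, the threshold and the function names are all invented for illustration; a production fact-checker would rely on far richer semantic matching than simple word overlap.

```python
def jaccard(a, b):
    """Jaccard overlap between the word sets of two short texts."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def check_claim(claim, verified_claims, threshold=0.4):
    """Return the best-matching verified claim, or None if nothing is close."""
    best = max(verified_claims, key=lambda v: jaccard(claim, v))
    return best if jaccard(claim, best) >= threshold else None

# Invented mini-database of verified statements, purely for illustration.
verified = [
    "the unemployment rate fell to 4 percent in march",
    "the city approved a new transit budget last week",
]
print(check_claim("unemployment rate fell to 4 percent", verified))
```

A claim with no close match returns `None`, which in a real pipeline would route the article to a human fact-checker rather than auto-flag it.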
Another obvious avenue is fake news detection. AI can be trained on examples of news that are factually correct, and by tapping into its ability to discern anomalies or deviations from the norm, it's possible to develop a solution that continuously monitors and compares the truthfulness of articles and reports back on what it finds.
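As an illustration of the underlying principle, here is a minimal sketch of a bag-of-words naive Bayes classifier that learns to separate "real" from "fake" phrasing. The tiny training corpus below is invented purely for illustration; a real detector would need thousands of labeled articles and far more sophisticated features than word counts.

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and split on whitespace; punctuation handling omitted for brevity.
    return text.lower().split()

def train(samples):
    """Count word frequencies per label from (text, label) pairs."""
    word_counts = {"real": Counter(), "fake": Counter()}
    label_counts = Counter()
    vocab = set()
    for text, label in samples:
        label_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Return the label with the higher log-space, Laplace-smoothed posterior."""
    total = sum(label_counts.values())
    scores = {}
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

# Tiny invented corpus, purely for illustration.
training = [
    ("officials confirm budget figures in report", "real"),
    ("study cites verified data from agency", "real"),
    ("shocking secret they dont want you to know", "fake"),
    ("miracle cure doctors hate this one trick", "fake"),
]
model = train(training)
print(classify("shocking miracle trick they hate", *model))  # prints 'fake'
```

Even this toy model captures the intuition in the passage above: clickbait vocabulary deviates from the statistical norm of factual reporting, and that deviation is learnable.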
Since news spreads at the speed of light today, there is also an opportunity to develop a solution that checks variations of the same news story as it's picked up by various sources—from social media to blogs and even reputable publications—in order to cross-reference the facts and identify misrepresentations, large or small, in each piece of content reporting on the same issue.
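One simple way to sketch such cross-referencing is to compare the wording of each variant of a story against a reference version and flag the ones that drift too far. The example texts, the cosine-similarity measure and the threshold below are assumptions chosen for illustration; a real system would compare extracted facts, not raw word counts.

```python
import math
from collections import Counter

def bag_of_words(text):
    """Word-count vector for a short text."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def flag_divergent(reference, variants, threshold=0.5):
    """Return the variants whose wording drifts far from the reference version."""
    ref = bag_of_words(reference)
    return [v for v in variants
            if cosine_similarity(ref, bag_of_words(v)) < threshold]

# Invented versions of the "same" story, purely for illustration.
original = "city council approves new transit budget after public hearing"
versions = [
    "council approves transit budget after hearing",           # close paraphrase
    "outrage erupts as council wastes millions on failed plan" # heavy spin
]
print(flag_divergent(original, versions))  # flags only the heavy-spin version
```

The close paraphrase scores high on similarity and passes, while the spin-heavy rewrite falls below the threshold and is surfaced for review.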
Of course, there is an argument to be made that the power of AI, in this case, can be used to achieve the exact opposite of what we hope. Indeed, with the rise of deepfake videos online and AI's ability to learn quickly, there has been a surge of misinformation created with the wrong motives and for the wrong reasons. This means one thing: Technology is a direct reflection of the intentions that we as consumers and the larger corporations hold when consuming or creating content. People must be held accountable for their actions, whether they are contributing to a problem or a solution.
Technology can be a powerful tool for positive change so long as it is paired with significant effort on our part as individuals to continue to self-educate and on the part of corporations to assume responsibility for self-governing the quality and reliability of the content they distribute. It's no surprise, for example, that Twitter has banned political ads on its platform, or that in September 2019, Facebook and Microsoft announced an initiative to collaborate on a contest aimed at identifying videos that use deepfake technology.
Change is certainly underway, but there is no progress unless we leverage technology for the good and back it up with good intentions on our part.
As for me, in the meantime, I am alive and well and very much not dead.
Originally published in Forbes