Filtering out fake news on social media

Conventional wisdom that digital literacy alone can stem the tide of misinformation is dangerously simplistic

A representational image showing "fake news" on a laptop screen. — AFP/File

In an age where truth seems negotiable and subject to multiple interpretations, lies and fake news travel at lightning speed. Research from the Massachusetts Institute of Technology (MIT) reveals a disturbing reality: false information spreads six times faster than facts on social media.

While governments loudly denounce this digital epidemic as a threat to democracy and social cohesion, their policies often end up fuelling the very fire they claim to fight. Welcome to the ‘attention economy’, where viral falsehoods translate into profit and even those tasked with safeguarding truth sometimes become unwitting accomplices in its distortion.

The conventional wisdom that digital literacy alone can stem the tide of misinformation is dangerously simplistic. Recent research in India by LSE professor Shakuntala Banaji and others reveals that many who share false information are not victims of deception but active participants, driven by a complex mix of social, political and economic motivations.

Far from being digitally illiterate, these users often possess sophisticated technical skills, which they misuse to spread disinformation and fake news. This is a pervasive problem, but a scalable solution exists.

The study uncovers a troubling ecosystem in which mainstream media narratives, political rhetoric and social media messages reinforce one another, creating powerful echo chambers. During heightened political moments or national crises, users consciously prioritise ideological alignment over factual accuracy, viewing the spread of certain narratives as a civic duty.

The role of trust networks is even more revealing — many share information not based on its veracity, but on their relationship with and respect for the sender. This psychological and social dimension of misinformation presents a far more complex challenge than simply teaching people how to spot fake news.

This dynamic becomes even more complex in the realm of digital media, where 'trust' has been commodified. Today's digital news publishers and opinion makers — influencers, YouTubers, TikTokers, podcasters, and social media personalities — operate within an attention economy that often prioritizes virality over veracity. 

Their business model, heavily dependent on views and engagement for monetisation, creates a perverse incentive structure. The race for clicks and shares encourages sensationalism and inflammatory content, while thoughtful, well-researched journalism struggles to find its footing.

Even more concerning is the systematic neglect of critical but complex topics — science, arts, technology and historical analysis — that require extensive research and expertise but may not generate viral appeal. This creates a dangerous knowledge vacuum where shallow, sensationalised content flourishes while deeper, scholarly work remains on the periphery of public discourse.

Imagine a system where digital content creators are rewarded not just for their viewership, but also for their credibility and content diversity. Imagine a credibility rating mechanism that could determine the trustworthiness of a platform, and a specialised forum that could prioritise and incentivise underrepresented topics (such as science, history, law for beginners and technical tutorials).

This dual approach would ensure that lower views on complex or well-researched content don't discourage publishers from producing quality work. When credible content and coverage of neglected areas like science, arts and technology carry additional value, digital news publishers will be incentivised to invest in thoughtful, well-researched stories rather than chase viral content alone.

The success of this system hinges on the impartial determination of the credibility factor itself. This requires an independent committee comprising academics, local and international media professionals, industry leaders and researchers.

The committee must command trust across the spectrum of all stakeholders — political parties, media groups, civil society, and academia. Ideally, it should be constituted through a parliamentary consensus, with representation from all political parties and input from media and civil society groups.

This committee would evaluate digital media operators and assign them a credibility factor ranging from 0.5 to 1.5. Consider its practical impact: a digital news platform with a credibility factor of 1.5 would receive a 50% premium on government advertising rates, while those with a 0.5 rating would face a 50% reduction. Such financial implications would create a strong incentive for maintaining high journalistic standards.
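
To make the arithmetic concrete, here is a minimal sketch in Python of how such a rate adjustment might work; the function name, the example base rate and the clamping of the factor to the 0.5–1.5 band are illustrative assumptions, not a prescribed implementation.

# Illustrative sketch: scale a government advertising rate by a
# platform's credibility factor (assumed to lie between 0.5 and 1.5).
def adjusted_ad_rate(base_rate: float, credibility_factor: float) -> float:
    # Clamp the factor to the 0.5-1.5 band described above (assumption).
    factor = max(0.5, min(1.5, credibility_factor))
    return base_rate * factor

# A platform rated 1.5 earns a 50% premium; one rated 0.5 takes a 50% cut.
print(adjusted_ad_rate(100_000, 1.5))  # 150000.0
print(adjusted_ad_rate(100_000, 0.5))  # 50000.0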

Similarly, the criteria for shortlisting influencers or social media channels should not be based solely on follower count. These criteria should vary depending on the content category. For example, content creators working in specialised domains like science and the humanities, which have significant societal value, should be eligible for government advertising even with relatively lower subscriber numbers. 

This approach would help establish a trend toward content that is more diverse and more valuable to society.

Such a policy would not only create strong incentives for producing accurate and valuable content but also represent a meaningful state investment in citizens' intellectual development. Moreover, it would signal a genuine effort on the part of the state, one that would eventually enhance its credibility and help rebuild public trust, which has eroded significantly in recent times.

Quality journalism and fact-based content creators find themselves in an unfair race against inflammatory content, where the pressure to be first often overshadows the need to be accurate. As professional publishers struggle to maintain revenue against the tide of unreliable sources, the real casualty is the public interest: important government initiatives and public service information receive far less attention than viral, often unverified content.

State regulation of media to 'combat fake news' is a commonly adopted approach. However, such direct control is neither desirable nor practical — nor sustainable.

Instead, a balanced approach using government advertising budgets as both incentive and deterrent could prove more effective. By rewarding credibility and diverse content through thoughtful financial incentives decided by completely independent and competent bodies, we can nurture a healthier social media ecosystem. 

This could sow the seeds for more ethical digital discourse without compromising press freedom.

The path to combating misinformation lies not in controlling the media, but in creating a supportive environment where quality journalism thrives naturally.


Disclaimer: The viewpoints expressed in this piece are the writer's own and don't necessarily reflect Geo.tv's editorial policy.


The writer is a Chevening scholar specialising in media and e-governance, and a technology entrepreneur.


Originally published in The News