NO, AAMIR, RANVEER WEREN’T ENDORSING ANY PARTY: HOW TO FACT-CHECK, IDENTIFY DEEPFAKES DURING ELECTIONS

As the Lok Sabha elections kick off, the digital landscape has once again become a battleground for truth versus fiction. Amid the flood of misinformation on social media platforms, two prominent Bollywood actors, Aamir Khan and Ranveer Singh, found themselves unwittingly entangled in a web of disinformation.

In the lead-up to the elections, two manipulated videos featuring actor Khan made waves across cyberspace. These altered versions of a promo of Khan’s renowned TV show, Satyamev Jayate, painted a distorted picture of his political affiliations. One depicted Khan seemingly endorsing the Congress party, while the other cunningly echoed the party’s rhetoric of “nyay” (justice), coinciding with the title of its manifesto.

Adding to the chaos, actor Ranveer Singh fell victim to deepfake technology, where a manipulated video portrayed him lambasting Prime Minister Narendra Modi on issues of national concern. However, the original clip revealed Singh’s praise for the PM, highlighting the insidious nature of manipulated media.

In an age of information blitzkrieg, how do we distinguish the real from the fake?

How is a deepfake created?

The process of crafting deepfake videos involves a sophisticated interplay of artificial intelligence algorithms, particularly “voice swap” technology. Platforms like itisaar.ai, working with institutions such as IIT Jodhpur, have shed light on the intricate steps involved in fabricating these deceptive videos, The Indian Express reported.

Through the manipulation of audio and visual elements, creators can seamlessly alter or mimic voices and facial expressions, resulting in convincing yet entirely fabricated content.

How do you spot deepfakes?

While identifying deepfakes may seem like a daunting task, there are several strategies to arm oneself against the onslaught of disinformation, especially during election season.

Verify sources: Exercise caution when encountering content from unfamiliar sources, particularly if it appears contentious or sensational. Cross-reference with reputable sources to authenticate the veracity of any dubious claims.

Listen for anomalies: Pay close attention to audio content for subtle irregularities, such as unnatural tonal shifts or robotic speech patterns, which may indicate manipulation.

Scrutinise visual content: Deepfake audio often accompanies altered visual footage. Look for inconsistencies between audio and visuals, such as discrepancies in lip movements, as a telltale sign of manipulation.

Stay informed: Keeping abreast of current events and news developments enhances one's ability to discern the authenticity of content amidst the deluge of misinformation.

Harness AI detectors: Leverage AI detectors like Optic’s “AI or Not” to analyse suspicious audio or video content and gain insight into its authenticity; a rough sketch of automating such a check follows this list.
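
For readers comfortable with a bit of scripting, the snippet below is a minimal sketch of how such a check could be automated. The endpoint URL, request fields and response keys are placeholders invented for illustration; they are not Optic’s actual API or that of any other service.

```python
import requests

# Hypothetical detector endpoint -- placeholders, not the real API of any named service.
DETECTOR_URL = "https://example-detector.invalid/api/v1/analyse"
API_KEY = "YOUR_API_KEY"  # placeholder credential


def check_media(path: str) -> dict:
    """Submit a suspicious audio/video/image file to an AI-content detector."""
    with open(path, "rb") as f:
        response = requests.post(
            DETECTOR_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=60,
        )
    response.raise_for_status()
    # Assumed response shape: {"verdict": "likely_ai" | "likely_real", "confidence": 0.97}
    return response.json()


if __name__ == "__main__":
    result = check_media("suspicious_clip.mp4")
    print(f"Verdict: {result['verdict']} (confidence {result['confidence']:.0%})")
```

Whatever the tool, treat a single detector’s verdict as one signal among several and weigh it alongside the manual checks listed above.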

Misinformation vs Disinformation: What is the difference?

In the realm of online discourse, understanding the distinction between misinformation and disinformation is paramount.

Misinformation refers to the dissemination of false or inaccurate information, often shared by individuals who are unaware of its dubious nature. It can encompass a wide array of content, from innocuous rumours to deliberate hoaxes, and may arise from genuine mistakes, misinterpretations, or the amplification of sensationalised narratives.

In contrast, disinformation entails the deliberate spread of false or misleading information with the intent to deceive, manipulate, or influence public opinion. Originating from malicious actors or coordinated efforts, disinformation campaigns often target specific individuals, groups, or societies, seeking to sow discord, undermine trust, or advance particular agendas. Disinformation tactics may include the fabrication of fake news stories, the creation of manipulated media, or the dissemination of propaganda designed to manipulate perceptions and shape narratives.

How to spot disinformation strategies?

In the realm of political discourse, disinformation has become a pervasive tool wielded by various entities, including governments and extremist groups. Notably, the Russian government has utilised images of celebrities to promote anti-Ukraine propaganda, while Meta, the parent company of Facebook and Instagram, has issued warnings regarding China's intensified disinformation efforts.

Disinformation campaigns exploit the vast reach of the internet to disseminate misleading content, targeting periods of societal unrest, natural disasters, and geopolitical tensions to sow confusion and manipulate public opinion.

Some of the common disinformation tactics are:

Making light of serious matters: Disinformation agents employ humour, memes, and political satire to trivialise important issues, deflect criticism, and evade accountability.

Spreading exclusive knowledge: Disinformation thrives on the allure of secrecy, with agents claiming access to hidden truths and urging recipients to share these supposed revelations widely.

Fabricated faces: Disinformation often relies on fabricated personas, including fake experts and sympathetic individuals, to lend credibility to false claims and sway public opinion.

Simplifying complex issues: Conspiratorial narratives oversimplify complex issues, framing debates as battles between good and evil to manipulate public perception.

Oversimplifying choices: False dichotomy narratives present binary choices, stifling nuanced discussion and silencing dissenting viewpoints.

Shifting blame: Whataboutism deflects attention from one's own wrongdoings by pointing fingers at others, exploiting irrelevant accusations to divert scrutiny.

How to navigate misinformation on the web?

With the rise of AI-generated deepfakes, discerning truth from falsehood becomes increasingly challenging. Is it still possible to sift through the noise and find reliable information? While there’s no foolproof method, here are some strategies to minimise the risk of falling for misinformation:

  1. Know your sources: When consuming news online, be cautious about the credibility of your sources. Established news outlets are generally more trustworthy than unknown sources. While citizen journalism can be valuable, verify information from multiple reputable sources whenever possible.
  2. Check the context: Context is crucial when evaluating news stories, photos, or videos. Look for additional information surrounding the content, such as whether it's part of a series or if there are any corroborating sources. Platforms like Facebook may provide false information warnings, while X (formerly Twitter) may offer community notes for added context.
  3. Spot the patterns: Fake news often spreads rapidly on social media, designed to provoke strong reactions and gain traction. Be wary of posts lacking context or attempting to elicit emotional responses without providing credible sources. Exercise skepticism and refrain from sharing content hastily.
  4. Do your research: Take advantage of fact-checking services like Snopes and FactCheck.org to verify the accuracy of news stories and debunk misinformation. These platforms analyse claims and counterclaims, providing comprehensive explanations of what's true and false. While not all content may be covered, it's worth consulting these resources for added assurance (a sketch of querying such a service programmatically follows this list).
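
Beyond manual searches on Snopes or FactCheck.org, published fact-checks can also be queried programmatically. The sketch below is one illustration, assuming access to Google’s Fact Check Tools claim-search API (not mentioned in the original reporting); the field names follow its documented ClaimReview response format, but check the current documentation before relying on them.

```python
import requests

# Google Fact Check Tools claim-search endpoint (requires a Google API key).
FACTCHECK_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"
API_KEY = "YOUR_API_KEY"  # assumption: obtained from the Google Cloud console


def search_fact_checks(query: str, language: str = "en") -> list:
    """Return published fact-check (ClaimReview) entries that match a claim."""
    params = {"query": query, "languageCode": language, "key": API_KEY}
    response = requests.get(FACTCHECK_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("claims", [])


if __name__ == "__main__":
    for claim in search_fact_checks("Aamir Khan endorses a political party"):
        for review in claim.get("claimReview", []):
            publisher = review.get("publisher", {}).get("name", "unknown")
            rating = review.get("textualRating", "unrated")
            print(f"{publisher}: {rating} -> {review.get('url')}")
```

A match here is a starting point, not a verdict: open the linked fact-check and compare it with the claim’s original context.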

2019 General Election vs 2024: Paths of misinformation

The 2019 general election witnessed a battleground shifting increasingly to social media platforms. A significant portion of the campaign strategies employed by major parties incorporated online misinformation tactics, including fabrications about opponents and the dissemination of propaganda, as outlined in a 2022 study. This manipulation extended to sophisticated campaigns utilising forwarded WhatsApp messages and the widespread deployment of IT bots on Facebook.

These bots were instrumental in spreading doctored images, coordinating content, and proliferating fake videos. Notably, the bulk of misleading information originated with the Bharatiya Janata Party (BJP) and the Indian National Congress (INC), with both parties serving as sources as well as targets of misinformation, The Hindu reports.

Additionally, an investigation by the Digital Forensic Research Lab highlighted the role of automated 'bots' in boosting hashtags and attempting to manipulate traffic on Twitter in February 2019.

A Vice investigation revealed how parties leveraged platforms like WhatsApp and Facebook to disseminate inflammatory messages among supporters, raising concerns about the potential for online rage to spill over into real-world violence.

Notably, the utilisation of large language models alongside IT bots has become increasingly sophisticated, imbuing these bots with more human-like characteristics and enhancing their efficiency, as detailed in a 2024 analysis published in PNAS Nexus.

What does Indian law say?

India's IT rules already address deepfakes, providing a mechanism for reporting and handling morphed images. Rule 3(2) of the Information Technology Rules, 2021 stipulates that intermediaries must promptly remove or disable access to content that includes artificially morphed images depicting individuals in compromising situations. This applies to both sexual and non-sexual deepfakes.

Individuals can report deepfakes through social media platforms' grievance offices or India's cybercrime reporting portal. Additionally, the government aims to regulate AI platforms, particularly after incidents involving Google's Gemini AI chatbot generating contentious statements about Prime Minister Modi's actions.

“These are direct violations of Rule 3(1)(b) of [the IT Rules, 2021] and violations of several provisions of the Criminal code,” said Union Minister Rajeev Chandrasekhar on X in February.

Concerns have been raised about potential censorship as the government seeks to regulate AI giants releasing large language models.

Meanwhile, Google is adjusting its algorithms to address election-related queries more cautiously, redirecting users to standard search results with the message: "I’m still learning how to answer this question. In the meantime, try Google Search."

As the battle against disinformation rages on, equipping oneself with the tools to navigate this digital labyrinth becomes imperative. By staying vigilant and adopting proactive measures, individuals can safeguard themselves against the pervasive influence of manipulated media during crucial electoral periods.

With inputs from agencies