
Deepfakes and AI in Bharat’s 2024 General Election: A Threat to Democracy and Information Integrity

Paromita Das

GG News Bureau
New Delhi, 7th Feb.

The Rise of Deepfakes in Bharat’s Electoral Politics

The rapid advancements in artificial intelligence (AI) have ushered in an era of technological disruption, transforming various aspects of human life. Among these innovations, deepfake technology has emerged as a powerful yet dangerous tool, capable of manipulating audio and video content in ways that are virtually indistinguishable from reality.

Bharat’s 2024 General Election witnessed an unprecedented rise in the use of deepfake videos, sparking widespread concerns about misinformation, voter manipulation, and the ethical use of AI-generated content. Several national and international media organizations extensively covered how AI-powered deepfakes were weaponized during the elections, influencing public opinion and political narratives.

Deepfake technology allows the seamless impersonation of individuals, from celebrities to politicians, by altering their voices, expressions, and body language. This has led to the circulation of manipulated videos featuring prominent personalities, including Bollywood actors and cricketers, falsely endorsing political parties or controversial agendas. As deepfake technology becomes more accessible and sophisticated, the challenge of distinguishing truth from fabricated content has become increasingly complex.

This article delves into the role of deepfakes in Bharat’s 2024 elections, the legal landscape surrounding their regulation, global responses to AI-generated misinformation, and the urgent need for a robust framework to counteract their impact on democracy and public discourse.

Deepfakes in the 2024 Elections: A Political Weapon

In the months leading up to the Lok Sabha elections, deepfake videos became a major tool in shaping voter perceptions. Political parties, interest groups, and even anonymous online entities allegedly deployed AI-generated deepfakes to mislead the electorate.

One of the most alarming aspects of deepfake technology is its ability to fabricate realistic-looking videos where individuals appear to say or do things they never actually did. Some of the most high-profile cases included:

  • Celebrities Endorsing Political Parties: AI-generated deepfake videos of Bollywood actors such as Aamir Khan and Ranveer Singh surfaced on social media, falsely portraying them as endorsing certain political parties. Similarly, legendary cricketer Sachin Tendulkar was depicted in a fabricated video appearing to support a gambling app.
  • Manipulated Political Speeches: Several videos falsely attributed controversial remarks to political leaders, misleading voters and sparking unnecessary communal or ideological tensions.
  • Fake Campaign Messages: Deepfake technology was used to generate campaign speeches that mimicked real politicians, creating confusion among the electorate about the authenticity of their statements.

The implications of such manipulations are far-reaching. Because deepfake videos typically spread faster than fact-checking mechanisms can counter them, the damage is often irreversible, leaving voters misinformed and vulnerable to false narratives.

The Legal and Regulatory Challenges in Bharat

Despite the growing concerns over deepfake technology, Bharat currently lacks a dedicated legal framework to address its misuse comprehensively. The country relies on existing laws such as:

  • The Information Technology Act, 2000 (IT Act): Although the IT Act does not explicitly criminalize deepfakes, it provides legal provisions for penalizing cybercrimes such as identity theft, impersonation, and the dissemination of obscene material. Section 66D of the Act penalizes cheating by personation using computer resources, but its applicability to deepfakes remains ambiguous.
  • The Indian Penal Code, 1860 (now replaced by the Bharatiya Nyaya Sanhita, 2023): The law includes provisions against defamation, misinformation, and content that incites violence or communal disharmony. The penalty for spreading false content that promotes enmity between groups includes imprisonment for up to three years, a fine, or both.
  • The Copyright Act, 1957: While the Act protects the intellectual property rights of content creators, it does not offer adequate safeguards against AI-generated deepfakes, as it primarily recognizes human authorship.

Recognizing the severity of the issue, the Ministry of Electronics and Information Technology (MeitY) released an advisory in November 2023, urging social media platforms to proactively identify and remove deepfake content. It warned that platforms failing to comply with these guidelines could lose their legal immunity under the IT Act and be held accountable for the consequences of deepfake-related misinformation.

Additionally, Bharatiya courts have begun setting precedents on deepfake-related legal issues. In the Anil Kapoor v. Simply Life India and Others case, the Delhi High Court granted protection to the actor’s personal attributes—including his face, voice, and likeness—against unauthorized deepfake usage. Similarly, in Amitabh Bachchan v. Rajat Negi and Others, the court issued an interim injunction preventing the misuse of the veteran actor’s name and image for commercial deepfake content.

While these legal actions signal a step forward, they remain limited in scope and applicability. The lack of explicit legislation targeting deepfake creation, distribution, and accountability leaves significant gaps in Bharat’s legal framework.

Global Efforts to Combat Deepfakes

Several countries have taken proactive steps to regulate deepfake technology:

  • United States: The US government has proposed laws such as the DEEPFAKES Accountability Act, which mandates clear disclosures when AI-generated content is used in political campaigns. The state of California has already criminalized malicious deepfake usage in elections.
  • European Union: The Digital Services Act (DSA) requires online platforms to remove harmful AI-generated content and increase transparency in content moderation.
  • China: The Chinese government has implemented stringent regulations, mandating that AI-generated content must bear watermarks and disclose its artificial origin. Platforms failing to comply face legal action.
  • United Kingdom: The Online Safety Act 2023 (passed as the Online Safety Bill) holds social media companies accountable for the spread of harmful deepfake content, particularly in cases of political and sexual exploitation.

In November 2023, Bharat joined The Bletchley Declaration, an international agreement signed by 29 countries, including the US, UK, Canada, Germany, and Australia, to collectively combat AI-driven misinformation and deepfakes. However, while international cooperation is growing, domestic enforcement remains a challenge.

The Road Ahead: Strategies to Counter Deepfake Threats

To mitigate the risks associated with deepfake technology, Bharat must adopt a multi-pronged approach:

  1. Legislative Action: Enacting a specific Deepfake Regulation Act that clearly defines and criminalizes deepfake-related offenses while ensuring safeguards for free expression and innovation.
  2. AI-Driven Detection Tools: Encouraging investment in advanced AI detection algorithms that can quickly identify and flag deepfake content before it goes viral.
  3. Public Awareness and Media Literacy: Launching large-scale public campaigns to educate citizens on recognizing and reporting deepfake content.
  4. Platform Accountability: Mandating that social media and content-sharing platforms implement stricter deepfake detection and removal policies, ensuring compliance with government regulations.
  5. Ethical AI Development: Promoting the responsible use of AI while deterring the malicious deployment of deepfake technology.

Conclusion: Balancing Innovation with Responsibility

Deepfake technology represents both an opportunity and a challenge for Bharat’s digital landscape. While AI-driven content creation has the potential to revolutionize various sectors, its misuse—particularly in the political and electoral sphere—poses a grave threat to democracy and information integrity.

The 2024 General Election served as a wake-up call, highlighting the urgent need for a robust regulatory framework that addresses the legal, ethical, and technological aspects of deepfake proliferation. Bharat must act swiftly to strengthen its laws, enhance detection capabilities, and promote media literacy to safeguard its democratic institutions against the perils of AI-driven misinformation.

As the digital age continues to evolve, striking a balance between technological innovation and responsible governance will be key to ensuring that AI serves as a force for good rather than a tool for deception.

 
