Deepfake Technology
Context: The viral video of Burkina Faso’s president signing a $14 billion deal with India and criticising U.S. President Donald Trump was a complete fabrication—an AI-generated deepfake. This incident underscores the growing power and peril of synthetic media in global affairs.
What is deepfake technology, and how does it work?
- Deepfakes are synthetic media (images, audio, or videos) created using Artificial Intelligence (AI) and Machine Learning (ML) techniques that manipulate or fabricate content to make it appear authentic.
- How it works (a minimal code sketch follows this list):
- Generative Adversarial Networks (GANs) – two neural networks (generator & discriminator) compete; the generator creates fake content, and the discriminator evaluates authenticity.
- Autoencoders – encode input data (faces/voices), then reconstruct altered outputs.
- Voice Cloning + Lip-syncing – match speech patterns & facial movements.
- Applications: Can be benign (entertainment, cinema, education, dubbing) or malicious (political propaganda, financial fraud, misinformation).
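To make the adversarial idea above concrete, here is a minimal, illustrative GAN training loop in Python (PyTorch) on toy one-dimensional data. The network sizes, learning rates, and the stand-in "real" distribution are assumptions chosen for brevity; actual deepfake generators are far larger and operate on images, audio, or video.

```python
# Minimal GAN sketch: a generator and discriminator trained adversarially
# on toy 1-D data. Sizes, learning rates, and the "real data" distribution
# are illustrative assumptions, not any production deepfake model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to a fake "sample".
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how likely a sample is to be real (0..1).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples from N(4, 1) stand in for genuine images/voices.
    real = torch.randn(64, 1) + 4.0
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Discriminator step: learn to tell real from fake.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: learn to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# Generated samples should drift toward the "real" mean of 4.0.
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```

The same generator-versus-discriminator dynamic, scaled up to faces and voices, is what lets deepfake systems produce content that the discriminator, and eventually a human viewer, struggles to distinguish from the real thing.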
What are the multidimensional impacts?
- Political & Diplomatic: Disinformation campaigns during elections or international negotiations. Fake speeches or endorsements that can destabilise governments. Synthetic diplomacy, as seen in the Burkina Faso video, undermines trust in global relations.
- National Security: Deepfakes may be weaponised for psychological warfare, propaganda, or operational deception, potentially sparking violence or disrupting investigations.
- Governance: Deepfakes challenge biometric and facial recognition systems, affecting digital security across banking, e-governance, and other critical services.
- Economic & Corporate: CEO impersonation scams via deepfaked video calls (e.g., $25M fraud in Hong Kong). Stock manipulation through fake announcements or interviews.
- Personal & Social: Sextortion and blackmail using fake intimate content, especially targeting students and women. Identity theft through cloned voices and facial profiles.
- Media & Entertainment: Fake celebrity endorsements, interviews, or scandals. Erosion of trust in visual journalism and documentary footage.
- Ethical Concerns: Deepfakes complicate notions of consent, responsibility, and authenticity.
What are the current methods and tools used to detect deepfakes?
Detection technologies have become more sophisticated but still face challenges (a minimal detection sketch follows this list):
- Visual Analysis: Frame-by-frame scrutiny for unnatural blinking, lighting inconsistencies, or facial distortions. AI models trained to spot GAN-generated artefacts.
- Audio Forensics: Detects pitch anomalies, emotional mismatches, and synthetic cadence.
- Multimodal Detection: Combines video, audio, and behavioural cues to identify inconsistencies. Used in fraud cases where a deepfaked video is paired with a synthetic voice and documents.
- Popular Tools: Microsoft Video Authenticator, Deepware Scanner, Sensity AI, Reality Defender, Incode’s adaptive detection models that retrain on new deepfake types.
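As a rough illustration of frame-by-frame screening, the sketch below samples frames from a video with OpenCV and aggregates per-frame "fakeness" scores into a video-level flag. The `score_frame` heuristic and the `suspect_clip.mp4` path are placeholders; a real detector would substitute a trained model of the kind the tools above deploy.

```python
# Sketch of frame-by-frame deepfake screening: extract frames with OpenCV,
# score each sampled frame, and aggregate into a video-level verdict.
# `score_frame` is a hypothetical placeholder, not a real detector.
import cv2
import numpy as np

def score_frame(frame_bgr: np.ndarray) -> float:
    """Placeholder 'fakeness' score in [0, 1]. A real detector would be a
    trained model (e.g. a CNN tuned to spot GAN artefacts); here we return
    a dummy value from noise statistics so the pipeline runs end to end."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    residual = gray.astype(np.float32) - cv2.GaussianBlur(gray, (5, 5), 0).astype(np.float32)
    return float(min(1.0, residual.std() / 50.0))  # dummy heuristic only

def screen_video(path: str, sample_every: int = 10, threshold: float = 0.5) -> dict:
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:   # sample frames to keep it cheap
            scores.append(score_frame(frame))
        idx += 1
    cap.release()
    mean_score = float(np.mean(scores)) if scores else 0.0
    return {"frames_scored": len(scores),
            "mean_score": mean_score,
            "flag_for_review": mean_score > threshold}

if __name__ == "__main__":
    print(screen_video("suspect_clip.mp4"))  # hypothetical file path
```

Multimodal systems extend the same pattern by fusing such visual scores with audio and behavioural cues before deciding whether to flag content.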
How can the current methods be strengthened?
- Technological Strengthening: Invest in next-generation detection algorithms that evolve with advances in generative AI. Apply digital watermarking at the point of content creation (a toy watermarking sketch follows this list). Mandate disclosure of AI-generated content (synthetic media labelling).
- Policy & Legal Measures: Update India’s IT Act, 2000 and Data Protection Act, 2023, to explicitly cover deepfake harms. Clear criminal penalties for malicious use (e.g., financial fraud, defamation, election interference). International treaties for cross-border regulation of AI-generated misinformation.
- Institutional & Platform Accountability: Global framework for AI ethics & misuse regulation (like EU AI Act). Public-private partnerships for AI fact-checking tools in regional languages.
- Awareness & Capacity Building: Media literacy campaigns to help citizens critically evaluate content. Training for police, judiciary, and election officials on identifying deepfake-related crimes.
- Research & Innovation: Encourage indigenous R&D in Explainable AI (XAI) for detection. Incentivise academic–industry collaborations under MeitY, Digital India, and IndiaAI Mission.