Watermarked Content and OTT Monitoring: A Bold Step Towards Safer Digital India
Context: A Parliamentary Standing Committee has flagged gaps in existing cyber laws and stressed the need for MeitY to frame a watermarking system for social media content to tackle emerging threats such as deepfakes and AI-driven manipulation.
What is a Safe Harbour Clause?
- The safe harbour clause, under Section 79 of the Information Technology Act, 2000, provides immunity to intermediaries—such as social media platforms, search engines, and OTT services—from liability for third-party content hosted on their platforms, provided they act as neutral facilitators and comply with government directives for removal of unlawful content.
- This provision aligns with global practices such as the U.S. Digital Millennium Copyright Act (DMCA) and the EU e-Commerce Directive, ensuring that intermediaries are not held liable for user-generated content unless they knowingly host illegal material.
How does it help in protecting intermediaries?
- Encourages innovation and growth: By limiting liability, it enables startups and tech companies to operate without fear of litigation over user actions.
- Facilitates free flow of information: It allows intermediaries to function as neutral platforms, enabling citizens to exercise their constitutional right to freedom of speech and expression under Article 19(1)(a).
- Supports digital economy expansion: According to the Economic Survey 2022-23, India’s digital economy contributes over 12% of GDP, and safe harbour has been critical in attracting investment into social media, OTT, and e-commerce platforms.
What are the concerns associated with protecting intermediaries?
- Weak accountability: Platforms often delay removal of unlawful content such as hate speech, morphed images, or misinformation, leading to social unrest (e.g., 2012 Assam violence rumours spread via social media).
- Deepfakes and AI-generated content: Emerging technologies make detection harder, enabling large-scale circulation of manipulated content. The Parliamentary Standing Committee on Home Affairs (2025) flagged the absence of mandatory watermarking and detection frameworks.
- Child safety risks on OTT platforms: Lack of pre-release certification and weak age verification exposes minors to explicit material, undermining child rights protections under the Juvenile Justice Act.
- Imbalance of rights: Over-broad immunity may compromise citizens’ right to privacy, dignity, and security in cyberspace.
What measures need to be taken to address the concerns?
- Periodic review of safe harbour: The Parliamentary Standing Committee (2025) recommended periodically reviewing Section 79 protections to balance immunity with accountability and ensure stronger deterrence.
- Graded penalties: Introduce fines and suspension of operations for persistent non-compliance, while preserving due process.
- Watermarking framework: MeitY should develop technical standards for watermarking AI-generated content, with CERT-In monitoring detection alerts.
- Strengthening OTT regulation: Constitute panels including child development experts, educators, and legal professionals to oversee sensitive content, coupled with robust age-verification systems beyond self-declaration.
- Forward-looking regulation: Adopt flexible guidelines to address metaverse, blockchain, and generative AI challenges, similar to adaptive governance models highlighted in NITI Aayog’s Discussion Paper on AI (2018).
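The committee has not yet specified how a watermarking standard would work technically. As a purely illustrative sketch, one long-established technique that such MeitY standards could build on is least-significant-bit (LSB) watermarking, where a provenance tag (e.g., an "AI-generated" marker) is hidden in the lowest bit of each pixel so that detectors can later recover it without visibly altering the image. All function names and the sample data below are hypothetical.

```python
# Hypothetical sketch of LSB watermark embedding/extraction; not an
# official MeitY or CERT-In specification.

def embed_watermark(pixels, tag):
    """Hide `tag` (bytes) in the least significant bits of `pixels`
    (a flat list of 0-255 intensity values)."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Recover `length` bytes previously embedded by embed_watermark."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for pixel in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (pixel & 1)
        data.append(byte)
    return bytes(data)

# Usage: a platform tags AI-generated output at creation time so that a
# downstream detector can flag it for monitoring.
image = [128] * 256                    # stand-in for flattened pixel data
tagged = embed_watermark(image, b"AI-GEN")
print(extract_watermark(tagged, 6))    # b'AI-GEN'
```

LSB marks are easy to strip by re-encoding, which is why real-world proposals (e.g., cryptographic provenance metadata of the kind the C2PA specification describes) favour signed, tamper-evident tags; the sketch only conveys the basic embed/detect idea behind any watermarking framework.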