In a decisive move to combat the growing menace of deepfake and AI-generated content, the Indian government has announced a series of amendments to its Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The new regulations aim to introduce a more stringent approach to online content moderation, especially in light of the rapid proliferation of synthetic media.
These updated guidelines are set to significantly change how digital platforms handle potentially harmful content, particularly material generated or altered by artificial intelligence. The amendments focus on speed, transparency, and accountability, creating new obligations for both content creators and platform providers.
Key Provisions: AI Labelling and Fast Content Takedowns
One of the most significant changes under the new rules is the mandatory labelling of AI-generated content. This means that users and content creators will be required to clearly disclose when content has been created or modified using artificial intelligence tools. The move comes as AI technologies like deepfakes, which can manipulate or fabricate audio, video, and images, have raised serious concerns about their potential misuse in spreading misinformation, causing social harm, and affecting national security.
These tools have become increasingly sophisticated, blurring the line between what is real and what is fabricated. By enforcing AI labelling, the government hopes to help users identify manipulated content and understand the context in which it was created. While the rule addresses growing concerns about the misuse of AI in content creation, it also raises questions about practical enforcement, especially given the rapid pace at which AI technologies evolve.
Alongside AI labelling, the three-hour takedown rule has garnered significant attention. Platforms are now required to remove illegal content within three hours of its being flagged, a far stricter timeline than under the previous rules, compelling platforms to act swiftly against content that could harm individuals or society. Social media giants, including Facebook, Twitter, and YouTube, will face considerable pressure to adjust their moderation processes to meet these demands.
Industry Reaction: Challenges Ahead for Social Media Platforms
The crackdown on deepfake content and AI manipulation is undoubtedly a much-needed intervention, but it also raises practical questions about implementation. Critics argue that enforcing the three-hour removal rule is overly ambitious, especially for platforms dealing with billions of posts daily. The ability to assess the legality of content in such a short time frame, without risking overreach, will require massive investments in AI and human moderation resources.
Moreover, many platforms may struggle to meet these requirements given the complexity of the content flagged as potentially harmful. Some legal experts have already voiced concerns that the rules could lead to over-censorship and undermine free expression, as platforms may err on the side of caution and remove content unnecessarily.
However, the Indian government remains firm in its stance, arguing that these steps are necessary to protect the public from the growing risks posed by synthetic media. The government has pointed to several instances where deepfake videos and manipulated media were used to spread political propaganda, incite violence, and cause reputational damage to individuals.
The Bigger Picture: A Global Trend Toward AI Content Regulation
India’s new rules on deepfake and AI content reflect a broader global trend toward regulating synthetic media. In countries like the United States, the European Union, and Japan, governments are taking action to curb the abuse of AI technologies, especially concerning their role in misinformation and online manipulation.
The rise of AI-driven content creation has prompted governments worldwide to explore the ethical and legal implications of such technologies. In India, the move is seen as a preemptive step to maintain digital trust and ensure that content remains authentic and responsible.
What Lies Ahead?
The amendments to the IT Rules are set to come into effect in 2026, and social media platforms are expected to adjust their policies accordingly. As the deadline approaches, it remains to be seen how platforms will respond, particularly given the anticipated challenges of enforcement.
India’s big crackdown on deepfake and AI content marks a turning point in the country’s approach to digital governance. As AI technology continues to advance, these regulations could shape how the global digital ecosystem handles the ethical dilemmas posed by synthetic media.
For now, the digital world is bracing for a future where content is increasingly scrutinized for authenticity, and AI-generated materials must be transparently labeled to maintain trust and accountability online.