Posting AI-generated videos, images and deepfakes will now invite strict action as government tightens rules; details inside

When posting content, users will now be required to clearly state whether the video, image, or text they upload is created by AI (Artificial Intelligence).

Feb 11, 2026 - 02:00

New Delhi: The central government, on Tuesday, 10 February 2026, imposed stricter regulations on online platforms for handling AI-generated and synthetic content, including deepfakes. Under the new rules, platforms such as X and Instagram will be required to remove such content within three hours of being directed to do so by a competent authority or the courts.

Government notifies amendments

The government has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, formally defining AI-generated and artificial content. These new rules will come into effect on February 20, 2026. The amendments define “audio, visual, or audiovisual information” and “manufactured information,” which includes content created or altered by AI to appear authentic.

Routine editing, content enhancement, and good-faith educational or design work are excluded from this definition. The Ministry of Electronics and Information Technology (MeitY) stated in the notification that a key change is treating such fabricated content as “information”, meaning AI-generated content will be treated at par with other information when determining unlawful acts under the IT Rules.


Action must be taken within three hours

Social media platforms must now act on government or court orders within three hours instead of 36 hours, and the timeframe for redressal of user complaints has also been reduced. The rules make labeling of AI content mandatory: platforms that facilitate the creation or sharing of fabricated content must ensure such content is clearly and prominently labeled and, where technically feasible, associated with persistent metadata or identifiers.

The notification also states that intermediaries cannot remove or hide AI labels or metadata once they have been applied.

Content has to be removed within 3 hours instead of 36 hours

If a social media platform becomes aware of or receives a complaint about fake, misleading, or objectionable AI-generated content (especially deepfakes), it will now have to remove that content within just 3 hours.

This is significantly shorter than the previous 36-hour deadline and reflects the government’s seriousness about curbing the rapid spread of such content. Any delay will now make the platform itself directly responsible.

Providing information about AI content is mandatory for users

When posting content, users will now be required to clearly state whether the video, image, or text they upload is created by AI (artificial intelligence). The user’s declaration alone will not be relied upon. Social media platforms will be required to use their own technology (such as AI detection tools) to verify that the information provided by the user is accurate. This is crucial to ensuring transparency and accountability.

‘Zero tolerance’ for AI content related to children, personal data, and violence

Immediate and strict action will be taken against AI-generated content that involves objectionable depictions of children, private or manipulated photos or videos of individuals shared without consent, fake government documents, or material that promotes violence. In such sensitive cases, platforms will not have to wait for a complaint or a government order; they must remove the content immediately.

Action against social media platforms too

If a social media company fails to comply with these new, stricter IT rules, it may lose its legal protection as an intermediary. This means that not only the user who creates or posts the content, but also the platform itself, will be directly liable to legal action. This will increase pressure on companies to comply strictly with the rules.

The issue had come into sharp focus when a deepfake video of actress Rashmika Mandanna caused an uproar back in November 2023.
