New AI Advisory: Labeling AI-Generated Content No Longer Requires Permit

The landscape of Artificial Intelligence (AI) regulation in India is shifting once again as the government drops the permit requirement for untested AI models. In a fresh advisory, the Ministry of Electronics and IT has fine-tuned compliance requirements under the IT Rules, 2021. The emphasis on labeling AI-generated content, however, remains intact.

The previous advisory, dated March 1, 2024, required firms to seek government approval before deploying under-trial or unreliable AI models. The new advisory supersedes it, removing the need for government permission during AI model development.

The Ministry has observed that IT firms and platforms often neglect their due diligence obligations outlined in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. To address this issue, the government emphasizes the importance of labeling AI-generated content.

Firms utilizing AI software or platforms are now obligated to label content generated using AI tools and to inform users about the potential fallibility or unreliability of the output. This approach aims to curb the spread of misinformation and deepfakes.

The advisory states, “Where any intermediary through its software or any other computer resource permits or facilitates synthetic creation, generation or modification of a text, audio, visual or audio-visual information, in such a manner that such information may be used potentially as misinformation or deepfake, it is advised that such information created generated or modified through its software or any other computer resource is labelled…that such information has been created generated or modified using the computer resource of the intermediary.”

Furthermore, if users make any changes to the content, the metadata should be configured to enable the identification of the user or computer resource responsible for the modification.
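The metadata obligation described above could, for instance, take the form of a simple provenance record attached to each piece of content. The sketch below is illustrative only: the advisory does not prescribe any particular format, and every field and function name here is hypothetical.

```python
import json
from datetime import datetime, timezone

def label_ai_content(text, tool_name, modified_by=None):
    """Attach an AI-generation label and provenance metadata to content.

    Hypothetical helper: the field names are illustrative and are not
    mandated by the Ministry's advisory.
    """
    record = {
        "content": text,
        "ai_generated": True,  # the visible label the advisory calls for
        "metadata": {
            # the computer resource that created or generated the content
            "generated_by": tool_name,
            # the user (or resource) responsible for any later modification
            "modified_by": modified_by,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record)

# Example: labeling a piece of model output, then recording a user edit
original = label_ai_content("Sample AI output", tool_name="example-model")
edited = label_ai_content("Edited AI output", tool_name="example-model",
                          modified_by="user-42")
```

A real deployment would more likely embed such provenance inside the media file itself (for example via an industry standard such as C2PA) rather than in a JSON sidecar, but the principle of recording who or what generated and modified the content is the same.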

This advisory, part of the government's push for transparency and accountability in the AI sector, follows a controversy over responses from Google's AI platform to queries about Prime Minister Narendra Modi. The government's swift action underscores its intent to regulate social media and other platforms, ensure the labeling of under-trial AI models, and prevent the hosting of unlawful content.

FAQ:

Q: Why did the government drop the permit requirement for untested AI models?
A: The government aims to streamline the development process by removing the need for permission during AI model development and aligning compliance requirements with the IT Rules of 2021.

Q: What is the main focus of the new advisory?
A: The main focus is on labeling AI-generated content to inform users about the potential fallibility or unreliability of the output.

Q: How does the advisory address the issue of misinformation and deepfakes?
A: By requiring firms to label AI-generated content, the advisory aims to prevent the spread of misinformation and deepfakes.

Q: What actions should be taken if users make changes to the AI-generated content?
A: The metadata should be configured to identify the user or computer resource responsible for the modification.

Q: Why is the government emphasizing transparency and accountability in the AI sector?
A: The government’s emphasis on transparency and accountability is to regulate social media and other platforms, prevent the hosting of unlawful content, and maintain trust in the AI sector.

Sources:
– Ministry of Electronics and IT: [URL]

Definitions:
– Artificial Intelligence (AI): Technology that enables machines or computer systems to mimic human intelligence and perform tasks that typically require human intelligence, such as problem-solving, pattern recognition, and decision-making.
– Compliance requirements: Rules and regulations that organizations must adhere to in order to meet certain standards and ensure legal and ethical conduct.
– IT Rules of 2021: Refers to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which outline regulations and guidelines for intermediaries, social media platforms, and digital media.
– AI-generated content: Content, such as text, audio, visual, or audio-visual information, that is created, generated, or modified using AI tools or platforms.
– Fallibility: The potential for error or mistakes.
– Unreliability: Lack of dependability or trustworthiness.
– Metadata: Descriptive data that provides information about other data, in this case, information about the AI-generated content, such as its source or origin.


Source: macholevante.com
