Government Issues Advisory on Labeling AI Models and Preventing Unlawful Content

The government has recently issued an advisory to social media platforms and other intermediaries, urging them to label under-trial AI models and prevent the hosting of unlawful content. The Ministry of Electronics and Information Technology issued this advisory on March 1, emphasizing that non-compliance may result in criminal action.

The advisory calls for all intermediaries and platforms to ensure that their computer resources do not permit users to host, display, upload, modify, publish, transmit, store, update, or share any unlawful content. It further highlights that all platforms, intermediaries, and enabling software will be held accountable for any breaches of these provisions.

This advisory comes in the wake of a recent controversy involving Google’s AI platform, Gemini, and its response to queries regarding Prime Minister Narendra Modi’s policies. The government strongly reacted to Gemini’s comments, referring to them as a violation of IT laws.

In light of this incident, Union Minister for Electronics and IT, Rajeev Chandrasekhar, emphasized the need for platforms to openly disclose to consumers, and seek their consent, before deploying any under-trial or error-prone platforms on the Indian internet. He stressed that accountability cannot be evaded by issuing apologies later on.

The advisory also emphasizes the requirement for platforms to seek approval from the government prior to deploying under-trial or unreliable AI models. It recommends that such models be labeled to acknowledge their possible fallibility or unreliability.

To ensure user awareness, the advisory suggests the use of a “consent popup” mechanism, explicitly informing users about any potential fallibility or unreliability in the output generated by AI models.

Minister Chandrasekhar clarified that the government's intention is not to exert control but to create a healthy and sustainable ecosystem. It believes that allowing any and every platform on the internet without proper regulation is not conducive to the overall well-being of the digital space.

This advisory builds upon a previous advisory issued in December 2023, which focused on addressing deepfakes and misinformation. Through these advisories, the government aims to establish a framework to regulate the use of AI models while safeguarding against the dissemination of unlawful content.

Frequently Asked Questions (FAQs)

1. What is the advisory recently issued by the government regarding social media platforms and intermediaries?
The government has issued an advisory to social media platforms and intermediaries, urging them to label under-trial AI models and prevent the hosting of unlawful content.

2. What will be the consequences of non-compliance with this advisory?
Non-compliance with the advisory may result in criminal action.

3. What are the requirements for platforms and intermediaries mentioned in the advisory?
The advisory calls for platforms and intermediaries to ensure that their computer resources do not allow users to host, display, upload, modify, publish, transmit, store, update, or share any unlawful content. They will be held accountable for any breaches of these provisions.

4. What prompted the issuance of this advisory?
The advisory comes after a controversy involving Google’s AI platform, Gemini, and its response to queries on Prime Minister Narendra Modi’s policies. The government considered Gemini’s comments a violation of IT laws.

5. What does the Union Minister for Electronics and IT, Rajeev Chandrasekhar, emphasize?
Minister Chandrasekhar emphasizes the need for platforms to disclose to consumers, and seek their consent, before deploying any under-trial or error-prone platforms. He stresses that accountability cannot be evaded through later apologies.

6. What is recommended regarding under-trial or unreliable AI models?
The advisory recommends that platforms seek approval from the government prior to deploying under-trial or unreliable AI models. It also suggests labeling such models to acknowledge their possible fallibility or unreliability.

7. How can user awareness be ensured according to the advisory?
The advisory suggests the use of a “consent popup” mechanism to explicitly inform users about any potential fallibility or unreliability in the output generated by AI models.

8. What is the government’s intention behind issuing this advisory?
The government's intention is not to exert control but to create a healthy and sustainable ecosystem. It believes that unregulated platforms on the internet are not conducive to the overall well-being of the digital space.

9. What previous advisory does this new advisory build upon?
This advisory builds upon a previous advisory issued in December 2023, which focused on addressing deepfakes and misinformation.

Definitions:

– Intermediaries: In the context of this article, intermediaries refer to social media platforms and other entities that facilitate the sharing and hosting of user-generated content.

– AI models: AI models are computer programs or algorithms that learn from data and make predictions or decisions without explicit programming instructions.

– Fallibility: Fallibility refers to the possibility of making errors or producing incorrect results.

– Unreliability: Unreliability refers to the lack of consistency or trustworthiness in the performance of an AI model.

Suggested Related Links:

Ministry of Electronics and Information Technology
Office of the Prime Minister of India
Google

The source of this article is the blog kunsthuisoaleer.nl
