The Ministry of Electronics and Information Technology (MeitY) in India has issued an advisory on the use of AI within the country. The key point is that AI models still under testing, and therefore potentially unreliable, must be labeled to notify users of their limitations. Additionally, government permission is required before such models can be released to the public.
Critics argue that India's government may be overly restrictive toward the testing of new AI models. These models need a certain level of real-world user engagement to assess their effectiveness, which pre-release approval makes harder to obtain. If India's regulations on this issue remain unique to the country, international AI providers may choose to withhold their services from India.
The full statement from MeitY says that under-tested or unreliable AI models, LLMs, generative AI, software, or algorithms should only be deployed with explicit permission from the Government of India, and that their limitations must be clearly labeled.
For its part, the Indian government maintains that the advisory is meant to address the negative impacts of AI and to hold AI providers accountable for their platforms. The AI issue in India is also intertwined with political conflict, as evidenced by a recent incident in which Google Gemini's response to a question about Prime Minister Narendra Modi sparked political controversy.
Source: The Register
TLDR: India’s MeitY announced regulations for using AI within the country, requiring explicit government permission and labeling of potentially unreliable AI models. The move has sparked debates on AI testing, international AI providers operating in India, and political entanglements surrounding AI discussions in the country.