In a recent advisory, the Union Ministry of Electronics and Information Technology (MeitY) in India has mandated that any “under-testing” or “unreliable” artificial intelligence (AI) model must obtain explicit government permission before being made accessible to users. The directive emphasizes that intermediaries must ensure their AI tools are free of bias and discrimination and pose no threat to the integrity of the electoral process.
Furthermore, the advisory requires intermediaries to label all synthetically created media and text or embed such content with unique identifiers for easy identification. Immediate compliance is mandated, with intermediaries instructed to submit an “Action Taken-cum-Status Report” to the Ministry within 15 days.
> 🇮🇳 India's Ministry of Electronics and IT mandates government approval for new AI models
> 🤖 Tech firms must guarantee bias-free AI products
> 💼 Advisory under IT Act, 2000 & IT Rules, 2021; non-compliance faces penalties
> 🌐 Industry voices worry over impact on India's global AI…
>
> — Mukul Sharma (@stufflistings), March 4, 2024
What is the latest GOI Advisory on AI tools in India?
The advisory follows concerns raised by Minister of State Rajeev Chandrasekhar over Google’s Gemini AI, which generated a response that violated the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. IT Minister Ashwini Vaishnaw stressed the importance of properly training AI models, stating that biases, including racial biases, would not be tolerated.
The advisory specifies that deploying under-tested or unreliable AI models on the Indian internet requires explicit government permission. It also suggests using a “consent popup” mechanism to inform users of potential fallibility or unreliability. Intermediaries and platforms are instructed to comply with all eleven disallowed content categories.
In alignment with previous meetings with industry representatives, the advisory instructs intermediaries to inform users through terms of service about the consequences of dealing with unlawful information, including potential account suspension or termination and legal repercussions.
The Ministry has considered amending IT rules to remind users of disallowed content every 15 days. In an interview on February 14, Chandrasekhar hinted at possible amendments related to algorithmic bias in the context of the Digital India Act.
Highlighting Gemini’s initial global issues, Chandrasekhar emphasized the need to ensure that platforms do not host unlawful content and that they meet their safety and trust obligations on the Indian internet.
The advisory also addresses responses from Ola’s Krutrim AI, illustrating the broader challenge of AI models generating unusual or incomplete answers. Both Gemini and Krutrim are urged to refine their models to ensure accurate and responsible responses.
Additionally, MeitY advises intermediaries to label or embed synthetically created content to identify potential misinformation or deepfakes. The advisory outlines the desired details, such as the originator and the software used to generate the content.
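To make the labeling requirement concrete, here is a minimal sketch of how a platform might attach such a provenance label to generated text. The field names (`content_id`, `originator`, `software`) and the header format are illustrative assumptions, not a format prescribed by the advisory, which only asks that the content be identifiable and that the originator and generating software be traceable:

```python
import json
import uuid
from datetime import datetime, timezone

def make_synthetic_label(originator: str, software: str) -> dict:
    """Build a provenance label for synthetically generated content.

    All field names here are hypothetical; the advisory does not
    specify a schema, only that originator and software be identifiable.
    """
    return {
        "content_id": str(uuid.uuid4()),   # unique identifier for this item
        "originator": originator,          # account or entity that created it
        "software": software,              # tool used to generate the content
        "synthetic": True,                 # flags the content as AI-generated
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

def embed_label(text: str, label: dict) -> str:
    """Prepend the label as a machine-readable header line."""
    return "SYNTHETIC-CONTENT-LABEL: " + json.dumps(label) + "\n" + text

label = make_synthetic_label("example-platform", "example-model-v1")
tagged = embed_label("Generated article text...", label)
```

In practice, a platform would more likely embed such metadata invisibly (for example, in image EXIF data or a C2PA-style manifest) rather than as a plain-text header, but the underlying idea is the same: a unique identifier plus originator and software details travel with the content.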