India has issued an advisory to “significant” and “large” platforms on deploying “untested AI platforms” on the “Indian Internet”, saying any such releases should be approved by the central government.
“The use of under-testing / unreliable Artificial Intelligence model(s) /LLM/Generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with explicit permission of the Government of India,” the advisory, issued late on March 1, said, triggering alarm in the country’s tech industry.
On Monday, however, India’s deputy IT minister Rajeev Chandrasekhar clarified on X (formerly known as Twitter) that “permission seeking from Meity is only for large platforms and will not apply to startups.”
Meity is India’s Ministry of Electronics and Information Technology.
Also on AF: Google Facing Heat in India Over Removal of Play Store Apps
Earlier, on Saturday, India’s Economic Times reported that the advisory was not “legally binding”. Instead, “it is the future of regulation”, ET quoted Chandrasekhar as saying.
“We are doing it as an advisory today asking you [AI platforms] to comply with it… If you do not comply with it, at some point, there will be a law and legislation that [will] make it difficult for you not to do it,” Chandrasekhar added.
Theres much noise and confusion being created, many by people who shd know better 🤷🏻♂️ . I repeat myself here for their benefit
➡️There are legal consequences under existing laws (both criminal n tech laws) for platforms that enable or directly output unlawful content.… https://t.co/mufRQ7Bfcs
— Rajeev Chandrasekhar 🇮🇳(Modiyude Kutumbam) (@Rajeev_GoI) March 4, 2024
The advisory also said untested AI tools should be labelled to warn users that they may return incorrect answers to queries. It further called on platforms that allow the generation of ‘deepfakes’ to create a mechanism for identifying the creators of any modified media.
The advisory called for compliance with immediate effect, asking platforms to submit “an Action Taken-cum-Status Report to the Ministry within 15 days”.
The advisory came a week after Chandrasekhar lambasted Google’s Gemini AI tool for a response that said Indian Prime Minister Narendra Modi had been accused by some of implementing policies characterised as “fascist”.
These are direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal code. @GoogleAI @GoogleIndia @GoI_MeitY https://t.co/9Jk0flkamN
— Rajeev Chandrasekhar 🇮🇳(Modiyude Kutumbam) (@Rajeev_GoI) February 23, 2024
A day later, Google said it had worked quickly to address the issue and that the tool “may not always be reliable”, particularly for current events and political topics.
“Safety and trust is platforms legal obligation. ‘Sorry Unreliable’ does not exempt from law,” Chandrasekhar wrote on X, in response to Google’s statement.
“Our Digital Nagriks [Citizens] are NOT to be experimented on with “unreliable” platforms/algos/model,” he added.
India’s Friday advisory also asked platforms to ensure that their AI tools do not “threaten the integrity of the electoral process”. India’s general elections, in which the ruling Hindu nationalist party is expected to secure a clear majority, are due to be held this summer.
- Reuters, with additional inputs from Vishakha Saxena
Also read:
AI Chiefs Say Deepfakes a Threat to World, Call For Regulation
India Startups Cheer ‘Landmark’ Android Ruling Against Google
Cost Fears Delaying AI Take-Up, Infosys Chief Cautions
Japan Leaders Want Law on Generative AI ‘Within the Year’
UN Chief: Big Tech Chasing AI Profits Ignoring Risks – Guardian