A Closer Look at India's Strategy for Regulating AI and Deepfakes
Examining India’s ongoing legislative response to deepfakes
Indian celebrity Rashmika Mandanna responding to deepfake harassment.
AI is constantly reshaping our digital landscape, and regulators are finding it hard to keep up.
Deepfakes – realistic yet inauthentic AI-generated content – are a growing concern, impacting everyone from celebrities to private citizens, with a high potential for widespread misinformation and abuse.
Deepfakes have already notoriously been used to create pornographic material of celebrities without their consent. In India, Rashmika Mandanna, Katrina Kaif, and Alia Bhatt have been victims of such attacks. Beyond pornography, deepfakes also fuel deepscams: impersonations of individuals with frightening accuracy, used to exploit personal and financial information, as in the case of beloved cricket star Sachin Tendulkar.
The surge of deepfakes in India is increasingly central to public debate, with Prime Minister Modi calling on global leaders to work jointly towards regulating AI and raising concerns over the negative impact of deepfakes on society.
India’s proactive approach in demanding global regulation highlights an urgent need for international action. India, known for its prowess in information technology, has acknowledged how vital it is to address these challenges. The forthcoming Digital India Act, which aims to replace the 22-year-old Information Technology Act, is a testament to the recognition that AI, particularly in the form of deepfake technology, is a rising threat which governments must actively address.
Concern is intensifying worldwide over the use of deepfakes as a political weapon, one capable of depicting public figures in false narratives that can significantly sway public opinion and disrupt democratic processes. Deepfakes can be used to simulate politicians saying something they haven’t, doing something they didn’t, and being something they’re not. Moreover, the case of journalist Rana Ayyub, who was targeted with deepfake videos designed to silence and blackmail her, illustrates the necessity of protection for journalists and public figures alike.
Speaking at a Confederation of Indian Industry (CII) event, IBM CEO Arvind Krishna underscored the potential of AI, saying it is poised to match or even surpass the impact of the steam engine revolution, with India set to be a leading player in its deployment.
Holding the whole supply chain responsible for the creation and proliferation of deepfakes would mark a sea change in how public policy combats this issue. Such legislation would ensure that AI cannot generate deepfake sexual material or fraudulent content, whilst also guaranteeing that these safeguards cannot be easily circumvented.
Currently, India lacks specific laws or regulations that directly address deepfakes. The closest relevant provisions are in the Information Technology Act, drafted in 2000, which criminalises and penalises the publication or transmission of obscene or sexually explicit material. However, these existing provisions fall short of effectively tackling the broader issue of identifying and preventing the spread of abusive deepfake content.
The demand for deepfake content, particularly pornographic deepfakes, concerns both creation and distribution. Social media platforms therefore play a crucial role in curbing this menace. As per recent guidelines issued by the Indian government, failure to remove reported deepfake content within 36 hours can lead to criminal proceedings.
India has realised that, as processing power increases, the ability to create convincing deepfakes becomes more widespread. This advancement calls for stringent measures to safeguard the control that individuals have over their own image: an approach that criminalises both the creation and distribution of deepfakes, and that gives those harmed legal avenues to seek damages.
In parallel, AI developers must be held responsible for implementing effective prevention techniques against deepfakes. Developers must be required to prove that their models cannot create harmful content and to verify the integrity of their training datasets. India understands that in an era in which AI is undermining our trust in the evidence of our own eyes and ears, responsible AI design, development, and deployment, backed by laws regulating the whole supply chain, is our best defence against the spread of deepfakes.
Join our campaign: