
A Proactive Approach to Dealing with Synthetic AI-Generated Content

TL;DR
A deepfake video depicting a prominent Indian actress went viral recently. In response, the Indian Government highlighted the existing penal provisions that criminalize online impersonation and reminded social media intermediaries of their obligation to take down such content. However, such a reactive legal approach, which relies on individuals to report deepfakes, is inadequate to deal with the multi-faceted concerns posed by synthetic content. In contrast, the US strategy, as per the White House Executive Order on AI, adopts proactive measures for detecting, labeling, and authenticating synthetic content. Further, the US Government is expected to take the lead in developing and using such measures, setting a precedent for other stakeholders, including the private sector. India should adopt a similar comprehensive strategy, focusing on proactive detection, leveraging private sector expertise, and coordinating across various government entities.

The proliferation of synthetic AI-generated content presents new challenges for privacy and security worldwide. In India, the circulation on social media of a deepfake video featuring a well-known movie actress on November 6, 2023, sparked widespread concern about the misuse of AI-generated content in the country. Deepfakes are synthetic media in which a person's likeness is replaced with someone else's using deep learning, an AI method that simulates the learning patterns of the human brain. While deepfakes can be put to positive uses, for instance, creating more engaging educational material, they are often made without the impersonated individual's consent and are a potent tool for inflicting emotional, psychological, and reputational harm.

The Minister of State for Electronics and Information Technology issued a statement after the circulation of the deepfake video on social media, noting the potential of such media to cause harm, especially to women. He further stated that impersonating a person for the purpose of cheating is punishable with imprisonment of up to three years under Section 66D of the Information Technology Act, 2000, and he encouraged individuals impacted by deepfakes to file a First Information Report (FIR) and avail themselves of remedies under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules). The Ministry of Electronics and Information Technology (MeitY) also issued an advisory to significant social media intermediaries.[1] The advisory reminded them of their obligation under the IT Rules to remove unauthorized impersonating content in electronic form within 24 hours of receiving a complaint from an individual, or within 36 hours of receiving a government or court order. Intermediaries are also required to “make reasonable efforts” to cause users not to upload such content.

While the measures highlighted by the Indian government can help deter the dissemination of synthetic media by punishing those responsible for its production and distribution, they are unlikely to be sufficient to prevent the initial harm caused by such media. Although punitive and reactive legal action serves as a deterrent, it cannot prevent or repair the reputational damage suffered by individuals targeted by deepfake technology. Moreover, the measures highlighted by the government place the onus of identifying and reporting deepfakes on the individuals affected by them. Given the relatively low levels of digital literacy in India, individuals are often not well positioned to recognize and identify deepfake content.

The White House Executive Order on Safe, Secure, and Trustworthy AI (AI EO), signed by President Biden on October 30, 2023, offers an alternative to this punitive approach to deterring synthetic content. The EO adopts a multi-pronged strategy focused on developing capacities and standards to detect and label synthetic content. Section 4.5 of the EO requires the Secretary of Commerce, in consultation with the heads of other relevant agencies, to prepare and submit a report to the Director of the Office of Management and Budget and the Assistant to the President for National Security Affairs identifying standards, tools, methods, and practices for authenticating, labeling, and detecting synthetic content and establishing its provenance.[2] It also tasks the Secretary of Commerce with developing guidance on the use of these tools and measures by government agencies.[3]

The approach espoused in the EO is well suited to address the concerns posed by synthetic content for several reasons. First, it focuses on the proactive detection and labeling of synthetic content, giving individuals important context when they encounter AI-generated deepfakes. Flagging a video's synthetic origin at the point of viewing can help prevent it from going viral in the first place.
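To make this concrete, the Python snippet below sketches how a platform might attach such a label to uploaded media based on accompanying provenance metadata. The manifest format, its field names, and the label_for_upload function are all hypothetical illustrations (loosely inspired by the signed "content credentials" idea behind standards such as C2PA), not an implementation of the EO's requirements or of any real platform's pipeline; real systems would also verify cryptographic signatures rather than trust a plain dictionary.

```python
# Illustrative sketch only: a platform-side labeling check against a
# hypothetical provenance manifest. Field names are invented for clarity;
# a real system would verify a cryptographically signed manifest.

from typing import Optional


def label_for_upload(manifest: Optional[dict]) -> str:
    """Return the label a platform might attach to an uploaded media file."""
    if manifest is None:
        # No provenance data at all: the content cannot be authenticated.
        return "Unverified: no provenance information"
    if manifest.get("generator_type") == "ai":
        # The manifest declares an AI tool as the source of the content.
        return f"AI-generated (created with {manifest.get('tool', 'unknown tool')})"
    return "Camera-captured / human-authored (per attached credentials)"


# Example: a deepfake produced by a hypothetical image generator.
synthetic = {"generator_type": "ai", "tool": "ExampleImageGen v2"}
print(label_for_upload(synthetic))  # AI-generated (created with ExampleImageGen v2)
print(label_for_upload(None))       # Unverified: no provenance information
```

The design point is that the label is produced automatically from metadata at upload time, so viewers get context before a deepfake spreads, rather than after a victim files a complaint.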

Second, the EO requires US government agencies to hold themselves accountable and lead by example in labeling and authenticating synthetic content. Indeed, government agencies are mandated to authenticate and label any synthetic content they produce or publish. This directive applies not only to public agencies but is also expected to serve as a model for private companies.[4] As a result, this government-led initiative is poised to pave the way for heightened public trust in digital content across all sectors.

Finally, the EO strategically adopts a whole-of-government approach by requiring the Secretary of Commerce to collaborate closely with the heads of relevant agencies in preparing the report on detection and authentication measures. This approach leverages the unique strengths and perspectives of various departments, and such coordination is critical to effectively addressing the multifaceted challenges posed by synthetic content. Issues such as privacy, consumer protection, intermediary liability, and intellectual property rights require a nuanced and coordinated response that only a united governmental front can provide.

The government-led approach under the EO is further complemented by the White House's engagement with leading AI companies, reflected in the voluntary commitments made by such companies to the Biden-Harris administration in July 2023. Among other things, these AI companies, which include OpenAI, Google, Microsoft, and Meta, committed to developing robust technical mechanisms that ensure users know when content is AI-generated. Involving such companies in developing tools and measures for synthetic content detection is vital, as they typically possess the necessary domain expertise and knowledge. For instance, OpenAI claims to have developed a watermarking tool that helps identify when images have been created by its AI image generator, DALL-E 3. As per the company's Chief Technology Officer, the tool is “almost 99 percent reliable.”
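OpenAI has not published the details of that tool, so the sketch below illustrates only the general idea of invisible watermarking, using a deliberately simplistic least-significant-bit (LSB) scheme: the generator embeds a known bit pattern into pixel values, and a detector later checks for it. The WATERMARK pattern and both functions are toy assumptions for illustration; production watermarks use far more robust schemes designed to survive compression, cropping, and deliberate tampering.

```python
# Toy sketch of generator-side watermarking: hide a known bit pattern in
# the least significant bits of pixel intensities, then check for it later.
# This only illustrates the concept; it is NOT how any production tool works.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature


def embed(pixels: list[int]) -> list[int]:
    """Overwrite the LSB of the first len(WATERMARK) pixels with the signature."""
    out = pixels[:]
    for i, bit in enumerate(WATERMARK):
        out[i] = (out[i] & ~1) | bit
    return out


def detect(pixels: list[int]) -> bool:
    """Report whether the signature is present in the leading pixels' LSBs."""
    return [p & 1 for p in pixels[: len(WATERMARK)]] == WATERMARK


original = [200, 13, 54, 91, 178, 66, 240, 7, 120]
marked = embed(original)
print(detect(marked))    # True  -> content carries the generator's watermark
print(detect(original))  # False -> no watermark in the unmarked pixels
```

Even this toy version shows why generator-side watermarking is attractive: detection requires no human judgment, so platforms can flag AI-generated content automatically at upload time.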

The Indian Government should, therefore, take a cue from the US approach and develop a comprehensive policy response to the concerns raised by synthetic content. Moving beyond a reactive stance, the government should craft tools and standards that facilitate the swift detection and labeling of synthetic content, empowering both users and platforms to identify and report such material quickly. Coordination is another key element of an effective policy response. Inputs from relevant entities, including MeitY, the Ministry of Information and Broadcasting, and the Department of Consumer Affairs, will help ensure a well-rounded approach to the various challenges posed by synthetic content. Additionally, leveraging the private sector's expertise in developing sophisticated detection and labeling tools is essential. By taking these steps, the Indian Government can establish a robust framework that effectively mitigates the risks of synthetic content, safeguarding the digital ecosystem for its users.

[1] Significant social media intermediaries are social media intermediaries with more than 50,00,000 (five million) registered users under Rule 2(1)(v) of the IT Rules, 2021.

[2] Section 4.5(a) of the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

[3] Section 4.5(c) of the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

[4] Ibid.