
Governing the Government’s Use of AI in India

TL;DR
Increased adoption of AI by the Indian government and public sector organizations is poised to revolutionize service delivery through automation, personalization, and enhanced data-driven decision-making. However, the absence of a formal framework governing the government’s use of AI poses significant risks, such as potential discrimination and erosion of public trust. While over 50 countries have implemented such AI governance frameworks, India lacks specific mandates for its public sector.

In contrast, the US has taken a robust approach to AI governance in the public sector, as exemplified by the Office of Management and Budget's (OMB) recent memorandum mandated by President Biden's Executive Order. This memorandum outlines a comprehensive life-cycle approach to AI usage, including pre-deployment impact assessments, ongoing monitoring, and public accountability measures such as plain-language documentation and the ability for individuals to opt out of AI decisions. This structured framework ensures AI is used safely and fairly, and it offers India a model for responsible AI governance that fosters innovation and public trust while ensuring equitable service delivery.

A recent survey of Indian government and public sector organizations revealed that more than 50 percent are looking to implement generative AI solutions within the next year. Using AI for governance and public service delivery has several potential benefits, including the automation of simple and repetitive tasks, user-centric personalization of services, and the augmentation of predictive, data-driven decision-making.[1]

However, deploying AI comes with its own set of risks. For instance, discriminative AI models, which use machine learning to distinguish between classes or groups, are used to score or classify individuals in order to allocate opportunities or impose sanctions.[2] Such models can produce erroneous or discriminatory results that impact citizens' ability to access public services or obtain government benefits. The UK Home Office's use of a discriminative AI model to detect fake marriages, for example, resulted in a disproportionately large number of Albanian, Bulgarian, and Romanian marriages being flagged. Moreover, citizens' trust in government and public service organizations may be eroded if those organizations fail to explain how and why automated decisions were made.[3]
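Disparities of this kind are typically surfaced through routine fairness audits of a model's outputs. The Python sketch below is a minimal illustration of such an audit, computing per-group flag rates and a disparate-impact ratio on hypothetical data; the groups, figures, and the 0.8 review threshold are assumptions for illustration, not data from the Home Office case.

```python
from collections import Counter

# Hypothetical audit log: (nationality, was_flagged) pairs.
# Values are illustrative only, not real Home Office data.
decisions = [
    ("Albanian", True), ("Albanian", True), ("Albanian", False),
    ("Bulgarian", True), ("Bulgarian", False),
    ("British", False), ("British", False), ("British", True),
    ("French", False), ("French", False),
]

totals = Counter(nat for nat, _ in decisions)
flags = Counter(nat for nat, flagged in decisions if flagged)

# Flag rate per group: share of that group's cases marked for investigation.
rates = {nat: flags[nat] / totals[nat] for nat in totals}

# Disparate-impact ratio: lowest group rate over highest group rate.
# A common (assumed) audit heuristic treats ratios below 0.8 as a red flag.
ratio = min(rates.values()) / max(rates.values())

for nat, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{nat:10s} flag rate: {rate:.2f}")
print(f"disparate-impact ratio: {ratio:.2f} ({'review' if ratio < 0.8 else 'ok'})")
```

An audit like this only detects skew in outcomes; whether the skew is justified still requires human and legal judgment.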

The potential negative consequences of using AI in the public sector necessitate a robust framework to govern its use by government and public sector organizations. According to a World Bank report, more than 50 nations have implemented, or are in the process of implementing, frameworks that govern the use of AI in the public sector. In India, the government has not issued specific guidance or directions governing the use of AI for public service delivery, despite its growing use. While NITI Aayog has published recommendations and policies on Responsible AI for All,[4] these are non-binding and are not designed specifically for government entities.

In contrast, the US Office of Management and Budget (OMB) recently released its memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI required the OMB to issue this memorandum to strengthen the effective use of AI and manage its risks across the federal government. It prescribes measures that federal agencies must follow to ensure AI is used transparently, safely, and fairly.

The memorandum takes a life-cycle approach to ensuring the safety of AI. At the pre-deployment stage, federal agencies must conduct an impact assessment that specifies the expected benefits, measurement metrics, risks and mitigation measures, and the data collection and utilization process. The impact assessment is complemented by internal and external testing in real-world settings to identify previously unforeseen risks. Once the AI system is deployed, agencies must monitor its impacts on rights and safety on an ongoing basis. AI that poses unacceptable risks to rights or safety without effective mitigation must be discontinued as soon as feasible. Finally, agencies are required to provide public notices and plain-language documentation on AI, ensuring users and the public are aware of its uses and associated risks.
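In engineering terms, this life cycle pairs a pre-registered impact assessment with continuous monitoring against the thresholds that assessment declared. The Python sketch below shows one minimal way an agency team might encode that pairing; the record fields, metric names, and thresholds are this sketch's assumptions, not the memorandum's prescribed format.

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """Pre-deployment record loosely mirroring what the memorandum asks
    for (benefits, metrics, risks, data practices). The field names and
    structure are this sketch's assumptions, not the OMB's format."""
    system_name: str
    expected_benefits: list[str]
    metrics: dict[str, float]              # metric name -> acceptable ceiling
    risks_and_mitigations: dict[str, str]  # risk -> planned mitigation
    data_sources: list[str]

def monitor(assessment: ImpactAssessment, observed: dict[str, float]) -> list[str]:
    """Ongoing post-deployment check: flag any metric that breaches its
    pre-registered ceiling so the agency can mitigate or discontinue."""
    return [
        f"{metric}: observed {observed[metric]:.2f} exceeds ceiling {ceiling:.2f}"
        for metric, ceiling in assessment.metrics.items()
        if metric in observed and observed[metric] > ceiling
    ]

ia = ImpactAssessment(
    system_name="benefit-eligibility-screener",
    expected_benefits=["faster processing of applications"],
    metrics={"error_rate": 0.05, "appeal_overturn_rate": 0.10},
    risks_and_mitigations={"wrongful denial": "human review of all denials"},
    data_sources=["application forms"],
)
print(monitor(ia, {"error_rate": 0.08, "appeal_overturn_rate": 0.04}))
# -> ['error_rate: observed 0.08 exceeds ceiling 0.05']
```

A breach here would trigger the memorandum's escalation path: mitigate the risk or, failing that, discontinue the system.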

Preventing the use of AI in the public sector from discriminating between different groups is another key concern of the OMB memorandum. To this end, federal agencies must evaluate AI models for the potential use of protected attributes and for differential impacts on demographic groups in real-world settings. They may also consult affected and underserved communities and gather public feedback to inform AI design, development, and usage. Where AI systems produce adverse outcomes, such as the denial of benefits, affected individuals must be notified of the decision along with an explanation of the decision-making process. Citizens must also be given the opportunity to opt out of AI decision-making and use a human alternative.
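The notification and opt-out requirements translate naturally into branching logic in a decision service. The sketch below is a toy Python illustration of that flow; the field names, score threshold, and notice wording are invented for this example and do not come from the memorandum.

```python
def decide(applicant_id: str, model_score: float, opted_out: bool,
           threshold: float = 0.5) -> dict:
    """Toy decision flow for a benefits screener. Opted-out applicants are
    routed to a human caseworker; adverse automated decisions carry a
    plain-language explanation. All names and wording are illustrative."""
    if opted_out:
        return {"applicant": applicant_id, "route": "human_review",
                "notice": "A caseworker will decide your application."}
    if model_score >= threshold:
        return {"applicant": applicant_id, "route": "automated",
                "outcome": "approved"}
    return {
        "applicant": applicant_id,
        "route": "automated",
        "outcome": "denied",
        "notice": (f"An automated system scored your application "
                   f"{model_score:.2f}, below the approval threshold of "
                   f"{threshold}. You may appeal or request human review."),
    }

print(decide("A-101", model_score=0.31, opted_out=False))
print(decide("A-102", model_score=0.31, opted_out=True))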

Accountability of federal agencies deploying AI is ensured by a combination of organizational and reporting requirements. All federal agencies are required to appoint Chief Artificial Intelligence Officers (CAIOs) to oversee compliance with the memorandum and coordinate with other agencies to promote AI innovation and risk management. Additionally, each agency must convene an AI Governance Board of senior officials to coordinate issues related to the use of AI. The CAIO and the board are tasked with managing agency-wide collaboration to ensure responsible AI adoption. Agencies must also report their AI use cases, associated results, and impacts on rights to the OMB annually.
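Annual reporting of this kind presupposes a structured inventory of AI use cases. The Python sketch below shows one hypothetical schema for a single inventory entry; the field names and example values are assumptions, not the OMB's actual reporting template.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class UseCaseReport:
    """One entry in an agency's annual AI use-case inventory. The schema
    is a hypothetical sketch, not the OMB's actual reporting format."""
    agency: str
    use_case: str
    purpose: str
    rights_or_safety_impacting: bool
    observed_results: str
    mitigations: list[str]

report = UseCaseReport(
    agency="Example Agency",
    use_case="document triage",
    purpose="route citizen petitions to the right department",
    rights_or_safety_impacting=False,
    observed_results="median routing time cut from 3 days to 4 hours",
    mitigations=["quarterly accuracy audit",
                 "human review of low-confidence routes"],
)
print(json.dumps(asdict(report), indent=2))
```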

The memorandum reflects the US's commitment to a government-led approach to AI regulation. While its measures apply only to federal agencies, their implementation sets a standard that the private sector and other stakeholders can emulate, fostering trust and safety in AI deployment across the board.[5] Moreover, compliance with the memorandum's requirements for evaluation, mitigation, and reporting can generate best practices that can be widely adopted across different sectors.

Replicating a government-led approach to AI regulation in India could provide substantial benefits, given the reliance of a large portion of the population on government services and schemes across sectors like health, education, and finance. Moreover, a government-led approach would facilitate the uniform adoption of best practices across the public sector, creating a benchmark for private enterprises and start-ups. Given that many Indian start-ups may lack the necessary resources and expertise to independently develop comprehensive AI governance frameworks, government-led initiatives could provide a foundation for them to build on. This would not only enhance innovation and trust in AI technologies but also promote safer and more inclusive deployment of AI in India.

[1] https://documents1.worldbank.org/curated/en/746721616045333426/pdf/Artificial-Intelligence-in-the-Public-Sector-Summary-Note.pdf

[2] https://arxiv.org/pdf/2303.11196.pdf

[3] https://www.sciencedirect.com/science/article/pii/S0740624X21000137

[4] https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf

[5] https://fpf.org/blog/fpf-statement-on-vice-president-harris-announcement-on-the-omb-policy-to-advance-governance-innovation-and-risk-management-in-federal-agencies-use-of-artificial-intelligence/