The Indian Algorithmic Services: When AI gets to decide who gets welfare

Certain kinds of technologies, colloquially and collectively labelled as artificial intelligence, are gradually replacing bureaucratic agency in important executive decisions and government functions that have social consequences.

Published: Aug 4, 2021 12:19:31 PM IST
Updated: Aug 6, 2021 12:13:04 PM IST


Chapter 4 of the 2018-19 Economic Survey lays out an ambitious roadmap for the government of India to use data ‘of the people, by the people, and for the people’. Part of the report is dedicated to praising the Samagra Vedika initiative of the government of Telangana, described as a scheme which integrates data across government databases.

The Telangana government claims that the Samagra Vedika system uses a number of technologies that can be described as ‘artificial intelligence’ or AI: it draws on ‘big data’ and complex machine learning systems to make predictions about people’s behaviour, and applies these predictive analytics to process applications for welfare schemes. More specifically, Samagra Vedika has been used to determine the eligibility of welfare beneficiaries and to remove potentially fraudulent or duplicate beneficiaries.
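The government has not published Samagra Vedika’s actual matching logic, but a toy sketch of the general technique it is reported to rely on—linking records across databases through fuzzy matching—illustrates what ‘identifying duplicates’ typically involves. The field names, records and threshold below are entirely hypothetical:

```python
# Hypothetical sketch of record linkage across two government databases.
# This is NOT Samagra Vedika's actual logic, which is not publicly documented.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough string-similarity score between 0 and 1."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def flag_possible_duplicates(db_a, db_b, threshold=0.8):
    """Pair up records whose name and address look alike across two databases.

    The 0.8 threshold is an arbitrary design choice: move it up or down
    and a different set of people gets flagged as 'duplicates'.
    """
    flagged = []
    for r in db_a:
        for p in db_b:
            score = 0.5 * similarity(r["name"], p["name"]) + \
                    0.5 * similarity(r["address"], p["address"])
            if score >= threshold:
                flagged.append((r["id"], p["id"], round(score, 2)))
    return flagged

ration_records = [{"id": "R-101", "name": "K. Lakshmi", "address": "12 MG Road, Warangal"}]
pension_records = [{"id": "P-977", "name": "Lakshmi K", "address": "12, M G Road, Warangal"}]

print(flag_possible_duplicates(ration_records, pension_records))
```

Even in this toy form, the point is visible: the threshold and the choice of which fields to compare are policy decisions in disguise, and small differences in how a name or address is recorded can decide whose record gets flagged.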

By the government’s own description, the use of the Samagra Vedika system in 2016 to remove so-called fraudulent ration cards led to the cancellation of 100,000 cards. Government documents subsequently describe ‘public resistance’ to the cancellations, which led to the re-addition of 14,000 cards. While governments at all levels heap praise on the cost-cutting and efficiency of AI-based applications like Samagra Vedika in resource-constrained contexts such as welfare administration in India, they are silent on the wider implications of using these systems to make fundamental policy and administrative decisions.

Samagra Vedika is only one of the various kinds of ‘AI’ technologies that are being used in government administration today. From policy decisions about managing urban travel and allocating police patrols to making decisions about individual entitlements for welfare schemes or deliberating income tax disputes, administrative agencies in India are routinely turning to automation and AI to aid, automate or even entirely replace human agency involved in these decisions.

It is important to study the implications of contemporary AI systems because they draw on multiple sources of data, apply complex statistical and computational logics to analyse and process that data, and are increasingly used in decisions with social consequences. The entanglements of government administration and AI are fundamentally changing the nature of governance and the relationship between the citizen and the state.


Fairness, Transparency and Accountability  

Perhaps the biggest concern that AI poses for government administration relates to the public values of transparency, accountability and democratic participation. Administrative decision-making is governed by principles of transparency and accountability, intended to keep a check on arbitrary executive actions.

However, when administrative decisions are usurped by systems that rely on data and complex algorithmic analysis, for example, ‘data-based’ systems for deciding how to allocate policing resources, the system applies a logic of reasoning that is not immediately interpretable or transparent, and which cannot easily be interrogated for its reasonableness or fairness. Even if an AI system is expected to comply with specific rules and logics, those rules are inevitably transformed in the process of encoding them into software. Machine learning further compounds these problems: the system’s logic is constantly changing, and the use of multiple data points from various sources obscures any interrogation into whether the use of a particular kind of data was relevant, reasonable or fair.
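A deliberately simplified, hypothetical example illustrates how encoding transforms a rule: even a one-line eligibility criterion, once written as code, forces the programmer to make choices that the written rule never specifies. The rule, income ceiling and data fields below are invented for illustration:

```python
# Hypothetical illustration of how encoding a welfare rule into software
# embeds policy choices the written rule never made explicit.
# Invented rule: "households with annual income below Rs 1,50,000 are eligible".

INCOME_CEILING = 150_000  # the only number the written rule actually specifies

def is_eligible(household: dict) -> bool:
    # Design choice 1: what counts as "income"? Here, only the declared
    # incomes that the database happens to hold for listed members.
    incomes = [m.get("declared_income") for m in household["members"]]

    # Design choice 2: how to treat missing data? Ignoring None quietly
    # favours applicants with incomplete records; treating None as
    # disqualifying would quietly exclude them. Neither option is in the rule.
    total = sum(i for i in incomes if i is not None)

    return total < INCOME_CEILING

household = {"members": [{"declared_income": 90_000}, {"declared_income": None}]}
print(is_eligible(household))  # True under these particular encoding choices
```

Neither of those encoding choices appears in the written rule, yet each one changes who ends up eligible.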

In the Samagra Vedika scheme, for example, where ‘Big Data’ is used to decide eligibility for welfare, the choice of which data points to use to judge a person’s status is a matter of executive policy, yet it has effectively been decided by the AI system.

Further, the concepts of ‘natural justice’ and due process establish specific procedural safeguards to ensure that decisions are fair and accountable. These include the requirement to give notice and the duty to provide an explanation and justification for a decision.

However, the use of AI in decision-making processes again fundamentally alters how natural justice and procedural safeguards should be applied. Decisions made with the use of AI are not always interpretable or explainable in a way that can allow affected individuals to understand and contest them. Further, in the absence of any specific law and policy on automated decisions in India, there is no structural manner in which these obligations can be enforced. Failures of such protections have been observed in a number of cases where government agencies are using AI or algorithmic systems, from the denial of benefits using Aadhaar to the cancellation of voter ID cards using the NERPAP algorithmic system by the Election Commission of India.

It is also necessary to keep in mind the political and economic transformations brought about by the use of AI in administration. Most applications of AI in administration rely on procuring AI technologies from private vendors, whether for the data used or for the algorithmic processes and software. In doing so, agencies outsource not only the creation of technologies but also the process of making policy decisions itself to private vendors, who are currently under no obligation to build technologies whose outcomes are fair, transparent, accountable and participatory in ways that conform to democratic values.

What does this imply for the future of the automated administrative state? First, governments should tread carefully when considering the application of AI, Big Data and predictive analytics to consequential decisions, and be attuned to the limitations and consequences of these systems. Second, there is an urgent need to revisit and reframe the application of the principles of fair and reasonable decision-making under Indian administrative law, both by courts and through regulatory mechanisms, such as creating notice and due process requirements for AI-based decision-making (as, for example, in the EU’s General Data Protection Regulation), or creating processes for intervening in the procurement of AI systems (as attempted by the Tamil Nadu Safe and Ethical AI Policy).

As the government goes forward with developing its policies for ‘ethical AI’, it must keep in mind how the use of AI in this crucial context of the administrative state can be governed so that important democratic values are not compromised.

The writer is a lawyer and researcher studying the intersections of technology, regulation and society. He is a former Mozilla Fellow.
