28 June 2025, NIICE Commentary 11375
Vidushi Sharma
“Where there is power, there is resistance.” — Michel Foucault
Artificial Intelligence has become a fundamental tool of political power in 2025. Governments use AI to facilitate surveillance, manipulate voters, and conduct predictive policing across systems of governance. This technology, often unregulated and opaque, poses significant challenges to human rights and democracy. This commentary examines a sample of global cases of AI misuse and proposes an internationally binding governance framework that recasts AI as an agent of transparency, accountability, and the protection of civil liberties rather than their undermining.
The Political Instrumentalization of AI
AI has enabled new forms of state power that are unprecedentedly systemic and subtle. Technological innovation is no longer the preserve of the private sector: states deploy AI, sometimes under the banner of efficiency, for surveillance, social control, voter manipulation, and even digital warfare. The 2024 general elections in India provide a timely example. The government used facial recognition software and biometric identification systems in polling stations, ostensibly for secure and efficient voter verification. However, there were also reports of AI microtargeting tools being used in disinformation campaigns along caste and religious lines, potentially deepening divisions in a democratic society. These activities raise ethical questions not only about privacy but also about the fairness and integrity of democracy.
In Russia, AI has become a central component of hybrid warfare in Ukraine, with AI-generated deepfakes at the core of this effort. Russian state actors have used machine learning to produce convincing fake videos of Ukrainian officials surrendering or endorsing Russian positions, sowing confusion among civilians and eroding social trust. This disinformation illustrates the links between post-truth politics and national security: the blurring of fact and fiction in digital spaces destabilizes both national security and the epistemic foundations of democracy.
Similarly, China continues to refine its social credit system and to deploy AI-assisted surveillance across its cities and countryside. Predictive policing systems such as the Integrated Joint Operations Platform (IJOP) have been used to profile and detain Uyghurs and other Muslim minority communities in Xinjiang, flagging individuals for potentially “unusual” behavior through algorithms that are entirely opaque to them and of whose operation they may be unaware. This is arguably the clearest exhibit of what scholars have called “algorithmic authoritarianism.”
In the US, Senate hearings in late 2024 demonstrated AI’s role in the misinformation economy ahead of the presidential election. Deepfakes portraying fictional scandals, along with AI-curated social media feeds, shifted voter attitudes subtly but pervasively, particularly among younger voters. Although the U.S. still benefits from a free press and considerable judicial review, the episode exposed substantial gaps in regulatory readiness.
In response to these growing threats, the European Union’s AI Act, which came into force in January 2025, sets an important precedent. It requires risk assessments for high-risk systems and documentation of training datasets, and it prohibits uses directed at vulnerable groups. Yet its geographic scope is limited, and it has no enforcement capabilities beyond the EU’s jurisdiction.
Human Rights at a Crossroads
Such changes reveal a deeper structural concern: international human rights law is out of step with technological reality. The Universal Declaration of Human Rights (UDHR), adopted in 1948, does not contemplate digital rights, algorithmic bias, or biometric surveillance. Meanwhile, global instruments such as the UN Guiding Principles on Business and Human Rights and UNESCO’s Recommendation on the Ethics of AI (2021) remain non-binding and largely focus on corporate actors rather than governments.
The lack of any binding legal framework has allowed states to use AI and other advanced technologies with impunity, usually legitimized in the name of technological sovereignty or national security. This absence of oversight is especially concerning in hybrid regimes and nascent democracies, where institutions are unaccountable and populations often lack digital literacy.
Furthermore, AI tends to magnify existing inequalities. Biased training data often reproduces racial, gendered, and class-based discrimination. Facial recognition systems, for example, have been shown to have error rates as high as 34 percent for women of color, compared with less than 1 percent for white men. When embedded in public systems such as policing or welfare allocation, these biases can lead to systemic exclusion and abuse.
Policy Recommendations for Global Governance
To realign the global trajectory of AI with international human rights, the following measures are proposed:
- Negotiate and bind states to an International Treaty on AI and Human Rights
The world urgently needs a multilateral treaty comparable in ambition and reach to the Paris Agreement, one that legally obligates states to adhere to enforceable standards on their use of AI. Such a treaty should mandate transparency, algorithmic auditability, and a process to appeal automated decisions, and should prohibit certain uses of AI outright, such as AI-based social scoring and mass predictive detention.
- Implement a Global AI Political Rights Index (GAPRI)
The UN, or another intergovernmental entity, should create and regularly update an international index that tracks the intersection of AI and political rights across countries and scores each state’s ethical standing against internationally agreed standards. Similar to the Corruption Perceptions Index or Freedom House rankings, the index could be housed within the UN Human Rights Council.
- Mandate independent audits of Political AI Systems
All AI systems used in elections, including campaign targeting and advertising, voter roll management, and content curation, should be audited by independent third parties. Electoral commissions should work in a tripartite relationship with civil society and independent digital forensics labs to ensure compliance.
- Align Global Data Protection Laws
Data protection norms must be aligned across borders, especially for biometric, behavioral, and geolocation data. International collaboration is essential to combat the export by authoritarian regimes of surveillance technologies and data collection techniques that exploit vulnerable populations.
- Support Civil Society and Tech Watchdogs
Non-governmental organizations (NGOs), universities, and investigative journalists must be supported and protected in their investigations into the misuse of AI technologies. Laws need to be in place to ensure that datasets and records of algorithmic decisions are made available to the public.
Geostrategic Context for South Asia
Substantial cooperation on AI ethics in South Asia remains almost nonexistent. Regional arrangements such as SAARC and BIMSTEC are viable forums for initiating deliberations on data sovereignty, algorithmic transparency, and the cyber regulation of cross-border digital activities, but current capacities remain low. India, with its growing technological infrastructure and geopolitical influence, is well placed to lead this regional effort and to shape a framework for ethical AI governance in the region.
In conclusion, AI in politics is no longer a thought experiment. It is already shaping election campaigns, policy-making, policing, and warfare. Without a global framework with human rights at its centre, AI will serve power, not the people.
Vidushi Sharma is a Research Intern at NIICE and is currently pursuing her Master's in Political Science in International Relations from Lovely Professional University, Phagwara, India.