Algorithmic Bias and the Politics of Justice: Toward Equitable AI Governance in the Indo-Pacific

14 July 2025, NIICE Commentary 11484
Vidushi Sharma

Can AI ever be truly fair in a world built on political inequality?

As artificial intelligence systems continue to permeate sectors ranging from public healthcare to national security, the question above becomes more than philosophical—it becomes existential. Algorithmic systems now influence judicial decisions, public service delivery, surveillance, and even warfare. This expanding AI influence is especially pronounced in the Indo-Pacific region, where political regimes vary from liberal democracies to technocratic autocracies. Consequently, the deployment of AI in this geopolitical theatre raises not only technical questions but deeper political ones—about justice, representation, and the distribution of power.

This essay argues that bias in AI systems is not merely a technological flaw, but a political one. Drawing on both classical political theory and modern governance frameworks, it explores the roots of algorithmic injustice, presents real-world cases from the Indo-Pacific, and proposes institutional reforms for a more equitable AI future.

The Political Foundations of Algorithmic Bias

AI systems reflect and reproduce the societies that create them. This is not a new insight. As political philosopher Hannah Arendt warned in The Human Condition (1958), the application of technology to human affairs risks depersonalizing politics unless rooted in collective responsibility. Arendt’s fear is materializing today: algorithmic governance often obscures accountability, giving the illusion of neutrality where inequality is baked into the code.

Classical liberal theorist John Stuart Mill argued that any governance system must be judged not merely by efficiency, but by how well it protects minority rights. Similarly, contemporary thinkers like Virginia Eubanks, in her book Automating Inequality, demonstrate that data-driven systems disproportionately penalize the poor and marginalized, particularly when implemented without adequate safeguards.

Evidence of Harm: Real-World Bias in the Indo-Pacific

Facial Recognition and Ethnic Discrimination – Xinjiang, China

China’s use of facial recognition systems for surveillance of the Uyghur Muslim population in Xinjiang is a prime case of algorithmic bias being used for state control. A 2020 report from IPVM found that Chinese companies like Hikvision and Megvii had built ethnic identification tools into their AI products. These technologies were designed to detect and track Uyghur faces specifically—raising severe human rights concerns.

Credit Algorithms and Economic Exclusion – India

In India, fintech companies increasingly use alternative data (e.g., mobile phone usage, social media behavior) to determine creditworthiness. However, a 2023 study by the Centre for Internet and Society found that such practices often discriminate against low-income individuals and rural populations, reinforcing existing socio-economic hierarchies. The opacity of these algorithms means users have little recourse when denied loans.

Gender Bias in Recruitment AI – Japan and South Korea

In Japan and South Korea, AI tools have been used in corporate hiring processes, from resume screening to video interview analysis. But studies, including one from UNESCO, found that these systems often replicate gender biases—penalizing women for speech patterns or facial expressions interpreted as "less confident."

These examples highlight a key paradox: AI is often sold as objective but operationalizes subjective, biased datasets shaped by unequal political realities.

Algorithmic Bias as a Political Problem

AI bias cannot be understood in isolation from political structures. In most Indo-Pacific states, policymaking around AI governance lags behind deployment. This policy-tech gap allows private companies and state actors to wield algorithmic power with minimal oversight.

Political theorist Michel Foucault’s concept of biopolitics is useful here: governments increasingly use digital technologies to regulate populations—not through coercion, but by making certain lives legible, trackable, and governable. Biased AI systems exacerbate this tendency by creating what Ruha Benjamin calls “coded inequity”—a condition where technological systems systematically marginalize already disadvantaged groups.

The Role of Geopolitics in Indo-Pacific AI Governance

In the Indo-Pacific, AI governance is deeply entangled with geopolitics. The U.S. and its allies promote “trusted AI” aligned with liberal democratic values, while China exports surveillance-heavy “AI governance with authoritarian characteristics.” The Quadrilateral Security Dialogue (Quad), comprising the U.S., India, Australia, and Japan, has discussed AI cooperation but has yet to adopt shared norms around ethical AI. Without alignment on equity and bias mitigation, Quad-led initiatives risk becoming merely strategic counterweights to China rather than principled frameworks for just AI.

ASEAN countries, meanwhile, have adopted a pragmatic, sovereignty-first approach to AI governance, emphasizing “responsible innovation” while often overlooking structural bias. The lack of region-wide enforceable frameworks allows biased AI tools to proliferate unchecked.

Toward Solutions: Political Reforms and Technical Safeguards

Mandatory Bias Impact Assessments

Governments must legally require AI systems used in public services or critical infrastructure to undergo Bias Impact Assessments—similar to environmental or privacy assessments. These should evaluate training data, intended use, and potential disparate impacts on vulnerable populations. Such mechanisms already exist in limited form in Canada’s Algorithmic Impact Assessment, and can serve as a model for the Indo-Pacific.
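To make the idea of measuring disparate impact concrete, the sketch below shows one way an assessment might quantify it: comparing favorable-outcome rates across groups against a reference group, flagging ratios below the common four-fifths (80 percent) rule of thumb. The dataset, column names, and threshold are illustrative assumptions, not features of any existing Indo-Pacific framework or of Canada’s Algorithmic Impact Assessment.

```python
# Illustrative sketch: comparing outcome rates across demographic groups,
# one possible quantitative component of a Bias Impact Assessment.
# The records, group labels, and 0.8 threshold are assumptions for illustration only.

from collections import defaultdict

def disparate_impact_ratios(records, group_key, outcome_key, reference_group):
    """Return each group's favorable-outcome rate divided by the reference group's rate."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for row in records:
        total[row[group_key]] += 1
        favorable[row[group_key]] += 1 if row[outcome_key] else 0

    rates = {group: favorable[group] / total[group] for group in total}
    reference_rate = rates[reference_group]
    return {group: rate / reference_rate for group, rate in rates.items()}

# Hypothetical loan-approval records (group membership and approval outcome).
records = [
    {"group": "urban", "approved": True},
    {"group": "urban", "approved": True},
    {"group": "urban", "approved": True},
    {"group": "urban", "approved": False},
    {"group": "rural", "approved": True},
    {"group": "rural", "approved": False},
    {"group": "rural", "approved": False},
    {"group": "rural", "approved": False},
]

ratios = disparate_impact_ratios(records, "group", "approved", reference_group="urban")
for group, ratio in ratios.items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: ratio={ratio:.2f} ({flag})")
```

A real assessment would of course go beyond a single ratio, pairing such metrics with documentation of training data provenance, intended use, and affected populations, much as the Canadian model does through a structured questionnaire.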

Institutionalized Algorithmic Auditing

Independent audit bodies—possibly under national human rights commissions—should be empowered to investigate high-risk AI deployments. These auditors must have cross-disciplinary expertise (law, data science, political science) and a mandate to publicly disclose findings. Recent legislation like the EU AI Act offers a blueprint for such auditing regimes.

Participatory Governance and Algorithmic Democracy

Building on Jürgen Habermas’ theories of deliberative democracy, AI governance must be inclusive. Marginalized communities should have a say in how AI systems that affect them are designed and deployed. This could involve citizen juries, public consultations, or co-design workshops—particularly in linguistically and culturally diverse Indo-Pacific contexts.

Conclusion: Toward an Ethical AI Future Anchored in Justice

Algorithmic fairness is not just a technical goal—it is a political imperative. In the Indo-Pacific, where histories of colonialism, state surveillance, and structural inequality shape present-day governance, AI bias is a mirror of deeper societal fractures. To address it, we must treat AI not as a neutral tool but as a site of political struggle. Justice, as John Rawls insisted, is the first virtue of social institutions. In the era of AI governance, this means designing systems where justice is coded not as an afterthought, but as a foundational logic.

If the Indo-Pacific is to lead the world in democratic, equitable technology governance, it must begin by asking not merely what AI can do, but for whom, by whom, and at what cost.

Vidushi Sharma is a Research Intern at NIICE and is currently pursuing her Master's in Political Science (International Relations) at Lovely Professional University, Phagwara, India.
