27 November 2025, NIICE Commentary 11991
Vidushi Sharma
"Technology alone cannot secure a society; ethics must guide its use, and justice must define its purpose." — Mahatma Gandhi
Global security studies are undergoing a radical change driven by the application of artificial intelligence (AI). AI has transformed how we think about defence strategy, collect intelligence, and manage crises, and it promises to greatly enhance surveillance, predictive analytics, autonomous military systems, and cyber defence. Yet as these technologies develop and proliferate at a rapid pace, they raise serious ethical challenges, especially in the Global South (i.e., developing nations), where historical inequities, reliance on digital infrastructure, and problems of governance leave populations particularly vulnerable. Integrating AI into the security structures of developing nations can build their capacity for both national security and regional stability. Without an emphasis on ethical standards guiding that integration, however, AI risks perpetuating and even exacerbating existing asymmetries, expanding the scope for state overreach and limiting personal freedoms. Understanding the ethical principles and issues raised by AI in a security context requires attention to equitable access to AI, as well as to accountability, bias, transparency, privacy, and sovereign rights over data. It also requires engaging with the wider socio-political framework and with the idea of Digital Swaraj.
Countries in the Global South are increasingly using AI to improve their security. Brazil, India, and Kenya, for example, have adopted AI-based measures for border security and counterterrorism: India uses drones and facial recognition software in border management, while Kenya and Brazil employ predictive policing algorithms. These measures represent an important advance in the use of technology to defend against threats, but they also raise significant moral concerns about privacy, consent, and proportionality. Research in Latin America has shown that predictive policing algorithms disproportionately target marginalised groups, meaning that AI deployed for security purposes can reinforce existing systems of societal inequality. Given the diverse social structures of the Global South and the often fragile state of its democratic institutions, the risk of algorithmic injustice there is greater than elsewhere, making ethical governance of AI in security-related decisions a social imperative rather than simply a technical issue (Noble, 2018; Buolamwini & Gebru, 2018).
The ethical issues surrounding AI are complicated by multiple layers of accountability. When an AI system makes a life-or-death choice through automated action, it is no longer clear who is accountable for errors of judgment or for the harm the system causes. This ambiguity deepens when AI systems have a "black box" design, making it difficult for people to understand how a decision was reached or to challenge it legally when it is wrong or unfair. The opacity of such decision-making is especially troubling in regions with limited laws and regulations governing AI: most countries lack an established legal process for obtaining redress against the unlawful use of AI by a government entity or private firm. In addition, AI systems trained on historical datasets that underrepresent certain populations produce statistically biased predictive models, whose skewed outcomes aggravate existing social tensions.
Privacy is another key ethical concern. Through AI-driven data analysis, the security sector gains access to a large volume of personal data, both for its own use and for sharing with others. Without strong data protection laws in place, there is a real possibility of government overreach into personal privacy, loss of civil liberties, and state intervention in private lives. Countries in the Global South often face a double threat: on the one hand, a lack of strong government data protection; on the other, a growing tendency among private sector entities, such as technology companies, to exploit the data asymmetries between themselves and their customers as a source of revenue, creating a vacuum of protection for individuals' rights.
In meeting the challenges of AI, Digital Swaraj presents a unique opportunity for ethical AI governance in the Global South. Rooted in Gandhian principles of self-sufficiency, independence, and decentralised empowerment, Digital Swaraj envisions a digitally sovereign society governed by democratic accountability, in which technology serves the public good rather than concentrating power within elites. For the security studies community, it calls for integrating technology with democratic values such as ethics, transparency, and community involvement. It also stresses the importance of not simply reproducing Western-centric security models, but of adapting them to the cultural, societal, and human rights values of local communities. By decentralising control of AI infrastructure and giving citizens a role in overseeing its use, Digital Swaraj can help prevent the concentration of digital power in the hands of governments and corporations, enabling citizens to shape how AI is ethically applied to security matters.
Equity is an important consideration in the ethical use of artificial intelligence (AI) in security systems, particularly in the Global South, which frequently relies on AI technologies created in the Global North. Deploying such technologies without first adapting them to a particular culture can produce systems that are technologically advanced yet violate social norms and reinforce neocolonial power relations, in cyberspace as much as in physical security. Ethical AI development frameworks based on the principles of Digital Swaraj instead promote building local capacity through collaborative governance, participatory design, and open-source development.
Military use of AI raises further ethical questions concerning drones, autonomous weapons, and threat assessment systems, among other issues. Transparent, explainable AI subject to an auditing process provides the means to hold those who deploy it accountable. In Brazil, civil society intervention has helped address bias in predictive policing algorithms by drawing attention to their problems as well as their successes (Salles et al., 2021). Investment in capacity-building, collaboration among developing nations, and a focus on people's needs first will minimise the potential for misuse. By promoting local innovation, technological sovereignty, and democratic governance, Digital Swaraj enables the Global South to use AI technologies ethically, upholding human rights and providing security for their societies in socially and contextually appropriate ways (UNIDIR, 2020).
Vidushi Sharma is a Research Intern at NIICE and is currently pursuing her Master's in Political Science in International Relations from Lovely Professional University, Phagwara, India.