Global Tech Diplomacy: Regulating AI in a Fragmented World

Date: 19 Jun 2025

Time: 12:30 pm - 1:00 pm

Event Report:

On 19 June, the Nepal Institute for International Cooperation and Engagement (NIICE) hosted an insightful webinar led by Marietje Schaake, former Member of the European Parliament and a prominent voice in digital policy, examining the global challenges and political dynamics surrounding AI governance and regulation.

Schaake opened the session by reflecting on the initial global response to the emergence of AI and generative AI, noting that many governments were quick to recognize its potentially transformative and disruptive power. This recognition triggered a rare moment of global convergence, in which states briefly aligned on the urgency of regulating this fast-evolving technology. Among the key milestones discussed was the Bletchley Park AI Summit, where nations signed a joint declaration expressing their commitment to coordinated governance of AI.

However, the momentum behind this global consensus is now faltering. With the re-election of Donald Trump, the United States appears to be retreating from multilateral regulatory commitments, taking a divergent stance at the third summit in the series launched at Bletchley Park, where it reportedly declined to endorse the renewed declaration. Schaake noted that the U.S. has effectively called for a ten-year moratorium on binding AI regulations, a move that could trigger ripple effects among allied countries, particularly within the G7.

This shift is situated within broader geopolitical tensions. According to Schaake, AI governance has increasingly taken a backseat amid escalating global conflicts, with the U.S. administration favouring deregulation to maintain a competitive edge over China and viewing technology development as a strategic arena of geopolitical rivalry.

In contrast, the European Union has taken decisive regulatory steps with the adoption of the AI Act, a comprehensive legal framework that categorizes AI applications into risk tiers, ranging from low-risk to high-risk use cases such as CV screening, predictive policing, and border control systems. The Act imposes tailored requirements based on the level of risk posed to fundamental rights and public safety. However, Schaake acknowledged that significant challenges remain in implementation, especially given the opaque and unpredictable nature of generative AI itself. She emphasized that rather than only addressing the external effects of AI, regulators must also confront deeper technical questions surrounding algorithmic opacity and explainability.

During the Q&A session, participants posed pressing questions regarding the feasibility of establishing global AI regulations in the face of ideological divergence, particularly with authoritarian regimes such as Russia and China. Schaake, drawing on her experience advising European institutions and her current work in Silicon Valley, shared insights on the emerging landscape of “AI diplomacy.” She examined the tensions and strategic manoeuvring among the U.S., EU, and China, and reflected on whether democratic nations should strive for regulatory alignment to counter authoritarian models or embrace a pluralistic democratic approach to AI governance.

She concluded that while the loss of U.S. alignment is a setback for the EU, it may also create space for innovation and redirection in policy. Divergence among democracies, she argued, is not inherently a weakness, but rather an opportunity to explore diverse regulatory models grounded in democratic values and human rights.

Another important question raised during the Q&A session addressed the growing threat of AI-driven disinformation campaigns and how they compare to more traditional methods of electoral interference by state actors such as Russia and China. In response, Schaake emphasized that regulating election-related content, particularly from non-state actors, has become a critical concern, especially in the context of regional and national elections.

She cited ongoing investigations into the role of platforms such as TikTok in influencing electoral discourse in the European Union, and inquiries into X's (formerly Twitter's) involvement in the German general elections, as examples of how generative content and algorithmic amplification on social media platforms pose complex challenges. Schaake argued that context-specific regulations are urgently needed, as regional impacts and vulnerabilities vary widely and demand tailored policy responses.

The session closed with a call for sustained multilateral dialogue and cross-sectoral engagement, underscoring the urgent need for trustworthy, transparent, and accountable AI governance in a rapidly polarizing world.
