Microsoft’s Office of Responsible AI teamed up with the Stimson Center’s Strategic Foresight Hub to convene Global South experts on AI’s risks and benefits. After more than a year of virtual discussions, some of them met face-to-face at the February 2025 AI Action Summit in Paris. Their mission: push for AI governance that includes voices from developing nations.
The Paris AI Action Summit marked a clear pivot from AI safety toward AI opportunity and rapid innovation. Just before the event, the U.S. revoked Biden’s executive order on AI safety, and the UK dropped “Safety” from the name of its AI institute, now the AI Security Institute. U.S. Vice President J.D. Vance declared, “I’m not here this morning to talk about AI safety… I’m here to talk about AI opportunities.” The U.S. and UK also declined to sign the summit’s final declaration on inclusive and sustainable AI.
This shift came despite new research warning of serious AI risks. Professor Yoshua Bengio, a pioneer of deep learning, led the first International AI Safety Report, which flagged dangers such as AI systems deceiving their human developers. His safety concerns clashed with the summit’s race-to-innovate tone.
At a side event hosted by IRIS and the Stimson Center, fellows discussed AI governance and global inclusivity. They called for more diverse participation and better regulatory frameworks, especially as the next summit moves to India.
Natasha Crampton, Microsoft’s Chief Responsible AI Officer and a member of the UN’s AI advisory body, stressed that AI governance needs regulatory interoperability, risk oversight, and inclusivity. Her views are echoed in Microsoft’s report “Global Governance: Goals and Lessons for AI.” Crampton warned that the pace of AI development varies widely across regions, so governance frameworks must give all stakeholders a say.
The fellows gave examples from different countries showing why fair access to AI requires addressing infrastructure and labor issues first. Through targeted policy, Kyrgyzstan slashed internet costs and expanded digital payments, achieving near-universal financial inclusion. Yet 2.5 billion people still lack internet access and electricity, limiting AI adoption across the Global South.
Julia Velkovska of Orange Research Labs highlighted how data-labeling work for AI often exploits low-paid labor in the Global South, and how language bias in AI systems leaves many communities digitally excluded.
Thailand’s Narun Popattanachai summed up AI governance’s core tension: balancing the concerns of human rights and privacy advocates against the push for rapid AI market adoption.
India is raising its voice in AI governance, moving from social-empowerment initiatives like RAISE toward a broader role in global regulation. Jibu Elias, an early architect of IndiaAI, sees India’s inclusive AI governance model as a blueprint for other countries facing similar challenges.
Ibrahim Sabra warned that using AI in Global South courts risks bias, privacy violations, and opaque justice. He insisted that human judges must retain authority, that transparency must improve, and that judicial training is crucial.
Branka Panic called for conflict-affected communities to help shape the AI tools that affect them, and praised grassroots groups for giving marginalized voices a platform in debates over AI’s future.
The fellows then shared recommendations for the India AI Summit:
Branka Panic:
“India has a crucial opportunity to refocus the global conversation on Trustworthy AI, on existing risks, and the collective efforts needed to address these challenges effectively. My wish is for India to foster more inclusivity by creating space for vulnerable communities facing disproportionate AI risks, particularly those affected by conflict and violence. By prioritizing voices from these regions, India can ensure the Summit not only addresses challenges but also highlights AI’s potential in peacebuilding, human rights protection, and conflict prevention. This approach would set a global precedent for truly inclusive AI governance.”
Jibu Elias:
“As someone who helped architect INDIAai and has worked at the intersection of policy, people, and power, I believe India’s AI moment isn’t just about tech—it’s about trust. At this Summit, we have the chance to do something historic: put communities who’ve long been left out—whether due to language, labor, or lack of access—at the center of the global AI agenda. If we can create an inclusive governance framework that works for India, we can spark a new model for the world—one built not just on ambition, but on empathy.”
Ibrahim Sabra:
“Law and the judiciary are deeply embedded in society and cannot be isolated from broader governance trends. The next Summit should address the uneven adoption of AI in judicial systems, which reflects fragmented governance worldwide. International and regional organizations, in collaboration with key legal associations (e.g., AIDP, IBA, IAJ, ICJ), should advance UNESCO’s Guidelines for AI Use in Judicial Systems as a global framework. Countries should follow Colombia’s lead in formally adopting these guidelines, setting a vital precedent for ethical AI integration in justice systems. As both justice and AI are transnational, harmonized standards are essential to safeguard judicial integrity, prevent misuse, and ensure AI promotes fairness.”
Watch the full event: The Future of AI Governance: Ensuring Global Inclusivity.