27 November 2025 – Humanity has never faced an intelligence greater than its own; we will, possibly within five to ten years. We are not taking that seriously. There are two UN AI resolutions, three UN Security Council sessions on AI, AI laws in the EU, China and other countries, plus the Council of Europe’s AI Treaty, but none address the elephant in the room: artificial general intelligence (AGI). There are three kinds of AI: narrow, general and super AI, with some grey areas in between.
Artificial narrow intelligence (ANI) ranges from tools with limited purposes, such as diagnosing cancer or driving a car, to the rapidly advancing generative AI that answers many questions, generates software code, pictures, movies and music, and summarises reports. In the grey area between narrow and general are AI agents and general-purpose AI, which became popular in 2025.
Artificial general intelligence (AGI) would be able to learn, edit its code to become recursively more intelligent, conduct abstract reasoning and act autonomously to address many novel problems with novel solutions similar to or beyond human abilities.
Artificial super intelligence (ASI) would be far more intelligent than AGI, and likely more intelligent than all of humanity combined. It would set its own goals and act independently of human control, in ways beyond human understanding and awareness.
Without national and international regulations, it is inevitable that humanity will lose control of what will become a non-biological intelligence beyond our understanding, awareness and control. Half of the AI researchers surveyed, as reported by the Center for Humane Technology, believe there is a 10% or greater chance that humans will go extinct from their inability to control future AI. But, if managed well, AGI could usher in great advances in the human condition—from medicine, education, longevity and turning around global warming to advances in scientific understanding of reality and the creation of a more peaceful world. So, what should policymakers know and do now to achieve the extraordinary benefits while avoiding catastrophic, if not existential, risks? The new book Global Governance of the Transition to Artificial General Intelligence addresses these issues in four parts, in clear language for politicians.
Part 1 distills insights from 55 AGI experts such as Sam Altman, Bill Gates and Elon Musk, who address 22 key questions about AGI development and governance. Claude AI also answers these same 22 questions for comparison.
Part 2 assesses five global governance models and 40 regulations and guardrails for developers, governments, users and the United Nations, by an international panel of experts from 47 countries, including futurists, diplomats, international lawyers, philosophers and scientists.
Part 3 presents a detailed global scenario: If Humans Were Free: The Self-Actualisation Economy, about how this all could turn out okay and benefit humanity.
Part 4 includes conclusions, recommendations and remaining issues.
“This book is an eye-opening study of the transition to a completely new chapter of history,” writes Csaba Korösi, 77th President of the UN General Assembly.
In the past, technological risks were primarily caused by human misuse. AGI is fundamentally different. Although it poses risks stemming from human misuse, it also poses potential threats caused by AGI itself without human involvement.
We can think of current ANI as our young children, whom we control—what they wear, when they sleep and what they eat. We can think of AGI as our teenagers, over whom we have some control, which does not always include what they wear or eat or when they sleep. And we can think of ASI as an adult, over whom we no longer have any control. Parents know that if they want to shape their children into good, moral adults, then they must focus on the transition from childhood to adolescence. Similarly, if we want to shape ASI, then we have to focus on the transition from ANI to AGI. And that time is now.
We should call for a UN General Assembly specifically on AGI as soon as possible to discuss:
- A global AGI observatory to track progress in AGI-relevant research and development and provide early warnings on AI security
- An international system of best practices and certification for secure and trustworthy AGI to identify the most effective strategies and provide certification for AGI security, development and usage
- A UN Framework Convention on AGI to establish shared objectives and flexible protocols to manage AGI risks and ensure equitable global benefit distribution
- A feasibility study on a UN AGI agency.
Governing AGI is the most complex, difficult management problem humanity has ever faced. During the Cold War, it was widely believed that nuclear World War III was inevitable and impossible to prevent. The shared fear of an out-of-control nuclear arms race led to agreements to manage it. Similarly, the shared fear of an out-of-control AGI race should lead to agreements capable of managing that race.
This article has drawn on parts from: Governance of the Transition to Artificial General Intelligence (AGI): Urgent Considerations for the UN General Assembly, a report for the Council of Presidents of the United Nations General Assembly (UNCPGA) chaired by the author; State of the Future 20.0 of The Millennium Project, led by the author; and Global Governance of the Transition to Artificial General Intelligence, by the author.