In 2025, there will be a course correction in AI and geopolitics, as world leaders increasingly understand that their national interests are best served through the promise of a more positive and cooperative future.
The post-ChatGPT years in AI discourse could be characterized as something between a gold rush and a moral panic. In 2023, even as investment in AI reached record levels, technology figures including Elon Musk and Steve Wozniak published an open letter calling for a six-month moratorium on training AI systems more powerful than GPT-4, while others likened the risks of AI to those of nuclear war and pandemics.
This has understandably clouded the judgment of political leaders, taking the geopolitical conversation about AI to some disturbing places. At the AI & Geopolitics Project, my research organization at the University of Cambridge, our analysis clearly shows the growing trend towards AI nationalism.
In 2017, for example, President Xi Jinping announced plans for China to become an AI superpower by 2030. China's Next Generation AI Development Plan aimed for the country to reach a “world-leading level” in AI by 2025 and to become a major hub of AI innovation by 2030.
The CHIPS and Science Act of 2022, together with US controls on semiconductor exports to China, was a direct response to this, designed to strengthen US domestic AI capabilities and constrain China's. In 2024, following an executive order signed by President Biden, the US Treasury Department also published draft rules to prohibit or restrict US investments in Chinese artificial intelligence.
AI nationalism frames AI as a battle to be won rather than an opportunity to be seized. Those who favor this approach, however, would do well to draw deeper lessons from the Cold War than the notion of an arms race. At that time the United States, while striving to become the most technologically advanced nation, used politics, diplomacy, and statecraft to create a positive, aspirational vision for space exploration. Successive American governments also won support at the UN for a treaty that kept nuclear weapons out of space, specified that no nation could colonize the Moon, and declared space “the province of all mankind.”
That same political leadership has been missing in AI. However, in 2025 we will begin to see a shift in the direction of cooperation and diplomacy.
The AI Summit to be held in France in 2025 will be part of this change. President Macron is already reorienting his event away from a narrow AI risk and “safety” framework and toward one that, in his words, focuses on more pragmatic “solutions and standards.” In a virtual address to the Seoul Summit, the French president made clear that he intends to take on a much broader range of policy issues, including how to ensure that society genuinely benefits from AI.
The UN, recognizing that some countries have been excluded from the AI debate, also published its own plan in 2024, aimed at a more collaborative global approach.
Even the United States and China have begun tentative diplomacy, establishing a bilateral consultation channel on AI in 2024. While the impact of these initiatives remains uncertain, they clearly signal that in 2025 the global AI superpowers are likely to pursue diplomacy over nationalism.