The trends that shaped the Paris AI Action Summit, and what comes next

The Paris AI Action Summit resulted in a generic communiqué that seemed to highlight divides among Western countries and to deprioritise the focus on AI safety established by the first AI Safety Summit at Bletchley Park.  

For casual observers, this may look like a diplomatic failure on the part of the French Government. In reality, the Summit and its conclusions have been shaped by wider forces that have been at play almost since the ink dried on the more concrete ‘Bletchley Declaration’ signed at the UK’s AI Safety Summit back in November 2023.  

These trends not only set the parameters for what was possible in Paris but tell us much about the likely future of the global AI Safety process (for the next few years at least). 

At Public First we believe in looking at the data and evidence that drives policy change, so below we’ve laid out the trends that shaped the Paris AI Action Summit and what they tell us about what comes next for the global governance of AI.  

The road to Paris: how the global AI safety movement developed

The UK’s Bletchley Park summit was seen by many as a surprise success, achieving broad global agreement for collaboration to address potential extreme risks posed by future AI systems (China, the USA and EU all signed).  

After the summit, a wide range of countries followed the lead of the UK and USA and established AI Safety Institutes committed to exploring the risks posed by AI systems and creating global networks to share expertise and evidence.  

Some of these institutes (most notably the UK’s) attracted highly talented researchers able to keep up with the developments coming out of the world’s leading AI companies.  

However, tech and AI skills shortages are global, meaning AI Safety Institutes must compete with businesses for the same talent. Consequently, many AI Safety Institutes have struggled to recruit the technical experts needed to match the impact that the UK's and other better-funded institutes have had with their respective governments and industries.  

AI’s potential begins to be realised: 

Alongside the growth of the AI safety process, AI and its potential benefits have become clearer and clearer. Since the UK summit, the number of available consumer and business applications has exploded. Public First's polling with the Centre for Data and Innovation in the UK shows a significant increase in the public's use of AI tools such as ChatGPT between 2023 and 2024, while over a third of workers are using the technology at work and reporting high satisfaction rates.  

Fig 1. Have you personally used ChatGPT?

Fig 2. Using a Large Language Model (LLM) at work, and perspectives on use: 

Our polling in the USA shows similar increases in usage as well as satisfaction with AI technology at work. 

This growing use and understanding of the power of AI technology has led to greater evidence about the value the technology can deliver. For example, Public First’s research with Microsoft shows that digital technologies and the systems that support them such as cloud computing could increase GDP by over £550 billion by 2035.  

Our research with Google Cloud shows that AI tools have the potential to create significant efficiencies in public services, for example allowing an extra 3.7 million GP appointments, a 16% increase in the teacher-to-student ratio, and freeing up the equivalent of over 160,000 police officers. 

AI for the national interest:  

These technologies have developed during a period in which governments around the world have become increasingly motivated, by domestic or geopolitical pressures, to boost economic growth and innovation and to improve public services. That confluence has changed the political calculation around AI.  

Globally, politicians have moved from being concerned about extreme or existential risks to being focused on the opportunities AI presents for economic, social and geopolitical reasons. The USA, UK and even the EU have all shifted their approaches along these lines over the past year. 

Public First is currently carrying out major research across the Asia-Pacific (APAC) region to quantify the AI opportunities there, which is likely to show countries and businesses across the region identifying and prioritising those opportunities. 

DeepSeek and the Global South: 

This focus on rewards and competition has been further intensified by the launch of the Chinese challenger AI chatbot DeepSeek.  

While DeepSeek might not outperform its American counterparts, its most significant impact has been to reduce the perceived barriers to entry to advanced AI development, prompting a reappraisal of US and Chinese tech markets.  

Countries such as India (which co-chaired the Paris summit) have built on this development, focusing heavily on promoting AI product development in the Global South, publicly warning against heavy-handed global regulation that could stifle new entrants to the market, and taking active steps to support challenger companies by investing in open-source technologies and data access.  

Safety concerns remain, if different:   

Despite the significant shift away from the focus on safety, concerns over AI risks remain prevalent among voters. Our polling in the UK and the US shows that risks to the labour market, along with the manipulation of information, are the concerns most prominent among the British and American publics. 

Similar concerns were aired at the Paris summit, particularly about the potential impact of AI on workforces, while countries in the Global South raised specific concerns about the capacity of governments to tackle misinformation and deepfakes that could be used to disrupt elections and spark unrest in less cohesive societies.  

Additionally, while existential risks have faded into the background, governments continue to invest, often via their AI Safety Institutes, in analyses of the risks advanced AI models could pose to national security.  

So, what comes next?  

Given these trends, and an increasingly geopolitical and self-interested approach to AI from the world's major governments, it is no surprise that the already very broadly framed Paris Summit failed to reach a widely accepted agreement.  

However, while the Paris Summit’s final conclusions have not significantly updated the global approach to governing AI, the trends behind the summit can tell us a lot about where global conversations on AI are likely to go over the next couple of years.   

  • AI opportunities are the main game in town: globally, governments are now much more motivated by AI opportunities. These motivations are found in geopolitical competition, the need to boost economic growth or driven by a need to address social challenges (or a combination of all three).  
  • AI nationalism: governments are keen to secure their own AI industries. In the USA this has seen the new administration push back against EU regulation, while in the UK and EU there has been a shift towards competitiveness. Outside the West, countries have been inspired by DeepSeek's success to channel investment into open-source and alternative model providers.  
  • AI Safety Institutes remain important: while the initial Bletchley focus on existential risks has receded, the network of AI Safety Institutes and the expertise built up in a few of them will see them continue to play a role. This will mainly be restricted to advancing the science of model evaluations and helping governments assess the national security risks posed by the most powerful new technologies.  
  • Other AI risks will come to the fore: however, discussions on AI safety are likely to become more diverse and led by a wider range of actors such as think tanks, not for profits and non-governmental organisations with a focus on how to prepare for more day-to-day risks such as job security and the impact of misinformation.  
  • Regulation is on the back burner: with the pivot to focus on AI opportunities almost all the major countries will dial back their plans for regulation. Most interestingly, some EU Member States will attempt to soften the blow of the already enacted EU AI Act.  

As AI continues to affect our lives and becomes an increasingly important technology for policy, understanding what the public thinks, and the impact AI has on our economies and public services, will become more and more vital for those trying to influence policy decisions.  

At Public First we specialise in both public opinion research and economic modelling, and we will be working with our wide range of clients to help them better understand and shape this technology and policy revolution.  

Make sure to check back here for more insights and if you have a particular project in mind where you would like to work with us, get in touch!