
What happened at the AI summit in Paris?

The debate over the future of the new technology was between the US and EU. The danger for Britain is that we begin to look irrelevant

In Paris, when it comes to AI the word “Safety” had been replaced in the title of the summit by “Action”. Photo: AURELIEN MORISSARD/POOL/AFP via Getty Images

The Paris Artificial Intelligence Summit, hosted by president Emmanuel Macron in the splendour of the Grand Palais on the Champs-Élysées, was different to its predecessors. Just over a year ago, Rishi Sunak organised the first AI Safety Summit, attended by heads of government and global technology leaders, at Bletchley Park, where Alan Turing cracked the German Enigma code during the Second World War.

The purpose of that summit was to understand the risks and opportunities that lie at the frontier of AI. But in Paris, the word “Safety” had been replaced in the title of the summit by “Action” and with it a promise from president Macron that the large data centres needed to power the latest AI models were welcome to plug into France’s ample supply of nuclear generated electricity. 

The real change of mood, though, was brought by the US delegation, led by the newly inaugurated vice president, JD Vance. Vance is really the first leading politician of the era of big tech, and not just because he is forty years old. He has worked for and been supported by Peter Thiel, the co-founder of PayPal and Palantir Technologies. He is also closely aligned with the Trump administration’s desire to push back against global tech regulation, particularly Europe’s new safety and competition standards.

In his speech to the summit, Vance warned that “the AI future is not going to be won by hand-wringing about safety”, and complained about the “massive regulations” contained in the EU’s online safety legislation. Throughout the summit and its fringe events, the same sentiments were echoed by tech executives and policymakers, warning about the “tensions” that exist between AI regulation and innovation, with the emphasis firmly on the importance of the latter.

Criticism of the new laws, which regulate the use and development of AI in the EU, was also widespread. The tone of the representatives of the big tech companies was that the future regulation of AI should be based on a consensus position, to be negotiated and agreed by them, as if they themselves were sovereign entities. As one of their representatives put it at a fringe meeting, AI regulation was an area where governments would be “lagging not leading”.

Within these conference halls, the voice of the people can often seem distant, which can lead to the mistaken belief that tech regulation is the result of meddling bureaucrats rather than genuine public concern. The fears that have grown about online safety have been born out of experience. The great experiment of social media has brought many benefits, but has also left users exposed to threats of violence, promoted self-harm, and become a breeding ground for dangerous conspiracy theories. It has enabled disinformation to become a weapon used against us by hostile foreign states and been used to extort people through frauds and scams.

The questions about how our laws are enforced in the age of AI, and how companies are held accountable for the machines they have created, are real and legitimate. The basic rule should apply that you cannot use AI to break the law, whether that is through failing safety and competition standards, or ripping off someone’s copyright-protected content.

Experience has shown that we cannot just leave these decisions to the people who are building and profiting from new tech products. Furthermore, these concerns aren’t just shared by safety campaigners, but also many leading figures from the tech sector.

At the Paris summit, Eric Schmidt, the former CEO of Google, warned that he feared the emergence of an Osama bin Laden of AI, who would weaponise tools previously thought safe and turn them against us. He also backed the launch of the “Roost” initiative, for “Robust Open Online Safety Tools”, which will offer free, open-source tools to detect and report illegal material, and make safety technologies easier to use.

This idea of “tools not rules” as the answer to AI regulation has some appeal, but only if developers take advantage of the options available to create safe and positive experiences for users.

So much of the discussion at the summit reflected the differing positions of the USA and EU that it was hard to see where the UK fitted in. Outside the EU AI Act, and with a tradition of regulators working with companies to agree safety standards, the UK has some latitude in dealing with developers. But we will do so in the knowledge that EU law will still apply to their services.

At the end of the summit, both the UK and the USA declined to sign the non-binding declaration calling for the development of “inclusive and sustainable Artificial Intelligence for people and the planet”. That was a surprise.

Whatever the official reason, it looks like an attempt to show sympathy for the deregulatory zeal of the Trump administration without actually showing a way forward. Rather than being a balancing force at the centre of this debate, there is a danger that the UK is simply left adrift.

Damian Collins is a former minister for tech and the digital economy and was MP for Folkestone and Hythe from 2010 to 2024
