Trump, Stargate, DeepSeek: A new, more unpredictable era for AI?

As the AI summit in Paris approaches, the US administration is taking a comparatively brakes-off approach to AI regulation. Can the UK and EU forge a leadership role on AI safety?

Expert comment | Published 7 February 2025 | Updated 7 March 2025

2025 is already proving a whiplash year for leaps and investments in artificial intelligence. On 19 January, China announced an AI investment fund, viewed as a response to tightened US export controls on chips.

On 21 January, US President Donald Trump announced the Stargate Project, a new company that he said would invest an unprecedented $500 billion in developing US AI infrastructure, backed by technology companies OpenAI and Oracle, Japanese investment group SoftBank and the Emirati sovereign wealth fund MGX.

The following week, Chinese company DeepSeek’s low-cost, open-source AI model with reasoning capabilities – trained on comparatively few, less advanced chips – caused market chaos after it became clear it rivalled o3, a model from US-based OpenAI. Leading chipmaker NVIDIA lost $600 billion of market value when markets opened on 27 January.

DeepSeek has disrupted assumptions about who gets to develop powerful AI, and reignited doubts about the effectiveness of stringent US chip export controls.

But beyond intensifying competition between the US and China, these events signal a tectonic shift in US tech governance, toward a concentration of private-sector power in technology development and agenda-setting, characterized by division and unpredictability.

This unpredictable new era represents a significant challenge for European policymakers – but it may also offer limited opportunities to demonstrate leadership.

Reversing course

On entering the Oval Office, President Trump torpedoed Biden-era approaches to regulating AI, revoking an executive order on safe and trustworthy AI that had been understood as the blueprint for the US approach to domestic and global risk-based AI governance. Trump also used executive authority to pause the so-called TikTok ban and issued a slew of other executive orders (many of which will be challenged in the courts). Predictability in US tech policy is suddenly at an all-time low.

Meanwhile, Stargate’s scale hints at the growing convergence of national security, infrastructure and Silicon Valley interests. OpenAI’s economic blueprint for American re-industrialization – seemingly a key shaper of Stargate – appears to wield exceptional influence. It calls for classified facilities for evaluating model security, as well as new energy resources to power and cool data centres.

Announcing Stargate as a private company also circumvents the legislative hurdles that a publicly funded project of that scale would face – a significant advantage given the urgency of ramping up data centre capacity to meet rising demand.

Internationally, this emerging order has caused major ripples. While Brussels clarifies new rules on AI and enforces its digital services package, US-based companies will be emboldened by their gilded status, and by Trump administration attacks on what it sees as Brussels overregulation – such as Vice President JD Vance’s threat to rethink US commitment to NATO if Europe does not respect ‘free speech’.

Brief, brittle alliances?

Demands for Big Tech accountability had grown steadily over the past decade, prompting a wave of regulatory action and leading many companies to publicly embrace governance responsibilities that extend beyond technology.

This time last year, more than 20 technology companies pledged to tackle deceptive AI election content like deepfakes. Together, they promised to uphold the integrity of 2024’s elections and work together to improve public trust. The declaration was weak on publicly measurable, collaborative actions. But it painted an optimistic picture of rivals coming together to protect democracy from AI-enabled information threats.

This optimism is now a distant memory, with technology moguls vying for influence in a political order that pledges to dismantle guardrails on the deceptive and unsafe content they committed to tackling.

Some onlookers may call this shift a ‘mask-off’ moment; others, the natural manifestation of profit-driven opportunism and US fear of losing ground to China. Either way, it adds context to the striking image of technology moguls – some former Democratic Party donors – seated at Trump’s inauguration.

That apparent unity soon proved illusory. Stargate’s launch pointed to an initial buildout alliance including Arm, Microsoft and NVIDIA. Notably absent was Elon Musk, who took to X to strongly criticize Stargate’s financial backing. In response, OpenAI CEO Sam Altman called on Musk to put US interests above those of his company. Microsoft CEO Satya Nadella quipped: ‘all I know is I’m good for my $80 billion.’

Such public bickering may prove to be the tip of the iceberg. The policies and investments of some of the world’s biggest technology companies could increasingly be mediated by the moods of rival CEOs. The ‘united front’ presented at the inauguration likely falls short of genuine alliance, instead capturing temporary alignments of interests.

Winning the technology race with China appears to be one of these interests. But DeepSeek’s breakthrough model and its market impact signal that China is both snapping at the heels of US competitors and potentially changing the strategic environment in which they operate.

The view from across the pond

For former President Joe Biden’s administration, the UK and EU were generally allies on AI governance. Despite tensions – with long-standing debates about Brussels stifling US companies – they shared risk-based approaches and actively participated in the emerging global governance architecture.

Now, European governments must grapple with the new US administration’s comparatively brakes-off approach to AI development and its ‘America First’ lack of interest in international cooperation. They must also navigate an emboldened coalition of regulation-averse, US-owned Big Tech companies operating in their markets, while seeking investment from those same companies.

Former European Central Bank President Mario Draghi’s 2024 report cast a damning spotlight on the EU’s weakness in attracting investment, while the UK, a hub for spinouts and global talent, struggles to retain capital. UK Prime Minister Keir Starmer’s £14 billion AI Opportunities Action Plan is ambitious but may take years, if not decades, to yield real results and attract sufficient investment.

Yet the case of DeepSeek reveals an important, if often overlooked, fact about the global AI race: it is possible to do more with less. Stargate’s half-trillion-dollar investment in proprietary AI infrastructure already risks appearing disconnected from reality.

Further, as the US de-prioritizes global technology cooperation and potentially powers down its AI Safety Institute, there will be a vacuum in global AI safety leadership.

This is where the UK and EU can carve out a role. The UK has a rare advantage on AI safety, having coordinated the 2023 Bletchley Declaration – including both China and the US – and launched the world’s first AI Safety Institute. 

It should leverage this blueprint, working closely with the EU’s AI Office to bring other national institutes into the fold and push for interoperability, certainty and shared language: essential for safe and sustained innovation.

This can start at the Paris AI Action Summit next week. While the summit no longer focuses solely on AI safety, policymakers must not forget its symbolic importance.

An unpredictable global scramble to develop AI is underway, as the US turns inward and China boasts new capabilities. The EU and middle powers like France and the UK must act in tandem at the summit to strengthen global expert and scientific networks to advance AI safety, information-sharing, monitoring and reporting of AI incidents.

Scientist-led venues on AI safety have borne fruit – such as the world’s first International AI Safety Report – and can bring unlikely partners with different values to the table. This spirit of continued dialogue is now more important than ever.