The drama at OpenAI shows that AI governance remains in the hands of a select few

Who’s in and who’s out can change overnight.

Expert comment – Published 21 November 2023

On Friday night, OpenAI’s board unexpectedly fired its CEO, founder and evangelist-in-chief Sam Altman – one of the best-known public faces of the recent artificial intelligence (AI) boom.

The past four days have been chaotic and without conclusion: on Monday, it was announced Altman would be joining Microsoft, the tech giant behind a $10 billion investment in OpenAI. But a staff revolt at OpenAI against the board and their decision means the bedlam is set to continue a little while longer.

Whatever happens next, the weekend's events point to no let-up in the pace of AI development – and offer a stark reminder of who decides where, and how fast, we are going.

Winners and losers

Since ChatGPT's release a year ago to the day, it and similar generative AI tools have dominated the news agenda. Warnings of the threat posed by AI to humanity are headline news, AI tools are entering mainstream working practices, and more decisions are being made with the help of computer models.

The scramble to both regulate the technology and provide a safe business environment to those few companies leading the charge is a government priority from Washington to Delhi. One of those companies is OpenAI.

It is not yet apparent what the consequences of this weekend’s drama might be: the biggest winner looks to be Microsoft, whose CEO Satya Nadella’s quick thinking has turned a potential disruption into a crushing victory.

Whether Altman begins a new venture at the company, or returns to a restructured OpenAI, what might have looked to the markets like a serious wobble at a key strategic investment has been masterfully rescued, perhaps even strengthening Microsoft's claim to AI leadership. Altman's popularity, both with his staff and Silicon Valley's venture capital Twitterati, remains strong.

The biggest losers might be the OpenAI board, trapped between the second biggest technology company in the world on one side and mutiny on the other. Or perhaps the biggest losers are everyone else.

Principles and profit

The eruption at OpenAI may have been bubbling under the surface for some time. OpenAI's unique corporate structure – its for-profit arm answered to a not-for-profit board – was devised to help the company navigate its stratospheric rise towards a reported $86 billion valuation while ensuring the safety and mission of the technology it was developing.

Ilya Sutskever, OpenAI’s chief scientist and reportedly a key figure in the decision to oust Altman, led the company’s safety-oriented AI alignment research, and the interim CEO announced by the board has spoken about slowing the pace of AI development.

This appears to have been in conflict with the views of Altman, himself a former venture capitalist, and with the aims of investors who had banked on rapidly integrating the company’s technology into their suite of products.

It would not be the first time, either: Anthropic, an AI company with its own generative AI tools, was born of former OpenAI employees reportedly concerned about the relentless pace of development and the pivot to profit-making – though Anthropic’s own commercial interests cannot be denied. Building AI systems is expensive.

Industry voices might reject this, pointing to multiple public commitments to safety and responsible development, including at the UK’s AI Safety Summit earlier this month. Altman himself has been vocal in calls for regulation.

But to outsiders, the weekend’s story could be seen as a fight between an idealistic governance model (designed to put mission before profit) and the realities of the tech sector – a fight in which the idealistic model appears to have lost.

Or perhaps it is simpler: personal rivalries, jeopardised stock sales, or ideological schism could each have contributed to the bombshell decision.

Either way, an AI company supposedly with a foot on both brake and accelerator might have just made its mind up to favour the accelerator.

No time to waste

Alarm bells are now ringing for the many people concerned about the risks of rapid AI development.

Self-proclaimed technology accelerationists have come out in robust defence of Altman and poured scorn on the OpenAI board. Their opponents point to the need for rapid regulation in the wake of the weekend’s events.

The risks of AI are real, today and tomorrow. In the breakneck race to lead on AI and roll it out as widely as possible, getting things wrong will be costly.

Even if fears of a human-hostile superintelligence are premature, the near-term risks are real. There are countless examples of AI-enabled tools making decisions with life-changing consequences for their subjects, from recruitment algorithms automatically rejecting certain demographics to self-driving car accidents.

Getting these tools right promises great things; getting them wrong will be catastrophic. This weekend’s events ask whether humanity is giving itself the time to get things right.

Or perhaps humanity won't get a say. The other thing the events show is that it is not humanity that gets to influence what might be the defining technology of the century. Rather, it is a tiny cadre of technology leaders, billion-dollar companies and venture capitalists who are in control, while regulation and public oversight lag behind.

OpenAI themselves have at times tried to shift that balance: using a unique governance model, calling for regulation, and even supporting research into democratising AI, including a three-month project here at Chatham House.

But as the dust begins to settle, it is clear that the gap between those driving AI development and everyone else is only getting wider.