Would You Prevent Superintelligence?
DeepMind’s CEO says he’d support a pause if everyone else would. That seems very doubtful. Governments need to step in.
Welcome to the ControlAI newsletter! This week, we’re looking at Google DeepMind CEO Demis Hassabis’s recent comments that he’d support a pause on AI advancement if others agreed, and providing a brief digest of other AI news we thought you might find interesting.
Ending the Race to Superintelligence
At the World Economic Forum in Davos, Demis Hassabis made an interesting comment that we thought was worth discussing.
In an interview with Bloomberg’s Emily Chang, the CEO of Google DeepMind — one of the largest AI companies — was asked whether he’d support a “pause” on AI if every other company and country agreed.
Context: A pause has been widely discussed since 2023, when leaders in the field and notable figures such as Elon Musk backed an open letter calling for a pause of at least 6 months. More recently, last October, a call to ban the development of superintelligence was made by a similarly broad coalition of experts and leaders, including godfathers of AI, Nobel Prize winners, prominent media figures, and national security officials, citing the risk of extinction posed by smarter-than-human AI.
Note that the concept of a pause is about restricting development of only the most powerful AI systems, not the narrow and specialized AI models that can be used for many positive applications.
Hassabis, who along with other experts and industry leaders has previously stated that AI poses a risk of human extinction, replied that under those circumstances he would support such a halt, and that he has always wished countries would collaborate in the final stages to develop AI in a proper, rigorous, and scientific way, for example via a “CERN” for AI.
It’s good to see Hassabis publicly support the principle of halting development towards superintelligence.
This alone, however, means little. The circumstances under which he says he’d support a pause — where every other relevant actor agrees — will not arise automatically. In fact, Hassabis’s company DeepMind is racing along with Musk’s xAI, ChatGPT-maker OpenAI, Anthropic, and others to be the first (and potentially last) company to develop superintelligent AI.
In doing so, participants in this race each believe they can achieve some variation of the following: obtaining immense power and wealth, fulfilling certain utopian ideals, preventing a “worse guy” from getting there first, or satisfying a level of personal ambition that would make Alexander the Great blush.
Besides the danger intrinsic to developing superintelligence — AI developers do not know how to ensure that smarter-than-human AI is safe or controllable, and the technology could lead to human extinction (we wrote about how here) — the dynamic of a race between companies, or indeed countries, only compounds the risk. To eke out a lead over their competitors, participants are incentivized to prioritize rapid development over taking the time to ensure that what they’re building is safe.
Despite comments like those of Hassabis, or past comments of Musk’s, there is no sign of a deliberate slowdown on the horizon. Quite the opposite: barely a month goes by without an ever-larger, eye-watering amount of spending being announced on new AI datacenters — the facilities used to train and run AIs.
And so the question of whether companies will all agree to halt development contains not a solution but a problem. They won’t all agree. Fortunately, there is a solution: government. Governments exist to solve exactly this kind of coordination problem and to set red lines.
We can halt, or rather prevent, the development of superintelligence by having governments legislate to prohibit it. This should be done both by individual countries and at the international level.
This has been done before, and we can do it again. In the ‘80s, scientists noticed a hole developing in Earth’s ozone layer. Without the ozone layer, we would have faced catastrophe, possibly including the sterilization of Earth’s surface. The hole was found to be caused by the use of chlorofluorocarbons (CFCs), and later that decade the world came together and agreed the Montreal Protocol, which phased out the use of these chemicals. The ozone layer is still healing today.
That’s the kind of action we’re focused on making happen at ControlAI to prevent the threat from superintelligence. So far, over 110 UK politicians have backed our campaign for binding regulation of the most powerful AI systems, acknowledging the risk of extinction posed by superintelligence. This has been achieved both through the hard work of our team in London, who’ve now briefed over 150 politicians, and thanks to you, our readers: many MPs who’ve signed up to our campaign did so after receiving messages from the public sent via our contact tools.
If you want to help out, you can find our contact tools here; they enable you to send a message to your MP, or to your senators or representative if you live in the States, in mere seconds!
https://campaign.controlai.com/take-action
Our team is also working on directly informing American lawmakers about the danger of superintelligence. Recently, Mathias and Max briefed a series of US lawmakers in DC about the problem.
And it doesn’t end in DC. Just this week, our founder and CEO Andrea (also a coauthor of this newsletter!) testified before a Canadian House of Commons committee about the risk and what we can do about it. Andrea particularly emphasized how middle powers such as Canada can make a significant difference, as they have done before on global issues such as nuclear proliferation. You can watch Andrea’s opening testimony here:
So we can prevent this threat, and we’re on a path that could lead us there. But this is urgent, and we might not have much time. We can’t rely on AI companies all voluntarily pausing; that won’t happen.
Many AI experts believe that artificial superintelligence could arrive within just the next 5 years. In fact, Demis Hassabis said in the same interview we referenced earlier that he believes there’s a 50% chance that AI that exhibits all the cognitive capabilities of humans will be developed by 2030. That’s less than 4 years from now. Anthropic’s CEO Dario Amodei, also in Davos, had an even more aggressive timeline, and simply said “I think this moment will come in the 2020s”. Amodei also said he was worried about the risk of AI causing human extinction.
We need to act now, and prohibit the development of superintelligence.
AI News Digest
Here are some other news items we thought you might find interesting.
South Korea’s AI Basic Act
South Korea’s AI Basic Act came into force last Thursday, making South Korea the first country to impose safety requirements on frontier AI systems through national law — though similar rules have been passed in the EU, California, and New York.
Among other provisions, the new law establishes an AI safety research institute and requires AI companies to implement measures to identify, assess, and mitigate risks throughout the entire AI lifecycle whenever the amount of computation used to train an AI exceeds a set threshold. It also requires them to build risk management systems to monitor and respond to AI-related safety incidents.
The Doomsday Clock
The Bulletin of the Atomic Scientists, founded by Manhattan Project scientists, publishes an annual estimate of how close it believes the world is to global disaster. It has now set its “Doomsday Clock” to 85 seconds to midnight, where midnight represents catastrophe. This is the closest to midnight the clock has ever been since the Bulletin began publishing it in 1947. In its statement, the Bulletin cited “the potential threat of artificial intelligence” as contributing to the risk.
Geoffrey Hinton
One of the godfathers of AI, Geoffrey Hinton, has said he is very sad that the technology he spent his life’s work developing is now becoming extremely dangerous, and that people aren’t taking the risks seriously enough.
Hinton has been consistent in warning that superintelligent AI could lead to human extinction, and recently, along with thousands of other experts and leaders, backed an initiative to ban the development of the technology.
Take Action
If you find this article useful, we encourage you to share it with your friends! If you’re concerned about the threat posed by AI and want to do something about it, we also invite you to contact your lawmakers. We have tools that enable you to do this in as little as 17 seconds.
And if you have 5 minutes per week to spend on helping make a difference, we encourage you to sign up to our Microcommit project! Once per week, we’ll send you a small number of easy tasks you can do to help. You don’t even have to do the tasks; just acknowledging them makes you part of the team.
We also have a Discord you can join if you want to connect with others working on helping keep humanity in control.



