Are We Close to an Intelligence Explosion?
Top AI company cofounder Jack Clark puts the odds of AIs training themselves by the end of 2028 at 60%. Plus: the UK's AI kill-switch amendment, and a new documentary getting people involved.
The people building AI increasingly think we’re close to a point when AIs can train their own successors, the start of an intelligence explosion. This week, we’ll go over what they're saying, a new legislative proposal in the UK for AI kill-switches, and a documentary getting people thinking about the issue.
If you find this article useful, we encourage you to share it with your friends! If you’re concerned about the threat posed by AI and want to do something about it, we also invite you to contact your lawmakers. We have tools that enable you to do this in as little as a minute.
Intelligence Explosion
Jack Clark, a cofounder of top AI company Anthropic, recently predicted that there’s a 60% chance of an AI fully training its successor by the end of 2028, warning that techniques to try to ensure AIs are safe today may break under recursive self-improvement.
Initiating such a process, known as an intelligence explosion, could quickly result in uncontrollable superintelligence — AIs vastly smarter than humans. In recent months and years, top AI scientists, leaders, and CEOs have been warning that the development of superintelligent AI could lead to human extinction.
Currently, AI companies like Anthropic and OpenAI, which are working to build superintelligence, are actively pursuing this particularly dangerous research avenue. Last year, OpenAI announced that it plans to build “a true automated AI researcher by March of 2028”, while Jesse Mu, a researcher at Anthropic, wrote on Twitter that they “want Claude n to build Claude n+1, so we can go home and knit sweaters”.
The idea behind an intelligence explosion follows from the two key inputs to AI capabilities advancement: algorithmic progress and compute scaling. Scaling the amount of computation used for AIs is one way to improve their capabilities; this relies on getting more and better AI chips. The other main method, algorithmic progress, happens when researchers find ways to develop AIs more efficiently. Historically, they’ve been able to do this at a rate equivalent to a 3X compute scaling every year. In other words, every year the amount of computation needed to train an AI of a particular capability level has dropped by a factor of about three.
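As a rough illustration of how that efficiency gain compounds, here is a toy calculation (the 1e25 FLOP starting figure is an arbitrary assumption for illustration, not a real training-run size):

```python
# Toy illustration: if algorithmic progress cuts the compute needed to reach
# a fixed capability level by 3x per year, the requirement shrinks quickly.
def compute_needed(initial_flops: float, years: int, factor: float = 3.0) -> float:
    """Compute (in FLOPs) needed to reach the same capability after `years`
    of algorithmic progress at `factor`x efficiency gain per year."""
    return initial_flops / (factor ** years)

# Starting from a hypothetical 1e25 FLOP training run:
for year in range(5):
    print(f"Year {year}: {compute_needed(1e25, year):.2e} FLOPs")
```

After four years at this rate, the same capability level would need roughly 81 times less compute, which is why automating the researchers who drive these gains could compress years of progress into months.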
It’s thought that a key bottleneck on algorithmic progress is the number and competence of AI researchers and engineers at AI companies, so automating coding and research could massively speed up AI R&D. Advances that would otherwise take years could happen in mere months.
In the AI 2027 scenario forecast, some of the best-researched thinking on how AI could take off, it takes less than a year to go from AI companies developing superhuman coders through to artificial superintelligence.
The reason AI companies are pursuing the ability to automate AI R&D is clear: it’s the shortest plausible path to superintelligent AI. However, it’s also thought to be exceptionally dangerous, for two reasons:
1. We don’t really understand modern AIs or have the ability to ensure that they’re controllable. This relates to the “black box” problem of AI. While we know how to reliably build more powerful AIs, the way they’re developed is via a simple learning algorithm adjusting hundreds of billions of numbers (neural weights) based on data it’s fed. These numbers form an AI that we can use, but we understand very little about what the numbers actually mean and how the AI works internally. We can’t really verify the goals, preferences, and behaviors the AIs learn, let alone set them.
To sum it up, nobody knows of a way to ensure that what likely emerges at the end of this process — artificial superintelligence — is safe or controllable.
2. The process of an intelligence explosion itself is fraught with danger. It could all happen so fast that humans have no practical way to even attempt to oversee it, and researchers could lose control of the process. The AI companies’ plan to solve all of this is to hope that, amid the chaos, they’ll be able to get AIs to solve the problems of safety and controllability. The UK AI Security Institute’s Chief Scientist, Geoffrey Irving, recently described this plan as flawed, saying we can’t have much confidence in it working. Ex-OpenAI researcher Daniel Kokotajlo has called it an obvious chicken-and-egg problem.
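The “black box” problem in point 1 can be sketched in miniature. Below is a toy, single-weight learning loop (hypothetical numbers, nothing like a real training setup): a simple update rule fits the data, but the loop never states a goal anywhere, which is the problem scaled up billions of times in real models:

```python
# Toy sketch of the learning process described above: a simple update rule
# nudges a numeric parameter to fit data. Real models adjust hundreds of
# billions of such numbers, and nothing in them states goals explicitly.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs and targets (y = 2x)

weight = 0.0          # a single "neural weight"
learning_rate = 0.05

for _ in range(200):
    for x, y in data:
        prediction = weight * x
        error = prediction - y
        weight -= learning_rate * error * x   # gradient step on squared error

print(round(weight, 3))  # prints 2.0 — the weight fits the data, but the
                         # number alone tells us nothing about "why"
```

Reading off what a trained network “wants” from its weights is, at scale, an unsolved problem, which is why verifying goals and preferences is so hard.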
Importantly, Clark isn’t a lone voice in his prediction. In terms of what AI insiders are anticipating, it’s actually quite unremarkable. In February, upon quitting Elon Musk’s AI company, cofounder Jimmy Ba wrote that “Recursive self improvement loops likely go live in the next 12 months”.
Most of the public and most lawmakers are completely in the dark about this, however, so we think this should serve as a wake-up call.
AI companies are well aware of the danger, both of building superintelligence and of automating AI R&D, but have shown themselves to be unwilling or unable to correct course. Jared Kaplan, Anthropic’s chief scientist and another cofounder of the company, told The Guardian a few months ago that the moment when AI companies hand over R&D to AIs would be the “ultimate risk”.
So, are we close to an intelligence explosion that rapidly leads to artificial superintelligence? Insiders seem to think so, but they don’t have a credible plan to control what they’re building.
There is only one known method to prevent the risk of extinction posed by superintelligence: prohibiting the development of the technology.
The Kill Switch Amendment
The UK’s Cyber Security and Resilience Bill is currently making its way through Parliament, and Alex Sobel MP, one of our campaign supporters, has introduced an amendment that would give the Secretary of State powers to shut down data centres or hosted AI systems in the event of an AI security emergency.
The amendment identifies potential causes of such an emergency, including adversarial use by state actors, autonomous cyberattacks, and superintelligent AI escaping human oversight.
This is really great to see! The amendment was tabled just before the announcement of Mythos — Anthropic’s recent AI, which they’ve said is too dangerous to release due to its advanced cyberhacking capabilities — showing exactly the foresight we need to tackle this issue.
The amendment is especially notable as it’s the UK’s first-ever legislative proposal on superintelligent AI, recognising it as the national security threat it is. ControlAI is proud to have worked with Alex on the amendment!
At ControlAI, our focus is on preventing the risk of human extinction posed by superintelligent AI. AI kill-switches won’t solve this problem alone. Fundamentally, if humanity is faced with an entity that we do not control and that is far more intelligent and powerful than we are, we should not expect things to go well.
In particular, if we let it get to that stage, one would expect superintelligent AI to be able to prevent any kill switch from being used effectively, whether by hacking to render it inoperable, by persuading or coercing people (via methods like blackmail) not to use it, or by some other means. We shouldn’t expect to be able to anticipate how something much smarter than humans would beat us, just as, playing chess against Magnus Carlsen, we couldn’t predict his moves even though we’d be quite sure we’d lose.
Nevertheless, we think this is a common-sense step forward, and it does help in a number of ways. Having this kill-switch provision would enable the government to intervene in scenarios where UK data centres are being used by threat actors to conduct cyberattacks. It could also help in cases where a rogue AI proliferates across UK data centres, and it could help in cases where superintelligence is being developed on British soil — which would pose a threat to the UK’s national security.
However, to really prevent the threat posed by superintelligence, we need international action. Governments should agree with one another to prohibit the development of superintelligence, both within their own countries and abroad.
The AI Doc: Or How I Became An Apocaloptimist
At ControlAI, a big focus of our efforts is informing the public about AI, superintelligence, and the danger, making it easy for you to make your voice heard. We think that awareness is the biggest bottleneck in getting action on this issue, so we’re always glad to flag other efforts pushing on this.
Inspired by the new documentary “The AI Doc: Or How I Became An Apocaloptimist”, more than 8,000 people from around the world have signed up with the team behind it to demand a seat at the table as our future is decided. They’re now offering personalised action plans focused on what you care about most, whether that’s family, work, health, or the risk of extinction posed by AI.
The documentary features ControlAI’s US Director Connor Leahy, and is now available to rent at home. We think it’s great to see the threat posed by superintelligence get this level of attention, as building common knowledge is the first step to addressing the problem.
We invite you to check out their site and join them at TheAIDocGetInvolved.com!
Take Action
If you’re concerned about the threat from AI, you should contact your representatives. You can find our contact tools here that let you write to them in as little as a minute: https://controlai.com/take-action
And if you have 5 minutes per week to spend on helping make a difference, we encourage you to sign up to our Microcommit project! Once per week we’ll send you a small number of easy tasks you can do to help.
We also have a Discord you can join if you want to connect with others working on helping keep humanity in control, and we always appreciate any shares or comments — it really helps!