The Greatest Threat
Sam Altman’s latest warning that superintelligence could cause human extinction.

Sam Altman has restated his warning that the development of superintelligence is the greatest threat to the existence of humanity; AI can now design functional viruses; plus other developments in AI.
If you find this article useful, we encourage you to share it with your friends! If you’re concerned about the threat posed by AI and want to do something about it, we also invite you to contact your lawmakers. We have tools that enable you to do this in as little as 17 seconds.
And if you have 5 minutes per week to spend on helping make a difference, we encourage you to sign up to our Microcommit project! Once per week we’ll send you a small number of easy tasks you can do to help. You don’t even have to do the tasks, just acknowledging them makes you part of the team.
Superintelligence
OpenAI’s CEO Sam Altman has just issued his latest warning about the threat from superintelligence: AI vastly smarter than humans, a technology his company is actively working to build.
In an interview last week with Mathias Döpfner, Sam Altman reiterated his 2015 comment that the development of superhuman machine intelligence is the greatest threat to the existence of mankind.
This isn’t the first time he’s restated his belief that powerful AI could have disastrous consequences, but it’s particularly notable because, since mid-2023, he has consistently sought to downplay the risk when asked about it.
Journalist and author Garrison Lovely, who’s writing a book on the race to build machine superintelligence, commented that he will now have to change his book to account for this.
Altman has probably downplayed the risk over the last couple of years to avoid adding pressure for regulation of powerful AI; OpenAI has lobbied against AI regulation in California and the European Union. Such regulation is sorely needed if we want to avoid this threat.
It’s unclear why he seems to have changed his tune now, but whatever the reasons, the risk he highlights is one that policymakers and the public should be acutely aware of.
Despite these warnings, the development of artificial superintelligence is his company’s top focus, with Altman declaring in a recent blog post that OpenAI is “before anything else” a superintelligence research company.
He’s not alone in his belief that superintelligence could cause human extinction; many other industry leaders and experts have publicly said the same. Just a few weeks ago, Anthropic’s CEO Dario Amodei said that he thought there was a “25% chance that things go really, really badly” with AI.
Sam Altman made some other interesting comments in the interview, outlining the three main ways he thinks the technology he’s building could go “super wrong”.
1. AI systems do exactly what their developers tell them to do, but people misuse them horribly, for example, using them to create a weapon or hack into nuclear weapons systems.
We’d point out that nobody even knows how to make AI systems vastly smarter than humans actually do what we want. It’s probably the biggest and most pressing unsolved problem in the field. That should offer no relief, because it means we have no way to ensure that superintelligence doesn’t lead to catastrophically bad outcomes, like human extinction.
2. His second category of risk is where the AI develops agency and doesn’t want to be shut down: it tries to accomplish a goal and realizes that humans could prevent it from accomplishing that goal.
This is the classic “alignment failure” threat model, where a superintelligence could decide to exterminate humans to make sure that we don’t get in the way of its goals. Extermination isn’t even required: the AI could simply transform the world for its own ends, for example covering the Earth in solar panels for electricity or warming up the atmosphere, and, like ants on a human building site, we could find ourselves in unsurvivable conditions without any actual ill intent on the part of the AI.
We wrote about one way this could happen here:
3. His third category is an interesting one, where billions of people rely on ChatGPT for increasingly important decisions. In this scenario, the AIs perform as intended in each individual use, but systemic effects end up causing catastrophic outcomes.
“You’re like if I wanna be competitive, if I wanna succeed in the world, if I wanna live my best life, I kind of have to follow the advice of the model and so does everybody else. So do the billions of other people talking to them all, so now we’re all just doing what an AI system tells us to do.”
He says we should “absolutely” take the risk of superintelligence causing human extinction seriously.
What’s his solution?
Altman says that we should push for global governance of AI to prevent its worst risks, referencing the IAEA and prevention of nuclear risk as an example.
We think an IAEA for AI would be a very good thing to have, depending on the details. We propose exactly this in A Narrow Path, our policy plan for humanity to survive AI and flourish, alongside the establishment of other international institutions and red lines to prevent the development of superintelligence.
To prevent the extinction threat of superintelligence, there is a simple thing we can do: prohibit its development internationally. We’re running a campaign on this right now!
It would be great if Sam Altman communicated to OpenAI’s lobbyists that we need global regulation of AI, but we won’t hold our breath. The AI industry has been increasing its efforts to avoid regulation, establishing new super PACs funded with over 100 million dollars to oppose rules being set on AI. OpenAI’s president and co-founder, Greg Brockman, has funded one of these super PACs.
Whether we get the regulations needed to prevent the development of AI that could cause human extinction ultimately shouldn’t be up to the AI companies. We all have a stake here. At ControlAI, we believe it’s crucial that the public and policymakers are informed about this risk, which is why we’ve launched our new campaign to do just that.
With our new campaign to ban superintelligence, we’re also providing the public civic engagement tools that enable you to make a difference on this in seconds. For example, we have tools that enable you to write and send a message to your elected representatives in as little as 17 seconds!
What people should actually do once they’re aware of the risk has been a perennial question in AI safety, so we’re trying to solve that! Another project we’ve started to help with this is Microcommit. Sign up and we’ll send you a small number of easy tasks that take just 5 minutes per week.
To prevent this threat, which we agree is the greatest threat humanity faces, we encourage you to get involved!
AI Biological Risk
AI is continuing to make rapid advances in the domain of biology. A new paper describes the use of an AI system called Evo, a language model trained on genomic data, to create new viruses.
The researchers used two versions of Evo to generate whole-genome viral sequences in order to produce bacteriophages, a kind of virus that attacks bacteria and can be used for medical therapies. They then synthesized the viruses, tested them experimentally, and found that 16 of them were viable bacteriophages with “substantial evolutionary novelty”. A cocktail of the AI-generated phages rapidly overcame phage resistance in three strains of E. coli.
This kind of technology has obvious positive use cases (bacteriophages are often used in medical treatments), but it also clearly demonstrates the potential for AI to design novel pathogens that kill humans, which bad actors, or a misaligned AI, could use to cause untold damage to humanity.
There are a bunch of companies that will synthesize genomes on request over the internet. They do have some screening procedures, but AI might open up ways for threat actors to bypass these measures. Just last week, researchers found a “striking vulnerability” in such screening software: they were able to get past the checks by using AI to reformulate the sequences of known hazardous proteins while preserving their dangerous functionality.
The screening software was patched by all but one of the companies selling the nucleic acids, but even then about 3% of the proteins most likely to retain their harmful functionality passed the screening.
The potential for AI to be used to develop bioweapons is something that has been an ongoing topic of discussion. OpenAI’s most powerful AI systems have been classified by OpenAI as “High” on biological and chemical capabilities under their Preparedness Framework, meaning OpenAI believes they could be used to meaningfully assist a novice in creating a known biological threat.
Anthropic’s most powerful AIs have received a similar classification, called ASL-3, which they summarize as referring to AIs that “substantially increase the risk of catastrophic misuse compared to non-AI baselines (e.g. search engines or textbooks) OR that show low-level autonomous capabilities.”
More AI News
Extinctionists
David Price wrote an interesting article in the Wall Street Journal about those in Silicon Valley who cheer on the extinction of humanity by AI.
Sora 2
OpenAI has launched Sora 2, a new AI that generates extremely realistic video and audio, together with a new reels-based video social media platform.
Computer Use
Google’s Gemini AI can now use a computer like a human, surfing the web, clicking buttons, and so on.
Malicious Use
OpenAI has published a report on ways their AIs are being maliciously used today, detailing use across hacking operations, scams, and covert influence campaigns.
Help Prevent The Threat!
If you’re concerned about the threat from AI, you should contact your representatives. You can find our contact tools here that let you write to them in as little as 17 seconds: https://campaign.controlai.com/take-action.
If you have 5 minutes per week to spend on helping make a difference, we encourage you to sign up to our Microcommit project! Once per week we’ll send you a small number of easy tasks you can do to help. You don’t even have to do the tasks, just acknowledging them makes you part of the team.
We also have a Discord you can join if you want to connect with others working on helping keep humanity in control, and we always appreciate any shares or comments — it really helps!