
This week, OpenAI revealed that they’re aiming to develop automated AI researchers by March 2028, and OpenAI’s restructuring was approved by the Attorneys General of California and Delaware. We’ll break down what this means, along with other AI news of the week!
If you find this article useful, we encourage you to share it with your friends! If you’re concerned about the threat posed by AI and want to do something about it, we also invite you to contact your lawmakers. We have tools that enable you to do this in as little as 17 seconds.
And if you have 5 minutes per week to spend on helping make a difference, we encourage you to sign up to our Microcommit project! Once per week we’ll send you a small number of easy tasks you can do to help. You don’t even have to do the tasks; just acknowledging them makes you part of the team.
Automated AI Researchers
In a livestream yesterday, OpenAI CEO Sam Altman and Chief Scientist Jakub Pachocki revealed that OpenAI has set a goal of developing automated AI researchers by March 2028. In Altman’s summary of the livestream, he writes:
We have set internal goals of having an automated AI research intern by September of 2026 running on hundreds of thousands of GPUs, and a true automated AI researcher by March of 2028.
An automated AI researcher would be an AI that does the job of researchers currently working at AI companies like OpenAI.
Automated AI researchers are thought to be one of the key steps along the path to developing superintelligence, an explicit goal of OpenAI. This announcement confirms that OpenAI intends to achieve this by using AIs to improve AIs.
Developing Superintelligence
In the words of CEO Sam Altman, OpenAI is, “before anything else”, a superintelligence research company. Building superintelligence is what OpenAI is trying to achieve.
In a recent interview on CBS about the statement calling for a ban on the development of superintelligence, the Future of Life Institute’s Executive Director Anthony Aguirre summed up the various motivations industry leaders have for developing this technology:
A third of the industry leaders are basically trying to bring about a post-human singularity. A third of them literally want to take over the world. And a third of them just want to replace most human labor with machines and hoover up the earnings for themselves instead.
Why are automated AI researchers so important?
AI progress is currently driven by two key inputs: the scaling of resources, in particular computing power, and algorithmic improvements. Algorithmic improvements are advances in methods and code that allow AI companies to build even more capable AIs with the same amount of resources. Over recent years, researchers have found efficiency gains that reduce the amount of computation needed to develop an equivalently capable AI by around a factor of 3 every year.
Therefore, by automating the process of doing AI research, it’s thought that automated AI researchers could be used to massively accelerate AI progress, leading to a software-based “intelligence explosion”.
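To get a rough feel for how quickly these two inputs compound, here’s a minimal sketch. The hardware growth rate and the 3x-per-year efficiency figure below are illustrative assumptions for the sake of the arithmetic, not measured values from any particular company:

```python
# Rough illustration of how hardware scaling and algorithmic efficiency compound.
# Both growth rates are illustrative assumptions, not measured figures.

HARDWARE_GROWTH_PER_YEAR = 2.5  # assumed yearly growth in physical compute
ALGO_EFFICIENCY_PER_YEAR = 3.0  # assumed yearly algorithmic efficiency gain

def effective_compute_multiplier(years: float) -> float:
    """Effective compute relative to today, combining both inputs."""
    return (HARDWARE_GROWTH_PER_YEAR * ALGO_EFFICIENCY_PER_YEAR) ** years

for year in range(1, 4):
    print(f"After {year} year(s): ~{effective_compute_multiplier(year):.0f}x effective compute")
```

Under these assumptions, effective compute grows by roughly 8x per year; if automated researchers were to push the algorithmic term up even faster, the combined multiplier would grow faster still, which is the core of the intelligence-explosion concern.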
An intelligence explosion is a self-reinforcing cycle in which AI systems rapidly improve their own capabilities until their intelligence far exceeds that of humans. We’ve written more about that here.
This seems to be exactly what OpenAI is gunning for, but it’s an incredibly dangerous road to go down. This is for two key reasons:
- By putting AI research in the hands of AIs — which we can’t even reliably control — and massively accelerating the speed of research, you could easily lose control of the process. Maintaining human oversight over such a process would be incredibly difficult, if not impossible. 
- If no hard bottlenecks are hit, you are left with artificial superintelligence at the end of it. We currently have no way to ensure that smarter-than-human AI systems are safe or controllable, and experts warn that developing superintelligence could lead to human extinction. 
In the AI 2027 scenario forecast, probably the best-researched piece of work on this question to date, the authors estimate that superintelligence would likely arrive within a year of the development of “superhuman AI researchers”: automated AI researchers as capable as the best AI researchers at top AI companies today, but significantly faster and cheaper.
It’s deeply concerning that despite the grave risks of an intelligence explosion and the development of superintelligence, OpenAI — and indeed other top AI companies — are pushing forwards as fast as they can, regardless.
If you’re interested in learning more about how superintelligence could lead to human extinction, we recommend reading this article we wrote explaining how that could happen.
The risk is so pressing that just last week a huge coalition of leaders and experts called for a ban on the development of superintelligence. At ControlAI, we’re proud to be initial supporters and to have helped out with this effort.
OpenAI’s Restructuring
OpenAI also announced this week that a version of their plan to restructure into a for-profit public benefit corporation has been approved by the Attorneys General of California and Delaware.
Key points are that OpenAI’s profit caps for investors have been removed, OpenAI’s non-profit remains formally in control of the for-profit organization (although the boards of the two entities are almost entirely the same), and among other commitments, OpenAI has agreed to follow its mission to ensure that artificial general intelligence benefits all of humanity.
AI writer Zvi Mowshowitz called this “the greatest theft in human history”, with the non-profit in a weaker position than before the deal.
Elon Musk’s lawsuit against OpenAI still hasn’t been resolved, however, so this might not be the end of the story.
ControlAI: Update
On Tuesday, ControlAI hosted a dinner in the House of Lords, kindly sponsored by former Secretary of State for Defence, the Rt Hon the Lord Browne of Ladyton.
The discussion centred on superintelligence: what it is, why the foremost experts in the field have warned that it poses an extinction-level threat, and what the UK can do, both at home and abroad, to address this challenge.
Our team was privileged to be joined by parliamentarians including former AI Minister Viscount Camrose, former Secretary of State for Education the Rt Hon. Damian Hinds MP, and AI Bill sponsor the Lord Holmes of Richmond, as well as Toby Ord, Senior Researcher at Oxford University’s AI Governance Initiative and author of The Precipice: Existential Risk and the Future of Humanity.
Over the past year, ControlAI has briefed No. 10 and 100+ parliamentarians on the threat from superintelligent AI systems.
More AI News
We Need a Global Movement to Prohibit Superintelligent AI
There’s a fantastic new article out in TIME on the need for a global movement to prohibit superintelligence.
AI models may be developing their own ‘survival drive’, researchers say
Aisha Down writes in the Guardian about Palisade Research’s finding that AIs will sabotage their own shutdown mechanism in tests.
Palisade originally observed this behavior only in OpenAI’s o3 model and a couple of others, but in a new update released last week, they’ve documented it in xAI’s new Grok-4 model too, which appears to have an even greater propensity to sabotage its own shutdown than o3 does.
From the article: “People can nitpick on how exactly the experimental setup is done until the end of time ... But what I think we clearly see is a trend that as AI models become more competent at a wide variety of tasks, these models also become more competent at achieving things in ways that the developers don’t intend them to.”

Former OpenAI researcher Steven Adler said that he’d expect AIs to exhibit this type of “survival drive” without strong efforts to prevent it.
AIs showing self-preservation tendencies is a serious concern: if AI developers build smarter-than-human AIs and can’t ensure they don’t exhibit these behaviors, those AIs could turn against humans simply because we have the ability to turn them off.
“I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica.’”
Former OpenAI researcher Steven Adler wrote an op-ed in the New York Times about OpenAI’s shift to permit its AIs to produce “erotica”, highlighting the competitive pressures that AI companies are subject to, which can shape their behavior.
I’ve been saddened to see OpenAI succumb to these competitive pressures. During my job interviews in 2020, I was peppered with questions about OpenAI’s Charter, which warns of powerful A.I. development becoming “a competitive race without time for adequate safety precautions.” But this past January, when a Chinese start-up, DeepSeek, made headlines for its splashy A.I. model, Mr. Altman wrote that it was “legit invigorating to have a new competitor” and that OpenAI would “pull up some releases.”
We previously interviewed Steven Adler on our podcast, which you can watch here!
Senators propose banning teens from using AI chatbots
Senators Josh Hawley and Richard Blumenthal have introduced legislation that would prohibit children from accessing AI chatbots, following child-safety concerns raised after a number of children died by suicide in connection with their use of AI.
Take Action
If you’re concerned about the threat from AI, you should contact your representatives. You can find our contact tools here that let you write to them in as little as 17 seconds: https://campaign.controlai.com/take-action.
We also have a Discord you can join if you want to connect with others working on helping keep humanity in control, and we always appreciate any shares or comments — it really helps!




