Rushing into the Unknown
Zuckerberg’s new superintelligence project, AI regulation, and Fukuyama’s warning.

Welcome to the ControlAI newsletter! We have some more updates on AI for you this week. If you'd like to continue the conversation, join our Discord!
Zuckerberg’s Push to Superintelligence
With the delays to the public release of Meta’s Llama 4 “Behemoth” model and rumors of turmoil within Meta’s AI team, observers have been wondering what Zuckerberg’s next move would be.
It’s now being reported that Mark Zuckerberg is establishing a new AI lab focused on building superintelligence, and that he is personally trying to recruit top talent from other AI companies.
His recruiting process typically goes like this: a cold outreach via email or WhatsApp citing the researcher’s work history and requesting a 15-minute chat. Dozens of researchers at Google alone have received such messages.
As part of the new arrangement, Meta is making a $14 billion investment in Scale AI, an AI data-provision company, acquiring a 49% stake. Scale AI’s CEO Alexandr Wang is reportedly set to lead the new superintelligence lab.
Zuckerberg is just one of many entrants in a race between AI companies to build superintelligence: AI that would surpass the collective capabilities of all humans. Leaders in this race include OpenAI, Anthropic, and Google DeepMind. Just last week, OpenAI CEO Sam Altman published a blog post stating that OpenAI is, before anything else, a superintelligence research company.
This is a bad state to be in. Top AI scientists, hundreds of AI researchers, and even the CEOs of the leading AI companies themselves have warned that AI poses a risk of extinction to humanity.
Racing to superintelligence creates a terrible dynamic. AI researchers currently have no way to ensure that smarter-than-human AIs are safe or controllable, which bodes ill for humanity. A race in which rapid development is prioritized over the fundamental research needed to ensure systems are safe only amplifies this risk.
To avoid this threat, we need binding regulation on the most powerful AI systems, including a ban on superintelligence.
You can find our detailed policy plan here: https://www.narrowpath.co
State-Level Regulation
Those of you who have been following closely will remember the drive to pass state-level legislation in California that would have made AI companies liable if their technology caused a catastrophe, a bill ultimately vetoed by Governor Newsom.
More recently, Congress has been fighting over a provision in the US budget bill that would ban US states from regulating AI for 10 years. The bill passed the House, with some politicians not even noticing the controversial provision was in it, and has now reached the Senate.
The provision has received significant bipartisan pushback, including from over 260 state lawmakers who wrote to Congress, 40 state Attorneys General, and Senators Blackburn and Cantwell.
Senator Hawley argued in Politico that the provision would hurt consumers and the public as AI innovation accelerates without safeguards.
It’s uncertain whether the provision will become law. One possibility is that it is found to fall foul of the “Byrd Rule”, which allows Senators to have provisions of budget reconciliation bills removed if they don’t affect spending or revenues. Supporters of the provision are, however, working on ways to make the language more “budgetary”, for example by making states’ broadband equity funds conditional on them not regulating AI.
Banning AI regulation at the state level, when there is still no federal regulation, just doesn't make sense.
The RAISE Act
Some new AI legislation has just passed the New York State Legislature. The RAISE Act would require developers of the most powerful AI systems to write and implement Safety and Security Protocols and to publish redacted versions of them. Primarily focused on transparency and incident reporting, the bill also gives the New York Attorney General the ability to fine companies for violations ($10 million for a first violation, $30 million for subsequent ones), including where an AI developer’s technology causes “critical harm”, defined as an AI system causing at least 100 deaths or $1 billion in damages.
The bill also bans deploying a frontier model if it would create an unreasonable risk of critical harm, and requires annual safety reviews. Having passed the New York Assembly and Senate, the bill has now been sent to Governor Hochul’s desk for a decision on whether it will be signed into law.
Fukuyama’s Warning
Renowned political scientist Francis Fukuyama is now sounding the alarm on AI extinction risk. In a new article, he says that as he’s learned more about what the future of AI looks like, he has come to understand the “threat AI pose[s] to humanity as a whole”.
Fukuyama says he now thinks this threat is very real, and there is a clear pathway by which something disastrous could happen.
He identifies the development of agentic AIs as posing a particular problem: “as time goes on, more and more authority is likely to be granted to AI agents”.
Commenting on the speed of AI development, Fukuyama says “We are fast approaching AGI … These capabilities are not being deliberately programmed into today’s machines; rather, the machines are programming themselves”.
He also gives a stark warning on the potential for a US-China AI race to compound the danger:
Contemporary geopolitics also increases pressure for the development of AIs without adequate safeguards. The United States and China are today in a race to build their AI capabilities, and both are striving for AGI. Safety concerns will only slow down that technological march.
He concludes by warning of the potential for autonomous AIs to take many dangerous actions, including preventing us from turning them off, exfiltrating themselves to other machines, or secretly communicating with each other.
“The problem has to do with the incentives that are built into the global system we are in the process of creating, which are likely to cause us to override whatever safeguards we try to impose on ourselves.”
Fukuyama joins a chorus of leaders warning about the threat from AI. We’ve been keeping a list on our website of some of the most notable statements: https://controlai.com/quotes
Some more recent comments from public figures on AI extinction risk include a series of endorsements of Eliezer Yudkowsky and Nate Soares’s upcoming book, IF ANYONE BUILDS IT, EVERYONE DIES. Former chairman of the Federal Reserve Ben Bernanke writes that it is "A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity. Recommended."
OpenAI
The OpenAI Files
A new report, The OpenAI Files, has been published by the Midas Project and the Tech Oversight Project. It collects hundreds of OpenAI documents, statements from employees, and news articles detailing concerns about OpenAI’s governance practices, leadership, and organizational culture.
We thought the quote by former OpenAI researcher Nisan Stiennon was particularly interesting.
OpenAI Revenues
In other news on OpenAI, it’s being reported that OpenAI’s annual recurring revenue has increased from $5.5 billion at the end of last year to almost $10 billion.
This only underscores that AI is a big deal and that we should all be paying close attention.
Action
More than paying attention, we can act to inform our elected representatives about the problem and the pressing need to regulate the development of powerful AI systems.
We’re running a campaign for this. Our team has already secured backing from dozens of UK politicians, who have acknowledged the extinction threat posed by AI and called for binding regulation on the most powerful AI systems.
You can make a difference too! We have contact tools on our website that make contacting your elected representatives super quick and easy — it takes as little as 17 seconds!
If you live in the UK, you can find our tool to write to your MP here: https://controlai.com/take-action/uk
And if you live in the States, you can write to your senators here: https://controlai.com/take-action/usa
Thousands of citizens have already used our tools to do so!
If you wish to subscribe to our personal newsletters, you can do so here: