Preemption Prevented: AI Regulation Ban Struck Out
The twists and turns of the Big Beautiful Bill’s provision to ban states from regulating AI, and more.

Welcome to the ControlAI newsletter! This week we’ll be informing you on the biggest developments in AI in recent days, with an emphasis on the fight in Congress over whether to ban states from regulating AI for 10 years. If you'd like to continue the conversation, join our Discord!
The Senate Strikes Out AI Regulation Ban
In a dramatic twist of events, the provision in the Big Beautiful Bill (the US budget reconciliation bill) that would have banned US states from regulating AI for 10 years has been struck out. As of today, Congress has still not regulated AI, so restricting states from regulating it made no sense. This means that states recognizing the need for regulation to protect their citizens from the risks of powerful AI will be free to do so.
If you want to help get necessary regulation in place to prevent the grave risks of AI, you should call your representatives and let them know! Here you can find phone numbers for the offices of your Senators (US), House Representatives (US), and Members of Parliament (UK). If you do this, let us know in our feedback form!
We first reported on the provision banning states from regulating AI a couple of weeks ago. Here’s how it all went down:
At the time, the bill had just passed the US House of Representatives. Representative Marjorie Taylor Greene said House members didn’t get the full text until shortly before they had to vote on it, and that she missed the provision in her reading of the bill. She says that if she had known it was in there, she would have voted against the 1,000+ page bill.
The provision, sometimes known as “preemption”, but also confusingly referred to as the “AI moratorium”, has faced significant bipartisan opposition.
Over 260 state lawmakers and 40 state Attorneys General wrote to Congress to oppose it.
Having passed the US House, the bill was then introduced in the Senate. The possibility was raised that the provision might not pass what’s known as the Byrd Rule, which provides a procedure for Senators to have provisions removed from budget reconciliation bills if they don’t affect spending or revenues.
Senator Cruz found a way to change the provision to make it ‘budgetary’, by tying compliance to access to a $42 billion federal broadband funding programme, and it was then reported that the provision passed the Byrd Rule.
In a brief twist, the Senate parliamentarian asked the Commerce Committee to alter the language, and later reporting again confirmed that it was compliant with the Byrd Rule, after Cruz re-drafted the provision so that it would only ban states from regulating AI if they wanted access to a $500 million AI package.
The most recent language banned states from regulating AI models and systems if they wanted access to $500 million in AI infrastructure and deployment funds.
However, the parliamentarian voiced concerns about the provision when she met with Cruz and Sen. Maria Cantwell (D-Wash.), the top Democrat on the Senate Commerce Committee, on Wednesday night, Cantwell told reporters Thursday.
Democrats had argued the measure would impact $42 billion in broadband funding in violation of the Byrd Rule.
In light of this, it was difficult to predict whether the provision would ultimately make it into law, with Senators Blackburn (R), Hawley (R), and Cantwell (D) having spoken against the ban on states regulating AI.
Senator Cruz then made a deal with Senator Blackburn, where among other changes the period the ban would last for would be shortened from 10 to 5 years. The new version of the provision also sought to exempt laws addressing CSAM, children’s online safety, and individuals’ rights to their likeness.
Legal experts Mackenzie Arnold and Charlie Bullock said that the amendment seemed to fail to protect the laws its drafters intended to exempt.
A day later, Senator Blackburn announced that the deal was off, stating:
While I appreciate Chairman Cruz’s efforts to find acceptable language that allows states to protect their citizens from the abuses of AI, the current language is not acceptable to those who need these protections the most … Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can’t block states from making laws that protect their citizens.
Blackburn then co-sponsored Senator Cantwell’s amendment to remove the preemption provision from the bill, and put forward her own amendment, which Senators Cantwell, Collins (R), and Markey (D) co-sponsored.
In the course of the Senate “vote-a-rama”, a procedure in which senators can propose an unlimited number of amendments to a budget bill, Blackburn’s amendment came up for a vote.
Shortly before the vote, Senator Cruz spoke, asking senators to back Blackburn’s amendment to strike the provision from the bill.
This was followed by a brief speech by Senator Blackburn making the case for striking the provision, in which she argued that we should approach the issue of AI with more seriousness.
When the votes came in, 99 Senators had supported Blackburn’s amendment to remove the provision from the bill, with just one voting against.
While the preemption clause has been struck out by the Senate, the bill will now go back to the House, where it could be further amended, and there could be separate bills introduced in the future that attempt to ban states from regulating AI.
As Senator Blackburn said in her speech, we should approach the issue of AI with the seriousness that it deserves.
Simply banning US states from regulating AI, when there is still a lack of federal regulation on AI, doesn’t make any sense. So we’re glad to see that this provision was scuppered.
This issue is about as serious as it gets. Nobel Prize winners, hundreds of AI experts, and even the CEOs of the top AI companies have warned that AI poses a threat of extinction to humanity.
This risk comes from the development of artificial superintelligence, AI smarter than humans. That’s because nobody knows how to ensure that smarter-than-human AIs are safe or controllable. The state of research in this field — known as alignment — is lamentable.
AI companies are racing towards building superintelligence, and state this openly. Just last month, Sam Altman wrote that OpenAI is “before anything else … a superintelligence research company”. Leading experts estimate that superintelligence could arrive within the next 1 to 5 years.
In order to avoid this threat, we urgently need regulation on powerful AI systems. We need to prevent the development of superintelligence and monitor and restrict precursor technologies.
But to get this done, politicians must be informed about the risks. At ControlAI, our team regularly briefs politicians and finds that 80-85% are only somewhat familiar with AI.
That’s why it’s so important that we alert them to this problem.
You can make a difference here!
We have contact tools available on our website that make writing to your elected representatives super quick and easy; it takes as little as 17 seconds. Thousands of citizens have already used our tools to do this.
But there’s even more you can do!
You can literally just call up your representative’s office and tell them you’re concerned about the risks of AI and ask them to regulate it. Senators, Members of Congress, MPs, and others all have phone numbers for their offices. It’s a completely normal and expected part of the democratic process for citizens to call their representatives and tell them their concerns!
Here you can find phone numbers for your:
Senators (US): https://www.senate.gov/senators/senators-contact.htm
House Representative (US): https://www.house.gov/representatives/find-your-representative
Member of Parliament (UK): https://members.parliament.uk/members/Commons
If enough of us do this together, we can move AI regulation up the agenda and get the necessary regulations in place to keep humanity secure from AI.
So call your representatives! If you do this, we’d love to hear how it was received. We have a Google form here that you can quickly fill to let us know: https://forms.gle/UGDTAKbKQm4sdptv6
Five Levels of General AI Capabilities
Kylie Robison reports in Wired that an unreleased AGI paper might complicate OpenAI’s negotiations with Microsoft. Microsoft, OpenAI’s largest investor ($13 billion), is reported to be trying to get a clause removed from their contract which says that if OpenAI declares it has developed Artificial General Intelligence (AGI), Microsoft would not have rights to OpenAI’s future AIs.
Wired says that an internal research paper, “Five Levels of General AI Capabilities”, might complicate OpenAI’s ability to declare it has achieved AGI, and having that ability gives OpenAI leverage in the negotiations.
The article also reports that a source with knowledge of the negotiations says OpenAI is “fairly close” to achieving AGI.
We noticed that the outline of the “Five Levels of General AI Capabilities” appears to have been reported by Bloomberg last July.
300 Million Dollars
We recently reported on Mark Zuckerberg’s new superintelligence project and the aggressive hiring process he’s using to recruit top talent.
“His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit’s work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone.”
It’s now being reported that Zuckerberg is offering pay packages to potential new hires of up to $300 million over four years, with over $100 million in compensation in the first year. Meta is reported to have made at least 10 such offers.
“That’s about how much it would take for me to go work at Meta,” says one OpenAI staffer who spoke with WIRED on the condition of anonymity as they aren’t authorized to speak publicly about the company.
To put this into perspective, it’s estimated that it cost OpenAI around 80 to 100 million dollars to train GPT-4 back in 2022.
Wired writes that Meta’s CTO Andrew Bosworth attempted to play down the offers in a Q&A with employees:
“Look, you guys, the market's hot. It's not that hot. Okay? So it's just a lie,” he said. “We have a small number of leadership roles that we're hiring for, and those people do command a premium.” He added that the $100 million is not a sign-on bonus, but “all these different things” and noted OpenAI is countering the offers.
This comes after a leaked internal Slack memo by Sam Altman, where he wrote that he felt Meta is acting in a distasteful way, and that he assumes “things will get even crazier in the future.”
If you’re concerned about how crazy things might get, you should contact your representatives! You can find our contact tools here, which let you write to them in as little as 17 seconds. Or even better, you can call the office of your Senators, Member of Congress, or MP. If you call them, let us know here!
Thanks for reading our newsletter!
If you want to also subscribe to our personal newsletters, you can find them here: