Ctrl+Z: California’s Second Swing at Regulating AI
California tries again with SB 53, Europe resists pressure to pause the AI Act, and Joe Rogan says those claiming they can control superintelligence are gaslighting us.

Welcome to the ControlAI newsletter! This week we’re bringing you some more updates from the world of AI.
To continue the conversation, join our Discord — and if you’re concerned about the threat of AI and want to do something about it, we invite you to contact your lawmakers. We have tools that enable you to do this in as little as 17 seconds.
AI Regulation in California
Last week, we wrote about the removal from the Big Beautiful Budget Bill of a provision that would have banned states from regulating AI for 10 years. It's now being reported that legislators in California are working on a new bill, SB-53, that would place transparency requirements on the largest AI companies.
This comes after last year's failed push for regulation in California, where SB-1047, a bill that would have made AI companies liable for catastrophes caused by their models, passed both the California Senate and Assembly, only to be vetoed by Governor Newsom after intense lobbying by AI companies and VC firms.
Why does California matter? As the world's fourth-largest economy and the home of top AI companies OpenAI, Anthropic, Google, and xAI, California is uniquely well positioned to regulate AI companies.
In the aftermath of SB-1047, a California AI policy working group was established, led by Fei-Fei Li, an ally of Marc Andreessen, whose VC firm a16z lobbied against SB-1047. Perhaps surprisingly, this working group did make some recommendations to regulate AI, including recommendations on transparency, which SB-53 has taken up.
Given that Governor Newsom established this working group himself, he may be more inclined to sign the new bill rather than veto it.
The EU AI Act
Big Tech companies, including Alphabet/Google and Meta, have been lobbying the EU to delay rolling out the EU’s AI Act, claiming it will hurt Europe’s competitiveness.
Notably, the AI Act places some requirements on developers of the highest risk AI systems. The act requires developers of general-purpose AIs to keep up-to-date technical information and provide this to the EU’s AI Office and national authorities upon request.
Where the AIs are considered to pose systemic risk, the act also requires developers to perform standardized model evaluations, assess possible risks, report incidents, and ensure adequate cybersecurity protections.
The provisions on developers of general-purpose AI models are due to come into force on August 2nd, which is why AI companies have been focusing on this.
But the European Union has just confirmed there won’t be any pause on the implementation of the act, with Commission spokesperson Thomas Regnier telling a press conference:
I've seen, indeed, a lot of reporting, a lot of letters and a lot of things being said on the AI Act. Let me be as clear as possible, there is no stop the clock. There is no grace period. There is no pause.
Yampolskiy on Rogan
AI expert Professor Roman Yampolskiy appeared on Joe Rogan’s podcast, which got significant attention online. Yampolskiy highlighted the extinction threat posed to humanity by superintelligence, with Rogan agreeing with him on many of his points.
As a reminder, Nobel Prize winners, hundreds of top AI scientists, and even the CEOs of the leading AI companies have warned that AI is an extinction threat. We still don’t have any regulation that addresses this problem, something that is sorely needed.
In a clip of the podcast we posted on Twitter, Rogan, who has met many CEOs of the top AI companies, said he feels like he's being gaslit when people tell him they can control superintelligence: “I don’t believe them. I don’t believe that they believe it, because it just doesn’t make sense.”
On what to do about this threat, Yampolskiy advocated for an agreement between AI CEOs, the US, and China not to build superintelligence, for involving government, and for citizens to contact their politicians.
We have contact tools that make contacting your lawmakers super quick and easy. It takes as little as 17 seconds to write to them using our tools. Check them out here: https://controlai.com/take-action
More AI News
One story that got significant attention this week was that of an Xbox producer telling laid-off workers that they should use AI to deal with their emotions about being fired. This came just after Microsoft (Xbox’s parent company) confirmed it would be laying off 4% of its global workforce, amid $80 billion of investments in AI infrastructure.
Nvidia’s market cap briefly touched $4 trillion, making it the first company to reach that milestone. The valuation of the chip-design company, whose AI chips are used by top AI companies, reflects the tremendous amounts of investment pouring into the AI industry in anticipation of ever more powerful AI systems.
Meta has poached Apple’s head of AI models. We’ve previously reported on Meta’s aggressive hiring tactics in their pursuit of artificial superintelligence, offering compensation packages of up to $300 million (for a single hire!) over 4 years.
xAI’s Grok 4 model was launched today, alongside a $300 monthly subscription, SuperGrok Heavy. In recent days xAI has attracted significant negative attention after a period in which its Grok AI made a significant number of racist comments, dubbing itself ‘MechaHitler’. Some have made the point that if AI companies can’t even achieve aims like avoiding this kind of language with current-generation AIs, it strains credulity to expect that they will be able to control superintelligence, which they’re explicitly aiming to build.
Contact Your Lawmakers
If you’re concerned about the threat from AI, you should contact your representatives! You can find our contact tools, which let you write to them in as little as 17 seconds, here: https://controlai.com/take-action
Even better, you can call your Senator’s, Representative’s, or Member of Parliament’s office. If you call them, let us know here!
Thank you for reading our newsletter!