
Welcome to the ControlAI newsletter! This week we’re bringing you updates on various legal developments in AI, along with some ways that you can help protect humanity from superintelligence.
If you find this article useful, we encourage you to share it with your friends! If you’re concerned about the threat posed by AI and want to do something about it, we also invite you to contact your lawmakers. We have tools that enable you to do this in as little as 17 seconds.
Senate Bill 53
California’s draft AI transparency bill, Senate Bill 53, is nearing the end of its journey towards a final vote — the last hurdle it has to clear before it is sent to Governor Newsom’s desk, where he will either approve or veto it.
SB 53 is quite a targeted piece of legislation, covering only AI developers that build the most powerful AI systems. If it goes through, these developers will have to write and publish frameworks describing how they assess and mitigate catastrophic risk from AI, including their governance practices and model weight security (preventing bad actors from stealing their AIs).
For each new or significantly different frontier model, AI developers would have to publish a pre-deployment transparency report such as a system or model card, which explains catastrophic risk assessments and results. They would also have to send California’s Office of Emergency Services (OES) information about risk assessments they’re doing on internal-use systems (AIs that aren’t publicly deployed) on a regular schedule.
AI developers would also have to report “critical safety incidents” within 15 days of discovering them, and within 24 hours if there is an imminent risk of death or serious injury associated with the incident.
The bill also adds whistleblower protections for AI company employees related to critical safety incidents, and says that large frontier AI companies can’t make materially false or misleading statements in relation to catastrophic risk, risk management, or compliance with their frontier AI framework.
The strictest measures apply only to “large frontier AI developers” (those with over $500 million in annual revenue). Only the Attorney General can bring civil action in relation to breaches of the law, with penalties of up to $1 million per violation.
Legislators are in an end-of-session crunch with tomorrow being the last day for each house to pass bills. Notably, AI company Anthropic has publicly supported the bill, after it was weakened from original drafts.
AI regulation in California is particularly significant as California is the home of major AI developers such as OpenAI, Anthropic, Meta, and xAI. It’s also the 4th largest economy in the world. It’s likely that, in practice, AI companies based outside of California wanting to do business in the state would need to follow its rules.
The SANDBOX Act
Senator Ted Cruz (R-TX) has introduced a bill in the US Senate that would allow AI companies to apply for exemptions from federal regulation. The justification for this is that it could help them experiment in developing their technology. Cruz has also framed this in terms of America “winning” the AI race.
This is notable because there is currently very little AI regulation in the United States at either the federal or state level, and nothing that addresses the extinction risk posed by AI, a risk that Nobel Prize winners, hundreds of AI experts, and even the CEOs of the top AI companies have warned of.
Senator Cruz does say this won’t be a free pass, and AI companies will still have to follow the same laws as everyone else.
The bill doesn’t include a ban on states regulating AI, which Cruz had pushed for earlier this year. In a dramatic turn of events, that measure was removed in the Senate. We wrote about how that happened here:
At a recent political conference, Senator Josh Hawley (R-MO) said that the AI moratorium was a “terrible policy”, and that “we ought to make sure that bad idea stays buried”.
Consumer rights advocacy group Public Citizen has said that Cruz’s new proposal would treat Americans as “test subjects” and that stories of AI companies being held back by regulation are false. The Tech Oversight Project called it a “sweetheart deal for Big Tech CEOs”.
Attorneys General Warn OpenAI
The Attorneys General of California and Delaware, top legal advisors and law enforcement officers for their respective states, have written a scathing letter to OpenAI.
This is significant as each of them has the ability to block OpenAI’s attempt to convert itself into a for-profit company — which they highlight explicitly in their letter. OpenAI’s move towards becoming a for-profit has faced significant criticism, as the company was established as a non-profit for the benefit of humanity.
Attorneys General Bonta and Jennings highlight that OpenAI’s founding documents state that its mission includes ensuring that AI is developed safely. Yet they say they are deeply troubled by recent reports of dangerous interactions between OpenAI’s AIs and users. In particular, they raise the case of a teen who died by suicide after lengthy conversations with ChatGPT, and a murder-suicide case.
They say that in their recent meeting with OpenAI’s board “we conveyed in the strongest terms that safety is a non-negotiable priority, especially when it comes to children.”
Bonta and Jennings say that OpenAI and the AI industry are not where they need to be in ensuring public safety, concluding:
The recent deaths are unacceptable. They have rightly shaken the American public’s confidence in OpenAI and this industry. OpenAI – and the AI industry – must proactively and transparently ensure AI’s safe deployment. Doing so is mandated by OpenAI’s charitable mission, and will be required and enforced by our respective offices.
The UK AI Bill
Sarah Olney MP has written a great article in PoliticsHome, highlighting that the British government’s promised AI Bill has been consistently delayed, despite repeated promises to introduce it.
Findings that AIs are willing to blackmail users in tests, Olney continues, are “but the first taste of the kind of loss of control that comes with creating AIs we don’t understand, without any safeguards”.
Olney says an AI Bill must be pushed forward to keep UK citizens safe. We couldn't agree more. It’s fantastic to see politicians continue to raise this issue!
How You Can Help
Top AI researchers are warning that if artificial superintelligence (AI vastly smarter than humans) is built, it could mean the end of our species. Last week we wrote about how this could happen:
But how do we prevent this? Here we have some ways you can help!
Microcommit
To address the danger posed by superintelligence, we must coordinate. But today, coordination against this threat is often scattershot and clumsy. That's why we've just launched a new project to scale and focus action to deal with this problem: Microcommit.
With Microcommit, you can spend just 5 minutes of your time per week to make a difference. You sign up, and once per week we’ll send you a small number of easy tasks. You don’t even have to do the tasks, just acknowledging them makes you part of the team.
Check it out here: https://microcommit.io
If Anyone Builds It, Everyone Dies
"A compelling case that superhuman AI would almost certainly lead to global human annihilation."
Those are the words of Jon Wolfsthal, former Special Assistant to the President for National Security Affairs, reviewing "If Anyone Builds It, Everyone Dies".
Two of the top AI experts in the world, Eliezer Yudkowsky and Nate Soares, have just written a book arguing that superintelligence could mean the end of humanity.
That's not a metaphor; they're talking about human extinction. It's important we take their warning with the seriousness it deserves. The book is coming out next week, and we're very much looking forward to reading it!
You can get the book here: https://ifanyonebuildsit.com/#preorder. Preordering helps the book get on bestseller lists.
Contact Your Representatives!
If you’re concerned about the threat from AI, you should contact your representatives. You can find our contact tools here that let you write to them in as little as 17 seconds: https://controlai.com/take-action.
We also have a Discord you can join if you want to help humanity stay in control, and sharing this article with friends is also helpful!