ControlAI Weekly Roundup #3
UK pledges to legislate against AI risks, what a Trump victory means for tech, and the risks of a shrinking gap between open source and closed source AI
Welcome to the ControlAI Weekly Roundup! Each week we provide you with the most important AI safety news, and regular updates on what we’re up to as we work to make sure humanity stays in control of AI.
Join our Discord to continue the conversation.
What we’re reading
UK will legislate against AI risks in next year, pledges Kyle
Source: Financial Times
Peter Kyle, the UK’s Secretary of State for Science, Innovation, and Technology, pledged at the Future of AI summit on Wednesday that the UK’s upcoming AI bill will focus on turning AI companies’ voluntary commitments to test their models into legally binding rules.
The FT writes that the AI bill will focus exclusively on frontier AI models, the most advanced AI systems, which are made by a small number of top AI companies.
Kyle said that “citizens need to know that we are mitigating the potential risks”.
What a Trump Victory Means for Tech
Source: New York Times
Kevin Roose observes in the New York Times that the US presidential campaign had very little discussion of artificial intelligence.
Roose points out that Elon Musk, who strongly backed president-elect Trump in the campaign, could be a wild card.
On the one hand, Musk runs a frontier AI company, xAI, that might benefit from light regulation; on the other, he has been a prominent public voice on the extinction risk posed by AI, and he supported California’s AI bill SB-1047, which would have made top AI companies liable for damages if they caused a catastrophe.
The Gap Between Open and Closed AI Models Might Be Shrinking. Here’s Why That Matters
Source: TIME Magazine
Tharin Pillay writes in TIME about the risks created by open sourcing powerful AI models, including the production of child sexual abuse material and use by rival states.
Pillay also highlights that while Meta describes its models as open source, they do not meet the Open Source Initiative’s new definition, which counts the training data and the code used to train a model as part of open source.
The article quotes a recent Epoch AI report, where the authors write that “The best open model today is on par with closed models in performance, but with a lag of about one year”.
Pillay points out that “If Meta’s next generation AI, Llama 4, is released as an open model, as it is widely expected to be, this gap could shrink even further”, and that this would pose a unique regulatory challenge:
Because of the lack of centralized control, open models present distinct governance challenges—particularly in relation to the most extreme risks that future AI systems could pose, such as empowering bioterrorists or enhancing cyberattacks. How policymakers should respond depends on whether the capabilities gap between open and closed models is shrinking or widening. “If the gap keeps getting wider, then when we talk about frontier AI safety, we don't have to worry so much about open ecosystems, because anything we see is going to be happening with closed models first, and those are easier to regulate,” says Seger. “However, if that gap is going to get narrower, then we need to think a lot harder about if and how and when to regulate open model development, which is an entire other can of worms, because there's no central, regulatable entity.”
The rise of AI: When will Congress regulate it?
Source: Fox News
“It is said that predicting the future isn’t magic. It’s really just artificial intelligence.”
Writing in Fox News, Chad Pergram asks when Congress might pass a bill to regulate AI before it spirals out of control, and provides an overview of current thinking on Capitol Hill regarding AI regulation.
What we’re watching
In light of the US presidential election, we thought it’d be helpful to highlight our mashup of clips of president-elect Donald Trump and Kamala Harris speaking about AI risks:
You can find the clip here (Twitter).
What we’re working on
This week we attended a parliamentary roundtable in London to discuss AI regulation and the upcoming UK AI Bill. Our policy team has been meeting with relevant stakeholders to discuss a range of topics, including whistleblowing processes, AI extinction risk, and potential synergies with our work.
In case you missed it, last week our director co-authored The Compendium, which can be paired with A Narrow Path to provide a comprehensive guide to AI risk.
If you want to get in touch with us, you can do so at hello@controlai.com or join our community on Discord.
See you next week!