ControlAI Weekly Roundup #5: US-China Detente or AGI Suicide Race?
Biden and Xi agree AI shouldn’t control nuclear weapons, a US government commission recommends a race to AGI, and Yoshua Bengio writes about advances in the ability of AI to reason.
Welcome to the ControlAI Weekly Roundup! Each week we provide you with the most important AI safety news, and regular updates on what we’re up to as we work to make sure humanity stays in control of AI.
Join our Discord to continue the conversation.
What we’re reading
Presidents Biden and Xi agree that humans, not AI, should control nuclear arms
Source: Reuters
US President Joe Biden and Chinese President Xi Jinping have agreed that AI should not make decisions over the use of nuclear weapons.
The White House said in a statement:
The two leaders affirmed the need to maintain human control over the decision to use nuclear weapons … The two leaders also stressed the need to consider carefully the potential risks and develop AI technology in the military field in a prudent and responsible manner.
AI researcher Eliezer Yudkowsky said the following in reaction to the news:
China is perfectly capable of seeing our common interest in not going extinct. The claim otherwise is truth-uncaring bullshit by AI executives trying to avoid regulation.
China Hawks are Manufacturing an AI Arms Race
Source: Garrison Lovely
Garrison Lovely writes on his Substack about the manufacturing of an AI arms race.
On Tuesday the US-China Economic and Security Review Commission published a report, with its top recommendation:
Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability. AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would surpass the sharpest human minds at every task.
Lovely highlights an observation on Twitter from ControlAI’s Dave Kasten that the Commission chose to call it a Manhattan Project, and not an Apollo Project.
Lovely points out that one of the USCC Commissioners, Jacob Helberg, told Reuters that “China is racing towards AGI … It’s critical that we take them extremely seriously,” but writes that, despite the report filling 793 pages, it contains no evidence to support Helberg’s claim.
AGI Manhattan Project Proposal is Scientific Fraud
Source: The Future of Life Institute
“An AGI race is a suicide race.”
Max Tegmark has written an article on the proposed AGI Manhattan Project. In it, he observes that the world’s top AI experts agree that we have no way to control such an AI system, and writes that the US-China Economic and Security Review Commission’s report commits scientific fraud by suggesting that AGI would almost certainly be controllable.
Tegmark characterizes the suggested race against China to AGI as a “hopium war – fueled by the delusional hope that it can be controlled.”
Tegmark also suggests that AGI advocates are disingenuously dangling benefits of AI such as poverty reduction, writing that the report reveals a deeper motivation: the pursuit of power by its creators.
AI can learn to think before it speaks
Source: Financial Times
AI godfather Yoshua Bengio writes in the Financial Times about advances in the ability of AI to perform reasoning.
Bengio argues that substantial advances in the ability of AI systems to perform reasoning are being realized, observing that while OpenAI’s GPT-4o scored only 13% on a 2024 qualifying exam for the US Mathematical Olympiad, its latest o1 model scores 83%, placing it among the top 500 students in the country.
Bengio points out that if these efforts are successful, we may be faced with major risks: “We don’t yet know how to align and control AI reliably”.
He writes:
It is also concerning that the ability of o1 in helping to create biological weapons has crossed OpenAI’s own risk threshold from low to medium. This is the highest acceptable level according to the company (which may have an interest in keeping concerns low).
Bengio ends with a warning: “Advances in reasoning abilities make it all the more urgent to regulate AI models in order to protect the public.”
Anthropic CEO Says Mandatory Safety Tests Needed for AI Models
Source: Bloomberg
Anthropic CEO Dario Amodei, speaking at an AI safety summit in San Francisco, has called for AI companies to be subject to mandatory testing requirements to ensure their technologies are safe:
There’s nothing to really verify or ensure the companies are really following those plans in letter [or] spirit. They just said they will … I think just public attention and the fact that employees care has created some pressure, but I do ultimately think it won’t be enough.
What we’re watching
We’ve published a clip of Connor Leahy speaking about AI risks on BBC News this week [Twitter].
A segment of a livestream Elon Musk did a few months ago, in which he said he expected AGI by 2026 at the latest, resurfaced on Twitter. You can find our clipping of it here [Twitter].
We’d also like to highlight our recent mashup of statements by President-elect Trump on the dangers of AI [Twitter].
What we’re working on
This week our policy team has been meeting with more parliamentarians in the UK’s House of Commons and House of Lords.
We released a new patch (V1.0.3) for the Compendium, fixing a few minor issues. If you’d like to read the Compendium (and also give feedback) you can find it here.
We also launched our Bluesky account which you can check out here.
If you want to continue the conversation, join our Discord where we also post a daily version of the Roundup.
See you next week!