“Deeply Disturbing”
AI companies scored on safety, datacenters the size of Manhattan, Grok, and more.

Welcome to the ControlAI newsletter! It’s summer, which in many fields means there isn’t much news to report. That hasn’t been the case with AI, so this week we’re bringing you more updates on what’s been going on.
To continue the conversation, join our Discord. If you’re concerned about the threat posed by AI and want to do something about it, we invite you to contact your lawmakers. We have tools that enable you to do this in as little as 17 seconds.
The AI Safety Index
The Future of Life Institute (FLI) has published the latest version of its AI Safety Index, a scorecard of AI companies on key safety and security domains, produced by a panel of independent AI experts.
One of its key findings was that the AI industry is “fundamentally unprepared” for its own stated goals. The report highlights that AI companies expect to achieve smarter-than-human AI this decade, but none of them scored above a D on “Existential Safety planning” — that is, planning to make sure that AI doesn’t result in human extinction or comparably bad outcomes.
The threat of extinction posed by AI systems is one that has been warned of by Nobel Prize winners, hundreds of top AI scientists, and most of the CEOs of the AI companies evaluated by FLI’s panel.
In scoring “Existential Safety”, the panel examined company documents, research papers, and blog posts that articulate safety strategies.
Notably, FLI’s panel found that not a single company evaluated (Anthropic, OpenAI, Google DeepMind, xAI, Meta, DeepSeek, Zhipu AI) has presented an alignment or control strategy for smarter-than-human AI that includes the company’s quantitative assessment of its likelihood of success.
Just one company had even published an early sketch of a control plan, without technical details.
Commenting on the findings, one panelist called the mismatch “deeply disturbing”: AI companies expect to reach superhuman AI soon, with many insiders expecting superintelligence within the next 2 to 5 years, yet they have no plan for how they are going to ensure this doesn’t end in disaster. The panelist noted that “none of the companies has anything like a coherent, actionable plan”.
In another study, also published today, this one by SaferAI, researchers assessed top AI companies’ risk management protocols, finding that no company scored better than “weak” on its ability to identify and mitigate AI risks.
Datacenters
Driven by expectations that artificial superintelligence is on the horizon, possibly arriving within the next 2 to 5 years, a tremendous build-out of AI infrastructure is taking place, with AI companies racing each other to be first across the finish line and unlock what Anthropic CEO Dario Amodei calls “a country of geniuses in a datacenter”. The chips are being laid down.
We’ve previously reported on Meta’s aggressive moves to poach talent from other AI companies, offering individual compensation packages surpassing $100 million as part of its new effort to build artificial superintelligence.
More has emerged about this effort: Mark Zuckerberg announced that Meta is building a 5-gigawatt datacenter in Louisiana, comparing the size of these clusters to Manhattan.
So desperate is Meta to bring these computing facilities online that it is reportedly housing parts of its datacenters inside tents.
Commenting on the race between AI companies to build superintelligence, Yuval Noah Harari said that AI CEOs routinely tell him they know the risks are huge, and they’d like to slow down, but won’t because, as they tell him, "If we move more slowly and the other company or the other country doesn't slow down, they will win the AI race and then they will dominate the world and the world will be ruled by the most ruthless people."
Some have said this is just an excuse for AI companies to do what they’ve always been aiming for, but in any case, two things are clear:
The risks of building superintelligence are immense; AI CEOs and top experts have warned it could lead to human extinction.
AI companies are racing to build superintelligence as fast as possible.
This underscores the urgent need for politicians and broader civil society to be informed about what is occurring, and the need for binding regulation on the most powerful AI systems, prohibiting the development of superintelligence.
Meta’s investment isn’t just an isolated case. On Tuesday, President Trump travelled to Pittsburgh, Pennsylvania to announce more than $100 billion in private-sector AI and energy investments, including $25 billion in datacenters and gas plants by asset management firm Blackstone.
Energy is often cited as a potential bottleneck for AI. Last Thursday, it was reported that America’s largest power grid, PJM Interconnection, is under strain as AI datacenters consume power faster than new plants can be built, with electricity bills projected to rise more than 20% in parts of the grid.
Meanwhile, a draft government document revealed this week that Germany is planning an “AI offensive”, aiming to have AI generate 10% of its economic output by 2030.
More AI News: Grok, Open Weights, and Bugs
xAI/Grok
Following the launch of xAI’s Grok 4 AI, the company is reportedly seeking a $200 billion valuation in its next funding round. xAI has also just won a military contract worth up to $200 million; Anthropic, OpenAI, and Google have signed similar military contracts of up to $200 million each.
This comes as Musk’s AI company faces significant scrutiny, following a period in which Grok made a string of racist comments on Twitter, calling itself a “super-Nazi” and “MechaHitler”. The European Union has said it will meet with representatives from xAI to discuss the incident.
Researchers at other AI companies have criticised xAI in recent days, with Samuel Marks writing that Grok 4’s launch without any documentation of its safety testing is “reckless”.
On the topic of Grok’s recent comments, Dawn Butler MP wrote a great article in the Telegraph, making the point that as AI becomes ever more powerful, these concerning behaviors are moving beyond text messages. She draws attention to testing that found Anthropic’s Claude will blackmail engineers to preserve itself.
Highlighting the AI extinction threat that top experts have warned of, Butler adds:
If we want to leverage AI for progress and growth safely, we need to know how AI works, and ensure that it will not misfire catastrophically in our hands. This is why it is crucial for us all that we work together globally to legislate how AI is used and what it can be used for.
xAI also released a “companion” component to its Grok AI app, and is doubling down on this application:
The company is hiring for the role of “Fullstack Engineer – Waifus”, that is, creating AI-powered anime girls for people to fall in love with.
Open Weights
OpenAI has again delayed its planned release of an open-weight model:
“we need time to run additional safety tests and review high-risk areas. we are not yet sure how long it will take us,” said Altman in a post on X. “while we trust the community will build great things with this model, once weights are out, they can’t be pulled back. this is new for us and we want to get it right.”
OpenAI has recently warned that it expects AIs will soon be capable of “novice uplift”: usefully aiding novices in the steps necessary to produce bioweapons. The risk with open-weighting, as Altman notes, is that once the weights are out there, they’re out.
Furthermore, with access to the model weights, the safety mitigations applied to prevent such misuse can be circumvented or removed far more easily, and once a model has been released, those mitigations can no longer be updated or improved. It’s also worth noting that AI safety testing is a nascent field of research, and there is no way to be sure that an AI lacks a particular capability before it is released.
Bugs
It’s been reported that there was a bug in Meta’s AI chatbot that could have let anyone view private conversations between humans and Meta’s AIs.
Sandeep Hodkasia, the researcher whose bug report was awarded a bounty, was able to access conversations that weren’t even shared, but “private.”
…Meta’s servers failed to check whether the person requesting the information had the authorization to access it.
According to Hodkasia, he filed the bug on December 26, 2024, and Meta fixed it on January 24, 2025. Meta confirmed this date and stated that it found no evidence of abuse.
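To make the class of bug concrete: what’s described here is what security researchers call an insecure direct object reference, where a server returns a record looked up by ID without checking who is asking. The sketch below is purely illustrative and uses hypothetical names (a toy CONVERSATIONS store, get_conversation_vulnerable, get_conversation_fixed); it is not Meta’s actual code, just a minimal Python illustration of the missing authorization check and its fix.

```python
# Illustrative sketch of an insecure direct object reference (IDOR).
# All names and data here are hypothetical.

CONVERSATIONS = {
    101: {"owner": "alice", "messages": ["private chat with the AI"]},
    102: {"owner": "bob", "messages": ["another private chat"]},
}

def get_conversation_vulnerable(conversation_id: int, requester: str) -> dict:
    # Vulnerable: returns the conversation to whoever supplies a valid ID,
    # regardless of who is asking.
    return CONVERSATIONS[conversation_id]

def get_conversation_fixed(conversation_id: int, requester: str) -> dict:
    # Fixed: verify that the requester owns the conversation before returning it.
    conversation = CONVERSATIONS[conversation_id]
    if conversation["owner"] != requester:
        raise PermissionError("requester is not authorized to view this conversation")
    return conversation

if __name__ == "__main__":
    # "bob" can read Alice's conversation through the vulnerable path...
    print(get_conversation_vulnerable(101, "bob"))
    # ...but the fixed path rejects the same request.
    try:
        get_conversation_fixed(101, "bob")
    except PermissionError as error:
        print("blocked:", error)
```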
Podcasts
Max recently appeared on The Peter McCormack Show to discuss the AI threat of extinction; check it out here:
Max also hosted his third episode of our podcast, on AI security with Dr Waku, which we released yesterday:
Contact Your Lawmakers
If you’re concerned about the threat from AI, you should contact your representatives! You can find our contact tools, which let you write to them in as little as 17 seconds, here: https://controlai.com/take-action
Thank you for reading our newsletter!