ControlAI Weekly Roundup #7: AI Accelerates Cyberattacks
AI is helping hackers mine sensitive data for phishing attacks, Google DeepMind predicts weather more accurately than the leading system, and xAI plans a massive expansion of its Memphis supercomputer.
Welcome to the ControlAI Weekly Roundup! Each week we provide you with the most important AI safety news, and regular updates on what we’re up to as we work to make sure humanity stays in control of AI.
Join our Discord to continue the conversation, where we also post a daily version of the Roundup.
What we’re reading
AI, huge hacks leave consumers facing a perfect storm of privacy perils
Source: Washington Post
The Washington Post reports that hackers are using AI to mine unprecedented troves of information dumped on the internet to trick American consumers and even sophisticated professionals into giving up control of their bank and financial accounts.
Ashkan Soltani, executive director of the California Privacy Protection Agency, is quoted as saying: “There is so much data out there that can be used for phishing and password resets that it has reduced overall security for everyone, and artificial intelligence has made it much easier to weaponize.”
Google DeepMind predicts weather more accurately than leading system
Source: The Guardian
Google DeepMind announced a new AI system, GenCast, which performs up to 20% better than the state-of-the-art ENS forecast.
Traditional weather forecasting systems take many hours to run on supercomputers; GenCast takes only 8 minutes on a single Google Cloud TPU.
DeepMind also announced Genie 2, which they say can generate an endless variety of playable 3D worlds from a single prompt image.
Musk’s xAI plans massive expansion of AI supercomputer in Memphis
Source: Reuters
xAI has announced plans to expand its Memphis supercomputer to house at least 1,000,000 GPUs. The supercomputer, called Colossus, already has 100,000 GPUs, making it the largest AI supercomputer in the world, according to Nvidia.
The cluster is used to train xAI’s Grok models, and the planned buildout would represent a huge expansion of its capacity.
Sam Altman Downplays the Dangers of A.I. and Musk
Source: New York Times
OpenAI CEO Sam Altman recently told the New York Times’s DealBook conference that OpenAI would release increasingly powerful technologies over the next year.
The New York Times reports that Sam Altman played down the threat posed by AI, saying that the technology would reach human-level capabilities sooner than people realize, but that this would “matter much less” than many predicted.
Variety quoted Sam Altman, speaking at the same event, as saying that eventually artificial superintelligence will be “more intense than people think”.
What we’re watching
In relation to Sam Altman’s statements downplaying the dangers of AI, we’d like to highlight our recent mashup of clips by tech and world leaders speaking about the grave risks of AI [Twitter]. In it, Sam Altman says, only last year, that “the bad case, and I think this is important to say, is lights out for all of us”.
We also recently published a mashup of Trump and incoming Trump administration officials speaking about the dangers of AI; you can find it here [Twitter].
Cate Blanchett has been speaking out about the risks of AI, saying “You can totally replace any person … I’m worried about us as a species.”
What we’re working on
This week our policy team has been continuing meetings with policymakers to discuss AI extinction risk and how to address it.
We’ve released an update to The Compendium (V1.3.0), adding explanations of how open-source AI increases risk, along with a detailed discussion of the ideology behind accelerationism and open-source AI, and their impact on AI racing. This is the biggest update we’ve published since The Compendium’s launch.
If you want to continue the conversation, join our Discord where we also post a daily version of the Roundup.
See you next week!