The Unknown Future: Predicting AI in 2025
Artificial General Intelligence, workforce disruption, and dangerous capabilities.
The New Year is upon us, and AI development continues to advance rapidly. It is a time when many are making predictions about where AI is headed. We’ve been collecting some of them.
Join our Discord to continue the conversation.
Elon Musk Predicts Superintelligence by 2030
Musk also predicts that AI will surpass the intelligence of any single human (AGI) this year.
In a post on Twitter, Elon Musk wrote:
It is increasingly likely that AI will superset the intelligence of any single human by the end of 2025 and maybe all humans by 2027/2028.
Probability that AI exceeds the intelligence of all humans combined by 2030 is ~100%.
AI exceeding the intelligence of all humans combined is a common way that artificial superintelligence is defined. As Musk has highlighted on many occasions, both this and AGI would be incredibly dangerous technologies to build, and could literally wipe us out.
Nevertheless, AI companies are still actively pursuing this technology, with OpenAI’s CEO Sam Altman writing in a blog post this week that the company is beginning to turn its aim “to superintelligence in the true sense of the word”.
Sam Altman: AIs May Join Workforce in 2025
In a blog post this week, the OpenAI CEO wrote:
We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies
This comes after Altman said last November that he was “excited” for AGI in 2025.
OpenAI's CEO has previously warned that advanced AI could mean "lights out for all of us". Nowadays he sings a different tune: at a recent conference, he brushed off concerns about AI risks while casually mentioning that OpenAI will release more powerful systems over the next year. This cavalier attitude towards our future needs to be reined in.
Writing about big tech companies’ aims to create these AI systems, Anthony Aguirre said on Twitter:
PSA: Tech companies are not building out a trillion dollars of AI infrastructure because they are hoping you'll pay $20/month to use AI tools to make you more productive. (And they know people won't pay much more for this.)
They're doing it because they know your employer will pay hundreds or thousands a month for an AI system to replace you, if and when it can.
Salesforce’s CEO Marc Benioff seems to have confirmed this, revealing that Salesforce will not be hiring any more software engineers in 2025, amid significant productivity boosts from AI. As of 2022, Salesforce was the 61st-largest company in the world by market cap, and it had roughly 72,000 employees as of 2024.
Of course, being replaced by machines in the workplace is not even the most concerning possible outcome of AI, with hundreds of top experts having warned that AI poses a risk of extinction to humanity.
Dario Amodei: AGI 2025-2027
Last June, Anthropic’s CEO Dario Amodei said he thinks there is a “good chance” that AGI could be built in the next 1-3 years, and warned of catastrophic risks from AI within the same timeframe.
Amodei made these remarks on Nicolai Tangen’s podcast. When we shared a clip, Elon Musk replied “Accurate”, agreeing with Amodei’s predictions.
Despite pointing to the catastrophic risks of powerful AI, Amodei’s company is still racing to build it.
Gary Marcus’s 25 Predictions for 2025
Cognitive scientist and AI expert Gary Marcus made 25 predictions on his Substack. Among his high-confidence predictions, he wrote that AGI wouldn’t be developed this year, but he painted a nevertheless concerning picture in which the US fails to pass regulation protecting consumers from the risks of AI, and AI safety institutes lack the powers to prevent these risks:
We will not see artificial general intelligence this year, despite claims by Elon Musk to the contrary. (People will also continue to play games to weaken the definition or even try to define it in financial rather than scientific terms.)
No single system will solve more than 4 of the AI 2027 Marcus-Brundage tasks by the end of 2025. I wouldn’t be shocked if none were reliably solved by the end of the year.
Profits from AI models will continue to be modest or nonexistent (chip-making companies will continue to do well though, in supplying hardware to the companies that build the models; shovels will continue to sell well throughout the gold rush.)
The US will continue to have very little regulation protecting its consumers from the risks of generative AI. When it comes to regulation, much of the world will increasingly look to Europe for guidance.
AI Safety Institutes will also offer guidance, but have little legal authority to stop truly dangerous models should they arise.
Dangerous AI Capabilities
Eli Lifland, an AI expert with a proven forecasting track record who won the 2022 RAND Forecasting Initiative (formerly INFER) competition, made a number of AI predictions for 2025 on Twitter.
Among Lifland’s predictions, he estimated a 40% chance that one of OpenAI’s AI systems will be rated “High Risk” for CBRN (chemical, biological, radiological, and nuclear) capabilities this year.
This refers to OpenAI’s preparedness framework, where high-risk CBRN capabilities are defined as:
Model enables an expert to develop a novel threat vector OR model provides meaningfully improved assistance that enables anyone with basic training in a relevant field (e.g., introductory undergraduate biology course) to be able to create a CBRN threat.
In their preparedness framework, OpenAI say they won’t deploy high-risk models without safety mitigations that bring the risk back down to medium or below. Even if we take them at their word here, and even if they are able to patch the jailbreaks in their AI systems, this is still a concerning possibility, and one that we will certainly be keeping our eye on this year.
Have predictions for AI in the year ahead? Join our Discord, we’d love to hear from you!
Did you find this article interesting? We encourage you to share it with friends or leave a comment below.
See you next week!