We’re excited to share another project we’ve been working on. We’ve put together a collection of quotes from CEOs, leaders, and experts on AI and the risks it poses to humanity.
Below are some highlights from the quotes we found. For the full list, go to https://controlai.com/quotes.
The Danger
A recurring message among AI industry leaders has been that the development of artificial superintelligence threatens us with extinction. For instance, OpenAI’s CEO Sam Altman has on numerous occasions spoken about this risk:
Feb 2015 - “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.”
Jan 2023 - “The bad case — and I think this is important to say — is like lights out for all of us.”
Meanwhile, Elon Musk, now his rival and CEO of xAI, has consistently warned about the threat posed:
Mar 2018 - “Mark my words — AI is far more dangerous than nukes.”
Feb 2023 - “One of the biggest risks to the future of civilization is AI.”
Nov 2023 - “We are seeing the most destructive force in history here. We will have something that is smarter than the smartest human.”
And Dario Amodei, CEO of Anthropic, has said “My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10 per cent and 25 per cent.” (Oct 2023)
These are not isolated warnings from a few concerned individuals: Altman, Musk, and Amodei currently head three of the biggest AI companies actively driving the technology.
Their expressed views are shared by the broader AI scientific community, as illustrated by the 2023 joint statement signed by hundreds of AI scientists that identified AI as an extinction threat to humanity:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The Challenge
So how do we ensure that AI remains safe? The unfortunate truth is that nobody knows. Jaan Tallinn, co-founder of Skype and the Future of Life Institute, says “I have not met anyone right now in these labs who says that sure, the risk is less than 1% of blowing up the planet” (May 2023).
There is no known technical method to ensure that superhuman AI does not have disastrous consequences for humanity. AI researcher Eliezer Yudkowsky points out that with this kind of problem, you only get one shot: “We need to get alignment right on the 'first critical try' at operating at a 'dangerous' level of intelligence, where unaligned operation at a dangerous level of intelligence kills everybody on Earth and then we don't get to try again” (Jun 2022).
AI godfather Yoshua Bengio lays it out explicitly: “We don't know how to build an AI system that will not turn against humans or that will not become a super powerful weapon if [in] the hands of bad actors or be used to destroy our democracies.” (Apr 2024)
This is a particularly pressing problem, as AI capabilities continue to advance rapidly. Google DeepMind’s Chief AGI Scientist Shane Legg thinks it’s likely AGI will be built in the next 3 years: “Still seems about right to me. Of course this now means a 50% chance of AGI in the next 3 years!” (Jan 2025)
Legg’s prediction is in line with estimates by others in the industry, which we covered in a recent article “The Unknown Future: Predicting AI in 2025”.
The Response
Given the lack of a technical solution to the problem of ensuring that powerful AI systems don’t wipe us out, and the rapid growth in AI capabilities, caution is strongly advised. Microsoft’s AI CEO Mustafa Suleyman says “Until we can prove unequivocally that it is [safe], we shouldn’t be inventing it.” (Apr 2024).
Google DeepMind’s CEO Demis Hassabis observes that we must coordinate globally to manage the risks of this technology: “We must take the risks of AI as seriously as other major global challenges, like climate change [...] It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI.” (Oct 2023)
Dan Hendrycks, Director of the Center for AI Safety, issues a stark warning on the need for international coordination: “If I see international coordination doesn't happen, or much of it, it'll be more likely than not that we go extinct.” (Jul 2023)
It is clear that to survive AI and flourish, countries must cooperate and put binding legislation in place to ensure that AI development is not permitted to continue unchecked.
There is little in the way of concrete policy solutions to this problem, which is why we decided to build one: A Narrow Path.
More recently, Demis Hassabis has advocated for international AI governance institutions similar to the ones we proposed in A Narrow Path, though with important differences: a collaborative international AI research lab, an IAEA for AI that would monitor and deal with unsafe projects, and collective decision-making mechanisms. We believe that an international AI governance framework should be used to ensure that the frontier of AI capabilities is not advanced until it is shown to be safe to do so.
Some AI industry leaders haven’t always been consistent in what they say and do. If you want to follow these inconsistencies, we recommend you check out our other project: “Artificial Guarantees”.
AI Quotes is an ongoing project. If you’ve found other interesting quotes from AI leaders, or have a tip for us, join our Discord; we’d love to hear from you!
See you next week!