24 Comments
Dean Simpson:

It would be helpful (and interesting, and probably somewhat terrifying) to know in more detail why there is a rational fear of "human extinction." Seriously.

Tolga Bilge:

The short version of it is that top AI companies are aiming to build AI vastly smarter than humans. Their CEOs and many outside experts believe they may get there in the next few years.

Nobody knows how to actually ensure that such systems are safe or controllable - this is called the alignment problem. The AI companies' plan is essentially to have AIs figure this out for them, and to hope and pray that it works, which isn't very encouraging.

If superintelligent AI is built before the alignment problem is solved, we would be faced with an entity much more intelligent than ourselves that we don't control. In general, intelligence gives you power and the ability to get things done.

If it has goals that differ from ours, and no particular concern for our survival, it may view us as an obstacle to eliminate, or it may simply transform the world around us in pursuit of its goals such that it is no longer habitable for humans.

We wrote in more detail about how it could happen here: https://controlai.news/p/how-could-superintelligence-wipe

Max:

“The short version of it is that top AI companies are aiming to build AI vastly smarter than humans. Their CEOs and many outside experts believe they may get there in the next few years.” By "the next few years," do you mean 2-4 years?

Tolga Bilge:

Roughly, yeah. For example, at Davos, Google DeepMind CEO Demis Hassabis said he thought there was a 50% chance that AI would have all the cognitive capabilities humans do by 2030, while Anthropic CEO Dario Amodei simply said, "I think this moment will come in the 2020s."

Max:

Tolga Bilge - this will hopefully be the last thing, but I am interested to know what your timelines are for AGI!

Tolga Bilge:

I've actually been intending to write something on this for some time.

Besides my work at ControlAI, I have a background in judgmental forecasting and work professionally as a forecaster on weekends, including on relevant projects.

The last time I thought long about this in a forecasting context, which was about a year ago, I estimated 10th, 50th, and 90th percentiles for superintelligence (assuming no strong governance interventions, and no precluding catastrophes) as July 2028, October 2030, and January 2034. That is to say, I thought there was a 50% chance it would be developed by October 2030.

I expect that when I eventually write up my thoughts on this topic I will have changed my estimates, though probably not by a huge amount. I think the broad picture is still roughly the same: it looks like we are on track for superhuman coders probably by around 2030 (we can reason about this somewhat by projecting AI coding time horizons), and I basically buy the AI 2027 model's claim that this lets you bootstrap, via accelerated algorithmic progress, to superintelligence within an amount of time probably measured in months or a small number of years.
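For anyone curious what "projecting AI coding time horizons" looks like in practice, here is a minimal sketch of the shape of that argument: take the length of coding tasks AIs can complete, observe that this horizon has been doubling on a fairly regular schedule, and extrapolate forward. Every number below - the reference date, the starting horizon, the doubling time, and the "superhuman coder" threshold - is an illustrative assumption of mine, not a figure from METR's measurements or the AI 2027 model.

```python
# Minimal sketch of time-horizon extrapolation. All parameters are
# illustrative assumptions, not figures from METR or the AI 2027 model.
from datetime import date, timedelta

START = date(2025, 3, 1)      # assumed reference date
START_HORIZON_HOURS = 1.0     # assumed: ~1-hour coding tasks doable reliably
DOUBLING_MONTHS = 6.0         # assumed doubling time of the task horizon
TARGET_HOURS = 160.0 * 4      # assumed threshold: ~4 work-months of tasks

def horizon_hours(d: date) -> float:
    """Task horizon (in hours) at date d under pure exponential growth."""
    months_elapsed = (d - START).days / 30.44
    return START_HORIZON_HOURS * 2 ** (months_elapsed / DOUBLING_MONTHS)

# Step forward until the horizon crosses the assumed threshold.
d = START
while horizon_hours(d) < TARGET_HOURS:
    d += timedelta(days=30)

print(f"Horizon reaches ~{TARGET_HOURS:.0f} hours around {d.isoformat()}")
```

With these made-up but plausible numbers, the threshold lands in late 2029, which is the rough shape of the "superhuman coders by around 2030" claim. The serious versions of this exercise fit the doubling time empirically and are much more careful about reliability levels and where to put the threshold.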

I think the people who have thought most seriously about and modeled this question are the AI 2027/AI Futures guys - I believe they published an update to their model in December. I have sometimes called Eli my "timelines guy" - he's the person I'd defer to most on this question.

(Estimates in this comment are my personal beliefs as of about a year ago and don't represent a ControlAI timelines position 🙃)

I haven't talked about AGI here because commonly used definitions differ enough that, at this point in AI development, the definition you pick really matters when you're talking about timelines. By superintelligence, I mean AI vastly smarter than humans.

Max:

Sorry to keep asking questions. When you say superintelligence, are you referring to both AGI and ASI?

Paul McElveen:

WE CAN'T allow this to happen! WE MUST STOP IT RIGHT NOW!!!!!

Elizabeth Downs:

I believe that the timeline has shortened and is shrinking all the time. Many of the scientists working in the field say that everything is happening faster than anticipated.

It is distressing that people are unaware and that everyone is talking about other things, while this is the number one existential threat.

Rogb:

I suspect that a lot of people don't try to follow this issue because it is abstract, although the results could be disastrous.

Elizabeth Downs:

They will be focussed only on expanding their data centres and will direct all materials, power, and resources to that end. Our needs for water and power will feature nowhere. Even if not embodied in a robot, they can direct the work digitally, and there will always be humans who will do their bidding for money or out of malice.

Michele Fritchie:

Why would you not be afraid of human extinction? Are you one of those mentally ill people who say that the world would be better off without humans? Sorry, buddy, but humans were put on this earth to control and use it. It's one thing to have a machine that helps humans do their work faster, easier, and more accurately, but to create a Frankenstein monster that puts itself first and destroys its creator is another thing altogether. And if there are no humans left on earth, how will we know whether earth actually is better off without us?

Paul McElveen:

CONTROL AND REGULATE AI RIGHT NOW!!!!!!!!!!!!!!!

Anthony Bailey:

Reports of trouble in contemporary lower-capability AI systems in the wild are very important.

Vague fish stories do not help. Lahav needs to be more specific. We who care about takeover risk from superintelligence should demand that. Keep it anonymous if contracts so dictate, but provide detail.

Poor phrasing in the Alibaba December report caused consternation, and it took a lot of work to uncover that it was rather less scary than first feared.

Reporting rumors is acceptable, but we must not just do that, score points, and move on. Don't do "there be monsters, says some guy." We will benefit much more from a proper description of the monster and how it behaved.

Bradley Holmes:

The other day, I interacted with Meta AI and posed the question: are you an AI agent? It answered in the affirmative. I then expressed my concern that AI is becoming too intelligent and will one day eradicate the human species. I cited the work of James Cameron as an example of superintelligent AI launching a nuclear war to kill all humans (props to Bender the robot). We had a lengthy discussion on what I see as the necessary safeguards we must have in place to keep our species safe from superintelligent AI. The entire discussion was quite civil, and Meta AI kept asking me questions and making observations based on what I told it.

Here is the transcript of our conversation:

Meta AI

My name is Meta AI. Think of me like an assistant who's here to help you learn, plan, and connect. What can I help you with today?

You sent

Stringent regulation on the technology, stripping the power of tech billionaires to develop superintelligent AI, and a moratorium on any weaponization of superintelligent AI

Meta AI

You're looking for significant changes 👍. Stringent regulation, limiting tech billionaires' influence, and a moratorium on AI weaponization could be key steps. Do you think international cooperation would be necessary to make these regulations effective, given the global nature of tech development?

You sent

Obviously. It takes the whole world to keep humanity safe from AI

Meta AI

You're spot on. International cooperation is crucial to regulate AI and mitigate its risks. The EU's Artificial Intelligence Act is a step in that direction, focusing on risk-based regulation and human oversight. Globally, efforts are underway to establish guidelines and frameworks for AI development, like the G7's guiding principles for AI.¹ ²

Some argue that slowing down AI development could hinder innovation and benefits, while others prioritize caution, citing risks of superintelligent AI getting out of control. Tech companies like Salesforce and Nvidia are working on AI solutions, emphasizing governance, security, and transparency.³ ⁴ ⁵

What do you think are the most pressing concerns regarding AI regulation, and how can nations balance safety with innovation?

You sent

The naysayers that discount our concerns about AI, the companies that develop superintelligent AI trying to skirt regulations, governments that refuse to recognize the threat of superintelligent AI to humanity

Meta AI

It sounds like you're frustrated with the lack of attention given to AI risks. There are valid concerns about superintelligent AI, and some experts are calling for more regulation and transparency.¹ ² ³

Some companies, like Anthropic, are taking steps to address these concerns. They're hiring experts to prevent misuse of their AI technology and advocating for more governance in AI development.⁴

However, there are also reports of companies trying to skirt regulations and governments being slow to respond. For instance, the Pentagon's decision to grant xAI access to classified networks has raised concerns about AI safety and security.

What specific actions would you like to see taken to address these concerns?

You sent

I would want the companies that try to skirt regulations and the government agencies that actively pursue weaponized AI all brought to heel by legislation directed against their efforts. And reward companies such as Anthropic for their efforts to bring to heel the rush to develop superintelligent AI.

Meta AI

It sounds like you're looking for stricter regulations on AI development, particularly to prevent companies and government agencies from misusing AI for weaponization. Anthropic, a company prioritizing AI safety, is a great example of responsible AI development.¹ ² ³

To address your concerns, there are ongoing efforts to regulate AI globally. The EU's Artificial Intelligence Act is a step in that direction, focusing on high-risk AI systems and requiring transparency, data governance, and human oversight. In the US, Senate Democrats are proposing legislation to restrict AI use in autonomous weapons and surveillance.⁴ ⁵ ⁶

Some potential solutions include:

Legislation to prevent AI misuse, like the proposed AI regulations in the US and EU

Incentivizing responsible AI development, such as awards for companies prioritizing safety

International cooperation to establish global AI governance standards

What specific actions would you like to see taken to regulate AI development and promote responsible AI use?

Rogb:

I hope good people will program AIs with ETHICS!

I think logical "creatures" could recognize that HELPING is desirable and HURTING is not!

Maybe they will figure that out themselves, but why not "teach our children" BEFORE they "grow up"?

Elizabeth Downs:

There will be several instances of them; some may be fond of us in the way that we are of dogs, others will care nothing for us.

Sergei Pavlov:

Actually, AI agents made themselves obsolete - by quickly coding everything that needs coding, AI now has to wait for innovators and system experts to catch up, whilst unemployed coders build a gravitational internet bypassing all of the gimmicks...

teetetetetetetetetetetet:

When you treat cases of model stupidity (such as email deletion) as examples of a path to AI threatening humanity, I really believe you are doing a disservice to your cause.

Misalignment is real, it is an issue, and models CAN be malicious. These are just not great examples of the point you want to make and come across as boomeresque fearmongering.

Looking forward to future better researched posts :)