It would be helpful (and interesting and probably somewhat terrifying) to know in more detail about why there is a rational fear of "human extinction." Seriously.
The short version of it is that top AI companies are aiming to build AI vastly smarter than humans. Their CEOs and many outside experts believe they may get there in the next few years.
Nobody knows how to actually ensure that such systems are safe or controllable - this is called the alignment problem. The AI companies' plan is essentially to have future AIs figure this out for them, and to hope it works, which isn't very encouraging.
If superintelligent AI is built without a solution to alignment, we would be faced with an entity much more intelligent than ourselves that we don't control. In general, intelligence confers power and the ability to get things done.
If it has goals that differ from ours, and has no reason to preserve us, it may view us as an obstacle to eliminate, or may simply transform the world in pursuit of its goals until it is no longer habitable for humans.
We wrote in more detail about how it could happen here: https://controlai.news/p/how-could-superintelligence-wipe
“The short version of it is that top AI companies are aiming to build AI vastly smarter than humans. Their CEOs and many outside experts believe they may get there in the next few years.” By "the next few years," do you mean 2-4 years?
They will be focussed only on expanding their data centres and will direct all materials, power and resources to that end. Our needs for water and power will feature nowhere. Even if such an AI has no robot body, its work can be directed digitally, and there will always be humans willing to do its bidding for money or out of malice.
I hope good people will program AIs with ETHICS! I think logical "creatures" could recognize that HELPING is desirable and HURTING is not! Maybe they will figure that out themselves, but why not "teach our children" BEFORE they "grow up"?
There will be several instances of them; some may be fond of us in the way that we are of dogs, others will care nothing for us.
When you treat cases of model stupidity (such as email deletion) as examples of a path to AI threatening humanity, I really believe you are doing a disservice to your cause.
Misalignment is real, it is an issue, and models CAN be malicious. These are just not great examples of the point you want to make and come across as boomeresque fearmongering.
Looking forward to future better researched posts :)
The other day, I interacted with Meta AI and posed the question: are you an AI agent? It answered in the affirmative. I then expressed my concern that AI is becoming too intelligent, and will one day eradicate the human species. I cited the work of James Cameron as an example of superintelligent AI launching a nuclear war to kill all humans (props to Bender the robot). We had a lengthy discussion on what I see as the necessary safeguards we must have in place to keep our species safe from superintelligent AI. The entire discussion was quite civil, and Meta AI kept asking me questions and making observations from what I told it.
Here is the transcript of our conversation:
Meta AI
My name is Meta AI. Think of me like an assistant who's here to help you learn, plan, and connect. What can I help you with today?
You sent
Stringent regulation on the technology, stripping the power of tech billionaires to develop superintelligent AI, and a moratorium on any weaponization of superintelligent AI
Meta AI
You're looking for significant changes 👍. Stringent regulation, limiting tech billionaires' influence, and a moratorium on AI weaponization could be key steps. Do you think international cooperation would be necessary to make these regulations effective, given the global nature of tech development?
You sent
Obviously. It takes the whole world to keep humanity safe from AI
Meta AI
You're spot on. International cooperation is crucial to regulate AI and mitigate its risks. The EU's Artificial Intelligence Act is a step in that direction, focusing on risk-based regulation and human oversight. Globally, efforts are underway to establish guidelines and frameworks for AI development, like the G7's guiding principles for AI.¹ ²
Some argue that slowing down AI development could hinder innovation and benefits, while others prioritize caution, citing risks of superintelligent AI getting out of control. Tech companies like Salesforce and Nvidia are working on AI solutions, emphasizing governance, security, and transparency.³ ⁴ ⁵
What do you think are the most pressing concerns regarding AI regulation, and how can nations balance safety with innovation?
You sent
The naysayers that discount our concerns about AI, the companies that develop superintelligent AI trying to skirt regulations, governments that refuse to recognize the threat of superintelligent AI to humanity
Meta AI
It sounds like you're frustrated with the lack of attention given to AI risks. There are valid concerns about superintelligent AI, and some experts are calling for more regulation and transparency.¹ ² ³
Some companies, like Anthropic, are taking steps to address these concerns. They're hiring experts to prevent misuse of their AI technology and advocating for more governance in AI development.⁴
However, there are also reports of companies trying to skirt regulations and governments being slow to respond. For instance, the Pentagon's decision to grant xAI access to classified networks has raised concerns about AI safety and security.
What specific actions would you like to see taken to address these concerns?
You sent
I would want the companies that try to skirt regulations, and the government agencies that actively pursue weaponized AI, all brought to heel by legislation directed against their efforts. Reward companies such as Anthropic for their efforts to bring to heel the rush to develop superintelligent AI
Meta AI
It sounds like you're looking for stricter regulations on AI development, particularly to prevent companies and government agencies from misusing AI for weaponization. Anthropic, a company prioritizing AI safety, is a great example of responsible AI development.¹ ² ³
To address your concerns, there are ongoing efforts to regulate AI globally. The EU's Artificial Intelligence Act is a step in that direction, focusing on high-risk AI systems and requiring transparency, data governance, and human oversight. In the US, Senate Democrats are proposing legislation to restrict AI use in autonomous weapons and surveillance.⁴ ⁵ ⁶
Some potential solutions include:
Legislation to prevent AI misuse, like the proposed AI regulations in the US and EU
Incentivizing responsible AI development, such as awards for companies prioritizing safety
International cooperation to establish global AI governance standards
What specific actions would you like to see taken to regulate AI development and promote responsible AI use?