
Avoiding Extinction with Andrea Miotti and Connor Leahy

Episode 1: Extinction and what we can do to prevent it

Welcome to the first edition of the ControlAI Podcast, hosted by Max Winga!

In this episode we invited Andrea Miotti, Executive Director of ControlAI, and Connor Leahy, CEO of Conjecture, to discuss the extinction threat that AI poses to humanity, and how we can avoid it.

If you'd like to continue the conversation, want to suggest future guests, or have ideas about how we might improve, join our Discord!

If you find the latest developments in AI concerning, and the latest steps towards better AI security exciting, then you should let your elected representatives know!

We have tools that make it super quick and easy to contact your lawmakers. It takes less than a minute to do so: https://controlai.com/take-action


Transcript:

Max Winga: Hello and welcome to the ControlAI Podcast. I'm your host Max Winga, and joining me today are Andrea Miotti, the Director of ControlAI, and Connor Leahy, CEO of Conjecture and Advisor to ControlAI. We are here today to discuss the risk of extinction from artificial intelligence as AI companies race to build AI systems vastly smarter than humans, along with the necessary solutions to this unprecedented threat.

Connor, you've been quite outspoken in your advocacy on the topic of AI extinction risk, appearing on news programs, podcasts, and debates. Can you explain what's going on with AI and why we should be worried about it?

Connor Leahy: Absolutely. So AI is a huge topic. All of us hear about it, it feels like, all day, every day.

And this is a pretty new development, but the concept of AI isn't itself new, nor is the concept of intelligence. Intelligence is, like, the ability to solve problems. That's kind of how I like to think about it: intelligence is the ability to solve more and more complex problems. And intelligence is fundamentally the thing that differentiates humans and chimps. Humans go to the moon. Chimps don't. Chimps have more intelligence than, say, an earthworm, that's for sure. But they don't have quite what humans have. There's something sort of special about humans, but it's clearly a spectrum of some kind.

So humans are not the strongest. We're not the fastest. We're not the toughest animal by any means. And again, none of us could take a chimp, but there is this thing that we have, which is special about humanity and which has made us the undisputed rulers of the planet, and that is our intelligence.

So far, intelligence has been something that's been mostly restricted to humans, or to animals. But recently, this has started to change. From the early days of computers, we started to have systems that can do simple math programs and then more complex math, more complicated applications, and stuff like this.

But it wasn't really intelligence as you or I might necessarily understand it. But over time this has gotten more and more general. The things that our best AI systems, our best computer systems, are capable of doing have expanded and expanded and expanded, to the point now that you can just have a conversation with your computer.

You can just open up ChatGPT or Claude and you can have a lovely conversation with these systems. Sure, are they perfect? No. But this is a huge, huge change, and particularly recently, this rate of progress towards more general intelligence, problem solving and conversational ability and so on, has just been going so fast.

Now, why is this potentially a problem? Well, intelligence is the thing that made us the rulers of the planet. Because we're the most intelligent species is why we're in charge. If we were no longer the most intelligent species on the planet, that would no longer necessarily be the case.

Max Winga: Yeah. It's nothing that's like ever happened before, essentially.

Connor Leahy: Exactly. It's more like chimps seeing humans than anything else. We humans are used to being the smartest thing around. But the potential that we're seeing right now is that these AI systems are becoming more intelligent at a rapid pace, and there is no fundamental physical reason why this can't continue until we have systems that are as smart as and smarter than humans, often called AGI, or sometimes you're talking about what's called superintelligence, which is an AI system which single-handedly is more intelligent than all of humanity put together.

If such a system were to be built - and many, many scientists, eminent scientists, leaders of AI companies and so on do believe it is possible and imminent to build such a thing - if such a system were to be built and we didn't know how to control it; well, how could you control something that is so much smarter than you? Is there even a precedent for really, truly controlling something that's vastly more intelligent than you? Not really.

AI is not really normal computer software. Normal computer software is written. You have computer programmers who write line by line a computer program telling the computer what to do.

AI is quite different. Modern AI is more like grown. You give it many examples or lots of data that you want it to learn from, and then you grow a program to solve this problem on huge supercomputers by crunching these huge piles of numbers called neural networks. And this results in programs that can do amazing things, but we don't really know how they work.

We don't really know what's going on inside. We do not know how to control them. At the moment, this might be a bit annoying or even a bit amusing, but as they become more intelligent, they become more powerful, and eventually, as they become as smart as and more powerful than humans, we'll be at the mercy of systems that we cannot control, that we do not understand, and the future will belong to them, not to us.

Yeah.
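To make the "written versus grown" distinction above concrete, here is a minimal illustrative sketch, not taken from the episode: a hand-written rule next to a tiny neural network whose behaviour is grown from examples by nudging weights with gradient descent. Everything in it (names, network size, learning rate) is invented purely for illustration.

```python
# A minimal, illustrative sketch (not from the episode): "written" vs "grown" software.

import numpy as np

# "Written" software: a programmer states the rule explicitly, line by line.
def xor_written(a: int, b: int) -> int:
    return a ^ b

# "Grown" software: we only provide examples, then nudge a pile of numbers
# (the network's weights) until the desired behaviour emerges. Nobody writes
# the rule itself down, and the resulting weights are hard to interpret.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden-layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for _ in range(10000):                     # gradient descent on the examples
    h = sigmoid(X @ W1 + b1)               # forward pass
    p = sigmoid(h @ W2 + b2)
    d2 = (p - y) * p * (1 - p) / len(X)    # backward pass (chain rule)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d2);  b2 -= lr * d2.sum(axis=0)
    W1 -= lr * (X.T @ d1);  b1 -= lr * d1.sum(axis=0)

print(xor_written(1, 0))                                      # rule we can read and audit
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))   # behaviour grown from opaque weights
```

The contrast is the point: the first function's behaviour is fully legible from its one line of source, while the second program's behaviour lives in trained weight matrices, a toy version of the "grown, not written" systems discussed above.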

Max Winga: And a lot of people will say here, they'll ask, "are these systems sentient? Are they conscious? Is there a human factor that's an important thing that they're missing that means that they won't be able to do this?" I think a lot of people have this hangup when confronted with something like these systems. Do you think that there is some kind of element like that that is critical here, or is that not really an issue for AI?

Connor Leahy: The thing we really care about here, that I care about, is power. I don't particularly care whether an AI system can truly feel love or not. That's not really important for it to be able to take over the economy, or to build weapons, or to hack computer systems.

It doesn't have to have any recognizable thing like emotions or whatever to be able to do such things.

Intelligence is dangerous because it is powerful. This is the fundamental thing. The fundamental thing is that if you build systems that are very powerful and that you do not control, this is an inherently dangerous thing, whether or not any of these other factors apply. There is another aspect to this, where there's a long and storied history in computer science of venerable, respectable professors professing very sincerely about how true AI can never be reached until we've had the missing component of intelligence, which happens to be their personal pet theory.

And all of these people have always been proven wrong over and over and over again. Now obviously there's still something that's different between a bug and a mammal, and between a mammal and a human, or a chimp and a human, say. There clearly are some differences between those.

But for centuries we've tried to find the one true thing that is the difference between the two, and we've just never succeeded. We don't know. So, if anyone comes to you and says, confidently, "I have figured it out, we cannot get AI, we can't make true intelligence until we have X," well, you should think they're a crank.

They're just making some shit up. They're probably wrong, as were dozens and hundreds and thousands of scientists before them. So as far as we can tell, there's only one thing we can be certain of empirically: these systems are getting much smarter, very fast. Week by week, month by month, year by year, our most intelligent AI systems can solve way harder problems, longer problems, more complex problems, more reliably, and in more domains.

Again and again and again, this is the one thing we can all see empirically. Now, could all of this progress stop at some point? Sure. But there is no scientific, grounded reason why we should expect this. So by default, we should just assume that things keep going the way they have so far.

Max Winga: And this is why scientists and experts and even the AI CEOs are warning about extinction risk, potentially?

Connor Leahy: This is exactly related to it, especially over the last couple of years. Many people who previously were not too concerned about AI risk because they thought it was far away have now truly changed their mind. We've seen Nobel Prize winners, Turing Award winners, CEOs of the top AI companies, global leaders, country leaders, and so on speak about this: if you have a system that is smarter than all of humanity that we can't control, humanity is not guaranteed to survive that transition. What would such a system, such a machine, such a being want to do with the world? I don't know. Would it include humans? Probably not.

Yeah.

Max Winga: That's difficult. So, we obviously need some solutions to this, and ControlAI has been working in a variety of ways to find them. Andrea, you're the Director of ControlAI. You founded it to help solve these problems. What is ControlAI doing to help address these risks?

Andrea Miotti: Yeah, so we're faced with this massive threat, just the end of our species, and we need to turn the tide.

We believe it's possible to turn the tide, and this is gonna take effort from all of us. I founded ControlAI because I believe this is not just a technical problem. The reality is, even with the best scientists working on this problem, we're not gonna make it out alive if we don't put rules in place, regulations in place, that actually protect us.

In the end, even if one company could figure out a way to make some of these systems controllable, nothing stops another one from rushing ahead with systems that we cannot control. Ultimately, this is a deeply political problem that will be solved with policy solutions, not by going back to the ivory tower and working on fancier algorithms to try to change how these systems work.

Max Winga: So what sort of policy do we need to make this work?

Andrea Miotti: Yeah, in many ways this is an unprecedented event, but in other ways we have faced massive global and national security risks of this level before and we've staved them off. Proof of that is that we're all still here despite having nukes in the world. We've come very close to complete nuclear annihilation in the past decades.

Yet, thanks to the fact that we've had nuclear non-proliferation agreements, we've had a balance of power between, in the past, the US and the Soviet Union, and now between the US and other countries. The fact that we have strong and strict regulation on who can build nuclear bombs, if anyone at all, and on who can build biological weapons and chemical weapons (nobody), has severely restricted the ability of a few actors to accidentally or intentionally end the world that we know. And the solutions are gonna look similar for AI. Ultimately, right now we have a bunch of companies that are putting billions into building AI systems smarter than all of us.

This has to stop before we get to systems that are truly so powerful that we cannot control them. If we get to that point, it's gonna be too late. If we get to that point, those systems are gonna be in charge. Those systems are gonna be in control and it's game over for humanity. We'll be, at best, forever at the mercy of these machines, at their behest. And we will just survive if they wish for us to survive and die if they just don't care about us.

Max Winga: So how do we know when the right time is to implement regulations like this? I think a lot of people are really concerned that there's a lot of good stuff that can come from AI. There's gonna be medical advances, and we'll be able to automate away jobs that are menial and that people despise.

And the idea, at least put forward by the big tech companies, is that this will free us to do more enjoyable things. How do we know what the right tradeoff is and when we should put these regulations in place?

Andrea Miotti: Yeah. The truth is, there are only two times to react to an exponential: too early or too late.

We are facing an exponential. Nobody has a crystal ball where we can tell exactly at this point in time, exactly in this moment, in 2027, in this location in San Francisco, the first superintelligence will be achieved, so we should just stop a few meters before that. You know, what even is the measure?

Is it meters? Is it kilos of GPUs, right? Like, we don't really know. The truth is we don't, and that's a big source of the risk: we don't have a deep science of intelligence. We can't really measure intelligence in machines. And we also don't really fundamentally understand it in the way that we do fundamentally understand many other areas of physics.

We now understand how nuclear reactions work. Given that we understand them and we can measure their components, we can then constrain them, bound them, and extract energy that is useful for us without blowing everything up. We don't have this yet with AI, so we need to act as soon as possible.

We need to take action before the exponential continues, because by the moment it's too late, it's gonna be too late and it's gonna blow past us. And the moment game over appears, there's nothing else we can do. On the other hand, an important thing is that what we're concerned about here is AI systems smarter than humans across tasks.

A lot of the economic benefits that some people do want come from very specialized AIs most of the time, which are quite different from these very powerful AI systems that can do any activity that a human can: what some people call AGI, and, when they're very powerful, superintelligence. So it's important to distinguish between these two.

It's a false dichotomy to say, "well, we either need to risk the extinction of our entire species but we're gonna have some great companies in the meantime," like Sam Altman famously said, or, "well, we need to shut down everything and then we're gonna be safe."

The reality is, first of all, a lot of the classic computer science and computer engineering that we do right now is completely fine. Most of it is done by humans, for humans, and we understand it. The lines of code that we write, most of them we do understand, and we can fix them, we can deal with them.

That's completely fine. A lot of forms of AI are close to this. In many ways, AI is a big word that encompasses a wide variety of models. The ones that are really dangerous are the ones that, like Connor said, we grow, not build. Because for those, we don't understand fundamentally how they work.

Therefore, we also don't know how to control them at a fundamental level. And we also have no ability to predict in advance when they will cross certain lines. And it's on those ones, and especially on the most powerful ones as we get to superintelligence, that we should tackle this problem in two ways.

Number one, say no to superintelligence. Clearly have a normative principle and have laws in place that say no one should be building superintelligence. This is the equivalent of building a competitor species that would replace every human on earth. This is not in the interest of any country.

No country, no nation-state would want to see their power eliminated by a competitor species. No single human wants this. Let's say no to this. Two, to do this: we don't know exactly when that's gonna arise. We have some proxies that we can use, like the amount of computing power, or performance on tasks (although this we can only measure after the fact, so it's a bit dangerous), and so forth. So we can use some of these metrics to put some guardrails in place, but also, like we've done with other technologies, prohibit certain precursors: technologies that we know are on the path to superintelligence, and that, if they were to be achieved, would quickly lead to situations where we can't control them anymore and they skyrocket to superintelligence. Things like AIs that can improve other AIs.

AIs that can do meaningful amounts of AI R&D, this is extremely dangerous. This is also the explicit goal of many teams at some of the top AI companies, which should concern you all.

They're gunning for this right now, and that's extremely dangerous, because the moment we have AIs that can improve AIs, this leads to a quick feedback loop of improvement that goes much faster than what humans can monitor and control, and can get us quickly from a world where we don't think we have superintelligence to one where we do, and we get surprised, and we're done. As well as other things like AIs that can just hack out of their environment. We can't contain them, so why would we have them? We don't want things that could just escape on their own, and a few other things like this. But this is a very focused approach to prevent this massive risk from superintelligence, one that doesn't impact most AI that we already have and that is providing a lot of benefits.

Connor Leahy: I agree with Andrea, for sure, that I think there is a big false dichotomy here. In particular, I think it's truly a false dichotomy. Just because something is complex doesn't mean it's a dichotomy. What we're dealing with is a complex problem and there is no simple solution. There is no easy solution.

There is no solution where you can have your cake and eat it too. There are trade-offs. You can eat some of your cake and have some of it, but there is a trade-off to be made. If you have unregulated everything, well, then you get private ownership of nuclear weapons. Obviously that's not something that we want.

Never mind nuclear weapons, do we think we should open source the blueprints for the F-35 fighter plane? I don't think that makes the world a better place. So there is a part of this, which Andrea also picked up on, which is that the word AI is used very broadly. In a sense it's used both for systems that are, like, general purpose agents optimizing things in the world, and for things that help you do protein-folding simulations.

In a sense, the first is closer to, like, the proto-form of a person, right, or a species. The other is more like a fancy physics simulation, right? Obviously there's a line here, right? Of course they shade into each other. But it's not like it's impossible to say: don't build the general purpose intelligent agent, but have fun with your physics simulations.

This is definitely a thing that is possible. And to also pick up on the thing you said there about the economic factor of these things: people think that, like, we would not have to work or something. And this is a commonly held sentiment that I think is really worth just thinking about for a moment: how would that actually work?

I think it was Keynes who famously, in the early 20th century, as we were seeing the rise of industrialization and the growth of the economy, predicted that by 2015 or so people should only work like 10 hours a week, probably, because we'll be so productive that no one will have to work as much anymore.

And even then, he said, they'd only work 10 hours a week because they like it, and they could actually work even less. And they would still all be fabulously rich and not have to work more. Well, this isn't really what happened, of course.

And this is not because Keynes was a stupid guy, like he had a lot of good ideas here.

It's that a lot of how economics works is unintuitive, and not necessarily predictable ahead of time. And when we're dealing with an AI system, which we should really think of more as an ability to replace labor: if we displace labor entirely, how exactly are humans getting economic resources?

Currently, the way we get economic resources is we either own capital or we trade our labor for capital. Well, if we look at both of these lines: if we can't trade our labor anymore because AIs are cheaper, faster, and better at everything, well, then humans who provide labor to get food, sustenance, shelter, etc. have nothing left to trade.

So on that side, usually what people say is you need something like UBI. So you need a government program or some kind of institution that gives these people the resources needed to live. On the other hand, the other way that people make money is that they own capital and extract rent or whatever.

But this relies on the existence and enforcement of property rights. So you have to own capital, you have to have the force to defend your capital and to extract value from it. So as you can see, both of these lines depend on a very strong government or regulating force. If you don't have property rights, well then the AI just takes away everything.

It has much better guns than you do, it is much smarter than you. It'll just trick you or kill you. Who cares? So to enforce property rights, you would have to have some human-controlled system which is more powerful than the superintelligence, which by definition is impossible, because the superintelligence is more powerful than humanity.

And the same applies to UBI as well. Okay, let's say we want to tax the corporations and all their money and give it back to people. Currently, all these companies already don't pay taxes 'cause they all live in some tax haven in Ireland. So how do you expect a future government, which already can't tax normal corporations because they incorporate in Ireland, to be able to tax future corporations that are run by superintelligences?

So there's a very deep thing here where there is no practical way that gets you to an outcome where humans are taken care of if there's not something to take care of humans. Whether it's a government system, it's a social system, it's other humans, at some point, resources have to go to humans in order for humans to do human things.

And what is that mechanism? If a superintelligence is in charge of the world, that's the mechanism. And if the superintelligence doesn't want to give us those resources because it prefers using them for whatever goals superintelligence might have then humans don't get any resources.

Andrea Miotti: Picking up on this quickly, there's a new excellent essay that came out this week called The Intelligence Curse by two great co-authors.

And it picks up exactly on this point and describes exactly this issue: we are on track by default, and we can change this track, but we are on track to get to AGI, and this, on a labor level and an economic level, will mean that humans will be less and less relevant for the economy.

If human labor becomes irrelevant, this means that most people will see their power completely vanish. Already, people feel right now that they don't have power in many decisions, and many things are done outside of their control. Well, imagine the moment when almost every single person on the planet cannot even trade their labor for money.

From the economic system's perspective, these people are useless. And we will either need a strong government intervention to make them useful again or to provide value to them out of charity - essentially turning almost everybody on the planet into beggars - or, if whatever power is in charge does not really care about these people, they will starve and die.

Their income will go below subsistence level and then will disappear, because all of the tasks that they can do can be done better by an AI, and they'll be gone. And that's the outcome we're facing. And this will also lead to a massive concentration of power, either onto the AIs that control whatever amount of economic activity remains, or, if by chance somebody does manage to somewhat control a very powerful AI system, onto the only person or the only group or the only company controlling the entire economy, with everybody living at their behest, at their charity. This is clearly not a world that most governments and most people want, and we need to put solutions in place to prevent that, fast.

Max Winga: Yeah. So this is very clearly a big problem. I think for most people who aren't in the know, there's kind of this attitude from the outside that this is all, like, hype.

It's all marketing hype to just try to boost up this product, and AI is really just the next cryptocurrency and the next metaverse. But when you talk to people on the inside, when we talk to people in Silicon Valley and in these AI companies, it seems like they really do believe these things. And this really does seem to be where things are going. ControlAI does a lot of work talking with politicians, in the UK but also now in the US.

When you talk to politicians, and government officials as well, and other agencies, do you find that they have internalized this? Does it feel like the government is responding to this yet, or is this still very much a new problem?

Andrea Miotti: It's very much a new problem, and honestly, this is why we need to talk to them and we need to let them know what's happening. Most of them have never heard of this. Most of them have never heard about these risks. And it's not that they know about these risks and think things are gonna be fine.

They just don't know what's coming. They don't know that companies are actually working to build AI systems smarter than humans across nearly every single task. They might have heard of AI. It's in the news, right? They might have heard about AI as something like the new mobile phone, a new technology that's popping up and is making some small changes to how things are done.

But, like many other ones. The truth is, when they do learn what's happening, they are concerned. And that's great to see. And it's a very normal reaction.

Most people on the planet, and ultimately politicians, are people dedicated to the public good, or they should be dedicated to the public good and many of them are, and they're concerned. They find it horrible that we are on track to completely obsolete the entirety of the human species, and that this is completely unregulated and completely not under control.

And so a big, big thing to be done, and this is the main focus of ControlAI right now, our Direct Institutional Plan: reach every single relevant actor across the democratic process, and inform them about what's happening with AI, what's coming with superintelligence, what the risks are, and what the policy solutions are. And we've found a really fantastic reception in the UK.

At the beginning, multiple people, professional people in the policy field, lobbyists and so on, told us it would be impossible to get a single lawmaker to publicly acknowledge that AI poses an extinction risk. That this was too strong, and people wouldn't understand what superintelligence is anyway, that this was just beyond them.

We started meeting with lawmakers at the beginning of this campaign, and in just a few months we got over 30 of them publicly acknowledging that AI is an extinction risk, that superintelligence would compromise global and national security, and that mandatory regulation should be put in place to prevent this and keep people in control, while still having the economic benefits from other forms of AI.

And more than that, speaking of receptiveness, we didn't expect this, but since we started, more than one in three of the people that we meet becomes a supporter, just from hearing this. Very often it's the first time they hear about it. They don't know. They might have heard on the news things like the AI extinction statement that came out from the Center for AI Safety last year.

They might have heard about the pause letter that Elon Musk signed. But this might be the only exposure they've had to these concepts. But after a briefing where we just tell them, look, this is what's happening, we explain to them clearly where we're going, how AI capabilities are improving, and what companies are really working towards.

They clearly see this as a massive issue and they take a stance and that's great. And we want to see this happen much more everywhere. We're starting to do this in the US as well. It's going well there as well. Individual citizens can make a difference as well. We need more and more people to just do the thing. Go talk to your lawmaker. Talk to people that know lawmakers. Talk to your friends. Talk to them about this problem. Just tell them that's the situation. That's what we're facing. Let's do something. And things can change.

Max Winga: Yeah, I think so. I do think, though, that there's a lot of people who will hear this, and they will hear you're getting UK lawmakers to say that we should do regulation.

And I think a very typical response is that if the UK regulates this, all it's gonna do is take itself out of the race and the UK's gonna lose out on all the economic benefits. And, then it just loses to the US and China. Or if the US does it, then it loses the race to China.

How do we address this? Because it does seem like at this point both the US and China are racing towards building these superintelligences. How do we actually organize a global stop on this in a way that is robust and holds up over time?

Andrea Miotti: Yeah, that's a great question. First of all, this is what you would call a collective action problem. So this is a typical shape that these problems have. Many of these wicked problems, tricky problems that don't just exist in one jurisdiction have this shape where if just one actor does something it might not be enough.

But also, actors wait for other actors to take the first step, and if nobody moves first, nothing happens. The truth is, the way to solve these problems is the same as the way to solve things like a prisoner's dilemma: you need somebody to start cooperating. You need somebody to make the first move, take the first stance in solving the problem, and have others follow suit.

Secondly, a single country can do a lot. First of all, the US moving alone on this would make an enormous difference, an absolutely enormous difference. The US moving alone on this could then very clearly signal to other countries: do the same or there will be consequences.

That's very important.

If you believe, if you understand, that superintelligence would threaten the end of the human species, this is fundamental. It is not just a species-wide and global issue. This is a very clear national security issue for your own country: the end of the human species. AI systems smarter than all of you combined are AI systems that will take over your country, and you will not be able to stop them. You do not want this as a country, in the same way that most countries spend a lot of resources to prevent foreign hostile actors as well as organized non-state groups from destabilizing the country. This is one of those, but on steroids. It's like the biggest hostile force possible.

The CEO of Anthropic kind of calls superintelligence a country of geniuses in a data center, and this really helps to visualize what that means, what superintelligence means, and why it's threatening. So the US doing something would make a massive difference. The US could use this position of strength to broker a deal with China and make it very clear to China: if you pursue this, we will not tolerate it, and we'll take all necessary actions to prevent it. On the other hand, if we can make a deal, and both countries can understand that it's in both their selfish, sovereign interests to stay in power and not let superintelligence take over, a deal can be made and then propagated across other countries. What can other countries do? Countries like the UK. The UK is a P5 country. It's still crucial, it's one of the most powerful countries in the world. It has nuclear weapons. It's part of the Security Council. It's a close US ally, and it has very strong diplomatic relations with other countries.

It's a big economy. The UK starting to make it very clear: we will not have superintelligence, we will ban superintelligence, we'll make it clear that this is the equivalent of companies building nukes in their backyard and we just don't tolerate that, would send a very strong signal to other countries, would close off the UK market to companies that are trying to build superintelligence, and would quickly kickstart cooperation with US allies and more countries doing the same. And in practice, let's look at history, alright? How did we deal with nukes? We had the same problem. We could have made the same arguments: if the US stops building up its nuclear arsenal, can't Cyprus build nukes? Let's take a random country: can't Portugal just build nukes and outcompete them?

In theory, with this kind of zero-sum mindset and complete mistrust, we would end up in an equilibrium where every single country on the planet has as many nukes as it can marshal and doesn't cooperate with anybody else.

And the truth is, in that equilibrium, we die. It's over. Imagine if you had every single country on the planet with thousands of nukes. Some of these countries will have very poor state administration. They will not be able to manage the security of these nukes. You might have accidental detonations. You might have non-state actors stealing them, detonating them. You might have extremely unstable international relations where one country thinks the other is threatening them with an invasion, and nukes the other one, kickstarting a global war. This would be a completely unstable system.

Luckily, we avoided this. How? By having individual countries like the US and the UK take the lead, by pressuring the Soviet Union and getting it to limit its arsenal, and, it's never just trust, it's trust and verify, and in many cases there can even be no trust, just verify, by making mutually beneficial deals where everybody understands this compromises everybody's security, and by having one country after the other move to close off the path that brings us to danger.

Connor Leahy: There's an important thing here which is really at the core, at the heart of this issue, which is that it actually truly isn't in people's interest to build superintelligence. This is the core thing. There's one aspect of it where people often say this is a prisoner's dilemma, but it's not.

It's not a prisoner's dilemma. In the prisoner's dilemma, you benefit from defecting, but this is actually not true here. If you defect and build superintelligence, you still die. You don't benefit from this. This is not a win scenario, even if you're the only one that defects. In a prisoner's dilemma, if you defect and everyone else cooperates, you win. That is not the case in this scenario. In this scenario, if you defect and everyone else cooperates, you still lose. So you might say, if this was true, why are people doing it? And I think there's a fundamental reason: most people just don't know. Like, literally, it's not that deep.

It's just that people don't know. They don't understand what happens if people build superintelligence, like the things I just told you about, like UBI and property rights. It's not that people can't understand this, it's just they've never heard that argument before. They've never thought of it before.

And this is why plans like the ones ControlAI is executing, like the DIP, I think are so important: truly, people just don't know. Many policymakers that we talk to have just never heard about superintelligence before. They thought it's just, "oh, we're just racing to build better protein simulations."

They just didn't know that there were these other things and what consequences that might have for them, that there might be scenarios where their monopoly on violence just disappears. They just never heard that, never considered that. And, we can argue about, "oh, but they should have blah, blah, blah."

But ultimately we live in a democracy, and part of the democratic process is that we as civilians, both experts and lay people, if we see threats and issues we are concerned about, are obligated as citizens to go to our elected representatives and inform them of the problem.

Inform them that this is a problem we care about; here are potential solutions that we think could be applied; but this is a problem, and we have to help them understand these problems. So really, a lot of the work needed to allow the brokering of such deals is not some kind of galaxy-brained thing, or people becoming saints and unselfish, whatever.

Not at all. I think it's in everyone's selfish interest to prevent the existence of superintelligence. It's very much in people's selfish interest. So the thing that we have to do is much closer to education than it is to persuasion. It's not that we're trying to sell people something that isn't true.

We're just trying to inform them of the consequences of actions that are currently being taken. Now, of course, just because an action is self-harming doesn't mean it will necessarily not get taken. So the work that has to happen is informing everyone, and trust but verify, and putting in place these mechanisms and these abilities so we can collectively not take risks that we don't want to take.

Look, if we had a global vote and 90% of people were like, screw it we don't care about living. We want the planet to blow up. We want to build superintelligence. Fair enough, right? Okay, maybe. But this is not the world we live in. We checked, like we've talked to many people in government, in the public, we've done focus groups, we've done polls, we've done lots and lots of things.

And the overwhelming, bipartisan, multinational consensus is: Fuck no! We don't want to be destroyed or replaced by a superintelligence. And a core part of what governments are supposed to be for, and what democracy is for more broadly, is to not let things like this happen, not to let these massive national and international security issues, which is what AGI is, come to pass, but to protect us from these risks and allow us to have the future that we want.

So it is in everyone's interest, and people do have this interest; we just have to now build this. We have to inform people, we have to build the coalition, and then we have to execute on it. We have to build the mechanisms so that we can get to a good future.

Max Winga: And how are you planning on working on that yourself, alongside the work with ControlAI? Do you have any plans, any ideas, cooking along those lines?

Connor Leahy: Always. There are always many things to do. So of course, I work closely with my friends at ControlAI a lot. I'm an advisor and help out wherever I can. I think the work at ControlAI is probably the most important thing anyone could be working on right now, so I definitely want to put a lot into that. At my company, Conjecture, we have also been thinking a lot and working a lot on what the technical solutions or technical approaches to actually building safer AI systems, controllable AI systems, could look like.

Because so far this conversation has focused very much on the prevent-everyone-from-dying part, which I think is in fact truly the most important part. If the house is on fire, your first goal is to get out of the house. Then you worry about your mortgage. First get outta the house.

Once you're outta the house, then you start worrying about your mortgage. And I am also concerned about the mortgage. And when I say the mortgage, I mean the mortgage humanity is taking out on the future. Humanity has been piling on risks for decades, centuries now. Humanity has been piling on risks and risks, whether it's from degradation of environments, building of more and more dangerous weapons, or degrading social trust and social institutions.

And of course, more and more powerful technology. Even if we prevent superintelligence, even if AI doesn't destroy us, I think we're still on a very bad path. Other technology that is powerful enough to destroy humanity will be built and is being built. There are other forms of technology that are not AI, whether it's SynBio, potentially nanotech, or space weapons.

So stuff like space-based nuclear weapons, or rods from god, and stuff like this, or pulse-laser weapons. These are weapons that can produce femtosecond laser pulses that can target anyone on earth for instant, lightspeed destruction. This is physically possible. You can physically put a satellite into orbit where you can click anywhere on earth and instantly kill anybody.

This is completely physically possible. Don't do it at home, not recommended.

Probably no one has such a system at the moment, but there are many people working on building such systems right now. Currently there are private companies, I think Planet Labs is one of them, where you can pay them like a hundred bucks or something and pick any place on Earth, and within 24 hours they'll send you a fresh, high-definition satellite image of that exact location with resolution down to 10 centimeters.

And these are getting better. There are new ones coming online that will soon allow you to have live video, from space, of any place on earth. This is technology that's already being built and will be deployed within the next few years. This is completely without AI. This is not even AI.

That's not even AGI. So the reason I bring those up is that AGI is really a specific instance of a much larger problem. In AI, we often talk about alignment: this question of, how do you make an AI system that does what we want?

One that is aligned with human values. But there's also a general alignment problem, which generalizes beyond AI, which is: how do you align systems in general with human values, whether that's other people, groups, governments, international orders, etc.? How do you build a world that is stable, that is prosperous, that is just, that is aligned with human values, and that doesn't blow itself up with whatever next technology we build? This is a very hard question and I think someone should be working on it. I know there are many people working on it, but I also want to throw my hat in the ring and think a lot about what it would look like.

What are the steps? What are the systems? What could this look like and what do we need to do to get there? I've been thinking about this problem for many years and I think it is a very important problem to address. I think the work that ControlAI is doing addresses the more pressing and more important problem, and it's also at the top of my mind, but I also spend a lot of my time thinking about this question and trying to iterate towards solutions: how do we build institutions, how do we build governments, how do we build cultures, how do we build worlds that are worth living in?

Max Winga: That sure sounds ambitious. I'm excited to see where it goes. I think we're about done with time. Is there anything else you guys would like to ask of the audience, before we call it a day?

Andrea Miotti: We now have a tool that helps every American citizen contact their senator super quickly to let them know about AI risk, so you can take action and you can make a difference.

Over 1,000 of these emails have now been sent. Many of our supporters have sent them. We're just missing one state: we're at 49 out of 50. Please, if you're in Hawaii, contact your senator with our tool, so we'll have 50 out of 50 and we complete the map.

Thank you. Yeah.

Connor Leahy: Just to give you a bit of a feeling for this, sometimes it can feel like contacting your representatives might not be worth it. I remember talking to staffers at a congressional office, and I asked them: if someone calls you about an issue, do you care? Does anyone actually hear about it?

And what they told me, at least in this congressional office, was: if they get one phone call, probably no, no one will really care. Maybe they'll write it down, maybe not. If they get two or three phone calls, it's definitely getting written down. It will be written down somewhere.

If they get five phone calls in a week, the congressman will personally hear about it for sure. There is no way they won't. And let's be honest: 50 states, 5 people each state, so 250 people. This is a reasonable amount. So if you're in the US or other countries, be aware that your representatives' job is to listen to you and to consider your concerns. And even if one voice might be ignored, don't underestimate it. Once you're two or three, it's not that much, and you could be one of those. So really, check out the ControlAI website, be a part of it, potentially join the Discord.

Join the mailing list. We have a lot going on. We have a lot more coming and we need more people to help.

Andrea Miotti: Yeah. And over 500 people already did this. You can be one of them. You can make a difference. Please.

Max Winga: Thank you all for listening. This has been The ControlAI Podcast with Connor Leahy and Andrea Miotti.

You can keep up with our work by subscribing to our newsletter, following us on social media, or joining our Discord. We'll be bringing you more episodes of this podcast in the future. Feel free to let us know who you'd be interested in seeing us talk to next. Bye for now.