ControlAI
Archive
The Ultimate Risk: Recursive Self-Improvement
What happens when AI R&D starts snowballing at machine speed?
Dec 4 • Tolga Bilge and Andrea Miotti
November 2025
Dangerous AI Capabilities Advance
“My trust in reality is fading” — Gemini
Nov 27 • Tolga Bilge and Andrea Miotti
When the Hacker is AI
AI just helped run a highly sophisticated real-world cyber-espionage campaign at scale.
Nov 20 • Tolga Bilge and Andrea Miotti
85+ UK Politicians Support Binding AI Regulation
UK lawmakers acknowledge the AI extinction threat and call for binding regulation on the most powerful AI systems.
Nov 6 • Tolga Bilge and Andrea Miotti
October 2025
Supercritical Intelligence
“a true automated AI researcher by March of 2028”
Oct 30 • Tolga Bilge and Andrea Miotti
The Call to Ban Superintelligence
“We call for a prohibition on the development of superintelligence”
Oct 23 • Tolga Bilge and Andrea Miotti
Data Poisoning
"it would be reckless to ignore the potential for it to cause harm"
Oct 16 • Tolga Bilge and Andrea Miotti
The Greatest Threat
Sam Altman’s latest warning that superintelligence could cause human extinction.
Oct 9 • Tolga Bilge and Andrea Miotti
Before the Cliff: Regulating AI
The Artificial Intelligence Risk Evaluation Act
Oct 2 • Tolga Bilge and Andrea Miotti
September 2025
Red Lines
“Governments must act decisively before the window for meaningful intervention closes.”
Sep 25 • Tolga Bilge and Andrea Miotti
The Campaign
Keep Humanity in Control
Sep 18 • Tolga Bilge and Andrea Miotti
Checks and Tech: Emerging Rules on AI
“Safety is a non-negotiable priority”
Sep 11 • Tolga Bilge and Andrea Miotti