We're excited to finally be able to share a project we've been working on.
"Artificial Guarantees" is a collection of inconsistent statements, baseline-shifting tactics, and promises broken by major AI companies and their leaders showing that what they say doesn't always match what they do.
Do you have an example that we missed? Join our Discord to add to the collection.
Below is a selection of some of the most interesting cases we found. You can get our full list at controlai.com/artificial-guarantees.
OpenAI
What They Say: Altman warns that AI development going badly could mean “lights out for all of us”
What They Do: Now claims that his worst fear is the industry and employment impacts of AI
January 2023 - StrictlyVC
Altman states that "The bad case -- and I think this is important to say -- is like lights out for all of us".
May 2023 - Senate Hearing Video
Altman, when asked about his fears regarding AI, discusses his concerns about the industry and employment impacts of AI.
Fellow witness Gary Marcus calls out Altman on his evasion of the question, stating “Sam’s worst fear I do not think is employment and he never told us what his worst fear actually is”.
When given a chance to respond, Altman states that “my worst fear is that we, the field, the technology, the industry cause significant harm to the world”.
What They Say: OpenAI is a non-profit so they can "stay accountable to humanity as a whole"
What They Do: OpenAI is now aiming to become a for-profit corporation
March 2017 - Mosaic Ventures
At the event, Altman explained why OpenAI is a nonprofit: they “don’t want to ever be making decisions that benefit shareholders. The only people we want to be accountable to is humanity as a whole...”
13 June 2024 - Financial Times
OpenAI’s VP of government affairs, Anna Makanju, told the Financial Times “we don’t have a goal of maximising profit; we have a goal of making sure that AGI benefits all of humanity”.
14 June 2024 - The Information
Altman tells shareholders that OpenAI is considering becoming a for-profit corporation.
5 August 2024 - The Guardian
Elon Musk files a lawsuit against OpenAI, “repeating allegations in his previous suit that his former co-founders in OpenAI betrayed him by turning the company from a non-profit into a largely for-profit enterprise.”
27 December 2024 - The Verge
OpenAI announces its plan to transform into a for-profit company, restructuring as a Public Benefit Corporation with the for-profit arm in control of the company.
What They Say: OpenAI understands the need for AI regulations and cares about AI safety
What They Do: Lobby the EU to reduce AI regulations
16 May 2023 - New York Times
OpenAI's CEO, Sam Altman, appears before Congress to discuss AI risks and the need for regulation.
This is consistent with several public statements Altman has made in the past calling for AI regulation.
20 June 2023 - Time
OpenAI privately lobbies to water down regulations in the EU AI Act
Anthropic
What They Say: AI systems have the potential to cause large-scale destruction within 1-3 years
What They Do: Lobby against the enforcement of AI safety standards in California
25 July 2023 - Washington Post
Anthropic's CEO, Dario Amodei, appears before a Senate committee to discuss risks from AI.
Amodei states "Anthropic’s projections suggest that AI systems may become much better at science and engineering, to the point where they could be misused to cause large-scale destruction, particularly in the domain of biology. This rapid growth in science and engineering skills could also change the balance of power between nations" [additional source].
26 June 2024 - Norges Bank Investment Management
Amodei states there is a "good chance" that AGI could be built within the next 1-3 years, and that catastrophic risk from AI could also be 1-3 years away.
23 July 2024 - Axios
Anthropic refuses to support SB 1047, proposed legislation in California that would enforce AI safety standards, unless significant amendments are made.
What They Say: The mitigation of extinction risks as a result of AI should be a global priority
What They Do: Lobby for AI companies to only be fined after a catastrophic event occurs
May 2023 - Center for AI Safety
Anthropic's CEO, Dario Amodei, signs a statement that mitigating AI risk should be a global priority
The statement reads "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
23 July 2024 - Axios
Anthropic calls for significant amendments to California's proposed SB 1047 under which AI companies would be held responsible only after a catastrophic event occurs.
Artificial Guarantees is an ongoing project. Due to the sheer volume of these inconsistencies, and the speed at which things are moving, we haven’t been able to build an exhaustive list. So we want your help! If you have a tip for us, join our Discord; we’d love to hear from you!
See you next week!
Thank you for reading our Substack. If you liked this post, we’d encourage you to restack it, or forward the email to others you think might be interested.
You can also find us in many other places on social media.