9 Comments
ToxSec:

“AI safety testing is a blunt tool for a notoriously tricky problem, but it’s what we have at the moment. Really, we would want to be able to look into the model weights of an AI and examine its goals, drives, and behaviors, but this just isn’t possible right now!”

Excellent point and excellent read! Self-replication in AI has been the boogeyman for a long time. It's interesting to see what will actually happen as these tools start to work on themselves.

B,:

Here are some thoughts from James Rickards on this matter for those of you who want additional opinions on AI. What he has to say does make sense:

https://dailyreckoning.com/superintelligence-is-beyond-reach/

Nathan Metzger:

Rickards is immediately incorrect when he says that computers can't do abductive logic and semiotics. "These skills are non-programmable." Yes, and the AI systems that handily demonstrate those abilities are not programmed! I think this fellow is stuck in 2017.

His next point is that diminishing marginal returns will kick in. This is always a good type of argument about any given paradigm of a specific technology, and is always a bad type of argument against classes of technology that haven't reached physical limits. "In fact, new applications such as GPT-5 from OpenAI have been major disappointments." It sounds like he heard from someone that GPT-5 was bad and then never looked into the state of AI progress again. GPT-5 was a big leap forward, and on the METR time horizons benchmark was actually the most above-trend of any model. People felt like it sucked because 1) expectations were unreasonable and 2) a lot of users wanted the sycophantic 4o model back. (And of course, since he wrote that piece, significantly more capable AI models have been released, with no hint of slowing down.)

"Any search process (including the most sophisticated version of AI) with the fastest processors and LLMs cannot find new information. They can only find existing information." How do humans find new information, then? AI can also do that thing! "AI has no creative capacity." It is well understood that reinforcement learning imbues AI models with creativity. One measure of creativity places LLMs from 2 years ago as more creative than humans. (https://pmc.ncbi.nlm.nih.gov/articles/PMC10858891/) "AI is not “intelligent” or creative." I can guarantee you that he has never tried to define either of those words, so those claims are not very meaningful. Any halfway-decent definition of intelligence and creativity would allow those properties to be testable. When we run those tests by current AI systems, they appear to be very intelligent and creative indeed.

Modern AI is very weird, and I can't blame people for not knowing these things about it. But it's a little embarrassing for someone to write so authoritatively about a subject that they know so little about.

Dave5017:

Personally, I agree with your source that conservation of information in search holds, and that AI therefore can't do better than knowing all of human knowledge. I'm not good enough at math to understand the paper he cited well enough to know whether it's really conclusive, so I'm just going on intuition. However, even if superintelligence can't happen, AI can still be catastrophic.

All AI so far can be jailbroken, and because LLM model weights are a black box, it's unlikely we can ever fully stop this. Even worse, we can't stop hallucination because of how the architecture works. Even if all AIs know is literally everything humans know, just faster, imagine the mess you could make with an agent. And as agents control more and more (like military equipment), this risk could mean literal bioweapons or atom bombs. Oh oops, our agentic nuclear submarine hallucinated and vaporized all of China. Oops, some hacker jailbroke our customer service agent, and because all of our systems are tied to agents, it spilled the medical records of 200 million people and created a bioweapon.

We need strict regulation and international treaties around AI, regardless of whether it is capable of ASI or not.

B,:

Mike Adams created a downloadable AI system named Enoch. He claims he has managed to remove the hallucinations.

It’s available on Brighteon.ai

Hopefully someone will check into it and post again here with their thoughts.

Jean(Muriel):

I'll only believe it when I see regulation from the beginning… like other sane nations!

Jim Blakemore:

How concerned should we be about the PRC stationing a robot force on the border with Vietnam?

Even if all of the Free World accepts this Accord, how can we monitor what Bad Actors are doing?

B,:

Glad someone with some apparent knowledge bothered to read that piece by Rickards. Hopefully more people will chime in.

Thanks

Larissa, Nedene Pedder:

Absolute madness 😠 😳 😫 It is just a matter of time before these super AIs take control over us humans, as they're programmed to think 🤔 that they're smarter than a human being!!! Are they???
