Discussion about this post

ToxSec:

“AI safety testing is a blunt tool for a notoriously tricky problem, but it’s what we have at the moment. Really, we would want to be able to look into the model weights of an AI and examine its goals, drives, and behaviors, but this just isn’t possible right now!”

Excellent point and excellent read! Self-replication in AI has been the boogeyman for a long time. It's interesting to see what will actually happen as these tools start to work on themselves.

B,:

For those of you who want additional perspectives on AI, here are some thoughts from James Rickards on this matter. What he has to say does make sense:

https://dailyreckoning.com/superintelligence-is-beyond-reach/

