4 Comments
Katalina Hernández:

Eleanor, Tolga, Andrea: What a great initiative! I've been really worried about the lack of basic Alignment / AI Safety literacy among non-technical AI Governance people: lawyers, compliance, and data protection roles in private companies who actively work on "Responsible AI" or compliance with AI regulation.

Your posts help, a lot. I hope this is okay: on Wednesday, I will release the second part of my series "Why Should AI Governance Professionals & Tech Lawyers Care About AI Safety?". I will quote your Google Maps example and the breakdown of alignment you give in this article.

I am trying to aim my posts at people with legal or compliance backgrounds, so the overall tone will be different, but I am essentially working towards the same goal :). I am very glad I came across this post.

Tolga Bilge:

Thanks Katalina, that's very kind of you to say! Yes, of course feel free to quote us!

Looking forward to reading your post :)

Andre Kramer:

When alignment fails, all we have left is containment: attempting to confine a misaligned AI in some sort of sandbox or jail. That is probably just as difficult a problem as alignment, but it is being championed as a backup by some companies (Google), which should tell you a lot about their belief in alignment efforts.

Duane Bass:

Any document that uses AI in its preparation needs to be clearly watermarked.

Any AI machinations should also have a readily available KILL switch, the ultimate escape button.
