Eleanor, Tolga, Andrea: What a great initiative! I've been really worried about the lack of basic Alignment / AI Safety literacy among non-technical AI Governance people: lawyers, compliance, or data protection roles in private companies who actively work on "Responsible AI" or compliance with AI regulation.
Your posts help a lot. I hope this is okay: on Wednesday I will release the second part of my series, "Why Should AI Governance Professionals & Tech Lawyers Care About AI Safety?". I will quote your Google Maps example and the breakdown of alignment you give in this article.
I am aiming my posts at people with legal or compliance backgrounds, so the overall tone will be different, but I am essentially working towards the same goal :). I am very glad I came across this post.
Thanks Katalina, that's very kind of you to say! Yes, of course feel free to quote us!
Looking forward to reading your post :)
When alignment fails, all we have left is containment: attempting to confine a misaligned AI in some sort of sandbox or jail. That is probably just as difficult a problem as alignment, but it is being championed as a backup by some companies (Google), which should tell you a lot about their confidence in alignment efforts.
Any document that uses AI in its preparation needs to be clearly watermarked.
Any AI system should also have a readily available KILL switch, the ultimate escape button.