A top AI company just dropped a central safety pledge. It couldn’t be clearer that we can’t rely on voluntary commitments to prevent the risk of human extinction that experts warn of.
An international treaty to ban superintelligent AI research is needed ASAP. Also, who decides who gets to eat when all the jobs are gone? When do we tax the robots to establish basic incomes for all?
I love your newsletter. The only thing you're missing is the existential threat to fresh water. AI will reduce water tables everywhere, and they have no answers. Some suggest data centers in space, which would solve the problem once wafer chips come online and shrink to the size of a suitcase.
I've been investigating this aspect for several years, and I don't see how it's sustainable. Please work on that angle. People really respond when water, essential for life, disappears.
My skepticism with regard to AI development all comes down to the definition of "intelligence." Given that we are still fumbling around trying to identify, quantify, and (control?) human intelligence, it seems supremely arrogant to talk about, or try to create, machine-made superintelligence. This reminds me of the parable of the blind men trying to describe an elephant.
REGULATE AI RIGHT NOW!!!!!!
A pledge of AI safety is like the crocodile assuring the sheep it will not bite it.
Serious Danger!