Back in the '70s, there was a book published called "Raw Sewage". It was made up of one-panel cartoons like you'd see on the op-ed or political pages of a newspaper. Its main thrust was ecology, but it also covered other then-relevant issues, most of which are still relevant. One cartoon showed a young man in cap and gown, clutching his just-granted diploma, and a humanoid robot standing in front of a machine. The robot has turned its head to the young man, and the caption has the robot saying, "Oh, you haven't heard? The Industrial Revolution is over--we won." It's like a foreshadowing of AI, especially the superintelligent kind we're trying to control. I once read an anecdote about how robots took over building cars in Japan. The autoworkers' union got ticked off and complained that with fewer human workers, union dues weren't being paid. So the automakers paid union dues for every robot. But letting AI take over could mean there will be no jobs for humans; with no jobs or income, everyone would become homeless, and that creates more problems. I think AI should have a built-in kill switch so it can be turned off if it gets too close to replacing humans.
It's weird to see an article called "We Need a Global Movement to Prohibit Superintelligent AI" that doesn't name any fledgling organizations trying to do that though. What's up there?
Currently, AIs are built to respond only when we, the customers, give them a request, so I'd think having a separate AI improving the model and making requests of it would be sufficient control over the usual AI configuration of only responding to requests. The automated tester AI would simply run a scientific "ok, let's try this" create-and-test routine; it would not improve itself, and it would be limited to a series of tests it performs on the subject AI, controlled by the human engineers.
Is this obvious to many, or only the rarer few who were electronic engineers or scientists?
Openly committing to strongly pursuing recursive self-improvement was really a sad landmark moment.
Funny how they have created an unsustainable economic bubble destined to burst. The politicians in favor of it should be put in jail.
Agree, the sooner the better.
Getting good value from this newsletter. Thanks!
A threat from superintelligent AI systems? I find this hard to believe. AI is a tool, not a wild animal or a virus.