Category: singularity strategies

AI Foom Debate Conclusion: Posts 50 – 52

50. What Core Argument? (Hanson) Hanson asks again for Yudkowsky’s core argument(s) and lists his objections. First, it must be said that most AI researchers and growth economists consider Yudkowsky’s Foom scenario very unlikely, which of course doesn’t mean much if you believe the world is mad. He also thinks that the small differences in brain

Continue Reading…


AI Foom Debate: Posts 23 – 28

23. Total Nano Domination (Yudkowsky) What happens when nanotechnology or WBE becomes possible? …the real point of this essay is to illustrate a point more important than nanotechnology: as optimizers become more self-swallowing, races between them are more unstable. Meaning that with full-fledged nanotechnology you wouldn’t need a supply chain anymore; you could produce literally

Continue Reading…


496. The Magnitude of His Own Folly

(An interesting discussion about the likelihood of success of FAI.) Yudkowsky finally had to admit that he could have destroyed the world by building an uFAI. But even that would be too charitable, because it implies that he was capable of building AGI, which he wasn’t. The universe doesn’t

Continue Reading…


483. A Prodigy of Refutation; 484. A Sheer Folly of Callow Youth

483. A Prodigy of Refutation In his reckless youth, Yudkowsky made the same mistakes as everyone else when thinking about the FAI problem. So Eliezer1996 is out to build superintelligence, for the good of humanity and all sentient life. At first, I think, the question of whether a superintelligence would or could be good or evil didn’t really occur to

Continue Reading…