Category: existential risks

The Thing that I Protect – (Moral) Truth in Fiction?

The Thing That I Protect: But still – what is it, then, the thing that I protect? Friendly AI? No – a thousand times no – a thousand times not anymore. It’s not thinking of the AI that gives me strength to carry on even in the face of inconvenience. So what is Yudkowsky’s highest

Continue Reading…


Fun Theory Conclusion: Post 31 – 34

(Mainly quotes. I didn’t want to comment all that much because I discussed all of these issues before.) 31. Higher Purpose: In today’s world, most of the highest-priority legitimate Causes are about large groups of people in extreme jeopardy. (Wide scope * high severity.) Aging threatens the old, starvation threatens the poor, existential risks threaten

Continue Reading…


AI Foom Debate: Probability Estimates

[Epistemic note: This is an old post and so not necessarily accurate anymore.] I list some facts that need to be true in order for AI FOOM to be possible. I also add my estimates of how probable these statements are. Obviously, I pulled these numbers out of thin air – but it’s better than

Continue Reading…
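The excerpt above describes listing conditions that must all hold for AI FOOM and attaching rough probabilities to each. A minimal sketch of the underlying arithmetic: the joint probability of a conjunction of (assumed independent) conditions is the product of the individual estimates. The claims and numbers below are purely hypothetical placeholders, not the post’s actual list or figures.

```python
# Toy illustration only: if AI FOOM requires several conditions to hold at once,
# and the conditions are treated as independent, the joint probability is the
# product of the individual estimates. All claims and numbers here are
# hypothetical placeholders, not the post's actual estimates.
estimates = {
    "intelligence can be greatly improved by better algorithms": 0.5,
    "a single project can gain a decisive lead": 0.4,
    "recursive self-improvement yields rapidly compounding returns": 0.3,
}

p_joint = 1.0
for claim, p in estimates.items():
    print(f"P({claim}) = {p}")
    p_joint *= p

print(f"P(all conditions hold, assuming independence) = {p_joint:.3f}")  # 0.060
```

The point of the exercise is simply that a conjunction of even moderately likely conditions can end up with a fairly low joint probability.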


AI Foom Debate: Post 41 – 45

41. Shared AI Wins (Hanson): Hanson thinks Yudkowsky’s theories about AI design are pipe dreams: The idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy. You couldn’t build an effective cell or ecosystem or developed economy or most any complex system that way either –

Continue Reading…


AI Foom Debate: Post 35 – 40

35. Underconstrained Abstractions (Yudkowsky): Yudkowsky replies to Hanson’s post “Test Near, Apply Far”. When possible, I try to talk in concepts that can be verified with respect to existing history. …But in my book this is just one trick in a library of methodologies for dealing with the Future, which is, in general, a hard

Continue Reading…


AI Foom Debate: Post 32 – 34

32. Hard Takeoff (Yudkowsky): Natural selection produced roughly linear improvements in human brains. Unmodified human brains produced roughly exponential improvements in knowledge on the object level (bridges, planes, cars, etc.). So it’s unlikely that the speed of progress will stay roughly the same once recursively self-improving superintelligence arrives. …to try and compress it down

Continue Reading…
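The excerpt above contrasts three regimes: roughly linear improvement from a fixed optimization process (natural selection), roughly exponential growth in knowledge from fixed optimizers (human brains), and much faster growth once the optimizer can improve itself. A toy numerical sketch of that contrast, with purely illustrative parameters:

```python
# Toy comparison of growth regimes; all rates and step counts are illustrative
# assumptions, not quantities from the debate.
steps = 10
linear = exponential = recursive = 1.0

print(f"{'t':>2}  {'linear':>8}  {'exponential':>12}  {'recursive':>12}")
for t in range(1, steps + 1):
    linear += 0.5                          # fixed optimizer, constant additive gains
    exponential *= 1.5                     # fixed optimizer, constant fractional gains
    recursive *= 1.0 + 0.5 * recursive     # gains scale with current capability
    print(f"{t:>2}  {linear:8.1f}  {exponential:12.1f}  {recursive:12.3g}")
```

Under these assumptions the third column pulls away from the others within a handful of steps, which is the qualitative shape of the hard-takeoff claim.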


AI Foom Debate: Post 29 – 31

29. I Heart CYC (Hanson): Hanson endorses CYC, an AI project headed by Doug Lenat, the inventor of EURISKO. The lesson Lenat took from EURISKO is that architecture is overrated; AIs learn slowly now mainly because they know so little. So we need to explicitly code knowledge by hand until we have enough to build systems

Continue Reading…


AI Foom Debate: Post 23 – 28

23. Total Nano Domination (Yudkowsky): What happens when nanotechnology or WBE becomes possible? …the real point of this essay is to illustrate a point more important than nanotechnology: as optimizers become more self-swallowing, races between them are more unstable. Meaning that with full-fledged nanotechnology you wouldn’t need a supply chain anymore; you could produce literally

Continue Reading…


AI Foom Debate: Post 14 – 19

14. Brain Emulation and Hard Takeoff (Carl Shulman): Argues for the possibility of an intelligence explosion initiated by a billion-dollar em project. 15. Billion Dollar Bots (James Miller): Another scenario of billion-dollar WBE projects. The problem with all those great Manhattan-style em projects is that you can’t influence them very much. They will probably be nation-based and will lead

Continue Reading…


AI Foom Debate: Post 1 – 6

This is one of the most important Sequences for me, so I’ll depart from the usual format. Prologue – 1. Fund UberTool? (Robin Hanson): Hanson offers a nice analogy of a recursively self-improving AI in economic terms, but he doesn’t really argue that something like this is unlikely or impossible. Imagine you are a venture capitalist reviewing

Continue Reading…