Category: intelligence explosion

In Praise of Maximizing – With Some Caveats

Most of you are probably familiar with the two contrasting decision-making strategies “maximizing” and “satisficing”, but a short recap won’t hurt (you can skip the first two paragraphs if you get bored): Satisficing means selecting the first option that is good enough, i.e. one that meets or exceeds a certain threshold of acceptability. In contrast, maximizing

Continue Reading…
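The satisficing/maximizing contrast in the excerpt above is essentially algorithmic, so here is a minimal sketch of the two selection rules, assuming a generic scoring function. It is my own illustration, not code from the post, and the restaurant example is hypothetical.

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def satisfice(options: Iterable[T], score: Callable[[T], float], threshold: float) -> Optional[T]:
    """Return the first option that is 'good enough', i.e. meets the threshold."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # nothing met the threshold

def maximize(options: Iterable[T], score: Callable[[T], float]) -> Optional[T]:
    """Score every option and return the best one."""
    best, best_score = None, float("-inf")
    for option in options:
        s = score(option)
        if s > best_score:
            best, best_score = option, s
    return best

# Hypothetical example: choosing a restaurant by rating.
ratings = {"Luigi's": 3.9, "Sushi Bar": 4.6, "Diner": 4.1}
print(satisfice(ratings, ratings.get, threshold=3.5))  # "Luigi's" (first good-enough hit)
print(maximize(ratings, ratings.get))                  # "Sushi Bar" (best overall)
```

The practical difference is the cost profile: the satisficer can stop early, while the maximizer must examine every option before committing.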

Category: intelligence explosion

AI Foom Debate: Probability Estimates

[Epistemic note: This is an old post and so not necessarily accurate anymore.] I list some facts that need to be true in order for AI FOOM to be possible. I also add my estimates of how probable these statements are. Obviously, I pulled these numbers out of thin air – but it’s better than

Continue Reading…
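To make the methodology of the excerpt above concrete: treating FOOM as a conjunction of necessary conditions, the overall estimate is, under a crude independence assumption, just the product of the individual probabilities. The conditions and numbers in the sketch below are placeholders I made up, not the ones from the post.

```python
# Toy conjunction estimate. The conditions and probabilities below are
# made-up placeholders, NOT the actual statements or numbers from the post.
conditions = {
    "recursive self-improvement is possible": 0.5,
    "a single project gains a decisive lead": 0.3,
    "enough hardware/software overhang exists": 0.4,
}

p_foom = 1.0
for claim, p in conditions.items():
    p_foom *= p  # naive independence assumption

print(f"P(FOOM) under naive independence: {p_foom:.3f}")  # 0.060
```

Dependence between the conditions would change the product, which is one more reason such back-of-the-envelope numbers should be read loosely.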

Category: intelligence explosion

AI Foom Debate Conclusion: Post 50 – 52

50. What Core Argument? (Hanson) Hanson asks again for Yudkowsky’s core argument(s) and lists his objections. Firstly, it must be said that most AI researchers and growth economists consider Yudkowsky’s Foom scenario to be very unlikely. Which of course doesn’t mean much if you believe the world is mad. He also thinks that the small differences in brain

Continue Reading…

Category: intelligence explosion

AI Foom Debate: Post 46 – 49

46. Disjunctions, Antipredictions, Etc. (Yudkowsky) First, a good illustration of the conjunction bias by Robyn Dawes: “In their summations lawyers avoid arguing from disjunctions in favor of conjunctions.  (There are not many closing arguments that end, “Either the defendant was in severe financial straits and murdered the decedent to prevent his embezzlement from being exposed

Continue Reading…
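The probabilistic point behind Dawes’ observation in the excerpt above can be checked numerically: adding a detail (a conjunction) can only lower a probability, while a disjunction can only raise it. The probabilities below are arbitrary placeholders, assumed independent for simplicity.

```python
# Placeholder probabilities, assumed independent, purely for illustration.
p_a = 0.6  # e.g. "defendant was in severe financial straits"
p_b = 0.3  # e.g. "defendant committed the murder"

p_and = p_a * p_b           # conjunction: 0.18, never above min(p_a, p_b)
p_or  = p_a + p_b - p_and   # disjunction: 0.72, never below max(p_a, p_b)

print(p_and <= min(p_a, p_b))  # True
print(p_or >= max(p_a, p_b))   # True
```

This is why a vivid conjunctive story feels more plausible while being necessarily no more probable than its weakest conjunct.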

Category: intelligence explosion

AI Foom Debate: Post 41 – 45

41. Shared AI Wins (Hanson) Hanson thinks Yudkowsky’s theories about AI design are pipe dreams: The idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy.  You couldn’t build an effective cell or ecosystem or developed economy or most any complex system that way either –

Continue Reading…

Category: intelligence explosion

AI Foom Debate: Post 35 – 40

35. Underconstrained Abstractions (Yudkowsky) Yudkowsky replies to Hanson’s post “Test Near, Apply Far”. When possible, I try to talk in concepts that can be verified with respect to existing history. …But in my book this is just one trick in a library of methodologies for dealing with the Future, which is, in general, a hard

Continue Reading…

Category: intelligence explosion

AI Foom Debate: Post 23 – 28

23. Total Nano Domination (Yudkowsky) What happens when nanotechnology or WBE becomes possible? …the real point of this essay is to illustrate a point more important than nanotechnology: as optimizers become more self-swallowing, races between them are more unstable. Meaning that with full-fledged nanotechnology you wouldn’t need a supply chain anymore; you could produce literally

Continue Reading…

Category: intelligence explosion

AI Foom Debate: Post 20 – 22

20. …Recursion, Magic (Yudkowsky) Recursion is probably the most difficult part of this topic.  We have historical records aplenty of cascades, even if untangling the causality is difficult.  Cycles of reinvestment are the heartbeat of the modern economy.  An insight that makes a hard problem easy, is something that I hope you’ve experienced at least

Continue Reading…
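As a toy illustration of the “cycles of reinvestment” point above (my own sketch, not a model from the post), compare ordinary compounding, where the growth rate is fixed, with a recursive process whose growth rate scales with its current level:

```python
# Toy contrast: fixed-rate compounding vs. a recursive loop whose
# return grows with the current level. Parameters are arbitrary.
def compound(level: float, rate: float, steps: int) -> float:
    """Ordinary reinvestment: a fixed fractional return each step."""
    for _ in range(steps):
        level *= 1 + rate
    return level

def recursive(level: float, k: float, steps: int) -> float:
    """Recursive reinvestment: the return itself rises with the level."""
    for _ in range(steps):
        level *= 1 + k * level
    return level

print(compound(1.0, 0.1, 10))   # ~2.6, steady exponential growth
print(recursive(1.0, 0.1, 10))  # ~6.1, growth that accelerates over time
```

Whether real-world feedback loops look more like the first function or the second is, of course, exactly what the debate is about.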

Category: intelligence explosion

AI Foom Debate: Post 14 – 19

14. Brain Emulation and Hard Takeoff (Carl Shulman) Argues for the possibility of an intelligence explosion initiated by a billion-dollar em project. 15. Billion Dollar Bots (James Miller) Another scenario involving billion-dollar WBE projects. The problem with all those great Manhattan-style em projects is that you can’t influence them very much. They will probably be nation-based and will lead

Continue Reading…

Category: intelligence explosion

AI Foom Debate: Post 7 – 10

7. The First World Takeover (Yudkowsky) A really beautiful post about the origin of life from an “optimization-process perspective”. Before Robin and I move on to talking about the Future, it seems to me wise to check if we have disagreements in our view of the Past. The first 9 billion years after the Big Bang

Continue Reading…