Category: FAI

In Praise of Maximizing – With Some Caveats

Most of you are probably familiar with the two contrasting decision-making strategies “maximizing” and “satisficing”, but a short recap won’t hurt (you can skip the first two paragraphs if you get bored): Satisficing means selecting the first option that is good enough, i.e. one that meets or exceeds a certain threshold of acceptability. In contrast, maximizing
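The distinction is easy to express in code. Here is a minimal, hypothetical Python sketch; the option data, utility function, and threshold below are illustrative assumptions, not anything from the post:

```python
# Hypothetical illustration of satisficing vs. maximizing (not from the original post).

def satisfice(options, utility, threshold):
    """Return the first option whose utility meets the acceptability threshold."""
    for option in options:
        if utility(option) >= threshold:
            return option
    return None  # nothing was "good enough"

def maximize(options, utility):
    """Inspect every option and return the one with the highest utility."""
    return max(options, key=utility, default=None)

# Example: choosing a restaurant by rating (made-up data).
ratings = {"A": 3.9, "B": 4.2, "C": 4.8}
print(satisfice(ratings, ratings.get, threshold=4.0))  # 'B' - the first acceptable option
print(maximize(ratings, ratings.get))                  # 'C' - the best option overall
```

The satisficer can stop as soon as it finds an acceptable option, while the maximizer has to examine every option before it can commit to one.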

Continue Reading…


The Thing that I Protect – (Moral) Truth in Fiction?

The Thing That I Protect: But still – what is it, then, the thing that I protect? Friendly AI? No – a thousand times no – a thousand times not anymore. It’s not thinking of the AI that gives me strength to carry on even in the face of inconvenience. So what is Yudkowsky’s highest

Continue Reading…


Fun Theory Conclusion: Posts 31–34

(Mainly quotes. I didn’t want to comment all that much because I discussed all of these issues before.) 31. Higher Purpose: In today’s world, most of the highest-priority legitimate Causes are about large groups of people in extreme jeopardy. (Wide scope * high severity.) Aging threatens the old, starvation threatens the poor, existential risks threaten

Continue Reading…


Fun Theory: Posts 24–30

24. Building Weirdtopia: Yudkowsky invites readers to write comments describing possible “weirdtopian” futures, i.e. worlds that are neither utopian nor dystopian, but pretty strange. 25. Justified Expectation of Pleasant Surprises: We humans need hope. To get up every morning, we need to believe that there is at least a small chance that things will,

Continue Reading…


Fun Theory: Posts 20–23

20. Emotional Involvement: Can your emotions get involved in a video game? Yes, but not much. Whatever sympathetic echo of triumph you experience on destroying the Evil Empire in a video game, it’s probably not remotely close to the feeling of triumph you’d get from saving the world in real life. I’ve played video games

Continue Reading…


Fun Theory: Posts 14–19

14. Amputation of Destiny: Yudkowsky describes a book by Iain M. Banks about a society called the Culture, which consists of happy, intelligent, long-lived humans – low-grade transhumans, so to speak. But everything is controlled by Minds, superintelligent AIs. Yudkowsky calls this an amputation of destiny. We humans want to be the main players, we

Continue Reading…


Fun Theory: Posts 11–13

11. Nonperson Predicates: There is a subproblem of Friendly AI which is so scary that I usually don’t talk about it… …This is the problem that if you create an AI and tell it to model the world around it, it may form models of people that are people themselves. Not necessarily the same person,

Continue Reading…


Fun Theory: Posts 3–10

3. Complex Novelty: In the book “Permutation City” by Greg Egan (apparently Yudkowsky’s favorite sci-fi book), one of the main characters, Peer, modifies himself to find table-leg carving utterly fascinating and enjoyable. Yudkowsky is horrified by this vision and thinks that… …at that point, you might as well modify yourself to get pleasure from

Continue Reading…


AI Foom Debate: Probability Estimates

[Epistemic note: This is an old post, so it is not necessarily accurate anymore.] I list some claims that need to be true in order for AI FOOM to be possible and add my estimates of how probable each of these statements is. Obviously, I pulled these numbers out of thin air – but it’s better than
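For illustration only: if such component estimates were treated as independent, the probability of the whole conjunction would simply be their product. The claim names and numbers below are invented placeholders, not the estimates from the post:

```python
# Hypothetical, made-up estimates; assumes (crudely) that the claims are independent,
# so the probability of the conjunction is the product of the individual probabilities.
estimates = {
    "claim 1 holds": 0.8,
    "claim 2 holds": 0.5,
    "claim 3 holds": 0.3,
}

p_conjunction = 1.0
for claim, p in estimates.items():
    p_conjunction *= p

print(f"P(all claims hold) = {p_conjunction:.3f}")  # 0.8 * 0.5 * 0.3 = 0.120
```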

Continue Reading…


AI Foom Debate Conclusion: Posts 50–52

50. What Core Argument? (Hanson): Hanson asks again for Yudkowsky’s core argument(s) and lists his objections. First, it must be said that most AI researchers and growth economists consider Yudkowsky’s Foom scenario very unlikely – which, of course, doesn’t mean much if you believe the world is mad. He also thinks that the small differences in brain

Continue Reading…