Fun Theory Conclusion: Posts 31–34

(Mainly quotes. I didn’t want to comment all that much because I’ve discussed all of these issues before.)

31. Higher Purpose

In today’s world, most of the highest-priority legitimate Causes are about large groups of people in extreme jeopardy.  (Wide scope * high severity.)  Aging threatens the old, starvation threatens the poor, existential risks threaten humankind as a whole.

But after building a successful FAI, there wouldn’t be any people left to save.

But do altruists then have little to look forward to, in the Future?  Will we, deprived of victims, find our higher purpose shriveling away, and have to make a new life for ourselves as self-absorbed creatures?… like it or not, the presence or absence of higher purpose does have hedonic effects on human beings, configured as we are now.  And to reconfigure ourselves so that we no longer need to care about anything outside ourselves… does sound a little sad.

But this is obviously a false dichotomy. You can still care about your loved ones or the Truth, or whatever.

Right now, in this world, any halfway capable rationalist who looks outside themselves, will find their eyes immediately drawn to large groups of people in extreme jeopardy.  Wide scope * great severity = big problem.  It doesn’t mean that if one were to solve all those Big Problems, we would have nothing left to care about except ourselves. Friends?  Family?  Sure, and also more abstract ideals, like Truth or Art or Freedom.  The change that altruists may have to get used to, is the absence of any solvable problems so urgent that it doesn’t matter whether they’re solved by a person or an unperson.  That is a change and a major one—which I am not going to go into, because we don’t yet live in that world.  But it’s not so sad a change, as having nothing to care about outside yourself.  It’s not the end of purpose.

32. The Fun Theory Sequence

A summary of the Fun Theory Sequence and a useful overview with links to the various posts.

Fun Theory is the field of knowledge that deals in questions such as “How much fun is there in the universe?”, “Will we ever run out of fun?”, “Are we having fun yet?” and “Could we be having more fun?”

Fun Theory is serious business.  The prospect of endless boredom is routinely fielded by conservatives as a knockdown argument against research on lifespan extension, against cryonics, against all transhumanism, and occasionally against the entire Enlightenment ideal of a better future.

Many critics (including George Orwell) have commented on the inability of authors to imagine Utopias where anyone would actually want to live.  If no one can imagine a Future where anyone would want to live, that may drain off motivation to work on the project.  But there are some quite understandable biases that get in the way of such visualization.

Fun Theory is also the fully general reply to religious theodicy (attempts to justify why God permits evil).  Our present world has flaws even from the standpoint of such eudaimonic considerations as freedom, personal responsibility, and self-reliance.  Fun Theory tries to describe the dimensions along which a benevolently designed world can and should be optimized, and our present world is clearly not the result of such optimization – there is room for improvement.  Fun Theory also highlights the flaws of any particular religion’s perfect afterlife – you wouldn’t want to go to their Heaven.

Finally, going into the details of Fun Theory helps you see that eudaimonia is complicated – that there are many properties which contribute to a life worth living.  Which helps you appreciate just how worthless a galaxy would end up looking (with very high probability) if the galaxy was optimized by something with a utility function rolled up at random.  The narrowness of this target is the motivation to create AIs with precisely chosen goal systems (Friendly AI).

Fun Theory is built on top of the naturalistic metaethics summarized in Joy in the Merely Good; as such, its arguments ground in “On reflection, don’t you think this is what you would actually want (for yourself and others)?”

33. 31 Laws of Fun

Another, shorter summary of the Fun Theory Sequence.

34. Value is Fragile

If I had to pick a single statement that relies on more Overcoming Bias content I’ve written than any other, that statement would be:

Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.

Yeah, almost everybody who disagrees with Yudkowsky on the urgency of FAI disagrees with him on this issue. Folks somehow think that intelligence implies nice values, either because niceness is a property of intelligence itself, or because values are objective and intelligent minds therefore converge in value-space, or something like that. I certainly hope they are right. But I would love to read a convincing argument for that. The most common arguments are:

1. “Dude, we don’t know enough; we have to do more AGI research, and then we’ll probably see that some of Yudkowsky’s assumptions are wrong.” Which is probably right. But even if some of Yudkowsky’s assumptions are wrong, does this imply that FAI is bogus?

2. “Valuing something arbitrary like paperclips is obviously stupid, therefore superintelligent AIs can’t value paperclips, by definition.” That sounds to me like a bad argument.

3. “Everything is kinda weird. Maybe there is something objective and inherently supernatural or mysterious in this world. Call it god, the omega point, it doesn’t matter. Everything will be fine.” Yeah, maybe.

It’s really sad. Folks like Goertzel or this David Dalrymple guy (he’s 20, went to MIT at age 14 -> IQ fucking high) probably think that LW is straw-manning them and their arguments, but to me the standard LW arguments just make sense. Big time. And their arguments just sound like lunacy. It’s like we’re speaking different languages. I don’t know what to do about that.

…Value isn’t just complicated, it’s fragile.  There is more than one dimension of human value, where if just that one thing is lost, the Future becomes null.  A single blow and all value shatters.  Not every single blow will shatter all value – but more than one possible “single blow” will do so.

The conclusion:

…Values that you might praise as cosmopolitan or universal or fundamental or obvious common sense, are represented in your brain just as much as those values that you might dismiss as merely human.  Those values come of the long history of humanity, and the morally miraculous stupidity of evolution that created us.  (And once I finally came to that realization, I felt less ashamed of values that seemed ‘provincial’ – but that’s another matter.)

These values do not emerge in all possible minds.  They will not appear from nowhere to rebuke and revoke the utility function of an expected paperclip maximizer.

Touch too hard in the wrong dimension, and the physical representation of those values will shatter – and not come back, for there will be nothing left to want to bring it back.

And the referent of those values – a worthwhile universe – would no longer have any physical reason to come into being.

Let go of the steering wheel, and the Future crashes.