The Thing that I Protect – (Moral) Truth in Fiction?

The Thing That I Protect

But still – what is it, then, the thing that I protect?

Friendly AI?  No – a thousand times no – a thousand times not anymore.  It’s not thinking of the AI that gives me strength to carry on even in the face of inconvenience.

So what is Yudkowsky’s highest purpose, the thing he protects?

What does bring tears to my eyes?  Imagining a future where humanity has its act together.  Imagining children who grow up never knowing our world, who don’t even understand it.  Imagining the rescue of those now in sorrow, the end of nightmares great and small.  Seeing in reality the real sorrows that happen now, so many of which are unnecessary even now.  Seeing in reality the signs of progress toward a humanity that’s at least trying to get its act together and become something more – even if the signs are mostly just symbolic: a space shuttle launch, a march that protests a war.

Yudkowsky just wants to create utopia – through Friendly AI.

To really have something to protect, you have to be able to protect it, not just value it.  My battleground for that better Future is, indeed, the fragile pattern of value.  Not to keep it in stasis, but to keep it improving under its own criteria rather than randomly losing information.  And then to project that through more powerful optimization, to materialize the valuable future.  Without surrendering a single thing that’s precious, because losing a single dimension of value could lose it all.

There’s no easy way to do this, whether by de novo AI or by editing brains.  But with a de novo AI, cleanly and correctly designed, I think it should at least be possible to get it truly right and win completely.  It seems, for all its danger, the safest and easiest and shortest way (yes, the alternatives really are that bad).  And so that is my project.

And Say No More of It

But Yudkowsky tries to discourage further discussion of FAI, at least for the next two months. Thinking about FAI is dangerous; it can make you crazy and destroy you emotionally.

… it’s as though the idea of “Friendly AI” exerts an attraction that sucks the emotional energy out of its own subgoals.

~~~~Personal Blah: Man, that is so true. Everything pales and seems fairly pointless in comparison to FAI. X-risks in general dwarf every possible thing (at least if you’re sorta utilitarian), but FAI (assuming it’s coherent and feasible, which isn’t too likely, but bear with me) is even more important than all other x-risk-reduction strategies combined. If we get FAI right, we solve all our problems. If we eliminated any other x-risk, e.g. nanotech, there would still be a bunch of other threats (biotech, nuclear war, future tech wars with whatever, value drift, dangerous uploads, obviously AI…) that could destroy everything of value. Everything you do is only relevant insofar as it contributes to the building of an FAI.

“You prevented a child from getting raped? Meh, who cares. Donating $1 to SIAI or FHI has more EU than saving 100 children from torture.” If you believe in FAI and “shut up and multiply”, then staying sane is hard. I mean really hard. “Oh, you watched a South Park episode? Congrats – you wasted approximately 1 trillion utilons, I hope you enjoyed it.”

Some part of me really hopes that this whole FAI business is bullshit. I just want to have a little fun and feel like I’m a morally good person when I help my friends or hold doors open for old ladies or don’t abuse little children or women or something. But with this whole singularity/x-risk/FAI stuff these actions don’t matter. Don’t matter at all.

(Fuck, even without this FAI business my worldview is forever ruined by Hansonian signaling and evo-psych. And I’m speaking of my interpersonal or romantic views. Let’s not even go into MWI or reductionism territory, which just adds several layers of crap on top of that.)

So all values of mine that are motivated by altruistic, idealistic, rational-in-the-sense-of-expected-utility-maximization reasons are being eaten alive by FAI/x-risks. Becoming a scientist? Pff. Going into politics and making the world a better place? Nah. Writing a good book? Tss. And so on.

So what things do I really enjoy?

Talking to friends? Sure. Although I don’t have many of those left, which of course isn’t surprising since conversations with humans almost always suck.

Watching movies? Somewhat.

Music? Somewhat.

Reading? Yeah, I really like some blogs. I somehow can’t enjoy reading books anymore.

Pretty much the only thing that really excites my passion is drugs. Taking drugs, reading about drugs, advocating decriminalization of drugs, experimenting with drugs, talking about drugs. Yeah, I know, it’s pathetic. I could invent a fairly heroic story about why I care about drugs – exploring altered states of consciousness, discovering new metaphysical truths, traveling through mind-space, “debiasing” myself, whatever – but it all boils down to the desire to escape my boring everyday vanilla consciousness, which also explains my interest in meditation and lucid dreaming. But you actually have to work for meditation, which is bad.

(Lest you get a false impression, I’m just whining a little bit here. You know, fishing for compassion and stuff. I’m actually pretty happy, I’m smiling right now. Bupropion is amazing, especially larger doses thereof. Srsly.)

Ahhhh, that was a load of gooey self-disclosure. Haha, it feels so narcissistic and wrong to actually publish this stuff. No wait, it is narcissistic. But who cares? ~~~~End of Personal Blah

(Moral) Truth in Fiction?

Some commenters complained that Yudkowsky wrote Three Worlds Collide in order to manipulate his readers and persuade them to adopt his moral views.

Yudkowsky thinks that by reading non-fiction you gain knowledge, whereas by reading fiction you gain experience, and…

 …it seems to me that to communicate experience is a valid form of moral argument as well.

…Putting someone into the shoes of a slave and letting their mirror neurons feel the suffering of a husband separated from a wife, a mother separated from a child, a man whipped for refusing to whip a fellow slave – it’s not just persuasive, it’s valid.  It fires the mirror neurons that physically implement that part of our moral frame.

But of course fiction can also be abused:

…there’s the sort of standard polemic used in e.g. Atlas Shrugged (as well as many less famous pieces of science fiction) in which Your Beliefs are put into the minds of strong empowered noble heroes, and the Opposing Beliefs are put into the mouths of evil and contemptible villains, and then the consequences of Their Way are depicted as uniformly disastrous while Your Way offers butterflies and apple pie.  That’s not even subtle, but it works on people predisposed to hear the message.

Another good argument for writing fiction:

Stories may not get us completely into Near mode, but they get us closer into Near mode than abstract argument.  If it’s words on paper, you can end up believing that you ought to do just about anything.  If you’re in the shoes of a character encountering the experience, your reactions may be harder to twist.

 
