If you want to hold on to one false cherished belief – be it socialism or the soul – you’re epistemically screwed because you now have to protect other false beliefs that protect this false belief.
Having false beliefs isn’t a good thing, but it doesn’t have to be permanently crippling—if, when you discover your mistake, you get over it. The dangerous thing is to have a false belief that you believe should be protected as a belief—a belief-in-belief, whether or not accompanied by actual belief.
A single Lie That Must Be Protected can block someone’s progress into advanced rationality. No, it’s not harmless fun.
There are groups out there that invented whole epistemologies in order to protect their dogmas.
You have to deny that beliefs require evidence, and then you have to deny that maps should reflect territories, and then you have to deny that truth is a good thing…
Thus comes into being the Dark Side.
I worry that people aren’t aware of it, or aren’t sufficiently wary—that as we wander through our human world, we can expect to encounter systematically bad epistemology.
The “how to think” memes floating around, the cached thoughts of Deep Wisdom—some of it will be good advice devised by rationalists. But other notions were invented to protect a lie or self-deception: spawned from the Dark Side.
“Everyone has a right to their own opinion.” When you think about it, where was that proverb generated? Is it something that someone would say in the course of protecting a truth, or in the course of protecting from the truth? But people don’t perk up and say, “Aha! I sense the presence of the Dark Side!” As far as I can tell, it’s not widely realized that the Dark Side is out there.
Deontology can protect you from unknown unknowns.
Every now and then, another one comes before me with the brilliant idea: “Let’s lie!”
Lie about what?—oh, various things. The expected time to Singularity, say. Lie and say it’s definitely going to be earlier, because that will get more public attention. Sometimes they say “be optimistic”, sometimes they just say “lie”.
But at any rate, lie. Lie because it’s more convenient than trying to explain the truth. Lie, because someone else might lie, and so we have to make sure that we lie first. Lie to grab the tempting benefits, hanging just within reach—
Eh? Ethics? Well, now that you mention it, lying is at least a little bad, all else being equal. But with so much at stake, we should just ignore that and lie. You’ve got to follow the expected utility, right? The loss of a lie is much less than the benefit to be gained, right?
Thus do they argue. Except—what’s the flaw in the argument? Wouldn’t it be irrational not to lie, if lying has the greatest expected utility?
When I look back upon my history—well, I screwed up in a lot of ways. But it could have been much worse, if I had reasoned like those who offer such advice, and lied.
Once upon a time, I truly and honestly believed that either a superintelligence would do what was right, or else there was no right thing to do; and I said so. I was uncertain of the nature of morality, and I said that too. I didn’t know if the Singularity would be in five years or fifty, and this also I admitted. My project plans were not guaranteed to deliver results, and I did not promise to deliver them. When I finally said “Oops”, and realized that I needed to go off and do more fundamental research instead of rushing to write code immediately—
—well, I can imagine the mess I would have had on my hands, if I had told the people who trusted me: that the Singularity was surely coming in ten years; that my theory was sure to deliver results; that I had no lingering confusions; and that any superintelligence would surely give them their own private island and a harem of catpersons of the appropriate gender. How exactly would I then have explained why I was now going to step back and look for math-inventors instead of superprogrammers, or why the code now had to be theorem-proved?
But why not become an expert liar, if that’s what maximizes expected utility? Why take the constrained path of truth, when things so much more important are at stake?
Because, when I look over my history, I find that my ethics have, above all, protected me from myself. They weren’t inconveniences. They were safety rails on cliffs I didn’t see.
I made fundamental mistakes, and my ethics didn’t halt that, but they played a critical role in my recovery. When I was stopped by unknown unknowns that I just wasn’t expecting, it was my ethical constraints, and not any conscious planning, that had put me in a recoverable position.
518. Ethical Inhibitions
Provides an evolutionary explanation for our sense of ethical inhibition. Yeah, deontological norms are useful, especially if you run on corrupted hardware and underestimate black swan events.
519. Ethical Injunctions
Similar to the previous post.
520. Prices or Bindings?
The German philosopher Fichte once said, “I would not break my word even to save humanity.”
Raymond Smullyan, in whose book I read this quote, seemed to laugh and not take Fichte seriously.
Abraham Heschel said of Fichte, “His salvation and righteousness were apparently so much more important to him than the fate of all men that he would have destroyed mankind to save himself.”
I don’t think they get it.
Uh oh. You’re almost always on the safe side if you believe the opposite of what Fichte or any of those crazy German Idealists said. Granted, there may be some “newsomian” reasons why German Idealism is uber-cool—ideas that at first glance (indeed, at the first thousand glances) look completely ridiculous, but turn out to be quite brilliant once you employ sophisticated theories like acausal trade, universal dovetailing, or any of the other epistemological magic tricks Will Newsome and other folks discovered.
But my char doesn’t have enough Intelligence to use newsomian epistemology, so fuck Fichte. I agree with Yvain:
I am glad Stanislav Petrov, contemplating his military oath to always obey his superiors and the appropriate guidelines, never read this post.