513. Ends Don’t Justify Means (Among Humans) – 515. Traditional Capitalist Values

513. Ends Don’t Justify Means (Among Humans)

A nice post that argues for deontological constraints on the basis of consequentialism and evolutionary psychology:

In some cases, human beings have evolved in such fashion as to think that they are doing X for prosocial reason Y, but when human beings actually do X, other adaptations execute to promote self-benefiting consequence Z.

From this proposition, I now move on to my main point, a question considerably outside the realm of classical Bayesian decision theory:

“What if I’m running on corrupted hardware?”

In such a case as this, you might even find yourself uttering such seemingly paradoxical statements—sheer nonsense from the perspective of classical decision theory—as:

“The ends don’t justify the means.”

But if you are running on corrupted hardware, then the reflective observation that it seems like a righteous and altruistic act to seize power for yourself—this seeming may not be much evidence for the proposition that seizing power is in fact the action that will most benefit the tribe.

That doesn’t mean Yudkowsky endorses deontology:

As a human, I try to abide by the deontological prohibitions that humans have made to live in peace with one another.  But I don’t think that our deontological prohibitions are literally inherently nonconsequentially terminally right.  I endorse “the end doesn’t justify the means” as a principle to guide humans running on corrupted hardware, but I wouldn’t endorse it as a principle for a society of AIs that make well-calibrated estimates.

…”The end does not justify the means” is just consequentialist reasoning at one meta-level up.  If a human starts thinking on the object level that the end justifies the means, this has awful consequences given our untrustworthy brains; therefore a human shouldn’t think this way.  But it is all still ultimately consequentialism.  It’s just reflective consequentialism, for beings who know that their moment-by-moment decisions are made by untrusted hardware.

514. Entangled Truths, Contagious Lies

Lies don’t travel far in a causally entangled universe.

Not all lies are uncovered, not all liars are punished; we don’t live in that righteous a universe.  But not all lies are as safe as their liars believe.  How many sins would become known to a Bayesian superintelligence, I wonder, if it did a (non-destructive?) nanotechnological scan of the Earth?  At minimum, all the lies of which any evidence still exists in any brain.  Some such lies may become known sooner than that, if the neuroscientists ever succeed in building a really good lie detector via neuroimaging.  Paul Ekman (a pioneer in the study of tiny facial muscle movements) could probably read off a sizeable fraction of the world’s lies right now, given a chance.

Not all lies are uncovered, not all liars are punished.  But the Great Web is very commonly underestimated.  Just the knowledge that humans have already accumulated would take many human lifetimes to learn.  Anyone who thinks that a non-God can tell a perfect lie, risk-free, is underestimating the tangledness of the Great Web.

Is honesty the best policy?  I don’t know if I’d go that far:  Even on my ethics, it’s sometimes okay to shut up.  But compared to outright lies, either honesty or silence involves less exposure to recursively propagating risks you don’t know you’re taking.


515. Traditional Capitalist Values

Capitalism has its downsides, but unfortunately it is the best system humans have found. In this post Yudkowsky lists some of the traditional capitalist virtues.

But Vassar makes a good point:

“Eliezer: I simply KNOW numerous people for whom the phrases “Real value systems – are phrased to generate warm fuzzies in their users,” and “They will sound noble at least to the people who believe them.” are simply untrue. In such people, there is often a DIRECT conflict between “generating warm fuzzies” and “not being mocked”, as they consider the generation of warm fuzzies to be sufficient grounds for mockery. Do you want me to call them and get THEM to post on the comments thread?

Seriously Eliezer. You have NO IDEA how the main bulk of the finance industry thinks about life. I have been there, you have not, and you are mistaken about this. Hitler and Stalin did get warm fuzzies from their ideology, so that creates a lot of confusion. One CAN be a bastard AND like warm fuzzies. That doesn’t mean that every bastard does so. It’s pretty clear that the Marquis de Sade was NOT after warm fuzzies. Much more normal, neither is Genghis Khan or the typical “most popular girl in school”.”
