Can we trust our emotions? Obviously not; we run on corrupted hardware, and after reading some evolutionary psychology you may think ‘evil hardware’ is a more apt description. But who says this hardware is corrupted or evil? Ultimately our brain, i.e. our evil hardware itself.
We can only rebel against our nature by using our brain. Which is kinda confusing. If even our brain says that many human traits, emotions and behaviors are hypocritical and evil, does that mean that humans are even more satanic than we think? Or does it work the other way round? Do these questions even make sense?
Here are some relevant and pretty disturbing (to me at least) thoughts to which I’ll come back later:
– Our brain is made up of many different modules, only some of which are conscious. Should we give greater priority to conscious modules? Intuitively you may think so, but e.g. Kurzban says that our consciousness is more like a PR agency that tries to convince others that we’re super-awesome (to put it simply). Other modules keep potentially damaging information hidden from this “press secretary” so that it can be more convincing, because it truly believes that we are angels. Many unconscious modules actually know more than “we” do, so trusting only our hypocritical, conscious modules looks like a really bad idea.
– It follows that we don’t have one unitary self (nothing new here, of course) and our preferences fluctuate like crazy. Sometimes one part of your brain gains the upper hand and determines your behavior and preferences, sometimes another, depending on the environment and preceding experiences. Just taking a few chemicals, hell, just watching a movie or reading an argument, can change your utility function, by a lot. This is insane.
What shall I do? Is it right to follow the MDMA-utility function, the non-MDMA-utility function, the 2-minutes-after-watching-Crouching-Tiger-Hidden-Dragon-utility function, or the 2-minutes-before-watching-Crouching-Tiger-Hidden-Dragon-utility function?
– But it’s not only that our Time1-preferences and Time2-preferences are inconsistent. Our own brain modules have conflicting preferences at the same time, which is even more problematic, since you can’t say that “you’ve learnt something and changed your mind”. (That sounds reasonable, but what is the difference between changing your utility function through “learning” and changing your utility function through evolutionary or other processes? E.g. Yudkowsky endorses the first but not the second; Hanson thinks both are okay.)
So if even a single human doesn’t have a coherent utility function, how can CEV (Coherent Extrapolated Volition) possibly work?
This whole morality-business is fucked up.
Great comment by Poke:
I remember first having this revelation as something along the lines of: “You know when you’re in love or overcome by anger, and you do stupid things, and afterward you wonder what the hell you were thinking? Well, your ‘normal’ emotional states are just like that, except you never get that moment of reflection to wonder what the hell you were thinking.”