Probability is subjective because it can only exist in your mind (or other minds). But you can’t change your probability estimates as you please. If you don’t know anything about the outcome of a coin flip, it doesn’t make sense to say that P(Head) = 0.99. You have to say P(Head) = 0.5, and this seems kinda objective.
-> Probability is subjectively objective. (Of course there’s more to it, we’ll come back later.)
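One way to cash out that “kinda objective” feeling is the maximum-entropy principle: among all the probability assignments you could make, total ignorance forces the one that claims the least information. A minimal sketch (the `entropy` helper and the candidate grid are my own illustration, not anything from the post):

```python
import math

def entropy(p):
    """Shannon entropy in bits of a coin with P(Head) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Scan candidate values of P(Head); ignorance should favor the
# assignment that commits to the least information.
candidates = [i / 100 for i in range(1, 100)]
best = max(candidates, key=entropy)

print(best)           # 0.5 — the maximum-entropy assignment
print(entropy(0.5))   # 1.0 bit: the most uncertainty a coin can carry
print(entropy(0.99))  # ~0.08 bits: claims far more knowledge than we have
```

So P(Head) = 0.99 isn’t forbidden by the math of probability itself; it’s just an assignment that pretends to information you don’t possess, which is why 0.5 feels like the objectively correct subjective answer.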
On to the comment-section. Even Yvain thinks Yudkowsky’s meta-ethics are problematic (I wonder who actually agrees with Yudkowsky on CEV and meta-ethics? Probably Lukeprog. I would love to know Carl Shulman’s views…):
“I was one of the people who suggested the term h-right before. I’m not great with mathematical logic, and I followed the proof only with difficulty, but I think I understand it and I think my objections remain. I think Eliezer has a brilliant theory of morality and that it accords with all my personal beliefs, but I still don’t understand where it stops being relativist.
When Eliezer says X is “right”, he means X satisfies a certain complex calculation. That complex calculation is chosen out of all the possible complex calculations in complex-calculation space because it’s the one that matches what humans believe.
This does, technically, create a theory of morality that doesn’t explicitly reference humans. Just like intelligent design theory doesn’t explicitly reference God or Christianity. But most people believe that intelligent design should be judged as a Christian theory, because being a Christian is the only reason anyone would ever select it out of belief-space. Likewise, Eliezer’s system of morality should be judged as a human morality, because being a human is the only reason anyone would ever select it out of belief-space.
That’s why I think Eliezer’s system is relative. I admit it’s not directly relative, in that Eliezer isn’t directly picking “Don’t murder” out of belief-space every time he wonders about murder, based only on human opinion. But if I understand correctly, he’s referring the question to another layer, and then basing that layer on human opinion.”
Yudkowsky recommends Lawrence Watt-Evans’ books. I haven’t read any of them; actually, I haven’t read any fiction in the last two years, with the notable exception of HP:MoR. Okay, and I’ve read some of Yvain’s stories. Yeah, and some Lovecraft.
Only a news post.
Is there really something like moral progress? Sure, slavery and child labor seem pretty bad, and I’m glad they are gone (in most developed countries). But it’s not surprising that I feel that way, because I was educated, some would say indoctrinated, by a society that holds these views. What looks like progress may be a random walk. And there are some amazingly smart Lesswrongers like Konkvistador or VladimirM (with whom I agree on almost every other topic) who think exactly that: there is no moral progress; a peasant from the 15th century would be appalled by our society even after engaging in extensive discussions. He simply had different terminal values.
Well, here are some of the reasons why I think something like moral progress really has happened:
– Ancient folks were really dumb and ignorant, even more ignorant than people today. It seems to me that (at least for humans) there is a correlation between knowledge/intelligence and reasonable moral values.
– My values differ from those of most humans, so it seems unlikely that I’ve been (successfully) indoctrinated by them. In fact, most current societies are rather disgusting. But less disgusting than previous societies.
Naturally, I may be rationalizing, ’cuz without moral progress – well, let’s not even go there.