Tim Tyler comments:
Do the fox and the rabbit disagree? It seems reasonable to say that they do if they meet: the rabbit thinks it should be eating grass, and the fox thinks the rabbit should be in the fox’s stomach. They may argue passionately about the rabbit’s fate – and even stoop to violence.
Yudkowsky mocks him:
Boy, you know, when you think about it, Nature turns out to be just full of disagreement.
Rocks, for example, fall down – so they agree with us, who also fall when pushed off a cliff – whereas hot air rises into the air, unlike humans.
I wonder why hot air disagrees with us so dramatically. I wonder what sort of moral justifications it might have for behaving as it does; and how long it will take to argue this out. So far, hot air has not been forthcoming in terms of moral justifications.
Physical systems that behave differently from you usually do not have factual or moral disagreements with you. Only a highly specialized subset of systems, when they do something different from you, should lead you to infer their explicit internal representation of moral arguments that could potentially lead you to change your mind about what you should do.
Attributing moral disagreements to rabbits or foxes is sheer anthropomorphism, in the full technical sense of the term – like supposing that lightning bolts are thrown by thunder gods, or that trees have spirits that can be insulted by human sexual practices and lead them to withhold their fruit.
Yeah. I guess Tim Tyler just meant that rabbits and foxes have different goals, but the word “disagreement” is probably more useful if we use it more narrowly.
Back in the hunter-gatherer days it wasn’t stupid to assume that rivers or trees had minds. But the discovery of the complex structure of the brain and especially the discovery of evolution made anthropomorphism stupid.
Yudkowsky presents a “cartoon proof” of Löb’s Theorem.
Löb’s Theorem shows that a mathematical system cannot assert its own soundness without becoming inconsistent. Marcello and I wanted to be able to see the truth of Löb’s Theorem at a glance, so we doodled it out in the form of a cartoon. (An inability to trust assertions made by a proof system isomorphic to yourself, may be an issue for self-modifying AIs.)
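For reference, the theorem itself can be stated compactly. Writing $\Box P$ for “$P$ is provable in the system” (say, Peano Arithmetic), Löb’s Theorem says:

```
\text{If } \mathrm{PA} \vdash \Box P \rightarrow P, \text{ then } \mathrm{PA} \vdash P.
```

So a system can only prove “if $P$ is provable, then $P$ is true” for statements $P$ it can already prove outright. Taking $P = \bot$ (falsehood) recovers Gödel’s second incompleteness theorem: if PA proved $\Box\bot \rightarrow \bot$ – its own consistency – it would prove $\bot$ and be inconsistent.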
Afterwards he offers a mathematical puzzle, which two commenters manage to solve. But here is the really interesting part:
I just tested and anecdotally confirmed a hypothesis made with very little data: I suspected that neither Douglas Knight nor Larry D’Anna, the two who pinpointed 8 as the critical step, would be among the objectors to my metaethics. (Either of them can torpedo this nascent theory by stating otherwise.)
And both of them like Yudkowsky’s metaethics just fine! I don’t even understand the cartoon guide to Löb’s Theorem, let alone the solution to that mathematical puzzle, so my disagreement with Yudkowsky’s metaethics maybe doesn’t mean much…
453. Dumb Deplaning
Yudkowsky wonders why the way we get off planes is so inefficient.