432. The Meaning of Right

I guess this is Yudkowsky’s central post on meta-ethics/morality, but I can’t say it’s very convincing. Anyway, as with the puzzle of free will, we should first try to dissolve the question of metaethics:

The key—as it has always been, in my experience so far—is to understand how a certain cognitive algorithm feels from inside.  Standard procedure for righting a wrong question:  If you don’t know what right-ness is, then take a step beneath and ask how your brain labels things “right”.

It is not the same question—it has no moral aspects to it, being strictly a matter of fact and cognitive science.

We have to distinguish between two different levels:

In order to investigate how the brain labels things “right”, we are going to start out by talking about what is right.  That is, we’ll start out wearing our morality-goggles, in which we consider morality-as-morality and talk about moral questions directly.  As opposed to wearing our reduction-goggles, in which we talk about cognitive algorithms and mere physics.  Rigorously distinguishing between these two views is the first step toward mating them together.

Let’s begin with the level of morality-as-morality:

… Rightness is contagious backward in time.

So we keep asking the next question.  Why should we press the button?  To pull the string.  Why should we pull the string?  To flip the switch.  Why should we flip the switch?  To pull the child from the railroad tracks.  Why pull the child from the railroad tracks?  So that they live.  Why should the child live?

Okay, that sounds intuitive. But we have to find some terminal values in order to avoid an infinite regress.

If should-ness only comes from should-ness—from a should-consequence, or from a should-universal—then how does anything end up should in the first place?

Exactly, how can we justify our terminal values?

But all our philosophical arguments ultimately seem to ground in statements that no one has bothered to justify—except perhaps to plead that they are self-evident, or that any reasonable mind must surely agree, or that they are a priori truths, or some such.  Perhaps, then, all our moral beliefs are as erroneous as that old bit about slavery?  Perhaps we have entirely misperceived the flowing streams of should?

Why do we have these particular values like “happiness is good” or “torture is bad” (ceteris paribus)? Yudkowsky thinks – AFAICT – that it’s “just” due to our evolutionary history.

So I once believed was plausible; and one of the arguments I wish I could go back and say to myself, is, “If you know nothing at all about should-ness, then how do you know that the procedure, ‘Do whatever Emperor Ming says’ is not the entirety of should-ness?  Or even worse, perhaps, the procedure, ‘Do whatever maximizes inclusive genetic fitness’ or ‘Do whatever makes you personally happy’.”  The point here would have been to make my past self see that in rejecting these rules, he was asserting a kind of knowledge—that to say, “This is not morality,” he must reveal that, despite himself, he knows something about morality or meta-morality.  Otherwise, the procedure “Do whatever Emperor Ming says” would seem just as plausible, as a guiding principle, as his current path of “Rejecting things that seem unjustified.”  Unjustified—according to what criterion of justification?  Why trust the principle that says that moral statements need to be justified, if you know nothing at all about morality?

If you’re guessing that I’m trying to inveigle you into letting me say:  “Well, there are just some things that are baked into the question, when you start asking questions about morality, rather than wakalixes or toaster ovens”, then you would be right.  I’ll be making use of that later, and, yes, will address “But why should we ask that question?”

Ok, I agree, there are some moral statements that are just a priori (man, I hate this word) true. But how can we test whether our moral intuitions are right? There is no way to falsify our intuitions. If you believe that 2 + 2 = 5 you’ll run into problems, sooner or later. But there are possible minds that believe utterly insane crap like “cannabis, LSD, modafinil and amphetamine should be illegal but alcohol and cigarettes should be legal ‘cuz, you know, they kill more people, by at least 5 orders of magnitude. Oh, and antinatalism is of course repugnant”! And those minds probably do just fine!

And let’s assume for a second that Tegmark’s mathematical universe hypothesis is true; then there actually exists an alien civilization in which at least 95% of the citizens have exactly these views! And members of the society who have conflicting opinions could be, I don’t know, thrown into jail or something like that! How could these despicable creatures find out that their moral views are completely demented? Maybe never!

Okay, now: morality-goggles off, reduction-goggles on.

Those who remember Possibility and Could-ness, or those familiar with simple search techniques in AI, will realize that the “should” label is behaving like the inverse of the “could” label, which we previously analyzed in terms of “reachability”.  Reachability spreads forward in time: if I could reach the state with the button pressed, I could reach the state with the string pulled; if I could reach the state with the string pulled, I could reach the state with the switch flipped.

Wait a second. Doesn’t reachability also spread backwards in time? If I could reach the state with the switch flipped, I could also reach the state with the string pulled, and so on. Whatever, it’s probably not that important.
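Anyway, to make the forward/backward picture concrete, here’s a toy sketch (mine, not from the post): “could” is just forward reachability from the current state, and the “should” label is the inverse search, spreading backward from the goal. All the state names here are made up for illustration.

```python
from collections import deque

# Toy causal graph from the post's example: each state leads to the next.
transitions = {
    "start": ["button_pressed"],
    "button_pressed": ["string_pulled"],
    "string_pulled": ["switch_flipped"],
    "switch_flipped": ["child_saved"],
    "child_saved": [],
}

def reachable_from(state):
    """'Could': everything reachable by spreading forward from `state`."""
    seen, queue = {state}, deque([state])
    while queue:
        for nxt in transitions[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def should_labels(goal):
    """'Should': spread backward from the goal, labeling every state
    from which the goal is forward-reachable."""
    return {s for s in transitions if goal in reachable_from(s)}

print(reachable_from("start"))        # "could": every downstream state
print(should_labels("child_saved"))   # "should": every upstream state
```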

It seems that Martha and Fred have an obligation to take care of their child, and Jane and Bob are obligated to take care of their child, and Susan and Wilson have a duty to care for their child.  Could it be that parents in general must take care of their children?

By representing right-ness as an attribute of objects, you can recruit a whole previously evolved system that reasons about the attributes of objects.  You can save quite a lot of planning time, if you decide (based on experience) that in general it is a good idea to take a waterskin on hunts, from which it follows that it must be a good idea to take a waterskin on hunt #342.

Nothing surprising here, I guess.

Let’s say that your mind, faced with any countable set of objects, automatically and perceptually tagged them with their remainder modulo 5.  If you saw a group of 17 objects, for example, they would look remainder-2-ish.  Though, if you didn’t have any notion of what your neurons were doing, and perhaps no notion of modulo arithmetic, you would only see that the group of 17 objects had the same remainder-ness as a group of 2 objects.  You might not even know how to count—your brain doing the whole thing automatically, subconsciously and neurally—in which case you would just have five different words for the remainder-ness attributes that we would call 0, 1, 2, 3, and 4.

If you look out upon the world you see, and guess that remainder-ness is a separate and additional attribute of things—like the attribute of having an electric charge—or like a tiny little XML tag hanging off of things—then you will be wrong.  But this does not mean it is nonsense to talk about remainder-ness, or that you must automatically commit the Mind Projection Fallacy in doing so.  So long as you’ve got a well-defined way to compute a property, it can have a well-defined output and hence an empirical truth condition.

So as long as there is a stable computation involved, or a stable process—even if you can’t consciously verbalize the specification—it often makes a great deal of sense to talk about properties that are not fundamental.  And reason about them, and remember where they have been found in the past, and guess where they will be found next.
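To spell out the “well-defined way to compute a property” point, a minimal sketch (my illustration, not from the post): remainder-ness is nothing but a stable computation over groups of objects, yet claims about it are straightforwardly true or false.

```python
def remainder_ness(objects):
    """A 'stable computation' of a non-fundamental property:
    tag any countable group with its size modulo 5."""
    return len(objects) % 5

group_of_17 = list(range(17))
group_of_2 = ["apple", "orange"]

# The property isn't a little XML tag hanging off the objects,
# but it still has an empirical truth condition:
print(remainder_ness(group_of_17) == remainder_ness(group_of_2))  # True: both "remainder-2-ish"
```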

Yeah, but hold on there, cowboy. In the case of the property “red” we know that, fundamentally, this property isn’t important at all. If we could enhance our visual systems and somehow lose our ability to see “red” but would thereby gain the ability to see every other wavelength or whatever, we would be happy to make this deal. In short: we don’t give a flying fuck about the property “red” as such, in stark contrast to the property “right”.

And even more so, we know what “red” actually means. It’s just electromagnetic waves of a particular length, and they happen to be the same in the whole universe, i.e. other alien species would know what we mean by “red”. They would probably either have other names for it, or our property “red” would be a subset (if their visual senses were less sensitive) or a superset (if their visual senses were more sensitive) of some property their minds experience, because (just a reasonable guess) all intelligent lifeforms have to have access to a large enough chunk of reality, and electromagnetic waves of 650–750 nm are part of that.

Okay, now we’re ready to bridge the levels.

As you must surely have guessed by now, this should-ness stuff is how the human decision algorithm feels from inside.  It is not an extra, physical, ontologically fundamental attribute hanging off of events like a tiny little XML tag.

But it is a moral question what we should do about that—how we should react to it.

To adopt an attitude of complete nihilism, because we wanted those tiny little XML tags, and they’re not physically there, strikes me as the wrong move.

And it seems like an awful shame to—after so many millions and hundreds of millions of years of evolution—after the moral miracle of so much cutthroat genetic competition producing intelligent minds that love, and hope, and appreciate beauty, and create beauty—after coming so far, to throw away the Gift of morality, just because our brain happened to represent morality in such fashion as to potentially mislead us when we reflect on the nature of morality.

Um, yeah, what? We’re somehow talking at cross purposes. The problem is that the morality of humans is arbitrary and different from that of a lot of alien species. I think I could get over that, but the even bigger problem is that my morals and those of a conservative fundamentalist are pretty different (or, for that matter, my morals and those of 99.99% of all humanity are pretty different), and I don’t think that procedures like CEV can overcome those differences, at least not entirely.

But an even bigger problem is that my own utility function, my own moral intuitions, are probably incoherent! As I said in previous posts, it’s not at all unlikely that watching a movie or drinking coffee can change my utility function by a considerable amount if we focus on the “extrapolation” in CEV, because small changes can lead to huge butterfly effects! (More on that in the comment section.) Very naughty.

So here’s my metaethics:

I earlier asked,

What is “right”, if you can’t say “good” or “desirable” or “better” or “preferable” or “moral” or “should”?  What happens if you try to carry out the operation of replacing the symbol with what it stands for?

I answer that if you try to replace the symbol “should” with what it stands for, you end up with quite a large sentence. For a human this is a much huger blob of a computation that looks like, “Did everyone survive?  How many people are happy?  Are people in control of their own lives? …”  Humans have complex emotions, have many values—the thousand shards of desire, the godshatter of natural selection.  I would say, by the way, that the huge blob of a computation is not just my present terminal values (which I don’t really have—I am not a consistent expected utility maximizer); the huge blob of a computation includes the specification of those moral arguments, those justifications, that would sway me if I heard them.  So that I can regard my present values, as an approximation to the ideal morality that I would have if I heard all the arguments, to whatever extent such an extrapolation is coherent.

Sounds reasonable, at least prima facie, but see my above remarks and wait for the comment section.
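Just to fix the shape of the thing being discussed, here’s a cartoon of what a “blob of computation” over many features might look like. The features and weights are entirely my placeholders – the real function, as the next quote says, can’t even be written down.

```python
# A cartoon of the "huge blob of a computation" -- NOT Yudkowsky's actual
# function, just an illustration of its shape. The features and weights
# are hypothetical placeholders; the real thing is unknown even to its user.
def rightness(outcome):
    features = {
        "everyone_survived": outcome.get("survivors", 0) == outcome.get("people", 0),
        "happiness": outcome.get("happiness", 0.0),
        "autonomy": outcome.get("autonomy", 0.0),
        # ... thousands of further shards of desire ...
    }
    weights = {"everyone_survived": 10.0, "happiness": 3.0, "autonomy": 2.0}
    return sum(weights[k] * float(v) for k, v in features.items())

print(rightness({"people": 5, "survivors": 5, "happiness": 0.8, "autonomy": 0.6}))  # 13.6
```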

No one can write down their big computation; it is not just too large, it is also unknown to its user.  No more could you print out a listing of the neurons in your brain.  You never mention your big computation—you only use it, every hour of every day.

Now why might one identify this enormous abstract computation, with what-is-right?

If you identify rightness with this huge computational property, then moral judgments are subjunctively objective (like math), subjectively objective (like probability), and capable of being true (like counterfactuals).

You will find yourself saying, “If I wanted to kill someone—even if I thought it was right to kill someone—that wouldn’t make it right.”  Why?  Because what is right is a huge computational property—an abstract computation—not tied to the state of anyone’s brain, including your own brain.

Yeah, it’s not tied to anyone’s brain but the only reason why we choose to use or identify with this computation rather than another one is that our brains compute an approximation of that, and they do so because of our evolutionary history which is of course completely random. It’s very misleading to say that this computation is not tied to human brains.

The apparent objectivity of morality has just been explained—and not explained away.  For indeed, if someone slipped me a pill that made me want to kill people, nonetheless, it would not be right to kill people.  Perhaps I would actually kill people, in that situation—but that is because something other than morality would be controlling my actions.

Morality is not just subjunctively objective, but subjectively objective.  I experience it as something I cannot change.  Even after I know that it’s myself who computes this 1-place function, and not a rock somewhere—even after I know that I will not find any star or mountain that computes this function, that only upon me is it written—even so, I find that I wish to save lives, and that even if I could change this by an act of will, I would not choose to do so.  I do not wish to reject joy, or beauty, or freedom.  What else would I do instead?  I do not wish to reject the Gift that natural selection accidentally barfed into me.  This is the principle of The Moral Void and The Gift We Give To Tomorrow.

Again, if you significantly changed your cognitive algorithms that are responsible for arithmetic calculations you would fuck up your life. If you transformed your moral algorithms into psychopathic ones you would probably fuck more women, make more money and have more fun. Just saying.

And if our brains are untrustworthy, it is only our own brains that say so.  Do you sometimes think that human beings are not very nice?  Then it is you, a human being, who says so.  It is you, a human being, who judges that human beings could do better.  You will not find such written upon the stars or the mountains: they are not minds, they cannot think.

We know that our brains are very untrustworthy in epistemological matters because of stuff like optical illusions, weird physics and so on. If we stop believing in reality, it’s still there and will, sooner or later, make us aware of that fact. If we stop believing in our moral norms, no big deal.

In this, of course, we find a justificational strange loop through the meta-level.  Which is unavoidable so far as I can see—you can’t argue morality, or any kind of goal optimization, into a rock.  But note the exact structure of this strange loop: there is no general moral principle which says that you should do what evolution programmed you to do.  There is, indeed, no general principle to trust your moral intuitions!  You can find a moral intuition within yourself, describe it—quote it—consider it deliberately and in the full light of your entire morality, and reject it, on grounds of other arguments.  What counts as an argument is also built into the rightness-function.

Just as, in the strange loop of rationality, there is no general principle in rationality to trust your brain, or to believe what evolution programmed you to believe—but indeed, when you ask which parts of your brain you need to rebel against, you do so using your current brain.  When you ask whether the universe is simple, you can consider the simple hypothesis that the universe’s apparent simplicity is explained by its actual simplicity.

You would do the same thing with morality; if you consider that a part of yourself might be considered harmful, then use your best current guess at what is right, your full moral strength, to do the considering.  Why should we want to unwind ourselves to a rock?  Why should we do less than our best, when reflecting?  You can’t unwind past Occam’s Razor, modus ponens, or morality and it’s not clear why you should try.

I know, I sound like a broken record, but anyway: if our beliefs about reality were complete horseshit, we would notice that something is wrong because all of our predictions would be falsified, etc. If, however, our moral norms were totally wrong, we wouldn’t be able to tell.

If you hoped to find a source of morality outside humanity—well, I can’t give that back, but I can ask once again:  Why would you even want that?  And what good would it do?  Even if there were some great light in the sky—something that could tell us, “Sorry, happiness is bad for you, pain is better, now get out there and kill some babies!”—it would still be your own decision to follow it.  You cannot evade responsibility.

Meh.

If you hoped that morality would be universalizable—sorry, that one I really can’t give back.  Well, unless we’re just talking about humans.  Between neurologically intact humans, there is indeed much cause to hope for overlap and coherence; and a great and reasonable doubt as to whether any present disagreement is really unresolvable, even if it seems to be about “values”.  The obvious reason for hope is the psychological unity of humankind, and the intuitions of symmetry, universalizability, and simplicity that we execute in the course of our moral arguments.

See Marcello’s and Wei Dai’s comments…

If I define rightness to include the space of arguments that move me, then when you and I argue about what is right, we are arguing our approximations to what we would come to believe if we knew all empirical facts and had a million years to think about it—and that might be a lot closer than the present and heated argument.  Or it might not.  This gets into the notion of ‘construing an extrapolated volition’ which would be, again, a separate post.

But if you were stepping outside the human and hoping for moral arguments that would persuade any possible mind, even a mind that just wanted to maximize the number of paperclips in the universe, then sorry—the space of possible mind designs is too large to permit universally compelling arguments.  You are better off treating your intuition that your moral arguments ought to persuade others, as applying only to other humans who are more or less neurologically intact.  Trying it on human psychopaths would be dangerous, yet perhaps possible.  But a paperclip maximizer is just not the sort of mind that would be moved by a moral argument.  (This will definitely be a separate post.)

Once, in my wild and reckless youth, I tried dutifully—I thought it was my duty—to be ready and willing to follow the dictates of a great light in the sky, an external objective morality, when I discovered it.  I questioned everything, even altruism toward human lives, even the value of happiness.  Finally I realized that there was no foundation but humanity—no evidence pointing to even a reasonable doubt that there was anything else—and indeed I shouldn’t even want to hope for anything else—and indeed would have no moral cause to follow the dictates of a light in the sky, even if I found one.

I didn’t get back immediately all the pieces of myself that I had tried to deprecate—it took time for the realization “There is nothing else” to sink in.  The notion that humanity could just… you know… live and have fun… seemed much too good to be true, so I mistrusted it.  But eventually, it sank in that there really was nothing else to take the place of beauty.  And then I got it back.

But if you ask “Why is it good to be happy?” and then replace the symbol ‘good’ with what it stands for, you’ll end up with a question like “Why does happiness match {happiness + survival + justice + individuality + …}?”  This gets computed so fast, that it scarcely seems like there’s anything there to be explained.  It’s like asking “Why does 4 = 4?” instead of “Why does 2 + 2 = 4?”

Now, I bet that feels quite a bit like what happens when I ask you:  “Why is happiness good?”

And that’s also my answer to Moore’s Open Question.  Why is this big function I’m talking about, right?  Because when I say “that big function”, and you say “right”, we are dereferencing two different pointers to the same unverbalizable abstract computation.  I mean, that big function I’m talking about, happens to be the same thing that labels things right in your own brain.  You might reflect on the pieces of the quotation of the big function, but you would start out by using your sense of right-ness to do it.  If you had the perfect empirical knowledge to taboo both “that big function” and “right”, substitute what the pointers stood for, and write out the full enormity of the resulting sentence, it would come out as… sorry, I can’t resist this one… A=A.

That’s admittedly a great ending and, who knows, maybe he’s right after all, but the really big assumption he didn’t even begin to justify is that “this big blob of computation” is the same, or nearly the same, for all of humanity. That’s by far the most important problem. I’m willing to let go of that objective-morality-light-in-the-sky business (albeit with a heavy heart), but I really hope there are some good arguments for the “Coherent” part of CEV. Anyway, on to the comment section.

Nick Tarleton:

“Bravo. But:

Because when I say “that big function”, and you say “right”, we are dereferencing two different pointers to the same unverbalizable abstract computation.

No, the other person is dereferencing a pointer to their big function, which may or may not be the same as yours. This is the one place it doesn’t add up to normality: not everyone need have the same function. Eliezer-rightness is objective, a one-place function, but it seems to me the ordinary usage of “right” goes further: it’s assumed that everybody means the same thing by, not just “Eliezer-right”, but “right”. I don’t see how this metamorality allows for that, or how any sensible one could. (Not that it bothers me.)”

See, even Tarleton is one of the psychological-unity-deniers.
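The 1-place/2-place distinction Tarleton is leaning on can be sketched in a few lines (toy names and values are mine, purely illustrative): curry a 2-place rightness function with a particular set of values and you get that person’s 1-place function – and nothing guarantees that two speakers’ “right” dereferences to the same one.

```python
from functools import partial

# Tarleton's worry, sketched: a 2-place function right(values, action) can be
# curried into per-person 1-place functions. Names and toy values are mine.
def right_2place(values, action):
    return sum(values.get(feature, 0) for feature in action)

eliezer_values = {"lives_saved": 10, "happiness": 3}
clippy_values = {"paperclips": 10}

eliezer_right = partial(right_2place, eliezer_values)   # a 1-place function
clippy_right = partial(right_2place, clippy_values)     # a different 1-place function

# Yudkowsky's claim is that when two humans say "right", both names dereference
# the same underlying computation; Tarleton's point is that nothing guarantees it.
print(eliezer_right(["lives_saved", "happiness"]))  # 13
print(clippy_right(["lives_saved", "happiness"]))   # 0 -- a different computation
```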

Good comment by Tom McCabe:

“You will find yourself saying, “If I wanted to kill someone – even if I thought it was right to kill someone – that wouldn’t make it right.” Why? Because what is right is a huge computational property- an abstract computation – not tied to the state of anyone’s brain, including your own brain.”

Coherent Extrapolated Volition (or any roughly similar system) protects against this failure for any specific human, but not in general. Eg., suppose that you use various lawmaking processes to approximate Right(x), and then one person tries to decide independently that Right(Murder) > 0. You can detect the mismatch between the person’s actions and Right(x) by checking against the approximation (the legal code) and finding that murder is wrong. In the limit of the approximation, you can detect even mismatches that people at the time wouldn’t notice (eg., slavery). CEV also protects against specific kinds of group failures, eg., convince everybody that the Christian God exists and that the Bible is literally accurate, and CEV will correct for it by replacing the false belief of “God is real” with the true belief of “God is imaginary”, and then extrapolating the consequences.

However, CEV can’t protect against features of human cognitive architecture that are consistent under reflection, factual accuracy, etc. Suppose that, tomorrow, you used magical powers to rewrite large portions of everyone’s brain. You would expect that people now take actions with lower values of Right(x) than they previously did. But, now, there’s no way to determine the value of anything under Right(x) as we currently understand it. You can’t use previous records (these have all been changed, by act of magic), and you can’t use human intuition (as it too has been changed). So while the external Right(x) still exists somewhere out in thingspace, it’s a moot point, as nobody can access it. This wouldn’t work for, say, arithmetic, as people would rapidly discover that assuming 2 + 2 = 5 in engineering calculations makes bridges fall down.”

Exactly. You could say that evolution used its “magical powers to rewrite large portions of everyone’s brain” and now we have the moral intuitions we have only because evolution gave them to us.

VERY important thread, started by a great comment by Marcello (Herreshoff):

“ “So that we can regard our present values, as an approximation to the ideal morality that we would have if we heard all the arguments, to whatever extent such an extrapolation is coherent.”

This seems to be in the right ballpark, but the answer is dissatisfying because I am by no means persuaded that the extrapolation would be coherent at all (even if you only consider one person.) Why would it? It’s god-shatter, not Peano Arithmetic.

There could be nasty butterfly effects, in that the order in which you were exposed to all the arguments, the mood you were in upon hearing them and so forth could influence which of the arguments you came to trust.

On the other hand, viewing our values as an approximation to the ideal morality that we would have if we heard all the arguments isn’t looking good either: correctly predicting a Bayesian port of a massive network of sentient god-shatter looks to me like it would require a ton of moral judgments to do at all. The subsystems in our brains sometimes resolve things by fighting (i.e. the feeling of being in a moral dilemma). Looking at the result of the fight in your real physical brain isn’t helpful to make that judgment if it would have depended on whether you just had a cup of coffee or not.

So, what do we do if there is more than one basin of attraction a moral reasoner considering all the arguments can land in? What if there are no basins?”

BOOM! In your face, man. Srsly, who actually agrees with Eliezer on this CEV stuff? It seems like almost nobody. Even his closest colleagues like Marcello, Carl Shulman and Vassar are pretty skeptical. Doesn’t look good at all.

Wei Dai:

“I share Marcello’s concerns as well. Eliezer, have you thought about what to do if the above turns out to be the case?”

Eliezer Yudkowsky:

“It seems to me that if you build a Friendly AI, you ought to build it to act where coherence exists and not act where it doesn’t.”

Wei Dai:

“What makes you think that any coherence exists in the first place? Marcello’s argument seems convincing to me. In the space of possible computations, what fraction gives the same final answer regardless of the order of inputs presented? Why do you think that the “huge blob of computation” that is your morality falls into this small category? There seems to be plenty of empirical evidence that human morality is in fact sensitive to the order in which moral arguments are presented.

Or think about it this way. Suppose an (unFriendly) SI wants to craft an argument that would convince you to adopt a certain morality and then stop paying attention to any conflicting moral arguments. Could it do so? Could it do so again with a different object-level morality on someone else? (This assumes there’s an advantage to being first, as far as giving moral arguments to humans is concerned. Adjust the scenario accordingly if there’s an advantage in being last instead.)

You say the FAI won’t act where coherence doesn’t exist but if you don’t expect coherence now, you ought to be doing something other than building such an FAI, or at least have a contingency plan for when it halts without giving any output?”

And the frighteningly intelligent Wei Dai has his doubts, too. It’s enough to make you weep…
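Their worry is easy to turn into a toy model (mine, purely illustrative): if the update rule is path-dependent, different argument orders land in different endpoints, whereas an order-insensitive rule wouldn’t care.

```python
from itertools import permutations

# Marcello's and Wei Dai's worry, as a toy model (mine, not theirs): a reasoner
# whose trust in later arguments is discounted by how much it already believes.
def extrapolate(arguments):
    value = 0.0
    for strength in arguments:
        value += strength / (1.0 + abs(value))  # path-dependent update
    return round(value, 3)

arguments = [2.0, -1.0, 0.5]
endpoints = {extrapolate(order) for order in permutations(arguments)}
print(endpoints)  # several distinct endpoints: the "extrapolation" is order-sensitive

# Compare an order-insensitive (commutative) aggregator:
print({round(sum(order), 3) for order in permutations(arguments)})  # a single endpoint
```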

Eliezer Yudkowsky:

“What makes you think that any coherence exists in the first place?

Most people wouldn’t want to be turned into paperclips?”

Wei Dai:

“Most people wouldn’t want to be turned into paperclips?

Of course not, since they haven’t yet heard the argument that would make them want to. All the moral arguments we’ve heard so far have been invented by humans, and we just aren’t that inventive. Even so, we have the Voluntary Human Extinction Movement.”

Eliezer Yudkowsky:

“Wei, suppose I want to help someone. How ought I to do so?

Is the idea here that humans end up anywhere depending on what arguments they hear in what order, without the overall map of all possible argument orders displaying any sort of concentration in one or more clusters where lots of endpoints would light up, or any sort of coherency that could be extracted out of it?”

Wei Dai:

“Wei, suppose I want to help someone. How ought I to do so?

I don’t know. (I mean I don’t know how to do it in general. There are some specific situations where I do know how to help, but lots more where I don’t.)

Is the idea here that humans end up anywhere depending on what arguments they hear in what order, without the overall map of all possible argument orders displaying any sort of concentration in one or more clusters where lots of endpoints would light up, or any sort of coherency that could be extracted out of it?

Yes. Or another possibility is that the overall map of all possible argument orders does display some sort of concentration, but that concentration is morally irrelevant. Human minds were never “designed” to hear all possible moral arguments, so where the concentration occurs is accidental, and perhaps horrifying from our current perspective. (Suppose the concentration turns out to be voluntary extinction or something worse, would you bite the bullet and let the FAI run with it?)”

-> CEV:0           Nihilism:1

2 comments on “432. The Meaning of Right”

  1. -

    > Yeah, it’s not tied to anyone’s brain but the only reason why we choose to use or identify with this computation rather than another one is that our brains compute an approximation of that, and they do so because of our evolutionary history which is of course completely random. It’s very misleading to say that this computation is not tied to human brains.

    Right. There seem to be two meanings of the term “objective” in “objective morality”.

    1. Morality is objective if it doesn’t *depend* on a specific mind.
    2. Morality is objective if it can be *found* without a specific mind.

    So “whatever muflax says is right” is not objective in the first sense. You could modify my brain and that would change what is right.

    Eliezer’s computation doesn’t have this problem, once you extract it from his (or humanity’s) brain. Once you know it, you can ask an AI or whoever to implement it and changing brains wouldn’t change morality. So it’s less relative in an important sense.

    But you would’ve never found this computation if you had never encountered a human. muflaxyz in Alpha Centauri can think about philosophy all day and would never find out what is right. In that sense, it’s not objective, but something like the Will of God or the Categorical Imperative would be (or at least their defenders hope so).

    Some people seem to care about the second criterion, others don’t. It may well be impossible. Maybe it’s like the One True Pairing. If someone could plausibly disagree that Spike and Buffy belong together (ahem) just because of the history or location of their brain, then how solid is the OTP really? Relativism is weaksauce.

    • -

      >But you would’ve never found this computation if you had never encountered a human.

      And even if an alien found it, it would reject it (or at least a substantial fraction of it).

      >In that sense, it’s not objective, but something like the Will of God or the Categorical Imperative would be (or at least their defenders hope so).

      Exactly.

      >Maybe it’s like the One True Pairing. If someone could plausibly disagree that Spike and Buffy belong together (ahem) just because of the history or location of their brain, then how solid is the OTP really?

      Haha, the OTP-argument is indeed a deadly blow for relativism.
