AI Foom Debate: Posts 46–49

46. Disjunctions, Antipredictions, Etc. (Yudkowsky)

First, a good illustration of the conjunction fallacy, from Robyn Dawes:

“In their summations lawyers avoid arguing from disjunctions in favor of conjunctions.  (There are not many closing arguments that end, “Either the defendant was in severe financial straits and murdered the decedent to prevent his embezzlement from being exposed or he was passionately in love with the same coworker and murdered the decedent in a fit of jealous rage or the decedent had blocked the defendant’s promotion at work and the murder was an act of revenge.  The State has given you solid evidence to support each of these alternatives, all of which would lead to the same conclusion: first-degree murder.”)  Rationally, of course, disjunctions are much more probable than are conjunctions.”

Disjunctions are also much longer than simple, straightforward arguments, which makes them seem uncertain. You just can't print them on a T-shirt.
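A toy calculation makes the gap concrete. The probabilities below (0.3 for each claim, assumed independent) are made up purely for illustration:

```python
# Toy illustration: three independent claims, each with probability 0.3
# (numbers chosen arbitrarily for the example).
p_a = p_b = p_c = 0.3

conjunction = p_a * p_b * p_c                         # all three true
disjunction = 1 - (1 - p_a) * (1 - p_b) * (1 - p_c)   # at least one true

print(f"P(A and B and C) = {conjunction:.3f}")   # 0.027
print(f"P(A or B or C)   = {disjunction:.3f}")   # 0.657
```

The disjunction is more than twenty times as probable, yet it takes three times as many words to state.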

Then Yudkowsky introduces the notion of an antiprediction:

This is when the narrowness of our human experience distorts our metric on the answer space, and so you can make predictions that actually aren’t far from maxentropy priors, but sound very startling.

Some examples: Most aliens probably won't look like humans at all, which may sound surprising at first, but given the vastness of "body-design-space" it's like saying "you won't win the lottery". It's an antiprediction, and almost certainly true.

The claim that a human-level AI could achieve superintelligence in one week sounds incredible, but a week is only short from the human point of view: it amounts to about 10^15 sequential operations for a population of 2 GHz cores.
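A quick back-of-the-envelope check of that figure, assuming (as a simplification) one operation per clock cycle:

```python
# Sanity check of the "~10^15 sequential operations per week" figure,
# assuming one operation per clock cycle on a 2 GHz core.
clock_rate_hz = 2e9                     # 2 GHz
seconds_per_week = 7 * 24 * 60 * 60     # 604,800 s

sequential_ops = clock_rate_hz * seconds_per_week
print(f"{sequential_ops:.2e}")          # ~1.21e+15
```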

Yudkowsky mentions three other tests he uses when reasoning about the future. The first one consists of just asking oneself the fundamental question of rationality:

“What do you think you know, and why do you think you know it?”

The second test

…is to ask myself “How worried do I feel that I’ll have to write an excuse explaining why this happened anyway?”

And finally, the third test

 …is the “So what?” test – to what degree will I feel indignant if Nature comes back and says “So what?” to my clever analysis?

47. Are AIs Homo Economicus? (Hanson)

Hanson apparently thinks it's no big deal that he models AIs in his predictions as "homo economicus" because, hey, everyone has to make abstractions and leave out unnecessary details.

I think this is quite ridiculous and shows just how insane even the smartest economists can be. Sure, if you think that superintelligent AIs will act like humans, you can easily predict that there will be no uFAI. The only problem is that your premise is your conclusion.

He furthermore thinks that our civilization wouldn't collapse if humans were genuinely selfish, whereas Yudkowsky argues the opposite.

I think Yudkowsky is right, since most genuinely selfish humans wouldn’t even have kids. The magic of trade isn’t strong enough to bind a world full of psychopaths.

Reply by Yudkowsky:

“The main part you’re leaving out of your models (on my view) is the part where AIs can scale on hardware by expanding their brains, and scale on software by redesigning themselves, and these scaling curves are much sharper than “faster” let alone “more populous”. Aside from that, of course, AIs are more like economic agents than humans are.

My statement about “truly selfish humans” isn’t meant to be about truly selfish AIs, but rather, truly selfish entities with limited human attention spans, who have much worse agent problems than an AI that can monitor all its investments simultaneously and inspect the source code of its advisers. The reason I fear non-local AI fooms is precisely that they would have no trouble coordinating to cut the legacy humans out of their legal systems.”

48. Two Visions Of Heritage (Hanson)

Hanson and Yudkowsky have different visions of the future. Hanson's vision of heritage…

…is a dry account of small individuals whose abilities, beliefs, and values are set by a vast historical machine of impersonal competitive forces…

I think that Yudkowsky would agree that this is an accurate account of history up to this point. The problem is that Hanson has no problem with something like this continuing into the future.

By contrast, Yudkowsky's vision

…is a grand inspiring saga of absolute good or evil hanging on the wisdom of a few mythic heroes who use their raw genius and either love or indifference to make a God who makes a universe in the image of their feelings.

Yeah, this description isn’t too unfair. It sounds like a religion, agreed. But the problem with religions is not that they long for paradise and the ultimate good.

My credo: If we can't create utopia, let's exterminate all life, simple as that.

Or, to put it more technically: If FAI is impossible, antinatalism is the next best option.

49. The Mechanics of Disagreement (Yudkowsky)

Real Bayesian rationalists can't agree to disagree (Aumann's agreement theorem). So why do wannabe rationalists like Hanson and Yudkowsky disagree?

If I had to name a single reason why two wannabe rationalists wouldn’t actually be able to agree in practice, it would be that, once you trace the argument to the meta-level where theoretically everything can be and must be resolved, the argument trails off into psychoanalysis and noise.

I know lots of folks who are smarter, know more, and are more rational than I am, but they hold opposing viewpoints, so I have to rely on my own estimates.
