AI Foom Debate: Posts 14 – 19

14. Brain Emulation and Hard Takeoff (Carl Shulman)

Argues for the possibility of an intelligence explosion initiated by a billion-dollar em project.

15. Billion Dollar Bots (James Miller)

Another scenario involving billion-dollar WBE projects.

The problem with all those great Manhattan-style em projects is that you can’t influence them very much. They will probably be nation-based and will lead to arms races between several countries, with no great concern for friendliness. You can’t donate to them in order to influence their decisions. The budget of any plausible WBE project is far greater than that of an Eliezer-style “9 brains and a box in a basement” FAI project, which may be impossible, but which you could at least somehow influence. Sure, I could go into politics or become a researcher in computational neuroscience, but my chances of success there (given my personal motivations and abilities) are infinitesimal. Furthermore, most possible futures without an FAI look rather dystopian to me.

16. Surprised by Brains (Yudkowsky)

A very nice parable by Yudkowsky. He compares the advent of superintelligence to the advent of the human brain. Of course, it’s only an analogy, with the main purpose of reducing “absurdity bias”:

With the “invention” of the human brain, the rate of evolution sped up by whole orders of magnitude. Humans and chimpanzees were very similar, and even today they share around 95% of their genes, yet the differences between them grew and grew until humans literally took over the world. Economic principles like Ricardo’s law of comparative advantage were not applicable to this situation. The differences between the minds of humans and chimpanzees were just too great.

You couldn’t have predicted this scenario if you had merely tried to extrapolate from the previous course of evolution.

Robin Hanson replies:

“Species boundaries are pretty hard boundaries to the transfer of useful genetic information. So once proto-humans stumbled on key brain innovations there really wasn’t much of a way to transfer that to chimps. The innovation could only spread via the spread of humans.”

Well, that’s kinda the point. I could just replace some words and get this:

“Mind-architecture boundaries are pretty hard boundaries to the transfer of useful information. So once proto-superintelligent AIs stumbled on key brain/mind innovations there really wasn’t much of a way to transfer that to humans. The innovation could only spread via the spread of AIs.”

Or, with Yudkowsky’s words:

“If there’s a way in which I’ve been shocked by how our disagreement has proceeded so far, it’s the extent to which you think that vanilla abstractions of economic growth and productivity improvements suffice to cover the domain of brainware increases in intelligence: Engelbart’s mouse as analogous to e.g. a bigger prefrontal cortex. We don’t seem to be thinking in the same terms at all.

To me, the answer to the above question seems entirely obvious – the intelligence explosion will run on brainware rewrites and, to a lesser extent, hardware improvements. Even in the (unlikely) event that an economy of trade develops among AIs sharing improved brainware and improved hardware, a human can’t step in and use off-the-shelf an improved cortical algorithm or neurons that run at higher speeds. Not without technology so advanced that the AI could build a much better brain from scratch using the same resource expenditure.

The genetic barrier between chimps and humans is now permeable in the sense that humans could deliberately transfer genes horizontally, but it took rather a large tech advantage to get to that point…”

17. “Evicting” brain emulations (Carl Shulman)

Suppose that Robin’s Crack of a Future Dawn scenario occurs: whole brain emulations (‘ems’) are developed, diverse producers create ems of many different human brains, which are reproduced extensively until the marginal productivity of em labor approaches marginal cost, i.e. Malthusian near-subsistence wages. Ems that hold capital could use it to increase their wealth by investing, e.g. by creating improved ems and collecting the fruits of their increased productivity, by investing in hardware to rent to ems, or otherwise. However, an em would not be able to earn higher returns on its capital than any other investor, and ems with no capital would not be able to earn more than subsistence (including rental or licensing payments). In Robin’s preferred scenario, free ems would borrow or rent bodies, devoting their wages to rental costs, and would be subject to “eviction” or “repossession” for nonpayment.

Which sounds like hell to me, but anyway. It seems that Hanson really believes that large groups of similar ems whose software design is getting outdated, and who therefore face “eviction” or, to put it bluntly, death, wouldn’t start a revolution or anything like that. But this seems very implausible to me. Those ems are, after all, simulated human brains, with the usual human motives. And if it is possible to produce such compliant and obedient ems, then it’s also possible to produce a singleton of extremely loyal and cooperative ems.

18. Cascades, Cycles, Insight… (Yudkowsky)

Why are humans and chimpanzees so different, although we share 95% of our genetic material, and only a few million years of evolution lie between us?

The chimp-level task of modeling others, in the hominid line, led to improved self-modeling which supported recursion which enabled language which birthed politics that increased the selection pressure for outwitting which led to sexual selection on wittiness…

…or something.  It’s hard to tell by looking at the fossil record what happened in what order and why.  The point being that it wasn’t one optimization that pushed humans ahead of chimps, but rather a cascade of optimizations that, in Pan, never got started.

…From a zoomed-out perspective, cascades can lead to what look like discontinuities in the historical record, even given a steady optimization pressure in the background.  It’s not that natural selection sped up during hominid evolution.  But the search neighborhood contained a low-hanging fruit of high slope… that led to another fruit… which led to another fruit… and so, walking at a constant rate, we fell up the stairs.  If you see what I’m saying.

Predicting what sort of things are likely to cascade, seems like a very difficult sort of problem.

But I will venture the observation that – with a sample size of one, and an optimization process very different from human thought – there was a cascade in the region of the transition from primate to human intelligence.

The question is: could there be more cascades just a few levels above our level of intelligence?

It doesn’t seem unlikely to me, although we have only a few data points.

Yudkowsky offers another source of discontinuity:

Cycles happen when you connect the output pipe to the input pipe in a repeatable transformation.  You might think of them as a special case of cascades with very high regularity.  (From which you’ll note that in the cases above, I talked about cascades through differing events: farming -> writing.)

The notion of cycles as a source of discontinuity might seem counterintuitive, since a cycle is so regular.

But that intuition is wrong. Consider a nuclear chain reaction. If the effective neutron multiplication factor k, i.e. the average number of neutrons from one fission that go on to cause another fission, is, say, 0.9994, nothing much happens. But if k is 1.0006, all hell breaks loose.
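To make that arithmetic concrete, here is a minimal toy calculation of my own (not something from the original posts): the expected neutron population simply scales as k raised to the number of fission generations.

```python
# Toy model: expected neutron population after n fission "generations",
# starting from a single neutron, when each fission causes on average k
# further fissions. The population scales as k**n.

def population(k: float, generations: int) -> float:
    return k ** generations

for k in (0.9994, 1.0006):
    for n in (1_000, 10_000, 100_000):
        print(f"k = {k}, n = {n:>7}: {population(k, n):.3g}")

# For k = 0.9994 the population withers toward zero (about 8.6e-27 after
# 100,000 generations); for k = 1.0006 it explodes (about 1.1e26).
# A 0.12 percent difference in k separates a dying reaction from a bomb.
```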

The analogous problem will prevent a self-improving AI from being directly analogous to a uranium heap, with almost perfectly smooth exponential increase at a calculable rate.  You can’t apply the same software improvement to the same line of code over and over again, you’ve got to invent a new improvement each time.  But if self-improvements are triggering more self-improvements with great regularity, you might stand a long way back from the AI, blur your eyes a bit, and ask:  What is the AI’s average neutron multiplication factor?

Yudkowsky will elaborate on this analogy in a later post.

Insight is that mysterious thing humans do by grokking the search space, wherein one piece of highly abstract knowledge (e.g. Newton’s calculus) provides the master key to a huge set of problems.  Since humans deal in the compressibility of compressible search spaces (at least the part we can compress) we can bite off huge chunks in one go.  This is not mere cascading, where one solution leads to another:

Rather, an “insight” is a chunk of knowledge which, if you possess it, decreases the cost of solving a whole range of governed problems.

…Our compression of the search space is also responsible for ideas cascading much more easily than adaptations.  We actively examine good ideas, looking for neighbors.

…Insights have often cascaded, in human history – even major insights.  But they don’t quite cycle – you can’t repeat the identical pattern Newton used originally to get a new kind of calculus that’s twice and then three times as powerful.

But this could happen for a superintelligent AI. If it discovers some kind of insight into AI theory, it could apply this knowledge to itself, improve its ability to discover further insights into AI theory, and so on.

Just imagine what it would be like if studying neuroscience raised your IQ and improved your memory.
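To make the feedback loop concrete, here is a deliberately crude sketch of my own (the one-insight-per-year baseline and the 5% speedup are arbitrary illustrative assumptions, not figures from the debate): each insight already found shortens the time needed to find the next one.

```python
# Toy sketch: without feedback, every insight costs one "year" of research.
# With feedback, each insight already found speeds up research by a fixed
# factor, so later insights arrive faster and faster.

def total_insights(years: float, speedup_per_insight: float, cap: int = 1000) -> int:
    t, found = 0.0, 0
    while found < cap:
        time_for_next = 1.0 / (speedup_per_insight ** found)  # years needed
        if t + time_for_next > years:
            break
        t += time_for_next
        found += 1
    return found

print(total_insights(30, 1.00))  # no feedback: 30 insights in 30 years
print(total_insights(30, 1.05))  # feedback: hits the cap of 1000 -- the
                                 # cumulative time per insight converges, so
                                 # the count runs away within the 30 years
```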

19. When Life Is Cheap, Death Is Cheap (Hanson)

Hanson replies to Shulman’s previous post about “pathologically obedient ems”:

…taking the long view of human behavior we find that an ordinary range of human personalities have, in a supporting poor culture, accepted genocide, mass slavery, killing of unproductive slaves, killing of unproductive elderly, starvation of the poor, and vast inequalities of wealth and power not obviously justified by raw individual ability.  The vast majority of these cultures were not totalitarian.  Cultures have found many ways for folks to accept death when “their time has come.”  When life is cheap, death is cheap as well.  Of course that isn’t how our culture sees things, but being rich we can afford luxurious attitudes.

Hm, I remain somewhat skeptical, although he made some good arguments.

Good comment by Roko:

“You and Carl are debating the different possible ways that a dystopian nightmare could be created, arguing the details of scenarios that we just plain want to avoid. I think that your time would be better spent by first asking “what scenarios do we want to realize” and then thinking about how to get there. Eliezer is adopting this strategy…”

I agree. Basically, every future without a benevolent dictator, or, in transhumanist jargon, a “friendly singleton”, is lost.
