AI-Foom Debate: Posts 11–13

11. Observing Optimization (Yudkowsky)

In “Optimization and the Singularity” I pointed out that history since the first replicator, including human history to date, has mostly been a case of nonrecursive optimization – where you’ve got one thingy doing the optimizing, and another thingy getting optimized. When evolution builds a better amoeba, that doesn’t change the structure of evolution – the mutate-reproduce-select cycle.

This would change with the advent of superintelligent AIs, since they could rewrite their own source code.

Yudkowsky further argues that you probably can’t make exact predictions or retrodictions based on his theories about the nature of optimization processes alone. Observing the fossil record doesn’t help you if your theories have multiple free parameters. You can only make easy, qualitative pre- or retrodictions, e.g. “there were no rabbits before the Cambrian explosion”.

12. Life’s Story Continues (Yudkowsky)

When I try to structure my understanding of the unfolding process of Life, it seems to me that, to understand the optimization velocity at any given point, I want to break down that velocity using the following abstractions:

  • The searchability of the neighborhood of the current location, and the availability of good/better alternatives in that rough region. Maybe call this the optimization slope.  Are the fruit low-hanging or high-hanging, and how large are the fruit?
  • The optimization resources, like the amount of computing power available to a fixed program, or the number of individuals in a population pool.
  • The optimization efficiency, a curve that gives the amount of searchpower generated by a given investiture of resources, which is presumably a function of the optimizer’s structure at that point in time.

These are incredibly useful categories. Applied to humans, you could call them “theoretical terrain”, resources, and intelligence.

The first thing to realize is that meta-level changes are rare, so most of what we see in the historical record will be structured by the search neighborhoods – the way that one innovation opens up the way for additional innovations.  That’s going to be most of the story, not because meta-level innovations are unimportant, but because they are rare.

…I just want to note that my view is nothing as simple as “meta-level determinism” or “the impact of something is proportional to how meta it is; non-meta things must have small impacts”.  Nothing much meta happened between the age of sexual metazoans and the age of humans – brains were getting more sophisticated over that period, but that didn’t change the nature of evolution.

Some object-level innovations are small, some are medium-sized, some are huge. It’s no wonder if you look at the historical record and see a Big Innovation that doesn’t look the least bit meta, but had a huge impact by itself and led to lots of other innovations by opening up a new neighborhood of the search space.

…My thesis is more along the lines of, “If this is the picture without recursion, just imagine what’s going to happen when we add recursion.”
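To make these three abstractions a little more concrete, here is a minimal toy sketch (my own construction, not anything from the post): a hill-climbing loop in which progress per step is the product of a slope term, a resource budget, and an efficiency term, with an optional flag that lets the gains feed back into efficiency, in the spirit of the “add recursion” thesis. Every name, constant, and functional form below is invented purely for illustration.

    import random

    def toy_optimization(steps=200, resources=100.0, base_efficiency=0.02,
                         recursive=False, seed=0):
        """Toy model of optimization velocity as slope x resources x efficiency.

        slope      -- availability of nearby improvements (how low the fruit hangs),
                      shrinking as the easy gains get used up
        resources  -- raw search budget the optimizer can spend each step
        efficiency -- search power generated per unit of resources

        With recursive=True, gains also feed back into efficiency, i.e. the
        optimizer improves the optimizer. All constants are made up.
        """
        random.seed(seed)
        fitness, efficiency = 0.0, base_efficiency
        for _ in range(steps):
            slope = 1.0 / (1.0 + fitness)           # fruit hangs higher as you climb
            search_power = resources * efficiency
            fitness += search_power * slope * random.uniform(0.5, 1.5)
            if recursive:
                efficiency = base_efficiency * (1.0 + fitness)   # meta-level gains
        return fitness

    print("non-recursive:", round(toy_optimization(recursive=False), 1))
    print("recursive:    ", round(toy_optimization(recursive=True), 1))

Under these made-up numbers the non-recursive run creeps along a decelerating curve while the recursive run pulls steadily away; the toy proves nothing about real takeoff dynamics, it only marks where each abstraction enters the picture.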

13. Emulations Go Foom (Hanson)

Hanson thinks it is more likely that uploads come first. On this view it is rather improbable that one upload could recursively self-improve and take over the world: its fundamental mind design is almost the same as that of a normal human, it would face stiff competition from other uploads, and there would be substantial information leakage. Dangerous value drift is nonetheless very likely, and “conscientious” uploads would outcompete ones that are more “relaxed” and spend time enjoying art and the like.

Very interesting discussion in the comment section:

Carl Shulman:

“In the absence of a strong world government or a powerful cartel, it is hard to see how the leader could be so far ahead of its nearest competitors as to “take over the world.”

The first competitor uses some smart people with common ideology and relevant expertise as templates for its bots. Then, where previously there were thousands of experts with relevant skills to be hired to improve bot design, there are now millions with initially exactly shared aims. They buy up much of the existing hardware base (in multiple countries), run copies at high speed, and get another order of magnitude of efficiency or so, while developing new skills and digital nootropics. With their vast resources and shared aims they can effectively lobby and cut deals with individuals and governments world-wide, and can easily acquire physical manipulators (including humans wearing cameras, microphones, and remote-controlled bombs for coercions) and cheaply monitor populations.

Copying a bot template is an easy way to build cartels with an utterly unprecedented combination of cohesion and scale.”

Cameron Taylor:

” “In the absence of a strong world government or a powerful cartel, it is hard to see how the leader could be so far ahead of its nearest competitors as to “take over the world.” Sure the leader might make many trillions more in profits, so enriching shareholders and local residents as to make Bill Gates look like a tribal chief proud of having more feathers in his cap. A leading nation might even go so far as to dominate the world as much as Britain, the origin of the industrial revolution, once did. But the rich and powerful would at least be discouraged from capricious devastation the same way they have always been, by self-interest.”

What the? Are you serious? Are you talking about self replicating machines of >= human intelligence or tamagochi?

I must concur with Carl Shulman here. It seems Robin has spent too much time in the economist cult. Self interest is powerful but it is not a guardian angel intent on making all humans and their robot overlords play nice.

10,000 physicist bots acting cooperatively in a way human egos and self interest could never match. What would they invent? Perhaps a planet wide system of EMP devices? Maybe some superior shielding to go with it? How about a suitably outfitted underground bunker? POP! All the serious electronic competition is fried. A few hundred protected bots emerge from the bunker. Within 2 years they have the world for themselves and possibly their human ‘masters’. There is nothing capricious about that self-interest. In fact, it is far more humane than any other attempt at world conquest, unless you consider the loss of ‘emulated life’.”

Carl Shulman:

“A leading nation might even go so far as to dominate the world as much as Britain, the origin of the industrial revolution, once did.”

A leading nation, with territorial control over a large fraction of all world computing hardware, develops brain emulation via a Manhattan Project. Knowing the power of bots, only carefully selected individuals, with high intelligence, relevant expertise, and loyalty, are scanned. The loyalty of the resulting bots is tested exhaustively (copies can be tested to destruction, their digital brains scanned directly, etc), and they can be regularly refreshed from old data, and changes carefully tested for effects on motivation.

Server farms are rededicated to host copies of these minds at varying speeds. Many take control of military robots and automated vehicles, while others robustly monitor the human population. The state is now completely secure against human rebellion, and an attack by foreign powers would mean a nuclear war (as it would today). Meanwhile, the bots undertake intensive research to improve themselves. Rapid improvements in efficiency of emulation proceed from workers with a thousandfold or millionfold speed-up, with acquisition of knowledge at high speeds followed by subdivision into many instances to apply that knowledge (and regular pruning/replacement of undesired instances). With billions of person-years of highly intelligent labor (but better, because of the ability to spend computational power on both speed and on instances) they set up rapid infrastructure after a period of days and extend their control to the remainder of the planet.

The bots have remained coordinated in values through regular reversion to saved states, and careful testing of the effects of learning and modification on their values (conducted by previous versions) and we now have a global singleton with the values of the national project. That domination is far more extreme than anything ever achieved by Britain or any other historical empire.”
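As a quick back-of-the-envelope check on the person-years arithmetic in that scenario (the specific numbers below are my own assumptions, not Shulman’s):

    # Rough, illustrative inputs only -- not figures from the comment.
    instances = 100_000        # emulated researchers running in parallel
    speedup = 1_000_000        # subjective speed relative to wall-clock time
    wall_clock_days = 10       # length of the research push

    person_years = instances * speedup * (wall_clock_days / 365)
    print(f"{person_years:.1e} subjective person-years")   # ~2.7e+09

So a few billion subjective person-years within days requires roughly that combination of copy count and speed-up; scale either input down and the total shrinks proportionally.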

Yudkowsky:

“Carl Shulman has said much of what needed saying.

Whole brain emulations are not part of the AI family, they are part of the modified-human family with the usual advantages and disadvantages thereof: including lots of smart people that seemed nice at first all slowly going insane in the same way, difficulty of modifying the brainware without superhuman intelligence, unavoidable ethical difficulties, resentment of exploitation and other standard human feelings, etcetera.

They would try multitudes of ways to cut corners on the emulation implementation, checking to see that their bot stayed sane. I expect several orders of magnitude of efficiency gains to be found easily at first, but that such gains would quickly get hard to find.

Leaving aside that you’re describing a completely unethical process – as de Blanc notes, prediction is not advocating, but some individual humans and governmental entities often at least try to avoid doing things that their era says is very wrong, such as killing millions of people – at the very least an economist should mention when a putative corporate action involves torture and murder –

– several orders of magnitude of efficiency gains? Without understanding the underlying software in enough detail to write your own de novo AI? Suggesting a whole bird emulation is one thing, suggesting that you can get several orders of magnitude efficiency improvement out of the bird emulation without understanding how it works seems like a much, much stronger claim.

As I was initially reading, I was thinking that I was going to reply in terms of ems being nonrecursive – they’re just people in silicon instead of carbon, and I for one don’t find an extra 8 protons all that impressive. It may or may not be realistic, but the scenario you describe is not a Singularity in the sense of either a Vingean event horizon or a Goodian intelligence explosion; it’s just more of the same but faster.

But any technology powerful enough to milk a thousand-fold efficiency improvement out of upload software without driving those uploads insane, is powerful enough to upgrade the uploads. Which brings us to Cameron’s observation:

Cameron: What the? Are you serious? Are you talking about self replicating machines of >= human intelligence or tamagochi?

I am afraid that my reaction was much the same as Cameron’s. The prospect of biological humans sitting on top of a population of ems that are smarter, much faster, and far more numerous than bios while having all the standard human drives, and the bios treating the ems as standard economic valuta to be milked and traded around, and the ems sit still for this for more than a week of bio time – this does not seem historically realistic.”

Robin Hanson:

“All, this post’s scenario assumes whole brain emulation without other forms of machine intelligence. We’ll need other posts to explore the chances of this vs. other scenarios, and the consequences of other scenarios. This post was to explore the need for friendliness in this scenario.

Note that most objections here are to my social science, and to ethics some try to read into my wording (I wasn’t trying to make any ethical claims). No one has complained, for example, that I’ve misapplied or ignored optimization abstractions.

I remain fascinated by the common phenomena wherein intuitive social reasoning seems so compelling to most people that they feel very confident of their conclusions and feel little inclination to listen to or defer to professional social scientists. Carl Shulman, for example, finds it obvious it is in the self-interest of “a leading power with an edge in bot technology and some infrastructure … to kill everyone else and get sole control over our future light-cone’s natural resources.” Eliezer seems to say he agrees. I’m sorry Carl, but your comments on this post sound like crazy paranoid rants, as if you were Dr. Strangelove pushing the button to preserve our precious bodily fluids. Is there any social scientist out there who finds Carl’s claims remotely plausible?

Eliezer, I don’t find it obviously unethical to experiment with implementation short cuts on a willing em volunteer (or on yourself). The several orders of magnitude of gains were relative to a likely-to-be excessively high fidelity initial emulation (the WBE roadmap agrees with me here I think). I did not assume the ems would be slaves, and I explicitly added to the post before your comment to make that clear. If it matters I prefer free ems who rent or borrow bodies. Finally, is your objection here really going to be that you can’t imagine a world with vast wealth inequality without the poor multitudes immediately exterminating the rich few? Or does this only happen when many poor think faster than many rich? What kind of social science analysis do you base this conclusion on?”

Carl Shulman:

“You are misinterpreting that comment. I was directly responding to your claim that self-interest would restrain capricious abuses, as it seems to me that the ordinary self-interested reasons restraining abuse of outgroups, e.g. the opportunity to trade with them or tax them, no longer apply when their labor is worth less than a subsistence wage, and other uses of their constituent atoms would have greater value. There would be little *self-interested* reason for an otherwise abusive group to rein in such mistreatment, even though plenty of altruistic reasons would remain. For most, I would expect them to initially plan simply to disarm other humans and consolidate power, killing only as needed to pre-empt development of similar capabilities.

“Finally, is your objection here really going to be that you can’t imagine a world with vast wealth inequality without the poor multitudes immediately exterminating the rich few? Or does this only happen when many poor think faster than many rich? What kind of social science analysis do you base this conclusion on?”

Empirically, most genocides in the last hundred years have involved the expropriation and murder of a disproportionately prosperous minority group. This is actually a common pattern in situations with much less extreme wealth inequality and difference (than in an upload scenario) between ethnic groups in the modern world:

http://www.amazon.com/World-Fire-Exporting-Democracy-Instability/dp/0385503024

Also, Eliezer’s point does not require extermination (although a decision simply to engage in egalitarian redistribution, as is common in modern societies, would reduce humans below the subsistence level, and almost all humans would lack the skills to compete in emulation labor markets, even if free uploading was provided), just that if a CEO expects that releasing uploads into the world will shortly upset the economic system in which any monetary profits could be used, the profit motive for doing so will be weak.”

I agree with Carl Shulman here. It seems that Hanson, the economist, cannot even imagine that there could be war between ems and humans.

Pushing FAI instead of WBE now seems more promising again. But this whole discussion is incredibly complex and I’m changing my beliefs even more rapidly than usual, which is definitely not healthy. I just can’t keep the arguments and counter-arguments in my head. Even writing summaries seems unhelpful. The issues at hand are too complex, too numerous and too unpredictable.
