AI Foom Debate: Posts 20 – 22

20. …Recursion, Magic (Yudkowsky)

Recursion is probably the most difficult part of this topic.  We have historical records aplenty of cascades, even if untangling the causality is difficult.  Cycles of reinvestment are the heartbeat of the modern economy.  An insight that makes a hard problem easy, is something that I hope you’ve experienced at least once in your life…

But we don’t have a whole lot of experience redesigning our own neural circuitry.

We have these wonderful things called “optimizing compilers”.

…So why not write an optimizing compiler in its own language, and then run it on itself?  And then use the resulting optimized optimizing compiler, to recompile itself yet again, thus producing an even more optimized optimizing compiler –

Halt!  Stop!  Hold on just a minute!  An optimizing compiler is not supposed to change the logic of a program – the input/output relations.  An optimizing compiler is only supposed to produce code that does the same thing, only faster.  A compiler isn’t remotely near understanding what the program is doing and why, so it can’t presume to construct a better input/output function.

…Now if you are one of those annoying nitpicky types, like me, you will notice a flaw in this logic: suppose you built an optimizing compiler that searched over a sufficiently wide range of possible optimizations, that it did not ordinarily have time to do a full search of its own space – so that, when the optimizing compiler ran out of time, it would just implement whatever speedups it had already discovered.  Then the optimized optimizing compiler, although it would only implement the same logic faster, would do more optimizations in the same time – and so the second output would not equal the first output.

Well… that probably doesn’t buy you much.  Let’s say the optimized program is 20% faster, that is, it gets 20% more done in the same time.  Then, unrealistically assuming “optimization” is linear, the 2-optimized program will be 24% faster, the 3-optimized program will be 24.8% faster, and so on until we top out at a 25% improvement.  k < 1.
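For the numerically inclined, here is a tiny sketch of that arithmetic in Python, using the 20% figure and the linearity assumption from the quote (the variable names are mine):

```python
# Toy model of the recursive-compiler arithmetic quoted above.
# Assumption (from the quote): each pass finds speedups in proportion to how
# much compute it has, and the base pass yields a 20% speedup.
base_gain = 0.20

speedup = 0.0
for generation in range(1, 11):
    # A compiler that already runs (1 + speedup) times faster can do
    # proportionally more optimization work in the same time.
    speedup = base_gain * (1 + speedup)
    print(f"{generation}-optimized: {speedup:.1%} faster")

# The series 20% + 20%*20% + ... converges to 0.20 / (1 - 0.20) = 25%.
# That is what "k < 1" means here: each cycle feeds back less improvement
# than the cycle before it, so the whole thing tops out instead of exploding.
```

Run it and you get exactly the 20%, 24%, 24.8%, … sequence from the quote, flattening out at 25%.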

So let us turn aside from optimizing compilers, and consider a more interesting artifact, EURISKO.

But Eurisko wasn’t recursive or intelligent enough; it lacked the capacity for “insight”. The same goes for Doug Engelbart: sure, computers let you work more efficiently on some tasks, some of which involve computers themselves, but there is a lot of stuff a computer can’t help you with.

The computer mouse was simply not recursive enough. Yudkowsky addresses this topic in more detail in post 22 below.

Magic is the final factor I’d like to point out, at least for now, in considering sources of discontinuity for self-improving minds.  By “magic” I naturally do not refer to this.  Rather, “magic” in the sense that if you asked 19th-century Victorians what they thought the future would bring, they would have talked about flying machines or gigantic engines, and a very few true visionaries would have suggested space travel or Babbage computers.  Nanotechnology, not so much.

He concludes:

To “improve your own capabilities” is an instrumental goal, and if a smarter intelligence than my own is focused on that goal, I should expect to be surprised.  The mind may find ways to produce larger jumps in capability than I can visualize myself.  Where higher creativity than mine is at work and looking for shorter shortcuts, the discontinuities that I imagine may be dwarfed by the discontinuities that it can imagine.

And remember how little progress it takes – just a hundred years of human time, with everyone still human – to turn things that would once have been “unimaginable” into heated debates about feasibility.  So if you build a mind smarter than you, and it thinks about how to go FOOM quickly, and it goes FOOM faster than you imagined possible, you really have no right to complain – based on the history of mere human history, you should have expected a significant probability of being surprised.  Not, surprised that the nanotech is 50% faster than you thought it would be.  Surprised the way the Victorians would have been surprised by nanotech.

Thus the last item on my (current, somewhat ad-hoc) list of reasons to expect discontinuity:  Cascades, cycles, insight, recursion, magic.

Of course this is only the poetic version of saying “widen your confidence intervals, for there may be entirely new things under the sun”, which is true but doesn’t sound as convincing.

21. Abstract/Distant Future Bias (Hanson)

In this post Hanson seems to introduce his Near/Far dichotomy. When we think about distant topics, e.g. the future, we become more idealistic and abstract, whereas in “near mode” we are more specific and down-to-earth. Hanson gives the full list:

All of these bring each other more to mind: here, now, me, us; trend-deviating likely real local events; concrete, context-dependent, unstructured, detailed, goal-irrelevant incidental features; feasible safe acts; secondary local concerns; socially close folks with unstable traits.

Conversely, all these bring each other more to mind: there, then, them; trend-following unlikely hypothetical global events; abstract, schematic, context-freer, core, coarse, goal-related features; desirable risk-taking acts, central global symbolic concerns, confident predictions, polarized evaluations, socially distant people with stable traits.

Hanson suggests that Yudkowsky’s vision of the future is likely to be influenced by these biases. That could very well be the case. But the near/far biases are really important to your daily life as well.

~~~~Starting Personal Crap (from now on I’ll warn you about paragraphs containing pointless, gooey self-disclosure with this sign. You can safely skip those passages; I’m just too lazy to open Evernote or post stuff like that on a separate blog.) I’m often idealistic about the far future, especially regarding my own behavior: “Next month, let’s say next year, I’m going to really study religiously, work hard, develop brilliant ideas and strategies, etc.” But I never actually do that much in the present, although it’s gotten better. I don’t fantasize that much anymore (though still a lot), and I’m more disciplined in the present. I guess fantasies about a positive singularity just replaced my idealistic plans about going into academia, publishing some books, etc., which is arguably even worse.

Oh, and what really sucks is that I constantly switch my career plans. I just can’t focus on one topic for more than a few months (with the notable exception of LW/singularity/transhumanism, but you can’t make a living with this shit). I desire to be well-versed in lots of subjects but this only leads to being mediocre in everything. I hate specializing.  ~~~~Ending Personal Crap

22. Engelbart: Insufficiently Recursive (Yudkowsky)

Looking back at Engelbart’s plans with benefit of hindsight, I see two major factors that stand out:

  1. Engelbart committed the Classic Mistake of AI: underestimating how much cognitive work gets done by hidden algorithms running beneath the surface of introspection, and overestimating what you can do by fiddling with the visible control levers.
  2. Engelbart anchored on the way that someone as intelligent as Engelbart would use computers, but there was only one of him – and due to point 1 above, he couldn’t use computers to make other people as smart as him.

…no piece of software that has yet been developed, by mouse or by Web, can turn an average human user into Engelbart or Raskin or Drexler.  You would very probably have to reach into the brain and rewire neural circuitry directly; I don’t think any sense input or motor interaction would accomplish such a thing.

Yeah, this is a major problem. Many smart people seem to miss this completely. Intelligence is really, really important. I guess I’m in a good position to see this: if I were 1-2 standard deviations dumber, I couldn’t even think reasonably about abstract concepts like intelligence. But if I were 1-2 standard deviations smarter, the gap between me and the smartest scientists wouldn’t be great enough to fill me with awe; it wouldn’t be a huge qualitative gap, and I could discount the differences as matters of work habits or something. Either way, I couldn’t really experience vastly greater intelligence on a gut level.

What remains mysterious is how smart people can overlook the huge difference between their intelligence and that of the average Joe. It’s probably caused by ugh fields, signaling gone mad, isolated bubbles like academia, wishful thinking, etc.

Anyway, another important point is that Engelbart was very dependent upon the economy as a whole, whereas a superintelligent AI basically just needs power and access to the internet.

You can have trade secrets, and sell only your services or products – many companies follow that business plan; any company that doesn’t sell its source code does so.  But this is just keeping one small advantage to yourself, and adding that as a cherry on top of the technological progress handed you by the outside world.  It’s not having more technological progress inside than outside.

If you’re getting most of your technological progress handed to you – your resources not being sufficient to do it in-house – then you won’t be able to apply your private productivity improvements to most of your actual velocity, since most of your actual velocity will come from outside your walls.  If you only create 1% of the progress that you use, then a 50% improvement becomes a 0.5% improvement.  The domain of potential recursion and potential cascades is much smaller, diminishing k.  As if only 1% of the uranium generating your neutrons, were available for chain reactions to be fissioned further.
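To make that last step concrete, here is a minimal sketch of the arithmetic (the 1% and 50% figures are from the quote; the variable names are my own):

```python
# Toy model: how much a private productivity improvement matters when most
# of your progress comes from outside your walls. Numbers are from the quote.
fraction_produced_in_house = 0.01   # you create 1% of the progress you use
private_improvement = 0.50          # a 50% boost to your in-house productivity

# Only the in-house fraction of your total velocity gets the boost,
# so the effect on your overall rate of progress is tiny.
effective_improvement = fraction_produced_in_house * private_improvement
print(f"effective speedup: {effective_improvement:.1%}")   # -> 0.5%
```

Which is why a self-contained AI, doing all of its own “progress” in-house, is the more worrying case: the whole improvement feeds back into the loop instead of 1% of it.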
