AI Foom Debate: Posts 41–45

41. Shared AI Wins (Hanson)

Hanson thinks Yudkowsky’s theories about AI design are pipe dreams:

The idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy.  You couldn’t build an effective cell or ecosystem or developed economy or most any complex system that way either – such things require not just good structure but also lots of good content.  Loners who start all over from scratch rarely beat established groups sharing enough standards to let them share improvements to slowly accumulate content.

I don’t really understand what the difference is between “raw data” and “good content”. I probably don’t know what I’m talking about, but content doesn’t seem to be the problem: just check Wikipedia, or search the internet, if you don’t know something. Integrating, structuring and understanding this content is the difficult part, and that looks to me like a problem of finding the right AI architecture…

42. Artificial Mysterious Intelligence (Yudkowsky)

Yudkowsky complains that there is something of a clash of cultures in the field of AI.

The Cult of Chaos cheers for emergence, complexity, WBE, neural networks, genetic algorithms, etc., i.e. for simulating or imitating intelligence without any real understanding.

Yudkowsky belongs to the Bayesian Conspiracy. He wants to achieve true insight into the nature of intelligence and reduce the seemingly mysterious and complex phenomenon to analyzable and understandable parts.

The Wright brothers didn’t manage to fly by imitating birds but through insights into the principles of flight.

Phil Goetz makes a good remark:

“…we don’t know what consciousness is, and we don’t know what intelligence is; and both occur in every instance of intelligence that we know of; and it would be surprising to find one without the other even in an AI; so I don’t think we can distinguish between them.”

Yeah, Yudkowsky wants to build a superintelligent AI that isn’t conscious. I guess this is impossible, or at least very hard.

But I also think that we really don’t know enough about intelligence or consciousness at this point in time to make any confident predictions about the future. Yudkowsky’s arguments just seem the most plausible ones, but I bet there are a lot of unknown unknowns, and in 20 years I’ll be embarrassed about lots of my current thoughts on this topic.

43. Wrapping Up (Hanson)

A good summary by Hanson:

First, he thinks that cooperation and economic trade are likely to continue because everybody gains from trade (and because he’s a biased economist).

Cooperation between AGI researchers, whether voluntary or involuntary (through information leakage), would also lower the chances of a single AI going FOOM.

A hard takeoff would also be highly unusual and extremely unlikely if we’re allowed to extrapolate from historical trends.

There is also good reason to assume that intelligence is multi-dimensional. The human brain consists of a lot of highly specialized modules that are only good at very specific tasks. There likely isn’t any one “ultimate” algorithm that’s responsible for intelligence.

But the real sticking point seems to be locality.  The “content” of a system is its small modular features while its “architecture” is its most important least modular features.

If this is true, it’s almost impossible for one AI to zoom vastly ahead of all the others, since it would need huge amounts of specialized “content” and the most efficient way to amass that is through trade.

So I suspect this all comes down to how powerful is architecture in AI, and how many architectural insights can be found how quickly?  If there were say a series of twenty deep powerful insights, each of which made a system twice as effective, just enough extra oomph to let the project and system find the next insight, it would add up to a factor of a million.  Which would still be nowhere near enough, so imagine a lot more of them, or lots more powerful.
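A quick sanity check on the arithmetic in that quote: twenty successive doublings compound to

2^20 = 1,048,576 ≈ 10^6,

which is where the “factor of a million” comes from.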

This scenario seems quite flattering to Einstein-wannabes, making deep-insight-producing Einsteins vastly more valuable than they have ever been, even in percentage terms.  But when I’ve looked at AI research I just haven’t seen it.  I’ve seen innumerable permutations on a few recycled architectural concepts, and way too much energy wasted on architectures in systems starved for content, content that academic researchers have little incentive to pursue.  So we have come to:  What evidence is there for a dense sequence of powerful architectural AI insights?  Is there any evidence that natural selection stumbled across such things?

This is the question: What makes humans and chimpanzees so different? Lots of small changes and specialized modules, or a few deep and complicated insights? Evolutionary psychology and gene association studies seem to suggest the former.

I also agree with Hanson that it’s highly likely that many folks are biased towards the FAI scenario. Roko makes a good comment about exactly that:

“- this is a source of possible bias for people like me (or Eli, or indeed anyone who thinks they are clever and are aware of the problem) which worries me a lot. In general, people want to think of themselves as being important, having some kind of significance, etc. Under the “architecture heavy” AGI scenario, people like us would be very important. Under the “general economic progress and vast content” scenario, people like us would not be particularly important, there would be billions of small contributions from hundreds of millions of individuals in academia, in the corporate sector and in government which would collectively add up to a benign singularity, without any central plan or organization.

We are therefore prone to overestimate the probability that the first scenario is the case.”

Couldn’t agree more. I often fantasize about being part of something really, really important. I want to save the world. I don’t want to laze around in the fucking Shire, I want to be part of the Fellowship of the Ring.

But what if the Fellowship didn’t consist of 9 but of 1 million people? Pffff, in that case, screw the Fellowship. Partying and smoking weed all day in the Shire is way cooler.

44. True Sources of Disagreement (Yudkowsky)

The disagreement between Yudkowsky and Hanson is, among other things, due to profoundly differing intuitions about the nature of intelligence.

Nonetheless, here’s my guess as to what this Disagreement is about:

If I had to pinpoint a single thing that strikes me as “disagree-able” about the way Robin frames his analyses, it’s that there are a lot of opaque agents running around, little black boxes assumed to be similar to humans, but there are more of them and they’re less expensive to build/teach/run.  They aren’t even any faster, let alone smarter.

Yudkowsky then recaps some of his earlier arguments.

Finally, maybe Hanson’s disapproval of FAI and Yudkowsky’s fear of most em scenarios are the true sources of disagreement:

Robin Hanson’s description of Friendly AI development as “total war” that is harmful to even discuss, or his description of a realized Friendly AI as “a God to rule us all”.  Robin must be visualizing an in-practice outcome very different from what I do, and this seems like a likely source of emotional fuel for the disagreement as well.

Conversely, Robin Hanson seems to approve of a scenario where lots of AIs, of arbitrary motives, constitute the vast part of the economic productivity of the Solar System, because he thinks that humans will be protected under the legacy legal system that grew continuously out of the modern world, and that the AIs will be unable to coordinate to transgress the legacy legal system for fear of losing their own legal protections.  I tend to visualize a somewhat different outcome, to put it mildly; and would symmetrically be suspected of emotional unwillingness to accept that outcome as inexorable.

45. The Bad Guy Bias (Hanson)

Hanson quotes Shankar Vedantam:

When a tragedy occurs, we instantly ask who or what caused it. When we find a human hand behind the tragedy — such as terrorists, in the case of the Mumbai attacks — something clicks in our minds that makes the tragedy seem worse than if it had been caused by an act of nature, disease or even human apathy.

He concludes that

…this bias should also afflict our future thinking, making us worry more about evil alien intent than unintentional catastrophe.

Yeah, that’s responsible for a lot of bad futurism and irrational thinking in general.

Here’s Yudkowsky’s reply:

“Indeed, I’ve found that people repeatedly ask me about AI projects with ill intentions – Islamic terrorists building an AI – rather than trying to grasp the ways that well-intentioned AI projects go wrong by default.”
