543. The Weighted Majority Algorithm – 552. Failure by Affective Analogy

Again, nothing great, but beginning with the next post, I’ll summarize the AI-FOOM debate. Yay!

543. The Weighted Majority Algorithm

Yudkowsky uses the example of the weighted majority algorithm to illustrate that randomness is often rather useless.
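For concreteness, here is a minimal sketch of the deterministic weighted majority algorithm (Littlestone & Warmuth) that the post is built around; the setup and the penalty parameter beta are my own illustrative choices, not code from the post.  With beta = 1/2 the deterministic version already makes at most roughly 2.41 * (m + log2 n) mistakes when the best of n experts makes m, and the randomized variant only improves the constant – and only against an adversary who can see your decision rule but not your coin flips.

    def weighted_majority(experts, outcomes, beta=0.5):
        """Deterministic weighted majority vote over binary predictions.

        experts:  list of functions mapping a round index to 0 or 1.
        outcomes: list of true outcomes (0 or 1), one per round.
        beta:     multiplicative penalty applied to each wrong expert.
        Returns the number of mistakes the combined predictor makes.
        """
        weights = [1.0] * len(experts)
        mistakes = 0
        for t, outcome in enumerate(outcomes):
            votes = [expert(t) for expert in experts]
            # Predict with whichever side holds more total weight.
            weight_one = sum(w for w, v in zip(weights, votes) if v == 1)
            weight_zero = sum(w for w, v in zip(weights, votes) if v == 0)
            prediction = 1 if weight_one >= weight_zero else 0
            if prediction != outcome:
                mistakes += 1
            # Multiplicatively penalize every expert that was wrong.
            weights = [w * beta if v != outcome else w
                       for w, v in zip(weights, votes)]
        return mistakes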

544. Selling Non-Apples

Yudkowsky disses Rodney Brooks’ AI philosophy.

545. News Post, 546. Meetup Post, 547. Meetup Post

548. Whither OB?

Discussions about the future of Overcoming Bias.

549. The Nature of Logic

Yudkowsky elaborates on the relation between logic and Bayes and concludes:

Logic might be well-suited to verifying your derivation of the Bayesian network rules from the axioms of probability theory.  But this doesn’t mean that, as a programmer, you should try implementing a Bayesian network on top of a logical database.  Nor, for that matter, that you should rely on a first-order theorem prover to invent the idea of a “Bayesian network” from scratch.

Thinking mathematically about uncertain reasoning doesn’t mean that you try to turn everything into a logical model.  It means that you comprehend the nature of logic itself within your mathematical vision of cognition, so that you can see which environments and problems are nicely matched to the structure of logic.
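To make the contrast concrete, here is a toy Bayesian network (my illustration, with made-up numbers; nothing from the original post): rain and a sprinkler both explain wet grass.  Logic is well suited to verifying each derivation step, but the object-level inference is arithmetic over degrees of belief – there is no theorem to prove, just P(Rain | WetGrass) to compute.

    # Toy network: Rain -> WetGrass <- Sprinkler (illustrative numbers).
    P_RAIN = 0.2
    P_SPRINKLER = 0.1

    def p_wet(rain, sprinkler):
        """Conditional probability table P(WetGrass=True | parents)."""
        return {(True, True): 0.99, (True, False): 0.90,
                (False, True): 0.80, (False, False): 0.0}[(rain, sprinkler)]

    def p_rain_given_wet():
        """P(Rain=True | WetGrass=True) by brute-force enumeration."""
        p_wet_total = p_wet_and_rain = 0.0
        for rain in (True, False):
            for sprinkler in (True, False):
                p = ((P_RAIN if rain else 1 - P_RAIN)
                     * (P_SPRINKLER if sprinkler else 1 - P_SPRINKLER)
                     * p_wet(rain, sprinkler))
                p_wet_total += p
                if rain:
                    p_wet_and_rain += p
        return p_wet_and_rain / p_wet_total

    print(p_rain_given_wet())  # about 0.74 with these numbers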

550. Logical or Connectionist AI

Funny rant about neural networks and connectionist AI.

But neural networks were not marketed as cleverer math.  Instead they were marketed as a revolt against Spock.

No, more than that – the neural network was the new champion of the Other Side of the Force – the antihero of a Manichaean conflict between Law and Chaos.  And all good researchers and true were called to fight on the side of Chaos, to overthrow the corrupt Authority and its Order.  To champion Freedom and Individuality against Control and Uniformity.  To Decentralize instead of Centralize, substitute Empirical Testing for mere Proof, and replace Rigidity with Flexibility.

…But the thing is, a neural network isn’t an avatar of Chaos any more than an expert system is an avatar of Law.

It’s just… you know… a system with continuous parameters and differentiable behavior traveling up a performance gradient.

And logic is a great way of verifying truth preservation by syntactic manipulation of compact generalizations that are true in crisp models.  That’s it.  That’s all.  This kind of logical AI is not the avatar of Math, Reason, or Law.

…But the successful marketing campaign said,

“The failure of logical systems to produce real AI has shown that intelligence isn’t logical.  Top-down design doesn’t work; we need bottom-up techniques, like neural networks.”

And this is what I call the Lemon Glazing Fallacy, which generates an argument for a fully arbitrary New Idea in AI using the following template:

  • Major premise:  All previous AI efforts failed to yield true intelligence.
  • Minor premise:  All previous AIs were built without delicious lemon glazing.
  • Conclusion:  If we build AIs with delicious lemon glazing, they will work.
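Yudkowsky’s deflationary description – “continuous parameters and differentiable behavior traveling up a performance gradient” – can be made literal in a few lines.  The sketch below (my illustration; the data and learning rate are arbitrary) trains a single sigmoid unit by gradient ascent on the log-likelihood.  No avatar of Chaos, just calculus.

    from math import exp

    def sigmoid(z):
        return 1.0 / (1.0 + exp(-z))

    # Illustrative data: the unit should learn "y = 1 iff x > 0".
    data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]

    w, b, lr = 0.0, 0.0, 0.5           # continuous parameters
    for _ in range(200):
        for x, y in data:
            p = sigmoid(w * x + b)     # differentiable behavior
            # Gradient of the log-likelihood log P(y | x): ascend it.
            w += lr * (y - p) * x
            b += lr * (y - p)

    print(w, b)  # w ends up strongly positive: the threshold is learned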

551. Failure by Analogy

Wasn’t it in some sense reasonable to have high hopes of neural networks?  After all, they’re just like the human brain, which is also massively parallel, distributed, asynchronous, and –

Hold on.  Why not analogize to an earthworm’s brain, instead of a human’s?

Reasoning by analogy is often pretty useless. It’s very hard to pick the right reference class without enough inside-view understanding, since superficial similarities are regularly misleading.

Yes, sometimes analogy works.  But the more complex and dissimilar the objects are, the less likely it is to work.  The narrower the conditions required for success, the less likely it is to work.  The more complex the machinery doing the job, the less likely it is to work.  The more shallow your understanding of the object of the analogy, the more you are looking at its surface characteristics rather than its deep mechanisms, the less likely analogy is to work.

But if your goal is status, forget what I said! Most folks despise the Virtue of Narrowness, and by using broad analogies you can sound really deep.

552. Failure by Affective Analogy

Alchemy is a way of thinking that humans do not instinctively spot as stupid.  Otherwise alchemy would never have been popular, even in medieval days.  Turning lead into gold by mixing it with things that seemed similar to gold sounded every bit as reasonable, back in the day, as trying to build a flying machine with flapping wings.  (And yes, it was worth trying once, but you should notice if Reality keeps saying “So what?”)

And the final and most dangerous form of failure by analogy is to say a lot of nice things about X, which is similar to Y, and so conclude that we should expect nice things of Y. You may also say horrible things about Z, which is the polar opposite of Y: if Z is bad, Y must be good.

Call this “failure by affective analogy”.

Failure by affective analogy is when you don’t just say, “This lemon glazing is yellow, gold is yellow, QED.”  But rather say:

“And now we shall add delicious lemon glazing to the formula for the Philosopher’s Stone, the root of all wisdom, since lemon glazing is beautifully yellow, like gold is beautifully yellow, and also lemon glazing is delightful on the tongue, indicating that it is possessed of a superior potency that delights the senses, just as the beauty of gold delights the senses…”
