Personal note: Since I finished the metaethics sequence I’ve been pretty lethargic. I don’t know why, but I guess it has to do with the (seeming) arbitrariness of Yudkowsky’s metaethics. At the same time it’s very convincing. (Yesterday I read antinatalism blogs for 12 hours or so. That could also be responsible for my apathy.) And I’m getting ever more confused about the right singularity strategy. Donating to SIAI or pushing FAI research looks a lot less promising than it did just a few months ago. Everything is kinda pointless.
Anyway, here are some relatively unimportant posts.
461. Dreams of AI Design
Most AGI researchers underestimate the difficulty of building an AI:
And indeed I know many people who believe that intelligence is the product of commonsense knowledge or massive parallelism or creative destruction or intuitive rather than rational reasoning, or whatever. But all these are only dreams, which do not give you any way to say what intelligence is, or what an intelligence will do next, except by pointing at a human. And when the one goes to build their wondrous AI, they only build a system of detached levers, “knowledge” consisting of LISP tokens labeled apple and the like; or perhaps they build a “massively parallel neural net, just like the human brain”. And are shocked – shocked! – when nothing much happens.
462. Against Modal Logics
Most philosophy is bunk.
The proliferation of modal logics in philosophy is a good illustration of one major reason: Modern philosophy doesn’t enforce reductionism, or even strive for it.
Most philosophers, as one would expect from Sturgeon’s Law, are not very good. Which means that they’re not even close to the level of competence it takes to analyze mentalistic black boxes into cognitive algorithms. Reductionism is, in modern times, an unusual talent. Insights on the order of Pearl et al.’s reduction of causality or Julian Barbour’s reduction of time are rare.
So what these philosophers do instead, is “bounce” off the problem into a new modal logic: A logic with symbols that embody the mysterious, opaque, unopened black box. A logic with primitives like “possible” or “necessary”, to mark the places where the philosopher’s brain makes an internal function call to cognitive algorithms as yet unknown.
Well, I agree, but all in all philosophy seems to be a healthier field than a lot of the other humanities, such as political science, sociology, or comparative literature. That doesn’t mean much, of course…
463. Harder Choices Matter Less
…or they should, logically speaking.
Suppose you’re torn in an agonizing conflict between two choices.
Well… if you can’t decide between them, they must be about equally appealing, right? Equally balanced pros and cons? So the choice must matter very little – you may as well flip a coin. The alternative is that the pros and cons aren’t equally balanced, in which case the decision should be simple.