(Just some posts on anthropomorphic biases. The next post deals with the important stuff, i.e. metaethics)
The core fallacy of anthropomorphism is expecting something to be predicted by the black box of your brain, when its causal structure is so different from that of a human brain as to give you no license to expect any such thing.
The Tragedy of Group Selectionism (as previously covered in the evolution sequence) was a rather extreme error by a group of early (pre-1966) biologists, including Wynne-Edwards, Allee, and Brereton among others, who believed that predators would voluntarily restrain their breeding to avoid overpopulating their habitat and exhausting the prey population.
…But later on, Michael J. Wade went out and actually created in the laboratory the nigh-impossible conditions for group selection. Wade repeatedly selected insect subpopulations for low population numbers. Did the insects evolve to restrain their breeding, and live in quiet peace with enough food for all, as the group selectionists had envisioned?
No; the adults adapted to cannibalize eggs and larvae, especially female larvae.
In retrospect this outcome was predictable. Why didn’t the group-selectionists think of this possibility? Because humans only think of solutions that rank high according to their preferences:
Suppose you were a member of a tribe, and you knew that, in the near future, your tribe would be subjected to a resource squeeze. You might propose, as a solution, that no couple have more than one child – after the first child, the couple goes on birth control. Saying, “Let’s all individually have as many children as we can, but then hunt down and cannibalize each other’s children, especially the girls,” would not even occur to you as a possibility.
If you were in charge of building predators, you would choose the humane solution – namely, restricted breeding. But evolution doesn’t think like you do.
But the point generalizes: this is the problem with optimistic reasoning in general. What is optimism? It is ranking the possibilities by your own preference ordering, selecting an outcome high in that preference ordering, and somehow letting that outcome end up as your prediction. Whatever elaborate rationalizations were generated along the way are probably not as relevant as one might fondly believe; look at the cognitive history and it’s optimism in, optimism out. But Nature, or whatever other process is under discussion, is not actually, causally choosing between outcomes by ranking them in your preference ordering and picking a high one. So the brain fails to synchronize with the environment, and the prediction fails to match reality.
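The failure mode above can be sketched as a toy model (entirely hypothetical – the outcomes and numbers are invented for illustration): an “optimistic predictor” ranks outcomes by its own preferences, while the actual causal process (here, selection on individual fitness) ranks by a different criterion, so the optimistic prediction misses.

```python
# Toy model of "optimism in, optimism out" (invented numbers, for
# illustration only). Possible responses of a population to a resource
# squeeze, scored by (a) how much a human observer would prefer them
# and (b) the individual fitness they actually confer.
outcomes = {
    "restrain breeding":         {"preference": 10, "fitness": 2},
    "compete harder for food":   {"preference": 5,  "fitness": 6},
    "cannibalize rivals' young": {"preference": 0,  "fitness": 9},
}

def optimistic_prediction(outcomes):
    # Optimism: predict the outcome you would most like to see.
    return max(outcomes, key=lambda o: outcomes[o]["preference"])

def natural_selection(outcomes):
    # Nature: select the outcome that maximizes individual fitness,
    # with no regard for anyone's preference ordering.
    return max(outcomes, key=lambda o: outcomes[o]["fitness"])

print(optimistic_prediction(outcomes))  # -> restrain breeding
print(natural_selection(outcomes))      # -> cannibalize rivals' young
```

The two rankings only agree by coincidence; nothing in the causal process consults the predictor’s preferences, which is exactly why the prediction fails.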
Once you’ve developed your emotionally appealing pet-hypothesis that “shows” that everything will be fine, you’ve already lost:
It is a fact of life that we hold ideas we would like to believe, to a lower standard of proof than ideas we would like to disbelieve. In the former case we ask “Am I allowed to believe it?” and in the latter case ask “Am I forced to believe it?”
I’ve made even sillier mistakes, by the way – though about AI, not evolutionary biology. And the thing that strikes me, looking over these cases of anthropomorphism, is the extent to which you are screwed as soon as you let anthropomorphism suggest ideas to examine.
In large hypothesis spaces, the vast majority of the cognitive labor goes into noticing the true hypothesis. By the time you have enough evidence to consider the correct theory as one of just a few plausible alternatives – to represent the correct theory in your mind – you’re practically done. Of this I have spoken several times before.
And by the same token, my experience suggests that as soon as you let anthropomorphism promote a hypothesis to your attention, so that you start wondering if that particular hypothesis might be true, you’ve already committed most of the mistake.
This cognitive bias is especially problematic in the field of AGI, and more specifically in thinking about Unfriendly AI.
Was it wrong to use the first atomic bomb? I don’t know, but here is a good comment by Scott Aaronson:
I would have exploded the first bomb over the ocean, and only then used it against cities if Japan still hadn’t surrendered. No matter how many arguments I read about this, I still can’t understand the downsides of that route, besides the cost of a ‘wasted bomb.’
But what’s just as tragic as the bomb having been used in anger is that it wasn’t finished 2-3 years earlier – in which case it could have saved tens of millions of lives.