AI Foom Debate: Probability Estimates

[Epistemic note: This is an old post, so it is not necessarily accurate anymore.]

I list some claims that need to be true in order for AI FOOM to be possible, along with my estimates of how probable each one is. Obviously, I pulled these numbers out of thin air, but it's better than nothing.

0. Our civilization won't collapse, we won't nuke ourselves, etc. (70%)

At least until we build the first superintelligent AI.

1. AI is theoretically possible (90%)

There is nothing special about carbon; that is, intelligence is substrate-independent, and there is nothing mysterious about consciousness or intelligence that would make it impossible to create AI.

2. Intelligence can be achieved through a few deep insights (40%)

That means it's possible for a small team of math geniuses to program the first AGI because they solved the mystery of intelligence. If, however, intelligence is the product of lots of context-specific gimmicks, then AI FOOM is almost impossible: it would require the work of thousands of researchers, which would lead to information leakage, competition between different AI architectures, and so on. Furthermore, even the AI itself couldn't make progress fast enough to jump ahead of everyone.

I'm pretty uncertain about this topic. The evidence points in the direction of multi-dimensionality, at least in the case of human intelligence: our brain consists of lots of highly specialized modules. But I don't know how relevant this is for AI.

3. Strong and sustainable recursive self-improvement is possible (80%)

The human level of intelligence is probably nowhere near the maximum possible. You can build minds that are vastly more intelligent and don't require many more resources.

4. Recursive self-improvement can happen fast (75%)

An AI with transhuman intelligence can achieve superintelligence within 1-2 years or even faster.

5. Nothing crucial is missing (50%)

Maybe intelligence isn't as powerful as we think it is. Maybe we live in a simulation and our simulators shut us down if we build a superintelligent AI, or whatever (although I already dealt with that in claim no. 1). You know, maybe the whole argument is somehow confused. Think of no. 5 as the meta-uncertainty safety net: a 50% chance that no such missing piece exists.

Conclusion

I always tried to state conditional probabilities. That means, for example, that I'm 75% certain that claim 4 is true given that claims 0 through 3 are true, and so on. I hope I didn't make any obvious mistakes. If we multiply the numbers, we get…
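Spelled out with the chain rule, writing P(k | …) for the probability of claim k given that all earlier claims hold, the product of the six estimates above is:

P(AI FOOM) = P(0) × P(1 | 0) × P(2 | 0, 1) × P(3 | 0–2) × P(4 | 0–3) × P(5 | 0–4)
= 0.70 × 0.90 × 0.40 × 0.80 × 0.75 × 0.50
≈ 0.0756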

P(AI FOOM) ≈ 7.5%

Yeah, that’s in the right ballpark. I’m tempted to change the numbers a little bit in order to get something like 10%, maybe even 20%. But definitely above 1% and below 60%.
