Allen vs Kurzweil in the battle of the Singularity
This opinion piece by Paul Allen argues that the singularity (the point where computing power surpasses human brain power) is not very near, if it is possible at all. He bases his criticism on Kurzweil's Law of Accelerating Returns, which assumes that computing power development will undergo substantial acceleration before slowing into the "S" curve that all development eventually exhibits. Some of Kurzweil's writing seems to question whether computing power development will EVER "S"-curve into decline, since once it is taken over by synthetic intelligence it will act more like a nuclear chain reaction than past technology development. Allen doesn't buy this. Additionally, he invokes "the Complexity Brake," which questions whether a complex adaptive system like the human brain can be "understood" in the usual sense of the word.
Kurzweil responds here. He starts off, unfortunately, ad hominem, criticizing Allen rather than his arguments, and makes a sort of "appeal to his own authority" by assuming Allen has not sufficiently studied his work. He simply repeats his arguments after claiming Allen is unaware of them, rather than dealing with Allen's objections. Kurzweil does have some good arguments, but ultimately we are left with the argument that every exponential growth scenario eventually "S-curves out," by Allen, and "except this one," by Kurzweil. Both claim empirical evidence on their side. Kurzweil is correct that, so far, the "large S-curve" of his Law of Accelerating Returns is composed of finer-scale S-curves working over shorter and shorter timescales. Allen is correct that this type of behavior is not unprecedented, and that ultimately the "macro level" S-curve flattens out.
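The crux of the exponential-versus-S-curve disagreement is easy to see numerically: logistic (S-curve) growth tracks pure exponential growth almost exactly at first, then flattens toward a ceiling. Here is a minimal sketch; the growth rate `r` and carrying capacity `K` are arbitrary illustration values, not figures from either essay.

```python
import math

def exponential(t, r=0.5):
    """Pure exponential growth: x(t) = e^(r*t), starting from x(0) = 1."""
    return math.exp(r * t)

def logistic(t, r=0.5, K=1000.0):
    """Logistic (S-curve) growth toward carrying capacity K,
    with the same initial value x(0) = 1 and early growth rate r."""
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on, the two curves are nearly indistinguishable...
for t in (0, 2, 4):
    print(t, round(exponential(t), 2), round(logistic(t), 2))

# ...but the logistic curve saturates near K while the
# exponential keeps climbing without bound.
for t in (10, 20, 30):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Allen's claim is that all past technology curves turn out to be logistic when observed long enough; Kurzweil's is that self-improving machine intelligence would keep resetting the ceiling.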
Kurzweil gets on thin ice when he criticizes the "Complexity Brake" by stating:
Allen’s statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome. And while the translation of the genome into a brain is not straightforward, the brain cannot have more design information than the genome. Note that epigenetic information (such as the peptides controlling gene expression) do not appreciably add to the amount of information in the genome.
This declaration against the emergence of information content in a complex adaptive system is puzzling coming from someone who relies on this very thing happening for the singularity to occur. Self-replicating machines that increase in complexity require that the "design" of this increasingly complex system of machine intelligence arise from a lesser amount of initial information. And since we are only getting at the tip of the epigenetic information iceberg, the claim that these interactions do not appreciably add to the information in the genome is inexplicable.
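The arithmetic behind Kurzweil's bound is worth making explicit. The figures below are commonly cited order-of-magnitude estimates (roughly 3.2 billion base pairs, roughly 100 trillion synapses), not numbers taken from either essay:

```python
# Back-of-envelope arithmetic behind Kurzweil's claim that the brain's
# "design information" is bounded by the genome's information content.

BASE_PAIRS = 3.2e9        # ~3.2 billion base pairs in the human genome
BITS_PER_BASE = 2         # 4 possible bases -> 2 bits per base pair

genome_bytes = BASE_PAIRS * BITS_PER_BASE / 8
print(f"Raw genome: ~{genome_bytes / 1e6:.0f} MB")  # ~800 MB uncompressed

# Allen's "every structure and neural circuit is unique" picture implies
# a description on the order of the brain's own connectivity:
SYNAPSES = 1e14           # ~100 trillion synapses (rough estimate)
BYTES_PER_SYNAPSE = 1     # even 1 byte per synapse dwarfs the genome

print(f"Per-synapse description: ~{SYNAPSES * BYTES_PER_SYNAPSE / 1e12:.0f} TB")
```

The gap between ~800 MB and ~100 TB is the "hundreds of trillions of bytes" Kurzweil calls impossible; the counter-argument above is that developmental and epigenetic processes can generate structure not explicitly encoded in those 800 MB.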
What neither deals with, and what I think may be the uncrossable divide, is that the machine paradigm is digital while the neurons of the brain have electro-chemical analogue characteristics. I am attracted to (but treat as pure speculation) the notion that in addition to an analogue component there could also be a quantum mechanical component. The philosophical argument over the origin of free will and the seat of consciousness gets into some heady stuff there, but the notion that biological systems can play games with quantum superposition of information would add a level to human consciousness that requires a fundamentally different technology than digital circuitry to deal with at a level deeper than mathematical calculation.
For more on that, see Stuart Kauffman on the topic.