Quotable (#12)

A resource overhang is a development jolt waiting to happen. Eliezer Yudkowsky on hard AI takeoff (from December 2008):

… hominid brain size increased by a factor of five over the course of around five million years. You might want to think very seriously about the contrast between that idiom, and a successful AI being able to expand onto five thousand times as much hardware over the course of five minutes — when you are pondering possible hard takeoffs, and whether the AI trajectory ought to look similar to human experience.

A subtler sort of hardware overhang, I suspect, is represented by modern CPUs having a 2GHz serial speed, in contrast to neurons that spike 100 times per second on a good day. The “hundred-step rule” in computational neuroscience is a rule of thumb that any postulated neural algorithm which runs in realtime has to perform its job in less than 100 serial steps one after the other. We do not understand how to efficiently use the computer hardware we have now, to do intelligent thinking. But the much-vaunted “massive parallelism” of the human brain is, I suspect, mostly cache lookups to make up for the sheer awkwardness of the brain’s serial slowness — if your computer ran at 200Hz, you’d have to resort to all sorts of absurdly massive parallelism to get anything done in realtime. I suspect that, if correctly designed, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less.

So that’s another kind of overhang: because our computing hardware has run so far ahead of AI theory, we have incredibly fast computers we don’t know how to use for thinking; getting AI right could produce a huge, discontinuous jolt, as the speed of high-grade thought on this planet suddenly dropped into computer time.

A still subtler kind of overhang would be represented by human failure to use our gathered experimental data efficiently.

(via)
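To put rough numbers on the serial-speed contrast in the quote, here is a minimal back-of-envelope sketch in Python. The 2GHz and 100Hz figures come from the passage above; the one-second realtime window used for the hundred-step comparison is an assumption added purely for illustration, not a claim from the quote.

```python
# Back-of-envelope comparison of serial speeds, using the figures in the quote.
# The 2 GHz CPU clock and ~100 Hz neural spike rate are cited in the text above;
# the ~1 second "realtime" window is an illustrative assumption.

cpu_hz = 2e9      # modern CPU serial speed cited in the quote (2 GHz)
neuron_hz = 100   # neuron spike rate "on a good day" (100 Hz)

serial_speed_ratio = cpu_hz / neuron_hz
print(f"Serial speed ratio (CPU : neuron): {serial_speed_ratio:,.0f} : 1")
# -> 20,000,000 : 1

# Hundred-step rule: an algorithm that must finish within a realtime window
# can only chain about (spike rate x window) serial steps one after the other.
realtime_window_s = 1.0  # assumed window, for illustration only
neural_serial_steps = neuron_hz * realtime_window_s
cpu_serial_steps = cpu_hz * realtime_window_s
print(f"Serial steps available to neurons in that window: {neural_serial_steps:,.0f}")
print(f"Serial steps available to a 2 GHz core in that window: {cpu_serial_steps:,.0f}")
```

At these figures the serial gap is roughly seven orders of magnitude, which is the sense in which the quoted passage treats fast hardware plus slow AI theory as an overhang.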

4 thoughts on “Quotable (#12)”

    • If we had a real ‘Chinese Room’ — functionally equivalent to the brain — it would (by definition) be able to pass the Turing Test, and therefore challenge our intellectual right to deny it the property of emergent intelligence.

      • You use the word “intelligence”. Searle uses the word “consciousness”. I’m not sure whether Turing was trying to test for intelligence or consciousness, but the distinction probably matters. Deep Blue was arguably intelligent but not conscious, for instance.

        Searle accepts that we might build a machine brain (a machine, not just a computer) which achieves consciousness, especially once we really understand how the biological brain achieves it. Scientists have built machine stomachs which can digest human-type food, so why not machine brains? Note that they didn’t achieve this by assuming digestion might be an emergent side product of computation. They figured out how the icky stuff works physically and emulated it.

        We figured out how to do heavier-than-air flight. We always knew it was possible because we could see birds doing it. The way an A380 flies isn’t exactly the same way a bird flies, but it’s applying the same physical principles. The A380 can fly higher, faster and for longer without pausing than any bird, so we can certainly surpass nature. But again, it’s not, fundamentally, computational principles the Wright Brothers applied, even if modern computers can help us design better wings and can even fly the plane better than an unplugged human could.

        • Consciousness is an interesting problem in principle, but our understanding is so poor that it tends to be abused when employed as a brick in arguments of any kind. The great value of the imitation game (Turing Test) is that it provides such a determinate criterion, which isn’t exactly aimed at intelligence (far less consciousness) but rather the capacity to operate with social competence.

          To get back to the Chinese Room, it seems to me to have things upside down in trying to comprehend machine intelligence through speculative thought experiment, in advance of the engineering process. Your mechanical flight example is more useful — begin with functional goals, and philosophize about their ‘possibility’ only secondarily (AI will probably do a better job of that than we can).
