Quotable (#63)

How anthropomorphism distorts AI forecasting:

… a mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits.

This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life — part of how our tools work, how our cities move and how our economy builds and trades things.

Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for. This anthropocentric fallacy may contradict the implications of contemporary A.I. research, but it is still a prism through which much of our culture views an encounter with advanced synthetic cognition.

ADDED: Related (from Bruce Sterling). “… we never talk about roboticized cat, an augmented cat, a super intelligent cat. Why? Because we are stuck in this metaphysical trench where we think it is all about humanity’s states of mind. It is not! We humans do not always have conscious states of mind: we sleep at night. Computers don’t have these behaviors. We are elderly, we forget what is going on. We are young, we do not know how to speak yet. That is cognition. You never see a computer that is so young it cannot speak.”

6 thoughts on “Quotable (#63)”

  1. It all depends on what is defined as “intelligence”. A common trope among anti-AI arguers is that only human-like intelligence (coupled with consciousness) is recognized as such. Then, by definition, only humans are intelligent.

    However, if you extend the definition beyond human-like intelligence, it becomes much harder to pin down: not only what intelligence amounts to, but also at what level a system counts as intelligent.

    Hence AI research is divided into concrete areas like cognitive information processing, learning, and autonomous systems. What has become very well developed is narrow, task-specific intelligence, a.k.a. “automating people’s jobs”.

    Regarding dmf’s link above: the AI threat scenario for humanity is not (at this moment) “Evil self-improving AI becomes superintelligent and converts us all to von Neumann probes” but “Autocorrect screw-up writ large”. Randomized small mistakes aren’t so dangerous, but consistent mistakes are. If I’m drunk and crash my car, I die, and possibly take a few others with me. If a software bug causes all cars with a certain firmware to steer toward a tree under a certain phase of the moon, it is a global disaster. An automated system’s strength (learning can be centralized) is also its weakness; see the sketch after this comment.

    It would be different for compact, truly autonomous self-learning systems, but there we haven’t even reached the small-mammal level yet. All our current AI is based on massive, centralized, supervised deep learning, combined with human-maintained, task-specific software.
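
To make the contrast between randomized and consistent mistakes concrete, here is a minimal sketch; the fleet size, per-day failure rate, and catastrophe threshold are invented purely for illustration. Both scenarios have the same expected number of failures per day, but the correlated one concentrates them into rare fleet-wide disasters.

```python
import math

# Illustrative sketch: independent driver-style errors vs. one shared firmware
# bug, with the same average failure rate. All numbers are invented.

FLEET = 100_000        # cars running the same hypothetical firmware
P_FAIL = 1e-4          # per-day failure probability in both scenarios
CATASTROPHE = 1_000    # call a day catastrophic if >= 1,000 cars fail

# The expected number of failures per day is identical either way.
mu = FLEET * P_FAIL
print(f"expected failures per day (either scenario): {mu:.0f}")

# Scenario A: independent errors, roughly Poisson(mu) failures per day.
# Chernoff bound for the Poisson upper tail, P(X >= x) <= exp(-mu) * (e*mu/x)**x,
# computed in log space because the probability underflows a float.
log_tail = -mu + CATASTROPHE * (1 + math.log(mu / CATASTROPHE))
print(f"independent errors: ln P(catastrophic day) <= {log_tail:.0f}")

# Scenario B: one shared bug, so either nobody fails or everybody does.
# A catastrophic day happens exactly as often as the bug triggers.
print(f"correlated bug:     P(catastrophic day) = {P_FAIL:.0e}")
```

The exact bound does not matter; the point is that the independent-errors tail is astronomically small, while the correlated scenario hits catastrophe at the full per-day bug rate.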
