Quotable (#193)

Lessons from zombie-psychosis:

Cotard’s Syndrome—in which a person can believe that they’re dead, that their organs are rotting, or that they don’t exist—was first identified by the French neurologist Jules Cotard more than a century ago, in 1882. But the condition is so rare that it’s still far from fully understood. […] … But Cotard’s Syndrome isn’t simply interesting from a neuroscience or psychological perspective. In the world of artificial intelligence, roboticists are working to build ever-more complex machines that replicate human behavior. One of the central questions is whether machines can truly become self-aware. Could understanding Cotard’s Syndrome provide the answer?

This could go so wrong …

Sensitive Interface

Swiss banking giant UBS wants to talk to you about robotic emotion simulation, for some reason. It’s not at all badly done (irrespective of what it’s selling).

Building on [Herbert A.] Simon’s achievements in the field of artificial intelligence, we take a journey to explore the latest innovations in AI and, most importantly, its human element, to ultimately answer the controversial questions: What physical human characteristics and emotions must a robot have to make people react to it? And, obversely, Can AI recognize human emotions? …

The ad (if that’s what it is) has interactive features that seek to make some of its questions performative. It begins to fold back upon itself only in the final section, when it suggests:

Breakthroughs in data processing and conversation systems are helping more and more companies to implement AI in their operations. According to some experts, well-advanced artificial intelligence could someday not only assist businesses in doing their jobs more efficiently, but also bring a more human touch back to customer service, leading consumers to prefer sophisticated and professional AI service to today’s human variety.

Puzzle resolved. We’re exploring a projection of UBS’s customer interface from the near future.

Simplified Drake

[Image: the Drake equation]

Great Filter calculation proceeds, around the back:

… according to a new paper published in the journal Astrobiology, recent discoveries of exoplanets combined with a broader approach to answering this question has allowed researchers to conclude that, unless the odds of advanced life evolving on a habitable planet are immensely low, then humankind is not the universe’s first technological, or advanced, civilization. […] “The question of whether advanced civilizations exist elsewhere in the universe has always been vexed with three large uncertainties in the Drake equation,” said Adam Frank, professor of physics and astronomy at the University of Rochester and co-author of the paper, in a press release. […] … “Thanks to NASA’s Kepler satellite and other searches, we now know that roughly one-fifth of stars have planets in ‘habitable zones,’ where temperatures could support life as we know it. So one of the three big uncertainties has now been constrained,” explained Frank.

Thing is:

However, the universe is more than 13 billion years old. “That means that even if there have been a thousand civilizations in our own galaxy, if they live only as long as we have been around — roughly ten thousand years — then all of them are likely already extinct,” explained Sullivan. “And others won’t evolve until we are long gone.”
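The arithmetic behind that pessimism line is easy to sketch. Only the "one-fifth of stars" figure comes from the quoted press release; the star count and the "expected number below one" framing are illustrative assumptions, not numbers from the paper:

```python
# Sketch of the "are we the first?" threshold in Drake-equation style.
# Only f_hz comes from the quote; everything else is an assumed,
# illustrative figure.

N_stars_galaxy = 4e11    # assumption: ~400 billion stars in the Milky Way
f_hz = 0.2               # from the quote: ~1/5 of stars host habitable-zone planets
habitable_planets = N_stars_galaxy * f_hz

# For the expected number of technological species ever to arise in the
# galaxy to stay below 1, the per-planet probability of one evolving
# must satisfy f < 1 / (number of habitable-zone planets):
f_max = 1 / habitable_planets

print(f"habitable-zone planets (galaxy, assumed): {habitable_planets:.1e}")
print(f"per-planet odds must fall below ~{f_max:.1e} for us to be first")
```

The point of the paper's reframing survives any reasonable choice of inputs: with tens of billions of candidate planets in this galaxy alone, the per-planet odds have to be extraordinarily small before "we are the first" becomes the likely answer.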

(Apologies for the image quality — stumped in my search for a better one.)

Quotable (#147)

Lots of stimulation in this John Horgan interview with Eliezer Yudkowsky (via). Among the gems:

Horgan: I’ve described the Singularity as an “escapist, pseudoscientific” fantasy that distracts us from climate change, war, inequality and other serious problems. Why am I wrong?
Yudkowsky: Because you’re trying to forecast empirical facts by psychoanalyzing people. This never works.

(Note on ‘Singularity’ FWIW by EY here: “I think that the ‘Singularity’ has become a suitcase word with too many mutually incompatible meanings and details packed into it, and I’ve stopped using it.”)

One more EY snippet: “… human axons transmit information at around a millionth of the speed of light; even when it comes to heat dissipation, each synaptic operation in the brain consumes around a million times the minimum heat dissipation for an irreversible binary operation at 300 Kelvin, and so on. Why think the brain’s software is closer to optimal than the hardware? Human intelligence is privileged mainly by being the least possible level of intelligence that suffices to construct a computer; if it were possible to construct a computer with less intelligence, we’d be having this conversation at that level of intelligence instead.”
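Both ratios in the quote can be sanity-checked with back-of-the-envelope numbers. The axon speed and per-synapse energy figures below are common textbook estimates, not from the interview; the Landauer limit (k·T·ln 2 per irreversible bit operation) is the "minimum heat dissipation" EY is referring to:

```python
import math

# Ratio 1: signal speed. A fast myelinated axon conducts at roughly
# 100 m/s (assumed textbook figure).
c = 3.0e8            # speed of light, m/s
axon_speed = 100.0   # m/s
speed_ratio = axon_speed / c           # ~3e-7, i.e. "around a millionth"

# Ratio 2: energy per operation. Landauer's limit at T = 300 K gives the
# minimum heat dissipated by one irreversible binary operation.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # kelvin, as in the quote
landauer = k_B * T * math.log(2)       # ~2.9e-21 J

synapse_energy = 1.0e-14  # rough per-synaptic-event energy, J (assumed)
energy_ratio = synapse_energy / landauer  # ~3e6, "around a million times"

print(f"axon speed / c:        ~{speed_ratio:.0e}")
print(f"Landauer limit @300K:  {landauer:.2e} J")
print(f"synapse / Landauer:    ~{energy_ratio:.1e}")
```

Under these assumed inputs, both of EY's order-of-magnitude claims check out: conduction speed sits about six orders of magnitude below light speed, and a synaptic event dissipates a few million times the thermodynamic floor.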

Quotable (#146)

Freeman Dyson:

I consider that we are still monkeys; we just came down from the trees rather recently, and it’s astonishing how well we can do. The fact that we can even write down partial differential equations, let alone solve them, to me is a miracle. The fact that we ourselves at the moment have very limited understanding of things doesn’t surprise me at all. […] If you go far enough in the future, we’ll be asking totally different questions. We’ll be thinking thoughts which at the moment we can’t even imagine. So I think to say that a question is unanswerable is ludicrous. All you can say is that it’s not going to be answered in the next hundred years, or the next two hundred years… To say there are unanswerable questions makes no sense. But if history comes to a stop, if we descend into barbarism or if we become extinct, then the questions won’t be answered. But to me that’s just a historical accident.