Craig Hickman raises an intriguing question:
In fact one wonders if [Reza Negarestani] is even thinking of humans at all, but rather of those future artificial beings that might replace us: “The craft of an intelligent life-form that has at the very least all the capacities of the present thinking subject is an extension of the craft of a good life as a life suiting the subject of a thought that has expanded its inquiry into the intelligibility of the sources and consequences of its realization.” The notion of a Craft of Intelligent Life-Forms? A utopia of robotic life-forms where the Good Life is one without humans, a perfectly programmed world of robots and environment where the only good is autonomous thought, revisable and autonomous – autopoetic and allopoetic?
Is the difference between Right and Left accelerationism ultimately reducible to the merely nominal decision as to whether we call the thing that’s coming ‘us’?
(FWIW I doubt it — because controversy over the functionality of competition isn’t so readily soluble — but it’s good to see the question being asked.)
Here’s the Negarestani essay under discussion.
It’s worse than you thought:
The Fermi paradox is the discrepancy between the strong likelihood of alien intelligent life emerging (under a wide variety of assumptions), and the absence of any visible evidence for such emergence. In this paper, we extend the Fermi paradox to not only life in this galaxy, but to other galaxies as well. We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods. This results in a considerable sharpening of the Fermi paradox.
More recent Fermi Paradox sharpening here.
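The paper’s core move is quantitative, and a crude back-of-envelope in the same spirit shows why the number comes out so large. The galaxy density, probe speed, and travel time below are illustrative assumptions (and cosmic expansion is ignored), not the paper’s own figures:

```python
from math import pi

# Rough illustrative figures, not the paper's numbers.
GALAXY_DENSITY = 0.01   # galaxies per cubic megaparsec (order of magnitude)
MPC_PER_GLY = 306.6     # megaparsecs per billion light-years

def galaxies_in_reach(probe_speed_c: float, travel_time_gyr: float) -> float:
    """Count of galaxies within range of probes moving at probe_speed_c
    (a fraction of lightspeed) for travel_time_gyr billion years,
    ignoring cosmic expansion."""
    radius_mpc = probe_speed_c * travel_time_gyr * MPC_PER_GLY
    volume_mpc3 = (4 / 3) * pi * radius_mpc ** 3
    return GALAXY_DENSITY * volume_mpc3

# A civilization launching probes at 0.5c five billion years ago:
print(f"{galaxies_in_reach(0.5, 5.0):.2e}")  # on the order of 10^7 galaxies
```

Even with conservative inputs the count of galaxies that could have seeded ours by now runs into the tens of millions, which is what makes the silence a sharpening rather than a footnote.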
Descending from the abstract plane, there’s this slender thread to hang on to.
A 100-TeV (and US$10 billion) particle collider in China?
To Arkani-Hamed, the Chinese collider campaign feels like pushing an open door. “When you think about it more, it’s just perfect,” he said, sipping Coke Zero on his office couch. “It would be great for physics; it would be great for China. They’re looking for something where they can just be the best in the world.” He continued, “There are very few things in life where what you want to do for idealistic reasons and what someone else wants to do for Machiavellian reasons are identical. And when that happens, you should just do it. You should just do it!”
(Much else of interest in the article. Eventually you get to the ‘amplituhedron’ …)
From the prologue to Cixin Liu’s The Dark Forest (follow-up to The Three-Body Problem):
“See how the stars are points? The factors of chaos and randomness in the complex makeups of every civilized society in the universe get filtered out by distance, so those civilizations can act as reference points that are relatively easy to manipulate mathematically.”
“But there’s nothing concrete to study in your cosmic sociology, Dr. Ye. Surveys and experiments aren’t really possible.”
“That means your ultimate result will be purely theoretical. Like Euclid’s geometry, you’ll set up a few simple axioms at first, then derive an overall theoretic system using those axioms as a foundation.”
“It’s all fascinating, but what would the axioms of cosmic sociology be?”
“First: Survival is the primary need of civilization. Second: Civilization continuously grows and expands, but the total matter in the universe remains constant.”
“Those two axioms are solid enough from a sociological perspective … but you rattled them off so quickly, like you’d already worked them out,” Luo Ji said, a little surprised.
“I’ve been thinking about this for most of my life, but I’ve never spoken about it with anyone before. I don’t know why, really. … One more thing: To derive a basic picture of cosmic sociology from these two axioms, you need two other important concepts: chains of suspicion, and the technological explosion.” (pp. 13, 14)
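The grim deduction from these axioms can be sketched as a toy expected-value comparison. This is an illustration of the chain-of-suspicion logic, not Liu’s own model; the payoffs and probabilities are invented for the example:

```python
# Toy sketch of the "chain of suspicion": when intentions cannot be
# verified across interstellar distances, striking first dominates
# once the chance of a hostile neighbor exceeds the cost of striking.
# All numbers below are illustrative assumptions.

def expected_payoff(strike_first: bool, p_other_hostile: float,
                    cost_of_strike: float = 0.1) -> float:
    """Survival-weighted payoff for one civilization.
    Payoff 1.0 = certain survival, 0.0 = annihilation."""
    if strike_first:
        return 1.0 - cost_of_strike       # survive, minus the resource cost
    # If we stay quiet and wait, we survive only when the other side
    # turns out to be benign.
    return 1.0 - p_other_hostile

for p in (0.05, 0.2, 0.5):
    print(p, expected_payoff(True, p) > expected_payoff(False, p))
```

With these toy numbers, any belief above a 10% chance of hostility makes the first strike the dominant move, and the ‘technological explosion’ axiom pushes that belief upward: a harmless civilization now may not stay harmless.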
Are the aliens hidden by advanced cryptography?
“If you look at encrypted communication, if they are properly encrypted, there is no real way to tell that they are encrypted,” Snowden said. “You can’t distinguish a properly encrypted communication from random behavior.”
(This doesn’t address the question of how an alien culture would be able to encrypt its material civilization — or cosmic matter-energy process — but that’s also a suggestive question.)
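Snowden’s claim is easy to demonstrate at the level of byte statistics. The sketch below uses a toy SHA-256 counter-mode keystream as a stand-in for a real cipher (an assumption made purely for a self-contained example), and compares Shannon entropy per byte across plaintext, ciphertext, and genuine randomness:

```python
import hashlib
import os
from collections import Counter
from math import log2

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (maximum 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * log2(c / n) for c in counts.values())

def keystream_xor(plaintext: bytes, key: bytes) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode as the keystream.
    A stand-in for a real cipher, for illustration only."""
    out = bytearray()
    counter = 0
    while len(out) < len(plaintext):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ k for p, k in zip(plaintext, out))

text = b"the quick brown fox jumps over the lazy dog " * 500
print(f"plaintext : {byte_entropy(text):.2f} bits/byte")               # low
print(f"ciphertext: {byte_entropy(keystream_xor(text, b'key')):.2f}")  # ~8.0
print(f"random    : {byte_entropy(os.urandom(len(text))):.2f}")        # ~8.0
```

The ciphertext and the random stream are statistically flat and mutually indistinguishable by this measure, which is the point: a sufficiently compressed or encrypted signal has no signature.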
The extreme connectionist hypothesis is that nothing very much needs to be understood in order to catalyze emergent phenomena, with synthetic intelligence as an especially significant example of something that could just happen. DARPA’s Gill A. Pratt approaches the question of robot emergence within this tradition:
While the so-called “neural networks” on which Deep Learning is often implemented differ from what is known about the architecture of the brain in several ways, their distributed “connectionist” approach is more similar to the nervous system than previous artificial intelligence techniques (like the search methods used for computer chess). Several characteristics of real brains are yet to be accomplished, such as episodic memory and “unsupervised learning” (the clustering of similar experiences without instruction), but it seems likely that Deep Learning will soon be able to replicate the performance of many of the perceptual parts of the brain. While questions remain as to whether similar methods can also replicate cognitive functions, the architectures of the perceptual and cognitive parts of the brain appear to be anatomically similar. There is thus reason to believe that artificial cognition may someday be put into effect through Deep Learning techniques augmented with short-term memory systems and new methods of doing unsupervised learning. [UF emphasis]
He anticipates a ‘Robot Cambrian Explosion’.
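Pratt’s parenthetical gloss on unsupervised learning, “the clustering of similar experiences without instruction,” has a minimal classic instance in k-means. The sketch below is an illustration of that idea only, not Pratt’s proposal or any DARPA system:

```python
import random

# Minimal k-means: points organize themselves into clusters with no
# labels ever supplied -- "clustering of similar experiences without
# instruction" in its simplest form.

def kmeans(points, k, iterations=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two obvious "experience" clusters; no labels are ever provided.
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
        (5.0, 5.1), (4.9, 5.0), (5.1, 4.8)]
centroids, clusters = kmeans(data, k=2)
print(sorted(len(c) for c in clusters))  # → [3, 3]
```

Nothing in the procedure encodes what the clusters mean; structure precipitates out of similarity alone, which is the connectionist wager scaled down to a dozen lines.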
It seems improbable that a sufficiently self-referential pattern recognition system — i.e. an intelligence — is going to be the product of a highly-specified initial design. An AI that doesn’t almost entirely put itself together won’t be an AI at all. Still, by the very nature of the thing, it’s not going to impress anybody until it actually happens. Perhaps it won’t, but we have no truly solid reasons — beyond an inflated self-regard concerning both our own neural architectures and our deliberative engineering competences — to think it can’t.
To professional arguers, history looks dialectical, but —