Quotable (#125)

Steven Weinberg on whiggish history of science:

… scientific history with an eye to present knowledge is needed by scientists. We don’t see our work as merely an expression of the culture of our time and place, like parliamentary democracy or Morris dancing. We see it as the latest stage in a process, extending back over millennia, of explaining the world. We derive perspective and motivation from the story of how we reached our present understanding, imperfect as that understanding remains.

Quotable (#124)

Mathematics as a social model:

The salient feature of syntax is that it is concrete. The question whether a putative proof is indeed a proof is a matter simply of checking. Disputes about the correctness of a proof are quickly settled and the mathematical community reaches permanent consensus. The status, age, and reputation of the parties to the dispute play no role. In this we are singularly blessed.

Quotable (#123)

The moralization of ecology is a strange modern phenomenon, leading to something like this:

Capitalism’s grow-or-die imperative stands radically at odds with ecology’s imperative of interdependence and limit. The two imperatives can no longer coexist with each other; nor can any society founded on the myth that they can be reconciled hope to survive. Either we will establish an ecological society or society will go under for everyone, irrespective of his or her status. Yet we can’t stop the process. A capitalist economy, by definition, lives by growth; as Bookchin observes: “For capitalism to desist from its mindless expansion would be for it to commit social suicide.” We have, essentially, chosen cancer as the model of our social system.

Limits can take care of themselves, can’t they? Hitting a harsh boundary and undergoing selection there is the way it works. (Mother Nature and Capitalism share some very basic assumptions in this respect.)

Quotable (#122)

Nick Dyer-Witheford (in conversation) on the variants of far Left politics under advanced capitalism:

… it’s clear that capitalism is creating potentials – not just technological, but organizational potentials – which could be adapted in a transformed manner to create a very different type of society. The evident example is the huge possibilities for freeing up time by automation of certain types of work. For me, the problem both with Paul [Mason]’s work, which I respect, and with the accelerationists, is there is a failure to acknowledge that the passage from the potential to the actualization of such communist possibilities involves crossing what William Morris describes as a “river of fire.” I don’t find in their work a great deal about that river of fire. I think it would be reasonable to assume there would be a period of massive and protracted social crisis that would attend the emergence of these new forms. And as we know from historical attempts in the 20th Century to cross that river of fire, a lot depends on what happens during that passage. So there is, if one could put it that way, a certain automatism about the prediction of the realization of a new order in both these schools, which we should be very careful about.

(What automation wants — by definition — is more of itself. There’s a name for that, and it isn’t ‘communism’.)

The abstract for this talk gives a sense of the diagnosis.

Identification

Craig Hickman raises an intriguing question:

In fact one wonders if [Reza Negarestani] is even thinking of humans at all, but rather of those future artificial beings that might replace us: “The craft of an intelligent life-form that has at the very least all the capacities of the present thinking subject is an extension of the craft of a good life as a life suiting the subject of a thought that has expanded its inquiry into the intelligibility of the sources and consequences of its realization.” The notion of a Craft of Intelligent Life-Forms? A utopia of robotic life-forms where the Good Life is one without humans, a perfectly programmed world of robots and environment where the only good is autonomous thought, revisable and autonomous – autopoetic and allopoetic?

Is the difference between Right and Left accelerationism ultimately reducible to the merely nominal decision as to whether we call the thing that’s coming ‘us’?

(FWIW I doubt it — because controversy over the functionality of competition isn’t so readily soluble — but it’s good to see the question being asked.)

Here’s the Negarestani essay under discussion.

Dr. Copper is not amused

Wintry indications:

[Chart: copper prices]

Financialization, namely massive amounts of leverage, has made the disconnect between the stock market and the economy extend wider and longer than ever before. Maybe another speculative melt up is ahead. Who knows? Maybe DOW 20,000 or 30,000 is in the cards. […] With enough monetary deception anything’s possible. But, nonetheless, gravity still exists. Stocks cannot go up for ever. After a six year bull market, accompanied by a lackluster recovery, stocks could return to prior levels that were in line with present commodity prices. Remember, just a few years ago, Dow 8,000 matched up with current copper prices. Soon it likely will again.

ADDED: Also concerning.

Parable of the Vase

Tim Groseclose reviews Garett Jones’ Hive Mind, whose “primary and most important contribution is to document the following empirical regularity: Suppose you could a) improve your own IQ by 10 points, or b) improve the IQs of your countrymen (but not your own) by 10 points. Which would do more to increase your income? The answer is (b), and it’s not even close. The latter choice improves your income by about 6 times more than the former choice.”

The Parable of the Vase, which the book employs to explain the point, is an instantly canonical illustration, Groseclose argues. (“I do not think it is an exaggeration to say that the parable ranks as one of the all-time great examples in economics.”)

The parable begins with a simplifying assumption. This is that it takes exactly two workers to make a vase: one to blow it from molten glass and another to pack it for delivery. Now suppose that two workers, A1 and A2, are highly skilled—if they are assigned to either task they are guaranteed not to break the vase. Suppose two other workers, B1 and B2, are less skilled—specifically, for either task each has a 50% probability of breaking the vase.

Continue reading
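Under the parable’s stated failure probabilities, the arithmetic can be worked out directly. The sketch below is an illustration added here, not text from Groseclose or Jones: the two pairings it compares, and the conclusion drawn from them, are an assumption about where the parable is headed, since the excerpt stops short of its payoff.

```python
# Hypothetical illustration of the vase parable's arithmetic (not the book's own example code).
# Assumption: a vase is completed only if both the blower and the packer succeed.

# Probability that each worker performs their task without breaking the vase.
SUCCESS = {"A1": 1.0, "A2": 1.0, "B1": 0.5, "B2": 0.5}

def expected_vases(blower, packer):
    """Expected vases from one two-person team: both tasks must succeed."""
    return SUCCESS[blower] * SUCCESS[packer]

def expected_output(pairing):
    """Total expected vases when the four workers are split into two teams."""
    return sum(expected_vases(b, p) for b, p in pairing)

sorted_pairing = [("A1", "A2"), ("B1", "B2")]   # skilled together, unskilled together
mixed_pairing = [("A1", "B1"), ("A2", "B2")]    # one skilled worker on each team

print("sorted teams:", expected_output(sorted_pairing))  # 1.0 + 0.25 = 1.25
print("mixed teams: ", expected_output(mixed_pairing))   # 0.5 + 0.5  = 1.00
```

If the parable runs the way such examples usually do, pairing the skilled workers together yields an expected 1.25 vases against 1.0 for the mixed teams: output depends on the match of skills within a team, which is the sort of complementarity the review goes on to discuss.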

Quotable (#121)

From the recent (and excellent) profile of Nick Bostrom in The New Yorker:

Bostrom worries that solving the “control problem” — insuring that a superintelligent machine does what humans want it to do — will require more time than solving A.I. does. The intelligence explosion is not the only way that a superintelligence might be created suddenly. Bostrom once sketched out a decades-long process, in which researchers arduously improved their systems to equal the intelligence of a mouse, then a chimp, then — after incredible labor — the village idiot. “The difference between village idiot and genius-level intelligence might be trivial from the point of view of how hard it is to replicate the same functionality in a machine,” he said. “The brain of the village idiot and the brain of a scientific genius are almost identical. So we might very well see relatively slow and incremental progress that doesn’t really raise any alarm bells until we are just one step away from something that is radically superintelligent.”