Probably futile, but definitely worth a link:
If you listen to smart people on the right, they are currently laughing their way to the end of humanity as the left continues to push deeper and deeper into the mistakes we are actively refusing to learn from. It is very difficult for the few revolutionary leftists still alive to confront this, because it’s genuinely so vertiginous and horrifying that it really approaches what is cognitively and emotionally unsurvivable for genuinely caring people: there are at least some objective reasons to believe the human species may be genuinely crossing the threshold at which exponentially increasing technological efficiency makes the absolute end of humanity an objective and irreversible empirical reality. I think it’s debatable where we are in that process, but it seems undeniable this question is now genuinely at stake, and I simply don’t see a single person on the revolutionary left seriously considering this with the radical honesty it requires.
The idea that European political fragmentation, despite its evident costs, also brought great benefits, enjoys a distinguished lineage. In the closing chapter of The History of the Decline and Fall of the Roman Empire (1789), Edward Gibbon wrote: ‘Europe is now divided into 12 powerful, though unequal, kingdoms.’ Three of them he called ‘respectable commonwealths’, the rest ‘a variety of smaller, though independent, states’. The ‘abuses of tyranny are restrained by the mutual influence of fear and shame’, Gibbon wrote, adding that ‘republics have acquired order and stability; monarchies have imbibed the principles of freedom, or, at least, of moderation; and some sense of honour and justice is introduced into the most defective constitutions by the general manners of the times.’ […] In other words, the rivalries between the states, and their examples to one another, also meliorated some of the worst possibilities of political authoritarianism. Gibbon added that ‘in peace, the progress of knowledge and industry is accelerated by the emulation of so many active rivals’. Other Enlightenment writers, David Hume and Immanuel Kant for example, saw it the same way. From the early 18th-century reforms of Russia’s Peter the Great, to the United States’ panicked technological mobilisation in response to the Soviet Union’s 1957 launch of Sputnik, interstate competition was a powerful economic mover. More important, perhaps, the ‘states system’ constrained the ability of political and religious authorities to control intellectual innovation. If conservative rulers clamped down on heretical and subversive (that is, original and creative) thought, their smartest citizens would just go elsewhere (as many of them, indeed, did).
Political disintegration combined with cultural-market integration was the key.
In 18th-century Europe, the interplay between pure science and the work of engineers and mechanics became progressively stronger. This interaction of propositional knowledge (knowledge of ‘what’) and prescriptive knowledge (knowledge of ‘how’) constituted a positive feedback or autocatalytic model. In such systems, once the process gets underway, it can become self-propelled. In that sense, knowledge-based growth is one of the most persistent of all historical phenomena – though the conditions of its persistence are complex and require above all a competitive and open market for ideas.
hacked by NG689Skw
[Trashed a couple of innocuous blog posts. Guess it made a point?]
Some realistic questions about prospective machine intelligence regulation:
… we still don’t have a concrete answer about how to effectively regulate the use of algorithms. AI is just another very complex layer added to this already complex discussion, sometimes directly related to “big data” (in the case of deep learning, for example) and other times addressing far bigger questions (in the case of sentient machines, for example).
The UF (accelerationist) response is probably predictable: There isn’t time to reach answers. Acceleration means only (and exactly) that the problem is receding, or escaping. If it would only slow down, everything would be okay. It won’t.
If there’s such a thing as fundamentalist accelerationism — in a good way — it’s this.
From an engrossing discussion of AI threats between Yampolskiy and ‘Spellchecker’ (?):
An AI researcher studying Malevolent AI is like a medical doctor studying how different diseases are transmitted, how new diseases arise, and how they impact the patient’s organism.
If the diseases concerned could read medical papers, that analogy would be perfect.