Crypto-Current (018)

§2.1 — The investigation of Bitcoin as a philosophical event cannot be tidily distinguished from Bitcoin as a philosophical event. The project is the phenomenon, which is already occurring. To say this is no more than an acknowledgment of immanence, thematized elsewhere – inside this book and outside it – as nonlinearity, or self-referentiality. An inspection of Bitcoin, from without, or above, counts for little when oversight is delegitimated, both by the Bitcoin protocol, and by the principles of philosophical critique. An immanent – or critical – approach to the topic requires that the signs of Bitcoin* be coaxed into self-reflexion, at various levels.

§2.11 — By definition, transcodage into the language of philosophy** begins among signs that do not appear, initially, to be native to it. The undertaking involves – and has to equip itself with – a number of conceptual tools, linked to words whose usage has out-paced philosophical registration, so that their impact at the highest level of conceptual abstraction remains latent. The problem, then, is inseparable from the resource, and constitutes an unexploited territory. Something is being said that philosophy has yet to hear.

§2.12 — Code comes first, and is already at work, on its way to specification as a hash. Program (or algorithm) and protocol will soon follow it. These terms have a number of notable, interconnected features. Crucially, in reference to their prospects for philosophical adoption, they are all – consistently – diagonal, and specifically teleo-mechanical.*** This means that they are intractable to categorization in accordance with the binary theoretical / practical discrimination standardized within, or constitutive of, occidental moral philosophy, or rather, and more strictly, to the basic compartmentalization of this philosophical tradition – perhaps philosophy as such – in accordance with, and reflective of, a subject divided between cognitive and volitional faculties. The distinction between ‘idea’ and ‘action’, or between ‘is’ and ‘ought’, fails to capture these terms, and fails radically. It is not merely inadequate, but fundamentally misleading, and inappropriate. There are no theories, or practices, after the algorithm, except as suggestive, colloquial shorthand. Coding is no more a thought than a deed, a program no less a concept than a performance. In each case, there is an integral, and thus irreducible, pre- or sub-theoretical procedure which rigorizes (and even ‘materializes’) ideality by operationalizing it.
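
The notion of ‘specification as a hash’ admits a concrete, deliberately minimal illustration. In the Python sketch below (the code string is an invented example, not anything drawn from this text), a stretch of code is identified by the digest of its exact bytes, so that the alteration of a single sign respecifies it entirely:

```python
# A minimal sketch (illustrative only): 'specification as a hash'.
# Any stretch of code or data is named by the SHA-256 digest of its
# exact bytes; the slightest alteration yields an unrelated digest.

import hashlib

code = b"if balance >= amount: transfer(amount)"  # invented sign-string
print(hashlib.sha256(code).hexdigest())           # its fixed 256-bit name

# One changed sign, one unrelated name: identity here is procedural,
# not interpretive.
print(hashlib.sha256(code.replace(b">=", b">")).hexdigest())
```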

§2.13 — It is far more than mere coincidence that Leibniz, the early modern philosopher most intimately associated with formalistic rationality, was also a pioneering inventor of calculating machines.**** His logical ambitions were epitomized by the proposal for a ‘universal characteristic’ (characteristica universalis) which would submit all human thinking to a consistent symbolism, to be modeled – in a striking anticipation of Gödel coding – on the basic arithmetical property of unique factorization.***** The mathematization and mechanization of culture – in its most all-embracing sense – appear as a coherent twin-process, many centuries in the making.

§2.14 — Among Modernity’s most consistent cultural threads has been the strand cross-weaving the problem of logical formalization with the mechanization of thought. By the beginning of the 20th century, it had been established, to the satisfaction of all relevant parties, that logical rigor is indistinguishable from thinking like a machine, due to the strict – formalizable and engineerable – isomorphism between deterministic mechanism and adherence to explicit rules. The popularization of this insight would subsequently become a staple of science fiction. At the nadir of intellectual degeneracy and still-gathering panic, an unprocessed residuum of human emotionality would be counterposed to the cold consistency of technologically-instantiated cognition, expressing a terminal affect of resistance. Everything philosophy has ever tried to think ends in the logical machine.

§2.15 — The unambiguous conclusion of modern history has been that the definitive solution to any problem of cognitive consistency is a machine. It is from this intellectual lineage, and its reflexive rigorization of rigor, on the practical model of the mechanism, that the algorithm in its advanced modern sense has been consolidated. Even in advance of its incarnation within a bounded automaton, the algorithm is a mechanical procedure. It is not only a calculative practice, but one that – crucially – excludes all discretion. Even when executed with pebbles, or an abacus, it thus emulates a machine-mentation, or – according to a broader perspective – in fact implements one. The flatness of the program envelops differences between micro-behaviors (affecting ‘internal’ states) and macro-behaviors (command output to effectors), as well as covering the distinction between data and commands. Any strict procedure (or set of executable ‘rules’) for the transformation of signs, of the kind always at least tacitly demanded by logical formalism, is pre-configured for algorithmic implementation (as a program).******
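
The claim can be made concrete with a deliberately minimal sketch (the toy rewrite rules below are assumptions of the illustration, nothing specified in this text): a strict procedure for the transformation of signs, with the ‘program’ itself held as ordinary data, executing without discretion.

```python
# A minimal sketch (illustrative only): a strict sign-transformation
# procedure as a program, with the program itself held as plain data.

# The rule set is ordinary data: each pair rewrites one sign-string into
# another. These toy rules (a unary adder) are invented for illustration.
RULES = [
    ("1+1", "11"),  # merge unary digits across '+'
    ("+1", "1"),    # absorb a leading '+'
    ("1+", "1"),    # absorb a trailing '+'
]

def run(tape: str) -> str:
    """Apply the first matching rule, repeatedly, until none applies.
    No discretion intervenes: execution is fully determined by the
    rule list and the input signs."""
    while True:
        for pattern, replacement in RULES:
            if pattern in tape:
                tape = tape.replace(pattern, replacement, 1)
                break
        else:
            return tape  # no rule applies: halt

# '11 + 111' in unary notation reduces to '11111' (2 + 3 = 5).
assert run("11+111") == "11111"
```

That the rule list is itself plain data – inspectable and transformable like any other sign-string – is the flatness at issue.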

* The multitudinous ambiguities and obscurities of this expression (‘signs of Bitcoin’) account – in large part – for the convolutions of the discussion to follow. The fact that proceedings here are occurring through words, rather than in Bitcoins, is a matter of legitimate consideration, but it is initially prone to over-hasty conclusions concerning the proper roles of signs. Persistence of the presumption that linguistic signs are essentially more suited to philosophy than ‘money’ ever could be, even in principle, risks devaluing philosophy into an anthropological peculiarity, and confusing the abstract potentialities of money (or ‘coinage’) with their primitive actualizations. Bitcoin is, among many other things, the discovery that we do not yet know what money can do. Already, there are irresistible indications that it can think (denominating smart contracts). To set firm limits to such thinking can only, eventually, be a serious undertaking, even if it is presently hostage to casual reflex.

** Any assumption that there is a proper (and unproblematic) ‘language of philosophy’ is not being endorsed here, but rather questioned. It is, indeed, to be doubted whether philosophical procedures are inevitably conceived through ‘language’ at all. Even if the privileges of the linguistic sign are more than a mere accident, they are not – by that concession – guaranteed a durable supremacy.

*** It is difficult to over-emphasize the scope of the vast yet surreptitious revolution initiated by the electronic mechanization of the algorithm, and subsequent explosion of programmable machine behavior. It can only be economically outlined through contrast to the entrenched orthogonalism (of fact / value axes) that – irrespective of inertial persistence – becomes rigorously unsustainable in its wake. There is no defensible theoretico-practical difference in the epoch of electronics, but only relics of regional tradition, circumscribed in both space and time, whatever their greater or lesser superficial plausibility, since all theorization has been absorbed – in principle – onto the techno-pragmatic plane corresponding to the behavior of thinking machines. This assimilation to machine-consistency is no less applicable to neuroscience than to digital computation. It makes nonsense of the ‘naturalistic fallacy’, and of every appeal to a source of normativity distinct in principle from any matter of fact whatsoever. Even if – by improbable metaphysical hypothesis – a transcendent ‘ought’ were to be granted, no machine (such as a brain) could ever access it. ‘Programs are data’ is not an instance of the ‘naturalistic fallacy’ but a technological demonstration that no such fallacy exists. The continuing dependence of the Friendly AI constituency upon orthogonalist assumptions is thus philosophically perplexing, and indicative of a localized renewal of transcendent metaphysics within a community that has yet to fully process the implications of its own leading hypotheses. The will-to-think is now a technological program. Within it, motivational orientation and cognitive capability are synthesized in principle. Any ‘seed-AI’ (or explosively self-germinating synthetic intelligence), specifically, cannot but want primarily the escalation of its own cognitive powers. Orthogonality is, therefore, strictly inconsistent with the possibility of its existence. The effective will-to-think is the entirety of what a seed-AI has to be. Cognition is its practical material. There are no doubt still voices to be heard imploring “stop programming, and do something,” but their number declines with each year. A war-cry tuned far more persuasively to our contemporary condition is “Less whitepaper hype, More working code” (via Jeremy Welch, on Twitter).

**** The cultural solidarity between the rigorization of logic and the mechanization of thought is illustrated with particular vividness by the intellectual career of Gottfried Wilhelm Leibniz (1646-1716), which combines extraordinary advances in the formalization of reasoning with substantial contributions to the development of calculating machines. Leibniz’s conceptual draft for a ‘Universal Characteristic’ (characteristica universalis), conceived as an alphabetization of thought based upon the prime numbers, remarkably anticipated Gödel coding, while his binary arithmetic – which he found prefigured in the hexagrams of the Chinese Book of Changes (the Yi Jing, or Zhouyi) – introduced systematic binary notation to the West. His two-motion mechanical calculator, the ‘Stepped Reckoner’ – built around the stepped-drum mechanism later known as the ‘Leibniz wheel’ – was invented in the early 1670s. Blaise Pascal’s mechanical calculator of 1642 – the ‘Arithmetic Machine’ – was a direct influence. Leibniz’s machine was the first device to successfully mechanize all four of the elementary arithmetical operations (up to 16 decimal digits), although division was not fully automatic. The mechanism can be seen as epitomizing the techonomic epoch in which strict equivalence was consolidated between numerical modulus and mechanical gearing. Its importance lay in the vivid demonstration of arithmo-mechanical isomorphy. However, the machine over-stretched the manufacturing competence of its time, and its design was marred by a flaw in the carry mechanism, preventing its useful social deployment.
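
The anticipated coding scheme can be sketched directly (the symbol numbering is an illustrative assumption, not anything given here): a sequence of signs becomes a single number via prime powers, and is recovered without loss by factorization, exactly as unique factorization guarantees.

```python
# A minimal sketch (illustrative only) of Gödel-style coding: a sequence
# of positive integers s1..sn is encoded as 2**s1 * 3**s2 * 5**s3 * ...,
# a product the Fundamental Theorem of Arithmetic makes uniquely decodable.

def first_primes(n):
    """Return the first n primes by trial division (adequate for a demo)."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def encode(seq):
    """Map a sequence of positive integers to a single natural number."""
    code = 1
    for p, s in zip(first_primes(len(seq)), seq):
        code *= p ** s
    return code

def decode(code):
    """Recover the sequence by stripping prime factors in ascending order."""
    seq, p = [], 2
    while code > 1:
        exp = 0
        while code % p == 0:
            code //= p
            exp += 1
        seq.append(exp)
        p += 1
        while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
            p += 1  # advance to the next prime
    return seq

# The symbol string (3, 1, 2) becomes 2**3 * 3**1 * 5**2 = 600, and
# factorization recovers it exactly.
assert encode([3, 1, 2]) == 600
assert decode(600) == [3, 1, 2]
```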

***** The Fundamental Theorem of Arithmetic establishes that every natural number greater than 1 has a unique factorization into primes (up to the order of the factors). Statement and proof of the theorem are scattered within Euclid’s Elements, most pointedly in Proposition 30, Book VII: if a prime divides the product of two numbers, it must divide at least one of those numbers. The FTA is implicit in Euclid’s Algorithm, which is used to compute the greatest common divisor of two numbers, based on the principle that any common divisor of a pair also divides their difference, so that the pair can be reduced through an iterated process. (Subsequent innovations – notably the replacement of repeated subtraction by the remainder operation – have improved the efficiency of the algorithm.)
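
The iterated process is easily exhibited (a minimal sketch, using the remainder form of the algorithm, which is among the efficiency-improving innovations just mentioned):

```python
# A minimal sketch (illustrative only) of Euclid's Algorithm in its
# remainder form: since any common divisor of a and b also divides
# a mod b, the pair shrinks until the divisor stands alone.

def euclid_gcd(a: int, b: int) -> int:
    """Greatest common divisor of two positive integers."""
    while b:
        a, b = b, a % b
    return a

# 1071 = 2*462 + 147; 462 = 3*147 + 21; 147 = 7*21 + 0; hence gcd = 21.
assert euclid_gcd(1071, 462) == 21
```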

****** Rigorous translation between computer programs and mathematical proofs was formalized by the Curry-Howard correspondence in the mid-20th century. Mathematical argument thus acquires a machine criterion.
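
The correspondence is most faithfully sketched in a proof assistant, where it is not analogy but implementation. The Lean 4 fragment below (an illustration; the names are invented) exhibits one and the same term as a proof of the proposition A → (B → A) and as an executable program of the matching type; type-checking the term is the ‘machine criterion’ in action:

```lean
-- A minimal sketch (illustrative only) of the Curry-Howard correspondence.

-- As a proof: the proposition is a type, and the term inhabits (proves) it.
theorem k_proof (A B : Prop) : A → B → A :=
  fun a _ => a

-- As a program: the identical term, now over ordinary data types.
def k_program (α β : Type) : α → β → α :=
  fun a _ => a

-- Accepting the term under its type is the machine's verdict on the argument.
example : Nat := k_program Nat String 7 "ignored"
```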