Anti-Cap

This tweet storm is pure evil (but fortunately we’re fairly tolerant of such things at this blog).

The point it raises is going to fuel an important argument, down the road. Better to explore it via an appropriately constructed altcoin, and in the market, though, than to wreck Bitcoin in the course of the dialectic. Hard money philosophy is baked into the Bitcoin protocol. If that doesn’t seem like a good idea, the solution is to try something else.

Burning Man

… a celebration of the post-democratic order, according to Jacobin mag (who have lured UF into taking a non-dismissive glimpse for the first time):

In a just, democratic society, everyone has equal voice. At Burning Man everyone is invited to participate, but the people who have the most money decide what kind of society Burning Man will be — they commission artists of their choice and build to their own whims. They also determine how generous they are feeling, and whether to withhold money. […] It might seem silly to quibble over the lack of democracy in the “governance” of Black Rock City. After all, why should we care whether Jeff Bezos has commissioned a giant metal unicorn or a giant metal pirate ship, or whether Tananbaum wants to spend $2 million on an air-conditioned camp? But the principles of these tech scions — that societies are created through charity, and that the true “world-builders” are the rich and privileged — don’t just play out in the Burning Man fantasy world.

It’s intriguingly reminiscent of this.


Death Valley

Strictly gossip-level, but the bold predictions get it a mention. It’s Breitbart, so understatement isn’t going to feature:

San Francisco, heartland of wacky progressive politics but also home to some of America’s most innovative technology companies, is in trouble. Not just trouble, actually, but serious shit. […] And the main reason is China. The Wall Street Journal has a good explainer on what’s going on over there, but the basic thing you need to understand is that a lot of glossy American stocks are about to take a tumble, especially tech stocks.

The core of the analysis:

Fear and greed run the stock market, which is, of course, exactly as it should be: they’re the instincts upon which capitalism is built. But that’s a problem for companies who suffer dramatically when global events conspire to shunt investors into safer bets. […] Businesses like Twitter and Facebook have always been grotesquely overvalued, according to conventional analyses. Technology companies get away with hilarious valuations mainly thanks to upward pressure; the inflation happens right at the start when companies raise hundreds of millions of dollars on multimillion dollar valuations, despite not earning a penny in revenue and having no immediate plans to do so. […] That’s in outrageous contradiction to their price-to-earnings ratio, one traditional and very reliable way of valuing companies. […] Tech stocks have absurdly high price-to-earnings ratios, and any blip in the market has a much bigger effect on high PE stocks than low PE stocks. So investors are counting on massive future growth that will likely never come and betting against global events that shave billions off the value of frothy investments.

It could get a little rough.
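The price-to-earnings arithmetic the quote leans on is simple enough to sketch. A minimal illustration (all figures invented, not real valuations): the same downward revision in expected earnings does far more damage to a price that embeds a high multiple than to one priced on a conservative multiple.

```python
def pe_ratio(price_per_share, earnings_per_share):
    """Price-to-earnings ratio: dollars paid per dollar of annual earnings."""
    return price_per_share / earnings_per_share

def implied_price(target_pe, earnings_per_share):
    """Price the market would assign if it reprices to a given multiple."""
    return target_pe * earnings_per_share

# A staid industrial vs. a frothy tech stock (hypothetical numbers):
staid_pe = pe_ratio(50.0, 5.0)     # P/E of 10
frothy_pe = pe_ratio(100.0, 0.5)   # P/E of 200

# If a market scare drives both toward a "traditional" multiple of 15,
# the low-PE stock barely moves while the high-PE stock is crushed:
staid_after = implied_price(15, 5.0)    # 75.0 -- actually rises
frothy_after = implied_price(15, 0.5)   # 7.5 -- a ~92% fall from 100
```

The asymmetry is the whole point of the passage: high-PE prices are claims on growth that hasn’t happened yet, so any repricing of the multiple dominates the move.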

Free AI

The extreme connectionist hypothesis is that nothing very much needs to be understood in order to catalyze emergent phenomena, with synthetic intelligence as an especially significant example of something that could just happen. DARPA’s Gill A. Pratt approaches the question of robot emergence within this tradition:

While the so-called “neural networks” on which Deep Learning is often implemented differ from what is known about the architecture of the brain in several ways, their distributed “connectionist” approach is more similar to the nervous system than previous artificial intelligence techniques (like the search methods used for computer chess). Several characteristics of real brains are yet to be accomplished, such as episodic memory and “unsupervised learning” (the clustering of similar experiences without instruction), but it seems likely that Deep Learning will soon be able to replicate the performance of many of the perceptual parts of the brain. While questions remain as to whether similar methods can also replicate cognitive functions, the architectures of the perceptual and cognitive parts of the brain appear to be anatomically similar. There is thus reason to believe that artificial cognition may someday be put into effect through Deep Learning techniques augmented with short-term memory systems and new methods of doing unsupervised learning. [UF emphasis]

He anticipates a ‘Robot Cambrian Explosion’.
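Pratt’s gloss on unsupervised learning — “the clustering of similar experiences without instruction” — is captured in miniature by something as simple as k-means, sketched here in plain Python (data and parameters are invented for illustration; this is one standard clustering method, not anything specific to Pratt’s proposal):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: group 2-D points into k clusters with no labels,
    by alternately assigning points to nearest centers and re-centering."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda c: (x - centers[c][0])**2 + (y - centers[c][1])**2)
            clusters[i].append((x, y))
        # Update step: each center moves to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers

# Two obvious blobs of "experiences"; structure is found unaided.
pts = [(0.1, 0.0), (0.0, 0.2), (0.2, 0.1),
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers = sorted(kmeans(pts, 2))
```

Nothing in the procedure is told what the groups are; the structure emerges from the data alone — which is the (much scaled-down) sense in which “similar experiences” get clustered “without instruction.”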

It seems improbable that a sufficiently self-referential pattern recognition system — i.e. an intelligence — is going to be the product of a highly-specified initial design. An AI that doesn’t almost entirely put itself together won’t be an AI at all. Still, by the very nature of the thing, it’s not going to impress anybody until it actually happens. Perhaps it won’t, but we have no truly solid reasons — beyond an inflated self-regard concerning both our own neural architectures and our deliberative engineering competences — to think it can’t.