Quotable (#195)

James Thompson on the lessons of AlphaGo for intelligence research:

Very interestingly, getting more computer power does not help AlphaGo all that much. Between the first match against the professional European Champion Fan Hui and then the test match against World Champion Lee Sedol, AlphaGo improved to a 99% win rate against the 6-month-earlier version. Against the world champion Lee Sedol, AlphaGo played a divine move: a move with a human probability of only 1 in 1000, but one revealed 50 moves later to have been key to influencing power and territory in the centre of the board. (The team do not yet have techniques to show exactly why it made that move.) Originally seen by commentators as a fat-finger mis-click, it was the first indication of real creativity. Not a boring machine. […] The creative capabilities of the deep knowledge system are only one aspect of this incredible achievement. More impressive is the rate at which it learnt the game, going up the playing hierarchy from nothing, 1 rank a month, to world champion in 18 months, and it is nowhere near asymptote yet. It does not require the computer power to compute 200 million positions a second that IBM's Deep Blue required to beat Kasparov. Talk about a mechanical Turk! AlphaGo needed to look at only 100,000 positions a second for a game that was one order of magnitude more complicated than chess. It becomes more human, comparatively, the more you find out about it, yet what it does now is not rigid and handcrafted, but flexible, creative, deep and real. …

And on an optimistic note:

What about us poor humans, of the squishy sort? Fan Hui found his defeat liberating, and it lifted his game. He has risen from 600th position to 300th position as a consequence of thinking about Go in a different way. Lee Sedol, at the very top of the mountain till he met AlphaGo, rated it the best experience of his life. The one game he won was based on a divine move of his own, another “less than 1 in 1000” move. He will help overturn convention, and take the game to new heights. […] All the commentary on the Singularity is that when machines become brighter than us they will take over, reducing us to irrelevant stupidity. I doubt it. They will drive us to new heights.

Simplified Drake

[Image: the Drake equation]

The Great Filter calculation proceeds, around the back:

… according to a new paper published in the journal Astrobiology, recent discoveries of exoplanets combined with a broader approach to answering this question have allowed researchers to conclude that, unless the odds of advanced life evolving on a habitable planet are immensely low, humankind is not the universe’s first technological, or advanced, civilization. […] “The question of whether advanced civilizations exist elsewhere in the universe has always been vexed with three large uncertainties in the Drake equation,” said Adam Frank, professor of physics and astronomy at the University of Rochester and co-author of the paper, in a press release. […] … “Thanks to NASA’s Kepler satellite and other searches, we now know that roughly one-fifth of stars have planets in ‘habitable zones,’ where temperatures could support life as we know it. So one of the three big uncertainties has now been constrained,” explained Frank.
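For reference, the classic Drake equation (this form is standard; the annotations are mine):

```latex
% Classic Drake equation: N = expected number of communicative
% civilizations in the galaxy.
%   R_* : average rate of star formation
%   f_p : fraction of stars with planets
%   n_e : habitable planets per planetary system
%   f_l : fraction of those that develop life
%   f_i : fraction of those that develop intelligence
%   f_c : fraction of those that become communicative
%   L   : lifetime of a communicative civilization
N = R_{*} \, f_p \, n_e \, f_l \, f_i \, f_c \, L
```

Frank and Sullivan's move, as widely reported, is to set aside the lifetime term L and collapse the three biological unknowns (f_l, f_i, f_c) into a single "biotechnical" probability per habitable-zone planet; the headline result is that humanity is probably not the universe's first technological civilization unless that probability falls below roughly one in 10^22.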

Thing is:

However, the universe is more than 13 billion years old. “That means that even if there have been a thousand civilizations in our own galaxy, if they live only as long as we have been around — roughly ten thousand years — then all of them are likely already extinct,” explained Sullivan. “And others won’t evolve until we are long gone.”
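A back-of-envelope version of Sullivan's point (my arithmetic, assuming a thousand civilizations scattered uniformly over roughly 13 billion years of galactic history, each lasting ten thousand years):

```latex
% Expected number of the thousand civilizations overlapping us in time:
E[\text{coexisting}] \approx 1000 \times
  \frac{10^{4}\ \text{yr}}{1.3 \times 10^{10}\ \text{yr}}
  \approx 8 \times 10^{-4}
```

Under those assumptions the expected overlap is far below one, so essentially all of them are long extinct (or not yet arrived), exactly as Sullivan says.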

(Apologies for the image quality — stumped in my search for a better one.)

Synthetic Templexity

Why a sufficiently competent artificial intelligence looks indistinguishable from a time anomaly. Yudkowsky’s FB post seems to be copy-and-paste resistant, so you’ll just have to go and read the damn thing.

The Paperclipper angle is also interesting. If a synthetic mind with ‘absurd’ (but demanding) terminal goals were able to defer actualization of win-points within a vast time-horizon, in order to concentrate upon the establishment of intermediate production conditions, would its behavior be significantly distinguishable from that of a rational value (i.e. intelligence) optimizer? (This blog says no.) Beyond a very modest threshold of ambition, given a distant time horizon, terminal values are irrelevant to intelligence optimization.
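A toy model of the claim (entirely my own sketch, not from the post): give an agent a choice each step between growing its capability and cashing it out for its terminal goal. Because the goal's unit value only scales the payoff, it drops out of the argmax, so agents with wildly different goals choose identical intermediate behavior.

```python
# Toy instrumental-convergence model (illustrative sketch only).
# An agent starts with capability 1. Each step it either "grows"
# (capability *= growth) or "actualizes" (converts all capability
# into terminal-goal value: unit_value * capability, ending the run).

def plan(horizon: int, unit_value: float, growth: float = 1.5) -> list[str]:
    # Payoff from actualizing at step t is unit_value * growth**t.
    # Since unit_value > 0 scales every candidate payoff equally, the
    # optimal actualization time is independent of the terminal goal.
    t_star = max(range(horizon), key=lambda t: unit_value * growth ** t)
    return ["grow"] * t_star + ["actualize"]

# A paperclipper and a "rational value optimizer" plan identically:
assert plan(50, unit_value=1e-9) == plan(50, unit_value=1e9)
```

Over a distant horizon the plan is all "grow" until the end, whatever the goal is, which is the point about terminal values being irrelevant to intelligence optimization.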

Bitcoin as SOCI

This is one of the greatest things ever written, period.

‘SOCI’ abbreviates ‘self-organizing collective intelligence’.

The basic dynamics of a SOCI are as follows. It begins as some sort of attractor — some aesthetic sensibility or yearning — that is able to grab the attention and energy of some group of people. Generally one that is very vague and abstract. Some idea or notion that only makes sense to a relatively small group. […] But, and this is the key move, when those people apply their attention and energy to the SOCI, this makes it more real, easier for more people to grasp and to find interesting and valuable. Therefore, more attractive to more people and their attention and energy. […] … If the SOCI has enough capacity within its collective intelligence to resolve the challenge, it “levels up” and expands its ability to attract more attention and energy. If not, then it becomes somewhat bounded (at least for the present) and begins to find the limit of “what it is”.
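The loop can be caricatured in a few lines (my toy dynamics, not anything from the essay): attention feeds realness, realness attracts attention, and a challenge either levels the SOCI up or bounds it.

```python
# Toy SOCI feedback loop (illustrative only; parameters are arbitrary).

def simulate(steps: int, capacity: float, resolves_challenges: bool) -> float:
    attention = 0.1
    for _ in range(steps):
        # Positive feedback, saturating at current capacity (logistic pull).
        attention += 0.3 * attention * (1 - attention / capacity)
        if resolves_challenges and attention > 0.9 * capacity:
            capacity *= 2  # "levels up": can now attract a wider pool
    return attention

print(simulate(200, 50.0, resolves_challenges=False))  # plateaus near 50:
                                                       # "finds the limit of what it is"
print(simulate(200, 50.0, resolves_challenges=True))   # keeps compounding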

Greenhall then narrates the story of Bitcoin to date, within this framework. The sheer magnitude of the innovation it has introduced emerges starkly.

In conclusion:

My sense is that over just the next five years this new form of SOCI will go through its gestation, birthing and childhood development stages. The result will be a form of collective intelligence that is so much more capable than anything in the current environment that it will sweep away even the most powerful contemporary collective intelligences (in particular both corporations and nation states) in establishing itself as the new dominant form of collective intelligence on the Earth. […] And whoever gets there first will “win” in a fashion that is rarely seen in history.

This will look prophetic not too far down the road.

Parable of the Vase

Tim Groseclose reviews Garett Jones’ Hive Mind, whose “primary and most important contribution is to document the following empirical regularity: Suppose you could a) improve your own IQ by 10 points, or b) improve the IQs of your countrymen (but not your own) by 10 points. Which would do more to increase your income? The answer is (b), and it’s not even close. The latter choice improves your income by about 6 times more than the former choice.”

The Parable of the Vase, which it employs to explain the point, is an instantly canonical illustration, Groseclose argues. (“I do not think it is an exaggeration to say that the parable ranks as one of the all-time great examples in economics.”)

The parable begins with a simplifying assumption. This is that it takes exactly two workers to make a vase: one to blow it from molten glass and another to pack it for delivery. Now suppose that two workers, A1 and A2, are highly skilled—if they are assigned to either task they are guaranteed not to break the vase. Suppose two other workers, B1 and B2, are less skilled—specifically, for either task each has a 50% probability of breaking the vase.
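The payoff of the setup is multiplicative: a vase survives only if both workers succeed, so expected output rewards matching skilled with skilled. A quick check of the numbers (my arithmetic, following the setup so far; this is the "O-ring" logic of Kremer that Jones builds on):

```python
# Expected vases per round under the parable's success probabilities.
p = {"A1": 1.0, "A2": 1.0, "B1": 0.5, "B2": 0.5}

def expected_vases(pairing):
    # A vase survives only if both the blower and the packer succeed.
    return sum(p[blower] * p[packer] for blower, packer in pairing)

assorted = [("A1", "A2"), ("B1", "B2")]   # skilled paired with skilled
mixed    = [("A1", "B1"), ("A2", "B2")]   # skilled paired with unskilled

print(expected_vases(assorted))  # 1.0 + 0.25 = 1.25
print(expected_vases(mixed))     # 0.5 + 0.5  = 1.00
```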

Continue reading

Quotable (#121)

From the recent (and excellent) profile of Nick Bostrom in The New Yorker:

Bostrom worries that solving the “control problem” — insuring that a superintelligent machine does what humans want it to do — will require more time than solving A.I. does. The intelligence explosion is not the only way that a superintelligence might be created suddenly. Bostrom once sketched out a decades-long process, in which researchers arduously improved their systems to equal the intelligence of a mouse, then a chimp, then — after incredible labor — the village idiot. “The difference between village idiot and genius-level intelligence might be trivial from the point of view of how hard it is to replicate the same functionality in a machine,” he said. “The brain of the village idiot and the brain of a scientific genius are almost identical. So we might very well see relatively slow and incremental progress that doesn’t really raise any alarm bells until we are just one step away from something that is radically superintelligent.”

Quotable (#98)

Eugene Volokh doesn’t think moral objections will provide much of a rampart against neo-eugenics, concluding:

… none of this responds to the ethical, philosophical, or religious objections to genetic modification of intelligence that are driving the high current hostility to such modification. (A response could be made, I think, but it’s not my goal here to offer it.)

My point is simply that competitive pressures, on the international level as well as the individual level, are pretty likely to swamp such objections in practice, at least unless someone shows that the objections are so overwhelmingly compelling that we are willing to risk permanent second-class (fifth-class?) status in order to adhere to them.

Quotable (#79)

Melanie Swan on money-intelligence convergence (Capital teleology) and blockchain technologies:

Consensus mechanisms could be reinvented, moving from a proof-of-work or proof-of-stake model, the current industry standards for cryptocurrencies, to other consensus mechanisms like proof of intelligence. This could be for higher-level blockchain thinking smartnetwork operations rather than simple transaction recording. In one way, proof of intelligence could serve as a reputational qualifier, as a proof of ability to participate. In another way, proof of intelligence could be an indication that some sort of ‘mental’ processing has taken place. For example, a new concept, idea, association, or knowledge element has to have been generated to provide the skin-in-the-game for the consensus, to demonstrate the miner’s bona fide status in registering the transaction and receiving the Mindcoin, Ideacoin, or other system token rewards. Proof of intelligence could be used in different ways as a reputational commodity in blockchain thinking networks.
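A skeletal way to read the proposal (names and structure invented here for illustration; Swan describes a concept, not an implementation): consensus is a pluggable predicate on a candidate block, and "proof of intelligence" would swap hash difficulty for some validated knowledge contribution.

```python
import hashlib

# Hypothetical pluggable consensus predicates (illustrative sketch only).

def proof_of_work(block: bytes, nonce: int, difficulty: int = 4) -> bool:
    # Standard hashcash-style check: digest must lead with `difficulty` zeros.
    digest = hashlib.sha256(block + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith("0" * difficulty)

def proof_of_intelligence(block: bytes, contribution: str, validate) -> bool:
    # The open problem Swan gestures at: `validate` must somehow score
    # whether a new concept, idea, or association is genuine 'mental' work
    # (the skin-in-the-game that earns the Mindcoin/Ideacoin reward).
    return validate(contribution)

# Example usage of the classical predicate (toy difficulty):
# nonce = next(n for n in range(10**7) if proof_of_work(b"block", n, 2))
```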

Quick links (#28)

China and the Asian Century. What the Great Firewall really does (+ tightening Cyberspace security in China). A step back from the market? China’s stance on the Ukraine toughens.

Žižek on Syriza (not his best work), is it time for tears yet? A Russian perspective on Dogecoin-backed global chaos.

Apple and robots (they’re coming). Troubles at Lenovo (and Mega). The waves of deep learning — the game (with brief expert commentary), plus. Robots at war. Technology isn’t neutral. Job targets. Technology contra capitalism (more, and more). Drones! A Turing classic.

No blockchain without Bitcoin.

Continue reading