Andrea Castillo takes a look at Urbit:
Some developers are seeking to transcend our internet feudalism by minimizing the number of third parties one must patronize to participate in digital society. Open-source operating systems like Linux allow people to take more control over their own computers. Bitcoin substitutes trust in a cryptographically secure, peer-to-peer network for trust in a single payment processor. BitTorrent, similarly, allows individuals to share files using a distributed network that cannot be immediately shut down by targeting any one entity. And several new projects aim to extend this logic to personal computing more generally. There’s OpenBazaar, a distributed marketplace platform that wants to be the “Bitcoin of Amazon” — a censorship-resistant e-commerce protocol that empowers buyers and sellers to transact peacefully without a middleman. There’s the InterPlanetary File System, or IPFS, which would operate as a kind of BitTorrent for the World Wide Web. […] But there is only one project that aims to just start this whole networking thing completely from scratch. It’s an “operating function” called Urbit, and it is by far the most fascinating and bizarre of these attempts to reboot computing. …
Sunspring (see linked video):
Ars is excited to be hosting this online debut of Sunspring, a short science fiction film that’s not entirely what it seems. It’s about three people living in a weird future, possibly on a space station, probably in a love triangle. You know it’s the future because H (played with neurotic gravity by Silicon Valley’s Thomas Middleditch) is wearing a shiny gold jacket, H2 (Elisabeth Gray) is playing with computers, and C (Humphrey Ker) announces that he has to “go to the skull” before sticking his face into a bunch of green lights. It sounds like your typical sci-fi B-movie, complete with an incoherent plot. Except Sunspring isn’t the product of Hollywood hacks — it was written entirely by an AI. To be specific, it was authored by a recurrent neural network called long short-term memory, or LSTM for short. At least, that’s what we’d call it. The AI named itself Benjamin. …
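For the curious, the “long short-term memory” unit the article names can be sketched in a few lines. This is an illustrative toy (not Sunspring’s actual code): a single LSTM cell’s forward pass, showing the gated cell state that lets the network carry information across long stretches of text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.

    x      : input vector at this step (e.g. a one-hot character)
    h_prev : previous hidden state
    c_prev : previous cell state (the 'long-term memory')
    W, b   : stacked weights/bias for the four gates
    """
    z = W @ np.concatenate([x, h_prev]) + b
    H = h_prev.size
    f = sigmoid(z[:H])          # forget gate: how much old memory to keep
    i = sigmoid(z[H:2*H])       # input gate: how much new info to write
    o = sigmoid(z[2*H:3*H])     # output gate: how much memory to expose
    g = np.tanh(z[3*H:])        # candidate memory content
    c = f * c_prev + i * g      # update cell state
    h = o * np.tanh(c)          # new hidden state
    return h, c

# Tiny usage example with random weights: 8-dim input, 4-dim state.
rng = np.random.default_rng(0)
X, H = 8, 4
W = rng.standard_normal((4 * H, X + H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):              # feed 5 random "characters"
    x = rng.standard_normal(X)
    h, c = lstm_step(x, h, c, W, b)
print(h.shape, c.shape)
```

Benjamin was trained on a corpus of sci-fi screenplays and sampled one character at a time; the cell above is the basic building block such a character-level model stacks and trains.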
Chip-making giant, and the guy who said:
Success breeds complacency. Complacency breeds failure. Only the paranoid survive.
From Intel’s news release:
Born András Gróf in Budapest, Hungary, Grove immigrated to the United States in 1956–57, having survived Nazi occupation and escaped Soviet repression. He studied chemical engineering at the City College of New York, completing his Ph.D. at the University of California at Berkeley in 1963. After graduation, he was hired by Gordon Moore at Fairchild Semiconductor as a researcher and rose to assistant head of R&D under Moore. When Noyce and Moore left Fairchild to found Intel in 1968, Grove was their first hire. […] Grove played a critical role in the decision to move Intel’s focus from memory chips to microprocessors and led the firm’s transformation into a widely recognized consumer brand. Under his leadership, Intel produced the chips, including the 386 and Pentium, that helped usher in the PC era. The company also increased annual revenues from $1.9 billion to more than $26 billion.
Grove @ Wikipedia.
Obituaries at Fortune, The Verge, Wired, Bloomberg.
Lots of stimulation in this John Horgan interview with Eliezer Yudkowsky (via). Among the gems:
Horgan: I’ve described the Singularity as an “escapist, pseudoscientific” fantasy that distracts us from climate change, war, inequality and other serious problems. Why am I wrong?
Yudkowsky: Because you’re trying to forecast empirical facts by psychoanalyzing people. This never works.
(Note on ‘Singularity’ FWIW by EY here: “I think that the ‘Singularity’ has become a suitcase word with too many mutually incompatible meanings and details packed into it, and I’ve stopped using it.”)
One more EY snippet: “… human axons transmit information at around a millionth of the speed of light; even when it comes to heat dissipation, each synaptic operation in the brain consumes around a million times the minimum heat dissipation for an irreversible binary operation at 300 Kelvin, and so on. Why think the brain’s software is closer to optimal than the hardware? Human intelligence is privileged mainly by being the least possible level of intelligence that suffices to construct a computer; if it were possible to construct a computer with less intelligence, we’d be having this conversation at that level of intelligence instead.”
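The “minimum heat dissipation for an irreversible binary operation” is the Landauer bound, k·T·ln 2. A back-of-envelope check of EY’s “around a million times” (the 20 W brain power and ~10¹⁵ synaptic ops/s figures below are common ballpark assumptions, not taken from the interview):

```python
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # temperature, K
landauer_J = k_B * T * math.log(2)   # Landauer limit per bit erasure
print(f"Landauer limit at 300 K: {landauer_J:.2e} J")

# Assumed ballpark figures for the brain:
brain_W = 20.0                # total brain power, W
synapse_ops_per_s = 1e15      # synaptic operations per second
per_op_J = brain_W / synapse_ops_per_s
print(f"Energy per synaptic op:  {per_op_J:.2e} J")
print(f"Ratio: {per_op_J / landauer_J:.1e}")
```

The ratio comes out on the order of 10⁶–10⁷, consistent with the quote’s “around a million times.”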
A cluster of crucial arguments here, launched by an exotic question:
What if artificial intelligence is so unfamiliar that we have a hard time recognising it? Could our machines have become self-aware without our even knowing it? The huge obstacle to addressing such questions is that no one is really sure what consciousness is, let alone whether we’d know it if we saw it. …
Despite decades of focused effort, computer scientists haven’t managed to build a conscious AI system intentionally, so it can’t be easy. For this reason, even those who fret the most about artificial intelligence, such as University of Oxford philosopher Nick Bostrom, doubt that AI will catch us completely unawares. And yet, there is reason to think that conscious machines might be a byproduct of some other effort altogether. …
Nature joins the gloom chorus:
… chipmakers deliberately chose to stay on the Moore’s law track. At every stage, software developers came up with applications that strained the capabilities of existing chips; consumers asked more of their devices; and manufacturers rushed to meet that demand with next-generation chips. Since the 1990s, in fact, the semiconductor industry has released a research road map every two years to coordinate what its hundreds of manufacturers and suppliers are doing to stay in step with the law — a strategy sometimes called More Moore. It has been largely thanks to this road map that computers have followed the law’s exponential demands.
Not for much longer. The doubling has already started to falter, thanks to the heat that is unavoidably generated when more and more silicon circuitry is jammed into the same small area. And some even more fundamental limits loom less than a decade away. Top-of-the-line microprocessors currently have circuit features that are around 14 nanometres across, smaller than most viruses. But by the early 2020s, says Paolo Gargini, chair of the road-mapping organization, “even with super-aggressive efforts, we’ll get to the 2–3-nanometre limit, where features are just 10 atoms across. Is that a device at all?” Probably not — if only because at that scale, electron behaviour will be governed by quantum uncertainties that will make transistors hopelessly unreliable. And despite vigorous research efforts, there is no obvious successor to today’s silicon technology.
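Gargini’s “just 10 atoms across” checks out arithmetically, assuming a silicon interatomic spacing of roughly 0.25 nm (the Si–Si bond length is ~0.235 nm; the exact count depends on crystal direction):

```python
# Rough atom counts for transistor feature sizes, assuming ~0.25 nm
# between silicon atoms (an illustrative round figure).
SI_SPACING_NM = 0.25
for feature_nm in (14.0, 3.0, 2.0):
    atoms = feature_nm / SI_SPACING_NM
    print(f"{feature_nm:4.0f} nm feature ~ {atoms:.0f} atoms across")
```

Today’s 14 nm features are ~56 atoms wide; at 2–3 nm that falls to roughly 8–12 atoms, where quantum effects dominate.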
(UF has a small bet on this dismal forecast not panning out, but since this is based entirely on arcane philosophical commitments concerning the structure of time, it is not expected to impress anyone.)
ADDED: “… the Moore’s law-driven roadmap is now at an end.”
Go is done, as a side-effect of general machinic ‘beating humans at stuff’ capability:
“This is a really big result, it’s huge,” says Rémi Coulom, a programmer in Lille, France, who designed a commercial Go program called Crazy Stone. He had thought computer mastery of the game was a decade away.
The IBM chess computer Deep Blue, which famously beat grandmaster Garry Kasparov in 1997, was explicitly programmed to win at the game. But AlphaGo was not preprogrammed to play Go: rather, it learned using a general-purpose algorithm that allowed it to interpret the game’s patterns, in a similar way to how a DeepMind program learned to play 49 different arcade games.
This means that similar techniques could be applied to other AI domains that require recognition of complex patterns, long-term planning and decision-making, says Hassabis. “A lot of the things we’re trying to do in the world come under that rubric.”
UF emphasis (to celebrate one of the most unintentionally comedic sentences in the history of the earth).
We’re entering the mopping-up stage at this point.
Eliezer Yudkowsky is not amused.
The Wired story.