Quick links (#6)

How money moves out of China.

‘Spengler’ on demography and geopolitics. Also at ATOL, tail risk.

An interview with Li Ka-shing.

Bitcoin for little children. (Focused on the solution to the double-spending problem.)

How can we do anything of importance without building a space elevator first?

The death of privacy on the Internet is a political catastrophe, wrote Evgeny Morozov (last October). Kevin Kelly is more relaxed (although the way “we define ourselves as humans” has to change).

The ‘state of exception’ is just the state.

Returns of Ouroboros. Also, time-looping with tape.

Recalling Heinlein’s ‘speedtalk’ in an age of computer-accelerated language.

How ‘Neoliberals’ think. (Among the more insightful critiques.)

Beyond ‘bad philosophy’. (This kind of “the-truth-is-not-enough” pseudo-Medieval decadence is increasingly popular — which doesn’t say anything good about the state of the West today.)

One thought on “Quick links (#6)”

  1. “Risks that are in clear view are analyzed to death, can be managed for, and very often don’t eventuate. Conversely tail risks, not provided for in conventional risk modeling and not properly analyzed…”

    People use “tail risk” both for “excess kurtosis” (relative to some location-scale reference model) and for “over-optimistic cut-offs”. Many are just parroting fashionable jargon from professional journals, of course.

    Suppose a model of the water level at the dikes in Amsterdam. We’ll probably have a well-defined minimum, a most common level, and the usual moments (mean, variance, etc.). With this we can build a model of the maximum level based on the Fisher-Tippett-Gnedenko theorem, but our estimates will be quite sensitive to the measurements, and to the validity of our assumptions in the first place. This is “tail risk” as I’ve always understood it, and as Nassim Taleb used to describe it in his technical writing prior to 2008: tsunamis.
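
    As a minimal sketch of that block-maxima picture (assuming Python with NumPy and SciPy; the simulated water levels and the 1/10,000-year return period are invented purely for illustration, not taken from any real dike model):

```python
# A rough sketch of the block-maxima approach described above, using
# SciPy's generalized extreme value (GEV) distribution. All numbers
# (the simulated water levels, the 1/10,000-year target) are invented.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Pretend these are 50 years of annual maximum water levels (metres).
annual_maxima = rng.gumbel(loc=4.0, scale=0.3, size=50)

# Fisher-Tippett-Gnedenko: block maxima converge to a GEV law.
shape, loc, scale = genextreme.fit(annual_maxima)
design_level = genextreme.ppf(1 - 1 / 10_000, shape, loc=loc, scale=scale)
print(f"estimated 1/10,000-year level: {design_level:.2f} m")

# Refit on a bootstrap resample to see how sensitive the tail estimate
# is to the particular measurements we happened to collect.
resample = rng.choice(annual_maxima, size=annual_maxima.size, replace=True)
shape2, loc2, scale2 = genextreme.fit(resample)
alt_level = genextreme.ppf(1 - 1 / 10_000, shape2, loc=loc2, scale=scale2)
print(f"same model, resampled data:    {alt_level:.2f} m")
```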

    But there’s something completely different being called a tail risk here. Take a normal distribution. Human heights. Strictly speaking, everything is possible under a normal law, from minus infinity to infinity. People the size of buildings: inference under the normal model says this is possible. Of course, the normal distribution has very thin tails; the gobsmacking majority of cases will fall within three standard deviations. But if someone gives you a confidence interval based on a normal-like law, they’re setting cut-offs; maybe 2.5% in either direction is enough to throw away. But then something that had a 5% chance of happening happens? That’s not tail risk as I’ve always understood it. That’s just an issue of planning.
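
    To put numbers on that point (made-up figures: mean height 170 cm, standard deviation 8 cm, again assuming Python with SciPy):

```python
# Thin normal tails versus planning cut-offs, with made-up height figures.
from scipy.stats import norm

mean_cm, sd_cm = 170.0, 8.0

# "People the size of buildings" are formally possible under a normal law,
# but the probability mass out there is vanishingly small.
print(f"P(height > 3 m): {norm.sf(300.0, loc=mean_cm, scale=sd_cm):.3e}")

# The overwhelming majority of cases lie within three standard deviations.
print(f"within 3 sd: {norm.cdf(3) - norm.cdf(-3):.4%}")

# A 95% confidence interval just throws away 2.5% in each direction;
# an event landing in that discarded 5% is a planning choice, not a fat tail.
lo, hi = norm.ppf([0.025, 0.975], loc=mean_cm, scale=sd_cm)
print(f"95% interval: [{lo:.1f} cm, {hi:.1f} cm]")
```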

    This matters because a turn for the conservative won’t save you from tsunamis. Tightening cut-off points to 0.01%? Sure, that will increase costs by a few orders of magnitude (imagine energy source substitution here). Anticipating tsunamis? That’s another kind of statistical uncertainty.
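
    One rough way to see the difference, assuming the “tsunami” regime behaves like a heavy-tailed law (a Student-t with 2 degrees of freedom here, chosen purely for illustration): tightening the cut-off barely moves the thin-tailed quantile, while the heavy-tailed one runs away.

```python
# Thin-tailed versus heavy-tailed quantiles as the cut-off tightens.
# The Student-t(2) stand-in for a heavy-tailed regime is illustrative only.
from scipy.stats import norm, t

for q in (0.975, 0.999, 0.9999):
    print(f"cut-off {q:.4%}: normal {norm.ppf(q):6.2f} sd,"
          f"  t(df=2) {t.ppf(q, df=2):9.2f} sd")
```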
