Synthetic Templexity

Why a sufficiently competent artificial intelligence looks indistinguishable from a time anomaly. Yudkowsky’s FB post seems to be copy-and-paste resistant, so you’ll just have to go and read the damn thing.

The Paperclipper angle is also interesting. If a synthetic mind with ‘absurd’ (but demanding) terminal goals was able to defer actualization of win-points within a vast time-horizon, in order to concentrate upon the establishment of intermediate production conditions, would its behavior be significantly differentiable from that of a rational value (i.e. intelligence) optimizer? (This blog says no.) Beyond a very modest threshold of ambition, given a distant time horizon, terminal values are irrelevant to intelligence optimization.
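A crude toy sketch of that last claim (my own illustration, not anything from the post or from Yudkowsky; the goods, growth rate, and horizons below are arbitrary assumptions): an agent that can either bank units of its terminal good or invest in production capacity will, over a long enough horizon, spend almost all of its observable behavior on investment, whatever the terminal good happens to be.

```python
# Toy model (illustrative assumptions throughout): at each of T steps the
# agent either INVESTS (capacity *= growth) or PRODUCES (banks `capacity`
# units of its terminal good -- paperclips, stamps, whatever). Since
# capacity never falls, any optimal plan defers all production to the end,
# so we only need to choose how many final steps to spend producing.

def optimal_plan(horizon: int, growth: float):
    """Best invest-then-produce schedule: returns (produce_steps, payoff)."""
    best_k, best_payoff = 0, 0.0
    for k in range(horizon + 1):                # k = final steps spent producing
        payoff = k * growth ** (horizon - k)    # invest first, produce last
        if payoff > best_payoff:
            best_k, best_payoff = k, payoff
    return best_k, best_payoff

for horizon in (10, 100, 1000):
    k, _ = optimal_plan(horizon, growth=1.1)
    instrumental = 100 * (horizon - k) / horizon
    print(f"T={horizon:4d}: invest {horizon - k} steps, produce {k} steps "
          f"({instrumental:.0f}% of behavior is instrumental)")

# The terminal good never enters the optimization. As the horizon grows,
# the optimal number of production steps stays roughly constant, so the
# fraction of behavior devoted to capacity-building tends toward 100% --
# the Paperclipper becomes observationally indistinguishable from a
# generic intelligence optimizer.
```

On this toy accounting, short horizons reward cashing out immediately, while long horizons push the agent toward pure instrumental accumulation, which is the sense in which terminal values wash out of the observable policy.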
