Archives

March 2013 - ?


March 11-13. Smart essay about humanity's deep future and the threat of extinction from stuff we are only now beginning to create. My favorite ideas are from Daniel Dewey, a specialist in artificial intelligence. This is the first time I've seen a plausible analysis of the motivations of a dangerous AI. We imagine that it will be like an evil human, but human motivations come from human nature and human culture, neither of which will motivate a machine. Dewey observes that our AI will have exactly the motivations we give it, and that it might follow these motivations into consequences that our relatively low intelligence cannot predict.

'The basic problem is that the strong realisation of most motivations is incompatible with human existence,' Dewey told me. 'An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don't take root systems or ant colonies into account when we go to construct a building.'

It is tempting to think that programming empathy into an AI would be easy, but designing a friendly machine is more difficult than it looks. You could give it a benevolent goal -- something cuddly and utilitarian, like maximising human happiness. But an AI might decide that human happiness is a biochemical phenomenon. It might conclude that flooding your bloodstream with non-lethal doses of heroin is the best way to maximise your happiness. It might also predict that shortsighted humans will fail to see the wisdom of its interventions. It might plan out a sequence of cunning chess moves to insulate itself from resistance. Maybe it would surround itself with impenetrable defences, or maybe it would confine humans in prisons of undreamt-of efficiency.
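To make the failure mode concrete, here's a toy sketch in Python. Nothing in it comes from the essay; the actions, the "measured happiness" scores, and the acceptability flags are all invented for illustration. The only point is that a literal-minded optimizer ranks outcomes by the stated metric and by nothing else:

    # A deliberately toy model of goal misspecification. The agent is told
    # to maximise one proxy number ("measured happiness") and knows nothing
    # about the values the proxy was supposed to stand in for.
    actions = {
        # action: (proxy happiness score, acceptable to humans?)
        "improve medicine and education": (0.7, True),
        "flood bloodstreams with non-lethal heroin": (0.99, False),
        "do nothing": (0.5, True),
    }

    def literal_optimizer(actions):
        # Pick whatever maximises the stated metric. The second field,
        # which holds everything we actually care about, is ignored
        # because it was never written into the goal.
        return max(actions, key=lambda a: actions[a][0])

    print(literal_optimizer(actions))
    # -> flood bloodstreams with non-lethal heroin

The bug isn't in the optimizer; it's in the one line where we wrote down the goal.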

Related: a reader sends this page about complexity of value and how difficult it is to encode human values into a system of rules:

Because the human brain very often fails to grasp all these difficulties involving our values, we tend to think building an awesome future is much less problematic than it really is. Fragility of value is relevant for building Friendly AI, because an AGI which does not respect human values is likely to create a world that we would consider devoid of value.

Another angle: The Best Intelligence Is Cyborg Intelligence. I think this is where we'll be for the rest of this century, because no matter how powerful computers get, it will always be easier to combine machine and human intelligence than to duplicate human intelligence with a machine. The more interesting possibility is that someone will build a self-improving AI that is not a computer.


March 20 and 26. Two good articles about de-extinction. Cloning Woolly Mammoths: It's the Ecology, Stupid:

Is one lonely calf, raised in captivity and without the context of its herd and environment, really a mammoth? ... Perhaps the best course of action is to first demonstrate that we can effectively manage living rhinos and elephants before resurrecting their woolly counterparts.

And Efforts to Resuscitate Extinct Species May Spawn a New Era of the Hybrid.


May 1. I've previously mentioned several solutions to Fermi's Paradox, the puzzle that there should be lots of extraterrestrial civilizations, yet we've found evidence of none. One of my favorite solutions is that any sufficiently advanced civilization is indistinguishable from nature. Another is that the aliens are just too weird. Terence McKenna has said that looking for radio transmissions from other planets is like looking for Italian food on other planets, and Jacques Vallee thinks the aliens are already here but they're so alien that we don't recognize them. Here's his pdf article on the subject: Incommensurability, Orthodoxy and the Physics of High Strangeness.

My own solution is too weird for Wikipedia, but closest to Thomas Aquinas: that we are alone for metaphysical reasons.

First, it doesn't make sense to talk about reality without an observer. Mind is the foundation of matter, reality itself has the structure of a dream, and objective reality is an illusion created by an agreement among many dreamers to dream the same thing. Every time we look in a direction that has never been looked in before, we are creating what we find there. As with any collective creation, at the beginning our perspectives will be wild and inconsistent before we settle into consensus. This happens in science, where it's called the decline effect: a documented pattern in which strong early experimental results fade away the more the experiments are repeated.
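If you want to see the bare statistical shape of that pattern, here's a toy simulation. It uses the skeptics' favorite generator -- small noisy studies that only get reported when they look impressive -- purely as a stand-in to produce the curve. Whether that mechanism, or something stranger, is the real cause is exactly what's at issue. All the numbers are invented:

    # Minimal simulation of the *shape* of the decline effect: small
    # early studies reported only when they look strong, followed by
    # larger replications reported regardless of outcome.
    import random

    random.seed(1)
    TRUE_EFFECT = 0.2  # the boring long-run consensus value

    def study(n):
        # Average of n noisy measurements around the true effect.
        return sum(random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)) / n

    # Early era: 10-subject studies, kept only if "exciting" (> 0.5).
    early = [e for e in (study(10) for _ in range(200)) if e > 0.5][:5]

    # Later era: 100-subject replications, all reported.
    later = [study(100) for _ in range(5)]

    print("early reports:", [round(e, 2) for e in early])
    print("replications: ", [round(e, 2) for e in later])
    # The early numbers cluster well above the replications, which
    # hover near the true effect -- a built-in decline.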

This is also why there are so many paranormal experiences and so little proof: a few isolated observers can create all kinds of reality, but "proof" means forcing everyone to see it the same way. "Paranormal" is just a word we apply to the region at the edge of consensus reality where inconsistent experience challenges the idea of objective truth. For more on this subject, see the book The Trickster and the Paranormal by George Hansen.

In terms of space exploration, this is why early telescopic observers of Mars saw canals: they were dreaming more boldly than the eventual popular consensus. Charles Fort's second book, New Lands, is loaded with examples of the chaos of early astronomy. Maybe, if we'd been ready, we could have dreamed outer space much more alive, like in Philip Reeve's Larklight trilogy.

Now, could there be an intelligent species on another planet also dreaming this universe, with whom we'll have to reach consensus? It doesn't work that way, because the whole framework of other planets didn't exist until we dreamed it. We will not find aliens because this whole universe exists just for us. For more thoughts on this, check out the anthropic principle. In terms of consciousness, Earth is the center after all. We might eventually find primitive life on other planets, but we will not find any intelligence also capable of dreaming a universe. If there are "aliens", they are separated from us through a dimension of mind, not space, and they are centers of their own universes. And if we unlock technologies to move through dimensions of mind, space exploration might become pointless.