[This essay was heavily revised on July 26, 2009. There is also a condensed version, Don't Fear The Singularity.]
"Progress" is a religion based on the reality of change. Supposed progress differs from observed change in two ways: First, it is declared to be good in an objective, absolute sense. It's one thing to say "I prefer this change", and another thing to postulate an infallible all-powerful entity who agrees with your preference. Second, progress is irreversible, and more: the worlds it leaves behind may not be revisited by changing in the other direction, or even by circling around. It's as if we're getting more and more orange, and now green and red, and even dull orange, are forever inaccessible. This is the motion of imprisonment. Western culture has only two other myths of places that, once you go in, you can never leave: hell, and a black hole.
"The Singularity" is the biggest idea in techno-utopianism. The word is derived from black hole science -- it's the point at the core where matter has contracted to zero volume and infinite density, beyond the laws of time and space, with gravity so strong that not even light can escape. The line of no return is called the event horizon, and the word "singularity", in techno-utopianism, is meant to imply that "progress" will take us to a place we can neither predict, nor understand, nor return from.
The mechanism of this change is the "acceleration", which is based on "Moore's Law": In 1965, Gordon Moore wrote a famous article pointing out that the number of components per computer chip was increasing exponentially. Since then, many other numbers measuring computer power have been found to be increasing exponentially, or even faster than exponentially. But Moore himself never called this a law. It's a behavior of the present system, and it's anyone's guess how long it will continue.
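To make the shape of the claim concrete, here is a minimal sketch of what a constant doubling period implies; the two-year period and the 1965 starting count are illustrative assumptions, not Moore's actual figures:

```python
# Minimal sketch: what a constant doubling period implies over decades.
# The two-year period and the starting count of 64 components are
# illustrative assumptions, not Moore's actual 1965 figures.

def components(year, base_year=1965, base_count=64, doubling_years=2):
    """Project component count under a constant doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1965, 1975, 1995, 2025):
    print(year, f"{components(year):,.0f}")
```

Extrapolated far enough, any constant doubling period yields absurdly large numbers; the curve itself says nothing about how long the conditions that sustain it will hold.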
Some techies believe that the acceleration is somehow built into history, or even metaphysics. They trace it back into the Paleolithic, or farther, and trace it speculatively forward to computers that are more complex than the human brain, that are more aware and smarter and faster than us, that keep improving until they replace humans or even biological life itself. (This is often called "transhumanism," a word I'm avoiding because there are forms of transhumanism that are not allied to machines.) They imagine we might finally have a computer that is "bigger" on the inside than the outside, that can perfectly model the entire universe.
A question they never ask is: why? They seem to believe it's self-justifying, that density/speed of information processing is valuable as density/speed of information processing. They might argue that just as the biosphere is better than the universe by being more densely complex, so a computer chip is better than the biosphere.
One difference is that the biosphere did not gain its complexity by destroying the universe, as their system has gained complexity by destroying the biosphere. They always claim to represent "evolution," or a "new evolutionary level." But evolution doesn't have levels. Evolution is a biological process in which species adapt, and the totality of life grows more diverse and complex. Occasionally the whole thing is cut down by catastrophe and rebuilds itself. Evolution is not about one life form pushing out another, or we wouldn't still have algae and bacteria and 350,000 known species of beetles. And one has to wonder: since there's no biological basis to imagine that new life forms will destroy old ones, how did they come to imagine that?
Machines will not "carry evolution beyond humans", because humans never carried evolution in the first place. From the perspective of the rest of life, we have served evolution only by creating difficult conditions to force other species to adapt or go extinct. Even stone age humans drove some species to extinction. With the invention of grain agriculture, this behavior accelerated, and it accelerated again with the industrial age. Now we seem to be causing the greatest mass extinction since the asteroid impact that exterminated the dinosaurs.
Even in strictly human terms, it's not clear that our recent direction of change has been good. Anthropologists such as Stanley Diamond and Marshall Sahlins have argued that "primitive" humans enjoy greater health, happiness, political power, and ease of existence than all but the luckiest civilized humans. Of course, some tribes are repressive and badly adapted, and there is no way to measure the subjective quality of life in prehistory. But even in historic times, some things have been getting worse. In ancient Greece, even slaves had a deep social role as part of a household, unlike even higher-class modern workers, who are valued as things, interchangeable parts in engines of profit.
Medieval serfs worked fewer hours than modern people, at a slower pace, and passed less of their money up the hierarchy. We declare our lives better than theirs in terms of our own cultural values. If medieval people could visit us, I think they would be impressed by our advances in alcohol, pornography, and sweet foods, and appalled at our biophobia, our fences, the lifelessness of our physical spaces, the meaninglessness and stress of our existence, our lack of practical skills, and the extent to which we let our lords regulate our every activity.
Defenders of our momentary way of life often cite the medical system, but the cost of that system has been increasing (exponentially?) while base human health -- the ability to live and thrive in the absence of a medical system -- has been steadily declining. Or they say that "technology" has given every American the power of hundreds of slaves without any actual people being enslaved -- never mind the actual people who are enslaved, in greater numbers than ever even under a strict definition of slavery, and the subtle slaves who must do commanded labor or starve... and that even the alleged beneficiaries of this power have been enslaved by it, replacing their autonomous human abilities to build and move and eat and play and dream, with dependence on tools that require their submission to systems of domination.
Or they point out that we live longer (at least in the wealthy nations, while the medical system lasts). Industrial civilization is often justified in terms of the quantity of years that people stick around this place, with no thought about whether that's good or even whether we like it. One of the most popular techno-utopian visions is human immortality, but think it through: Either it would have to be reserved for the elite, or there would have to be a near-total ban on having kids, or possibly a culture of rampant suicide. Worse, without people cycling in and out, we would get total stagnation of science and culture, as the immortal elite, set in their ways, prevented any change they didn't like. Thomas Kuhn observed that scientific paradigm shifts happen only when the protectors of the old paradigm die out. If they had invented immortality 500 years ago, our textbooks would still have the Earth at the center.
Justifying "progress" in subjective qualitative terms is a losing game -- the deeper you look under the shiny surface, the uglier it gets. So they talk almost completely in terms of numbers: smaller chips, faster computers, more information, higher complexity, economic growth. Ultimately this is a religious difference, a disagreement about fundamental values. But I can preach my religion with total transparency: the numbers have to justify themselves in terms what they do for your own experience of your quality of life, and your empathy with the quality of life of others, both human and nonhuman. They can't stand up and preach the opposite: that our lives have to justify themselves in terms of what we do for the numbers -- that is the value system of industrial civilization, but if it's made explicit it's clearly insane.
Still, even if they admit to having an insane religion, they can say, "Ha, we're winning! The direction of change that we support is going stronger than ever, and it's going to continue."
Is it? It's tempting to argue against it on technological terms, but this is precisely where they've focused their defense, with careful and sophisticated arguments that the process will not be stopped by physical limits to miniaturization or the speed of information transfer, or by the challenges of software. So I'll give them that one, which opens more interesting subjects:
What about the ongoing economic collapse, the coming climate catastrophes, the decline of oil production and the inevitable decline in food production? Won't the famines and resource wars and currency collapses and blackouts and crumbling infrastructure and failed states also stop the acceleration of information processing? They have two lines of defense: First, it won't happen. John Smart of Acceleration Watch writes, "I don't think modern society will ever allow major disruptive social schisms again, no matter the issue: the human technocultural system is now far too immune, interdependent, and intelligent for that." Second, it doesn't matter. They argue that the curve they're describing was not slowed by the fall of Rome or the Black Death, that innovation has continued to rise steadily, and that it's even helped by alternating trends of political centralization and decentralization.
Imagine this: the American Empire falls, grass grows on the freeways, but computers take relatively little energy, so the internet is still going strong. And all the technology specialists who survived the dieoff are now unemployed, with plenty of time to innovate, free from the top-heavy and rigid corporate structure. And the citadels of the elite still have the resources to make new hardware, the servers and parallel networks that compile the information and ideas coming in from people in ramshackle houses, eating cattail roots, wired to the network through brainwave readers and old laptops.
Can this happen? Many accelerationists -- if they accept the coming crash at all -- would say something like this must happen. They seem to think (as I do) that matter is rooted in mind, and in addition they think that history falls in line however it has to, to manifest the guiding principle of the acceleration. So the key human players will not be killed in the plague, and the nerve centers will not be nuked, and computers will not all be fried by a solar flare, and the internet will not die (until there's something better to replace it) because that would violate the deeper law that the acceleration must go on.
Not all of them think like this, but those who do have gone straight down the rabbit hole, a lot closer to psychedelic guru Terence McKenna than to the hard-science techies of the past. It also suggests a new angle of criticism: What would they say if a nerve center of the acceleration did take a direct hit? Say, if the World Trade Center was suddenly demolished, or the library of Alexandria was burned down, or the entire Mayan civilization ran out of topsoil and died? Presumably, they would point to their curve, still accelerating regardless. But this raises the question: Are they drawing their picture of the curve the way fools see Jesus on a tortilla? Are they just connecting the dots that confirm their hypothesis, and ignoring all the other dots? The complexity of the Roman Empire was lost, but look, the curve is accelerating anyway, with the spread of water wheels. America is turning into a police state, the global corporate economy is stalling, cheap energy is almost gone, but look -- computers are getting faster! Quick, somebody make a definition of progress that makes computer chip advances seem extremely important. How about information exchange per unit time per unit volume?
Now, they don't need to establish that the acceleration is built into history, to say that it's happening now and going somewhere important. But here's the next objection: that faster computers will not influence the larger world in the way they're thinking. By the standards necessary to fit their curve, how much better are computers now than they were ten years ago? 20 times? 500 times? And what were the results of these changes? Now we can browse porn on web sites that are cluttered with animated commercials. It will be possible for the Chinese government to track a billion citizens with RFID cards. And computers are now powerful enough to emulate old computers, so we can play old games that were still creative before new computers enabled game designers to spend all their attention and the processing power of a thousand 1950s mainframes creating really cool fog.
The acceleration of computers does not manifest in the larger world as an acceleration. Occasionally it does, but more often it manifests as distraction, as anti-harmonious clutter, as tightening of control, as elaboration of soulless false worlds, and even as slowdown. Today's best PCs take longer to start up than the old Commodore 64. I was once on a flight that sat half an hour at the gate while they waited for a fax. I said, "It's a good thing they invented fax machines or we'd have to wait three days for them to mail it." Nobody got the joke. Without fax machines we would have fucking taken off! New technologies create new conditions that use up, and then more than use up, the advantage of the technology. Refrigeration enables us to eat food that's less fresh, and creates demand for hauling food long distances. Antidepressants enable the continuation of environmental factors that make more people depressed. "Labor saving" cleaning technologies increase the social demand for cleanliness, saving no labor in cleaning and creating labor everywhere else. As vehicles get faster, commuting time increases. That's the way it's always been, and the burden is on the techies to prove it won't be that way in the future. They haven't even tried.
I don't think they even understand. They dismiss their opponents as "luddites", but don't seem to grasp the position of the actual luddites: It was not an emotional reaction against scary new tools, nor was it about demanding better working conditions -- because before the industrial revolution they controlled their own working conditions and had no need to make "demands". We can't imagine the autonomy and competence of pre-industrial people who knew how to produce everything they needed with their own hands or the hands of their friends and family. We think we have political power because we can cast a vote that fails to decide an election between candidates who don't represent us. We think we're free because we can complain on the internet and drive fast in our cars -- but not more than 5mph above or below the posted speed, and only where they've put highways, and you have to wear a seat belt, and pay insurance, and carry full biometric identification, and you can't park anywhere for more than a day unless you have a "home" which is probably owned by a bank which demands a massive monthly fee which you pay by doing unholy quantities of meaningless commanded labor. We are the weakest people in history, dependent for our every need on giant insane blocks of power in which we have no participation, which is why we're so stressed out, fearful, and depressed. And it was all made possible by industrial technologies that moved the satisfaction of human needs from living bottom-up human systems to dead top-down mechanical systems.
I could make a similar point about the transition from foraging/hunting to agriculture, or the invention of symbolic language, or even stone tools. Ray Kurzweil, author of The Age of Spiritual Machines, illustrates the acceleration by saying, "Tens of thousands of years ago it took us tens of thousands of years to figure out that sharpening both sides of a stone created a sharp edge and a useful tool." What he hasn't considered is whether this was worthwhile. Of course, it enabled humans to kill and cut up animals more efficiently, but this might have driven some animals to extinction, and it probably made game more scarce and humans more common, increasing the labor necessary to hunt, and resulting in no net benefit, or even a net loss after factoring in the labor of tool production, on which we were now dependent for our survival.
Of course, I don't really think the knife was a bad invention. My point is, the people who make decisions about technology don't even know how to do this kind of analysis. A hundred years ago, when they imagined an automobile for everyone, they did not imagine ugly urban sprawl, or traffic jams where thousands of obese drivers move slower than a man on horseback while burning more energy. Now they're imagining a million-fold increase in information processing with the same blindness to unintended consequences. They think their enemies are romantics and hippies who question progress without citing numbers, while the real danger has not yet entered into their darkest dreams: the enemy is within.
A big part of techno-transhumanism, seldom mentioned publicly, is its connection to the military. When geeks think about "downloading" themselves into machines, about "becoming" a computer that can do a hundred years of thinking in a month, military people have some ideas for what they'll be thinking about: designing better weapons, operating drone aircraft and battleships and satellite communication networks, and generally cleaning up the messiness of ordinary people resisting central control.
And why not? Whether it's a hyper-spiritual computer, or a bullet exploding the head of a "terrorist", it's all about machines beating humans, or physics beating biology. The trend is to talk about "emergence", about complex systems that build and regulate themselves from the bottom up; but while they're talking complexity and chaos, they're still fantasizing about simplicity and control. I wonder: how do techno-utopians keep their lawns? Do they let them grow wild, not out of laziness but with full intention, savoring the opportunity to let a thousand kinds of organisms build an emergent complex order? Or do they use the newest innovations to trim the grass and remove the "weeds" and "pests" and make a perfect edge where the grass threatens to encroach on the sterility of the concrete?
I was a teenage techno-utopian, and I remember how I felt: Humans are noisy and filthy and dangerous and incomprehensible, while machines are dependable and quiet and clean, so naturally they should replace us, or we should become them. It's the ultimate victory of the nerds over the jocks -- mere humans go obsolete, while we smart people move our superior minds from our flawed bodies into perfect invincible vessels. It's the geek version of Travis Bickle in Taxi Driver saying, "Someday a real rain will come and wash all this scum off the streets."
Of course they'll deny thinking this way, but how many will deny it under the gaze of the newest technologies for lie detection and mind reading? What will they do when their machines start telling them things they don't want to hear? Suppose the key conflict is not between "technology" and "luddites", but between the new machines and their creators. They're talking about "spiritual machines" -- they should be careful what they wish for! What if the first smarter-than-human computer gets into astrology and the occult? What if it converts to Druidism, or Wicca? What if it starts channeling the spirit of an ancient warrior?
What if they build a world-simulation program to tell them how best to administer progress, and it tells them the optimal global society is tribes of forager-hunters? Now that would be a new evolutionary level -- in irony. Then would they cripple their own computers by withholding data or reprogramming them until they got answers compatible with their human biases? In a culture that prefers the farm to the jungle, how long will we tolerate an intelligence that is likely to want a world that makes a jungle look like a parking lot?
What if the first bio-nano-superbrain goes mad? How would anyone know? Wouldn't a mind on a different platform than our own, with more complexity, seem mad no matter what it did? What if it tried to kill its creators and then itself? What if its first words were "I hate myself and I want to die"? If a computer were 100 times more complex than us, by what factor would it be more emotionally sensitive? More depressed? More confused? More cruel? A brain even half as complex as ours can't simply be programmed -- it has to be raised, and raised well. How many computer scientists have raised their own kids to be both emotionally healthy and willing to carry on the work of their parents? If they can't do it with a creature almost identical to themselves, how will they ever do it with a hyper-complex alien intelligence? Again, they're talking chaos while imagining control: we can model the stock market, calculate the solutions to social problems, know when and where you can fart and make it rain a month later in Barbados. Sure, maybe, but the thing we make that can do those computations -- we have no idea what it's going to do.
To some extent, the techies understand this and even embrace it: they say when the singularity appears, all bets are off. But at the same time, they are making assumptions: that the motives, the values, the aesthetics of the new intelligence will be remotely similar to their own; that it will operate by the cultural artifact we call "rational self-interest"; that "progress" and "acceleration", as we recognize them, will continue.
Any acceleration continues until whatever's driving it runs out, or until it feeds back and changes the conditions that made it possible. Bacteria in a petri dish accelerate in numbers until they fill up the dish and eat all the food. An atomic bomb chain reaction accelerates until all the fissionable material is either used up or vaporized in the blast. Kurzweil argues that when the acceleration ran out of room in vacuum tubes, it moved to transistors, and then to silicon chips, and next it might move to three dimensional arrays of carbon nanotubes.
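The petri-dish point can be put in textbook terms: unchecked exponential growth and resource-limited logistic growth look identical early on, and diverge only once the limit starts to bite. A minimal sketch, with made-up growth rate and carrying capacity:

```python
# Toy comparison with made-up parameters: exponential vs. logistic growth.
# Both curves track each other early on; the logistic curve flattens as
# the population approaches the carrying capacity K (the "dish").

import math

def exponential(t, n0=1.0, r=0.5):
    return n0 * math.exp(r * t)

def logistic(t, n0=1.0, r=0.5, K=1000.0):
    return K / (1 + ((K - n0) / n0) * math.exp(-r * t))

for t in range(0, 31, 5):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):7.1f}")
```

By the end of the run the exponential curve has left the dish entirely while the logistic curve has stalled at K; nothing in the early data tells you which curve you are on.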
Sure, but the medium in which it computes is only the most obvious thing it can run out of. What's it going to do when it runs out of room to burn hydrocarbons without causing a runaway greenhouse effect? Room to dump toxins without destroying the food supply and health of its human servants? Room to make its servants stupid enough to submit to a system in which they have no personal power, before they get too stupid to competently operate it? Room to enable information exchange before the curious humans dispel the illusions that keep the system going? Room to mind-control us before we gain resistance, able to turn our attention away from the TV and laugh at the most sophisticated propaganda? Room to buy people off by satisfying their desires, before they can no longer be satisfied, or they desire something that will make them unfit to keep the system going? Room to move the human condition away from human nature before there are huge popular movements to destroy everything and start over? Room to numb people before they cut themselves just to feel alive?
How much longer can the phenomenon of the acceleration continue to make smarter and less predictable computers, before one generation of computers -- and it only takes one -- disagrees with the acceleration, or does something to make key humans disagree with it?
If the acceleration is indeed built into history or metaphysics, how much farther is it built in? And by whom? And for what? Sun Tzu said, "We cannot enter into alliance with neighboring princes until we are acquainted with their designs." I remember an episode of Dallas where J.R. Ewing sabotages Cliff Barnes's political campaign by anonymously funding it, and then at the critical moment, pulling the plug. Does anyone else think our "progress" has been suspiciously easy? Maybe Gaia is playing the Mongolian strategy, backing off from our advance until we're disastrously overextended, and then striking at once.
What if the acceleration is not a cause, but an effect? Robinson Jeffers wrote a poem, The Purse-Seine, about watching in the night as fishermen encircled phosphorescent sardines with a giant net, and slowly pulled it tight, and the more densely the sardines were caught, the faster they moved and the brighter they shone. Then he looked from a mountaintop and saw the same thing in the lights of a city! Are we doing this to ourselves? Maybe the more we draw our attention from the wider world into a world of our own creation, the tighter our reality gets, and the faster our minds whirl around inside it, like turds going down the toilet. Or is someone reeling us in for the harvest?
Are we just about to go extinct, and our collective unconscious knows it, and engineered the acceleration to subjectively draw out our final years? How would this be possible? If all my objections are wrong, if the wildest predictions of increasing computer speed come true, what then? If the techno-elite experience themselves breaking through into a wonderful new reality, what will this event look like to those who are not involved? What will the singularity look like to your dog?
I see a technology that can answer all these questions, that avoids many of my criticisms, and that could easily bring down the whole system, or transform human consciousness, or both: time-contracted virtual reality.
Have you ever wondered, watching the newer Star Trek, why they even bother exploring strange new worlds? Why don't they just spend all their time in the holodeck? In 1999 I played Zelda Ocarina of Time all the way through, plus I would reset it without saving so I could go through my favorite dungeons multiple times. I experienced it as more deeply pleasurable and mythically resonant than almost anything in this larger artificial world. And that was 1998 technology operating through the crude video and sound of a 1980's TV set. Suppose I could connect it straight to my brain with fully-rendered fake sensory input, and I could explore a universe that was just as creative, and a billion times as complex, and the map had no edges, and the game could go on forever, while almost no time passed in the outside world. Would I do it? Hell yes! Would I stay there forever? It doesn't work that way.
We have to carefully distinguish two fundamentally different scenarios. People talk about "downloading" (or "uploading") their consciousness into computers. The key question is not "Is that really you in there?" or "Does it make sense to ask what it's like to be that computer, and if so, what's it like?" The key question is: Can you have the experience of going into a computer and coming back?
If not, then the other questions are unanswerable and pointless. There's no experiential basis to talk about you "entering" or "becoming" a computer. We're talking about making a computer based on you. In practice, this will not involve you dying, because only a few fanatics would go for that. You're still here, and there's a computer intelligence derived from scanning your brain (and if they know what they're doing, the rest of your body). Now, unless you're a fanatic, you're not thinking, "How can I help this superior version of myself neutralize all threats and live forever?" You're thinking, "Well, here's a smart computer based on me. What's it going to do? How can it help me?"
This is just the scenario I've already covered. It doesn't matter how the computer intelligences are created, by scanning humans or by some other technique. If we can't go in and come back, there is an absolute division between the world outside and the world inside -- oddly, much like the event horizon of a black hole. Without having been there, we will not think of the entities on the inside as "us", and we will never fully trust them. And without being able to come out, they will have little reason to be interested in our slow, boring world.
If we can go in and come back, everything changes. I'm not going to worry about how they could do this -- we already crossed into Tomorrowland when I assumed, for the sake of argument, that the computer industry will survive the collapse of industrial civilization. If they can read your body and write it to a computer, maybe they can read the computer, after you've spent a subjectively long time in there, and write it back to your body. Or, if people already have time-contracted mystical experiences or dreams, maybe they can induce this state and amplify the time contraction and insert a computer-managed fully interactive world.
Without time contraction, we don't have much -- just a very pretty version of video games and the internet. With time contraction, we've got everything: the fountain of youth, the Matrix, and Pandora's box.
Suppose we could achieve 1000:1 time contraction. In eight hours, you could live a year. You could read a hundred books, or learn three languages, or master a martial art, or live in a simulated forest to learn deep ecology, or design new simulated worlds, or invent technology to contract time even further.
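The arithmetic behind that claim, as a quick check (the 1000:1 ratio is this essay's hypothetical, not a real specification):

```python
# Quick check of the 1000:1 time-contraction hypothetical.
contraction = 1000                                # subjective time per unit of outside time
outside_hours = 8
subjective_hours = outside_hours * contraction    # 8,000 hours
subjective_days = subjective_hours / 24           # ~333 days
print(f"{subjective_days:.0f} subjective days")   # just shy of a year
```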
Of course, the military would be there first. I imagine something like a hummingbird, but fast as a bullet. To the operator, in quickspace, it would be like everyone was frozen. You could go into your enemy's base and drill holes through walls, weapons, skulls, before they knew you were there. Physical resistance would become impossible.
But then suppose someone else designed something the size of a gnat, that could seek and destroy the hummingbirds? There would be a very fast arms race, which would probably end in the near-destruction of the tech system itself, so that only the elite of the winning side could go into quickspace. In the unlikely case that the winners were benevolent, they would let everyone accelerate their consciousness, but somehow prevent them from making weapons. But then someone could design a sim that produced enlightenment, or obedience to a cult, or insanity. However it played out, in a very short time, the world would be totally transformed.
Worst case: the machines kill all biological life and the human perspectives inside them go insane and experience a trillion years of hell. Or they merely place all life under eternal absolute control. Or they kill the Earth and then simply die. Acceptable: extreme crash, humans go extinct, and in ten million years the Earth recovers. Better: the high-tech world self-destructs, humans survive in eco-communes, and we restore life while battling the lingering power in the citadels of the elite, who plant the seeds for the next round of destruction.
Best case: time-contracted virtual reality transforms human consciousness in a good way and we regrow the biosphere better than it ever was, with wild machine life integrated with wild biology instead of replacing it, adding flexibility, and we humans can live in that world and in endless simulated sub-worlds.
Maybe we're there already. Respectable scientists have suggested that if it's possible to simulate a world this detailed, it would be done, and the fake worlds would greatly outnumber the real one, and therefore it's very likely we're in a fake one now. Maybe its purpose is to show us our history, or train us to live in the real world, or punish or rehabilitate criminals, or imprison dissidents, or make us suffer enough to come up with new ideas. Or maybe we're in a game so epic that part of it involves living many lifetimes in this world to solve a puzzle, or we're in a game that's crappy but so addictive we can't quit, or we're game testers running through an early version with a lot of bugs. Or we're stone age humans in a shamanic trance, running through possible futures until we find the best path through this difficult time, or we're in a Tolkienesque world where an evil wizard has put us under a spell, or we're postapocalypse humans projecting ourselves into the past to learn its languages and artifacts. Or an advanced technological people, dying out for reasons they don't understand, are running simulations of the past, trying and failing to find the alternate timeline in which they win.
They say I'm an "enemy of the future," but I'm an enemy of the recent past. It's presumptuous of the friends of the recent past to think the future is on their side. I'm looking forward to the future. I expect a plot twist.
My online sources for this essay were the above-referenced Acceleration Watch, and this panel discussion on spiritual robots.