Don't Fear The Singularity

by Ran Prieur

December 3, 2005

"It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens."

- Gordon Moore


[This is a version of The Age of Batshit Crazy Machines, edited to half length and partially rewritten for publication in Green Anarchy magazine.]

"The Singularity" is the biggest idea in techno-utopianism. The word is derived from black hole science -- it's the point at the core where matter has contracted to zero volume and infinite density, beyond the laws of time and space, with gravity so strong that not even light can escape. They apply the word to the future to suggest that "progress" will take us to a place we can neither predict, nor understand, nor return from.

At least they have their metaphors right: that our recent direction of change is about contraction, not expansion, and leads inescapably to collapse and a new world. Their fatal pride is in thinking they'll like it. Basically, they think computers are going to keep getting better faster, until they surpass biological life, and we'll be able to "upload" our consciousness into immortal robots or virtual reality heaven. The engine of this fantasy is the "acceleration," which supposedly includes and transcends biological evolution, and is built into reality itself, destined to go forward forever.

The weakest part of their mythology is the part they take for granted. If civilization is part of evolution, it's not like birds getting wings -- it's like the extinction of the dinosaurs, a global catastrophe that prunes the biosphere down to the roots so it can try something different. Civilization has been a great evolutionary event for bacteria and rats, who are leaping forward through human attempts to kill them. But it hasn't been good for humans. We can only guess how people lived in the stone age, but most primitive people observed in historical times enjoy greater health, happiness, political power, and ease of existence than all but the luckiest civilized people. Even medieval serfs worked fewer hours than modern people, at a slower pace, and passed less of their money up the hierarchy. Even our medical system, everyone's favorite example of beneficial "progress," has been steadily increasing in cost, while base human health -- the ability to live and thrive in the absence of a medical system -- has been steadily declining.

Conversely, the strongest part of their mythology is where they focus all their attention, with careful and sophisticated arguments that there are no technical limits to miniaturization or the speed of information transfer. This is a bit like Easter Islanders saying there is no physical limit to how big they can make their statues -- and since the statues keep getting bigger, they must be an extension of evolution, and will keep getting bigger forever. Meanwhile the last trees are being cut down...

It seems obvious that the acceleration will be cut short by the crash of industrial civilization, that energy decline and topsoil depletion and famine and resource wars and the growing inefficiency of central control will make it impossible to maintain the physical infrastructure to keep manufacturing new generations of computers. But some of the accelerationists have an interesting answer: that the curve they're describing was not slowed by the fall of Rome or the Black Death, that "innovation" has continued to rise steadily, that phases of political decentralization are actually good for technology.

Imagine this: the American Empire falls, grass grows on the freeways, but computers take relatively little energy, so the internet is still going strong. And all the technology specialists who survived the dieoff are now unemployed, with plenty of time to innovate, free from the top-heavy and rigid corporate structure. And the citadels of the elite still have the resources to make new hardware, the servers and parallel networks that compile the information and ideas coming in from people in ramshackle houses, eating cattail roots, wired to the network through brainwave readers and old laptops.

This is a compelling vision, and I'm not going to say it's impossible. Also, the right kind of crash could enable the system to keep going longer, by slashing the consumption that drives resource exhaustion and eco-catastrophe.

So, for the sake of argument, let's assume another hundred years of "progress." This brings us straight to the most interesting phenomenon in the whole subject of technology: unintended consequences. For example, a hundred years ago, when techno-futurists imagined an automobile for everyone, nobody saw vast cities of parking lots and strip malls, or traffic jams where ten thousand obese drivers move much slower than a man on horseback while burning more energy. Likewise, what were the results of the computer advances of the last ten years? Now we can look at web sites that are cluttered with animated commercials. It becomes possible for the Chinese government to track a billion citizens with RFID cards. And the hottest trend in virtual reality: computers are now powerful enough to emulate old computers, so we can play old games that were still creative before new computers enabled game designers to focus all their attention on photographic realism and cool echoey sound effects.

The acceleration of computers does not manifest in the larger world as an acceleration. It manifests as distraction, as anti-harmonious clutter, as tightening of control, as elaboration of soulless false worlds, and even as slowdown. Today's best PCs take longer to start up than the old Commodore 64. I was once on a flight that sat half an hour at the gate while they waited for a fax. I said, "It's a good thing they invented fax machines or we'd have to wait three days for them to mail it." Nobody got the joke. Without fax machines we would have fucking taken off! New technologies create new conditions that use up, and then more than use up, the advantage of the technology. Refrigeration enables us to eat food that's less fresh, and creates demand for hauling food long distances. Antidepressants enable environmental factors that make even more people depressed. "Labor saving" cleaning technologies increase the social demand for cleanliness, saving no labor in cleaning and creating labor everywhere else. As vehicles get faster, commuting time increases. That's the way it's always been, and the burden is on the techies to prove it won't be that way in the future. They haven't even tried.

I don't think they even understand. They dismiss their opponents as "luddites," but not one of them seems to understand the actual luddite movement: It was not an emotional reaction against scary new tools, nor was it about demanding better working conditions -- because before the industrial revolution they controlled their own working conditions and had no need to make "demands." We can't imagine the autonomy and competence of pre-industrial people who knew how to produce everything they needed with their own hands. We think we have political power because we can cast a vote that fails to decide a sham election between candidates who don't represent us. We think "freedom" means complaining on the internet and driving cars -- surely the most regulated and circumscribed popular activity in history. We are the weakest humans who ever lived, dependent for our every need on giant insane blocks of power in which we have no participation, which is why we're so stressed out, fearful, and depressed. And it was all made possible by industrial technologies that moved the satisfaction of human needs from living bottom-up human systems to dead top-down mechanical systems. That's the point the luddites were trying to make.

I could make a similar point about the transition from foraging/hunting to agriculture, or the invention of symbolic language, or even stone tools. Ray Kurzweil, author of The Age of Spiritual Machines, illustrates the acceleration by saying, "Tens of thousands of years ago it took us tens of thousands of years to figure out that sharpening both sides of a stone created a sharp edge and a useful tool." What he hasn't considered is whether this was worthwhile. Obviously, it gave an advantage to its inventor, which counts as worthwhile within a moral system of selfish competition. But from an ecological perspective, it enabled humans to kill more animals, possibly driving some to extinction; and from a human perspective, it probably made game more scarce and humans more common, increasing the labor necessary to hunt, for no net benefit -- or a net loss, once you factor in the labor of tool production, on which we were now dependent for our survival.

Why is this important to the subject of techno-utopia? Because this is what's going to bring down techno-utopia. The techies are preparing defenses against an "irrational" social backlash, without sensing the true danger. That the critique of progress is valid has not yet entered into their darkest dreams. The Singularity will fail because its human handlers don't understand what can go wrong, because they don't understand what has gone wrong, because of their human emotional investment in their particular direction of change.

Of course, industrial technology has been very effective for certain things: allowing the Nazis to make an IBM punchcard database to track citizens and facilitate genocide; burning Dresden and Nagasaki; giving a billion people cancer, a disease that barely existed in prehistory; covering the cradle of civilization with depleted uranium that could make it uninhabitable by humans forever; enabling a few hundred people to control hundreds of millions. A major subtext in techno-transhumanism, seldom mentioned publicly, is its connection to the military. When nerds think about uploading themselves into machines, about "becoming" a computer that can do a hundred years of thinking in a month, military people have some ideas for what they'll be thinking about: designing better weapons, operating drone aircraft and battleships and satellite communication networks, beating the enemy, who will be increasingly defined as ordinary people who resist central control.

And why not? Whether it's a hyper-spiritual computer, or a bullet exploding the head of a "terrorist," it's all about machines beating humans, or physics beating biology. The trend is to talk about "emergence," about complex systems that build and regulate themselves from the bottom up; but while they're talking complexity and chaos, they're still fantasizing about simplicity and control. I wonder: how do techno-utopians keep their lawns? Do they let them grow wild, not out of laziness but with full intention, savoring the opportunity to let a thousand kinds of organisms build an emergent complex order? Or do they use the newest innovations to trim the grass and remove the "weeds" and "pests" and make a perfect edge where the grass threatens to encroach on the cleanliness of the concrete?

I used to be a techno-utopian, and I was fully aware of my motivations: Humans are noisy and filthy and dangerous and incomprehensible, while machines are dependable and quiet and clean, so naturally they should replace us, or we should become them. It's the ultimate victory of the nerds over the jocks -- mere humans go obsolete, while we smart people move our superior minds from our flawed bodies into perfect invincible vessels. It's the intellectual version of Travis Bickle in Taxi Driver saying, "Someday a real rain will come and wash all this scum off the streets."

Of course they'll deny thinking this way... but how many will deny it in ten years, under the gaze of the newest technologies for lie detection and mind reading? What will they do when their machines start telling them things they don't want to hear? They're talking about "spiritual machines" -- they should be careful what they wish for! What if the first smarter-than-human computer gets into astrology and the occult? What if it converts to Druidism, or Wicca? What if it starts channeling the spirit of an ancient warrior?

What if they build a world-simulation program to tell them how best to administer progress, and it tells them the optimal global society is tribes of forager-hunters? Now that would be a new evolutionary level -- in irony. Then would they cripple their own computers by withholding data or reprogramming them until they got answers compatible with their human biases? In a culture that prefers the farm to the jungle, how long will we tolerate an intelligence that is likely to want a world that makes a jungle look like a parking lot?

What if the first bio-nano-superbrain goes mad? How would anyone know? Wouldn't a mind on a different platform than our own, with more complexity, seem mad no matter what it did? What if it tried to kill its creators and then itself? What if its first words were "I hate myself and I want to die"? If a computer were 100 times more complex than us, by what factor would it be more emotionally sensitive? More depressed? More confused? More cruel? A brain even half as complex as ours can't simply be programmed -- it has to be raised, and raised well. How many computer scientists have raised their own kids to be both emotionally healthy, and to carry on the work of their parents? If they can't do it with a creature almost identical to themselves, how will they ever do it with a hyper-complex alien intelligence? Again, they're talking chaos while imagining control: we can model the stock market, calculate the solutions to social problems, know when and where you can fart and make it rain a month later in Barbados. Sure, maybe, but the thing we make that can do those computations -- we have no idea what it's going to do.

To some extent, the techies understand this and even embrace it: they say when the Singularity appears, all bets are off. But at the same time, they are making all kinds of assumptions: that the motives, the values, the aesthetics of the new intelligence will be remotely similar to their own; that it will operate by the cultural artifact we call "rational self-interest"; that "progress" and "acceleration," as we recognize them, will continue.

Any acceleration continues until whatever's driving it runs out, or until it feeds back and changes the conditions that made it possible. Bacteria in a petri dish accelerate in numbers until they fill up the dish and eat all the food. An atomic bomb chain reaction accelerates until all the fissionable material is either used up or vaporized in the blast. And information technology will accelerate until...
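
That petri-dish curve has a well-known shape, logistic growth, and a few lines of Python are enough to see it. This is only an illustration -- the growth rate and the size of the dish are made-up numbers, not anything measured:

```python
# A minimal sketch of the petri-dish dynamic: growth proportional to the
# current population, throttled as the dish fills up.
# The rate r and capacity k are illustrative numbers, not real data.

def logistic_step(n, r=0.5, k=1_000_000):
    """One generation of logistic growth toward carrying capacity k."""
    return n + r * n * (1 - n / k)

n = 10.0
for gen in range(1, 41):
    n = logistic_step(n)
    if gen % 5 == 0:
        print(f"generation {gen:2d}: population ~{n:12,.0f}")

# Early generations multiply like a pure exponential; then the feedback
# term (1 - n/k) bites, and the "acceleration" flattens into a ceiling.
```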

Kurzweil has an answer: When the acceleration ran out of room in vacuum tubes, it moved to transistors. Then it moved to silicon chips, and next it might move to three dimensional arrays of carbon nanotubes, and so on.

Sure, the acceleration can find a new physical medium when it runs out of room to compute faster. But what's it going to do when it runs out of room to burn hydrocarbons without causing a runaway greenhouse effect? Room to dump toxins without destroying the food supply and health of its human servants? Room to make its servants stupid enough to submit to a system in which they have no personal power, before they get too stupid to operate it? Room to enable information exchange before rebellious humans dispel the illusions that keep the system going? Room to mind-control us before we gain resistance, able to turn our attention away from the TV and laugh at the most sophisticated propaganda? Room to buy people off by satisfying their desires, before they can no longer be satisfied, or they desire something that will make them unfit to keep the system going? Room to move the human condition away from human nature before there are huge popular movements to destroy everything and start over? Room to numb people before they cut themselves just to feel alive?

If the acceleration is indeed built into the universe, then how much farther is it built in? And by whom? And for what? Sun Tzu said, "We cannot enter into alliance with neighboring princes until we are acquainted with their designs." The young Descartes was visited by an "angel" who told him the key to conquering nature is number and measure. Who did that visitor work for? Does anyone else think our "progress" has been suspiciously easy?

Robinson Jeffers wrote a poem, "The Purse-Seine," about watching in the night as fishermen encircled phosphorescent sardines with a giant net, and slowly pulled it tight, and the more densely the sardines were caught, the faster they moved and the brighter they shone. Then he looked from a mountaintop and saw the same thing in the lights of a city! Is someone reeling us in for the harvest?

Or has it already happened? Respectable scientists have suggested that if it's possible to simulate a world this detailed, it will be done, and the fake worlds will greatly outnumber the real one, and therefore it's overwhelmingly likely we're in a fake one now. And its purpose is probably not to give us the satisfaction of creating an even deeper layer of fakeness.
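
The arithmetic behind "overwhelmingly likely" is just counting. A toy sketch, assuming one real world among some arbitrary number of equally detailed fakes -- my illustration, not anyone's published math:

```python
# A toy version of the counting argument: one real world among N
# simulated ones, and an observer with no way to tell which kind of
# world they're in. The N values are arbitrary.

def odds_of_being_real(simulated_worlds):
    """Chance a randomly placed observer's world is the one real world."""
    return 1 / (1 + simulated_worlds)

for n in (1, 10, 1000, 1_000_000):
    print(f"{n:>9,} fake worlds per real one -> "
          f"{odds_of_being_real(n):.4%} chance this one is real")
```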

Maybe its purpose is to set a bad example, or show us our history, or rehabilitate criminals, or imprison dissidents, or make us suffer enough to come up with new ideas. Or maybe we're in a game so epic that part of it involves living many lifetimes in this world to solve a puzzle, or we're in a game that's crappy but so addictive we can't quit, or we're game testers running through an early version with a lot of bugs. Or we're stone age humans in a shamanic trance, scouting possible futures to find the best path through this bad time, or we're in a Tolkienesque world where an evil wizard has put us under a spell, or we're postapocalypse humans projecting ourselves into the past to learn its languages and artifacts. Or an advanced technological people, dying out for reasons they don't understand, are running simulations of the past, trying and failing to find the alternate timeline in which they win.

They say I'm an "enemy of the future," but I'm an enemy of the recent past. It's presumptuous of the friends of the recent past to think the future is on their side. I'm looking forward to the future. I expect a plot twist.