January - March, 2011

previous archive

January 3. Here's a set of predictions for 2011 from the Energy Bulletin. I don't like short-term predictions. It's like we're standing on the beach, and waves are coming in and going out seemingly at random. In five minutes, will the water be at our ankles, or our necks? No one knows! But we do know that the tide is coming in.

I'm reading Neal Stephenson's The Diamond Age and noticed this amazing quote: "It is remarkable how much money you can make shovelling back the tide. In the end you need to get out while the getting is good." Most of the American federal budget is now dedicated to shoveling back the tide. Of course that money should be spent moving to higher ground -- but the time for that was 30 years ago. Now the water is so high that the slightest lapse in shoveling will drown us. And that's another good metaphor: people who are drowning don't thrash around or call for help, even if it would save their lives, because they're putting all their energy into keeping their nose above water for just one more breath.

That's what I expect over the next few years: lawmakers just want to get one more breath, stay in office another term, not spark a riot, so they will make every decision to minimize short-term pain while reinforcing long-term decline. Looking at the actual laws that they pass is like looking at the foam on top of the waves. The only useful information is to see how high the waves are. Sooner or later, there will be no possible decision that will not spark a riot, and then the game changes, from shoveling to staying afloat.

January 5-7. Spooky Experiments That See The Future. The article describes one experiment where people began responding to arousing or non-arousing pictures before the computer randomly decided which one to show them, and another experiment where the subjects took a word recall test, and they did better on the words that were randomly chosen for them to type in after the test.

I can tell you what's going to happen as more researchers try to duplicate these experiments. The results will get weaker and weaker until they become statistically meaningless. You can read about this happening again and again in the long history of psychic research. A good book on the subject is Charles Tart's The End of Materialism. But that doesn't mean it's all a dead end. We're only beginning to get weird.

This article, The Truth Wears Off, describes the same thing happening in all kinds of experiments, from drug trials to the mating preferences of birds. It's called the decline effect. A psychologist, Jonathan Schooler, noticed it consistently happening with his own experiments, and devised a meta-experiment, where he predicted that he would get strong results that would taper off into noise -- and he did! (It happened to be a precognition experiment similar to the one above.)

The second half of the article tries to explain the whole thing as some kind of selection bias. But notice that nobody has done an experiment to test whether it's selection bias. This is religious thinking: "We have a model of reality, and in order to stay within our model, this is what must be happening." The science is in Schooler's work in the middle of the article: Almost at will, you can get one-in-a-million results that later taper off into nothing. This should happen only one time in a million, and yet it happens reliably!
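For reference, the selection-bias model being invoked is at least easy to state precisely: if only strikingly strong initial results get followed up, the replications will regress toward the true effect. Here is a minimal sketch in Python, with every number invented purely for illustration (a true effect of zero, small samples, an arbitrary "striking result" threshold):

```python
import random
import statistics

random.seed(42)

def run_study(true_effect, n):
    """Simulate one study: the mean of n noisy measurements of a true effect."""
    return statistics.mean(random.gauss(true_effect, 1.0) for _ in range(n))

TRUE_EFFECT = 0.0   # assume there is no real effect at all
N = 20              # small sample size per study
THRESHOLD = 0.5     # only "striking" initial results get followed up

# Run many initial studies, keep only the ones with big observed effects.
initial = [run_study(TRUE_EFFECT, N) for _ in range(2000)]
selected = [e for e in initial if e > THRESHOLD]

# Replicate each selected study once, with no selection this time.
replications = [run_study(TRUE_EFFECT, N) for _ in selected]

print("selected initial effects, mean:", round(statistics.mean(selected), 2))
print("their replications, mean:      ", round(statistics.mean(replications), 2))
# The selected results look strong; the replications collapse toward zero.
```

This shows only what the selection-bias model predicts, not that it is what actually happened; whether it accounts for declines that a researcher can predict in advance, as Schooler did, remains the untested empirical question.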

I'm a philosophical idealist: I think mind is the fundamental reality, and the physical world is like a collective dream. And in a collective dream designed for stability, the decline effect is exactly what we should expect. Anyone can make a spike in reality, but these spikes must be pulled back into conformity with the consensus. But here's another meta-experiment: Suppose you do an experiment in precognition or telepathy or remote viewing, and you never tell anyone. Or suppose you tell people in a flaky manner and with no evidence, so they can easily disbelieve you. If you're not encroaching on the consensus reality, will the effect still fade over time, or will it persist?

Don't try that experiment alone! Andy comments, and I agree, that breaking away from consensus reality is dangerous. Also he writes: "The point isn't whether we can prove that thoughts create reality; the point is to create a better reality using the best tools we have now."

So, can we use telekinesis or divination to influence the political system? Maybe, but I think it's more the other way around: decentralized politics enables decentralized reality! Stuart Staniford recommends the book The Trickster and the Paranormal. Here's a link to the introduction. [Update 2020: it's still the best book on this subject.]

January 15. I've just made a long post on the permaculture forums. It was in response to a thread about a community experiment in free meals, which seemed to show that people are ungrateful and lazy. Here's the link to my post, from which you can scroll up to see the posts I refer to, and here's the full text:

Humans enjoy working, or we would not have been so successful as a species. "Gaming the system" is not something we do all the time, but something we do under one or more dysfunctional conditions: 1) We lack full participation in power. 2) We are motivated externally rather than internally. 3) We have been institutionalized -- that is, we have lived under the first two conditions for so long that our responses to them have become habitual.

Here's a great video, The surprising truth about what motivates us, which helps explain external vs internal motivation. Basically, if a task involves any thinking at all, people perform worse if they're being rewarded with money. We perform best, and are most satisfied, when the work includes Autonomy, Mastery, and Purpose.

So, this food experiment might have worked out differently if the people had not been asked to contribute to a meal planned and prepared by someone else, but instead had their contribution linked to their own power to plan and prepare meals.

Also, as Burra mentioned, it's very likely that most of them had been institutionalized. I have a friend in Seattle who moved his stepson from the public schools to a radical private school where the kids can choose their own course of study and are never put under pressure to do any work. I talked to the instructors and they said that new students always spend weeks or months moping around and doing nothing, but they all eventually find the motivation inside them to start doing great work.

Another example, from the book The Continuum Concept: There's an autonomous tribal culture, and one guy went away to live in the city for many years. When he came back, nobody asked him to do any work. This tribe has a taboo against even asking anyone to do anything. For months he did nothing, but finally he made a little garden, and eventually he was happily and fully contributing.

What these examples have in common is that the community is made up of people who already know how to self-motivate and give each other slack, and they can absorb a few people who don't know how to live that way, and support them while they learn. But we live in a culture in which almost everyone has been beaten down with carrots and sticks. So, can a group of these people bootstrap themselves into an autonomous internally-motivated community? Maybe we have to do it, as in the above examples, by somehow building a core, and then absorbing new members one at a time.

January 24. TED talk by a guy who tried to build a toaster from scratch. It's interesting in a surprising way. Maybe a team of highly skilled DIY-ers could really do what Thwaites was trying to do, and build a toaster using raw materials from nature and preindustrial methods. But those materials and methods are no longer optimal. Because he was not highly skilled, he figured out that it was easier to use a microwave oven to smelt the iron, and to "mine" the plastic from a landfill. And if you're going to compromise that much, why not just find a toaster in a landfill and repair it? That's how the postindustrial age will be different from the preindustrial age.

February 8-10. The anti-tax movement is a product of increasing complexity. Contrary to popular myth, Americans do not have a long history of being against taxes. Until the late 1970's we liked taxes, because while the Empire was rising, taxes led to government spending that had clear benefits. But when the Empire peaked, government spending fell into diminishing returns. "Huge investments are now needed just to obtain incremental progress." There isn't really a way out, but tax cuts will hasten the collapse.

Of course, as the old system ossifies and breaks, vigorous new systems will grow through its cracks, and they will have better returns on investments. This piece, Great stagnation or external growth, argues that the internet is doing this already. Through sites like Craigslist and Wikipedia, and powers of searching and networking, the internet is providing tremendous value -- but most of it is outside the money economy, so life can get better even as the economy collapses.

March 7. I've had several emails lately from people worried about "government", and I want to do a little rant. We are in the midst of an ancient struggle between central control and... well, the opposite is so complex that if we try to describe it with a single word, that word gets distorted by the forces of control. For example, "liberty" and "freedom" have come to mean the freedom of the powerful to crush the weak under their boots. Even "autonomy" implies individualism, and ultimately, the strong individuals crushing the weak under their boots.

The opposite of central control is a system where all powers are distributed to all: group decisions are made by consensus of all members, and anything that anyone is permitted to do, everyone is permitted to do. So there are no official secrets, no restricted areas, no licenses, no uniforms. There are restraints, but they apply to all. There are people with greater physical and mental powers, but there is no mechanism to leverage these internal powers into external powers written into the system. The only "authority" is when someone is respected for understanding something better than others.

Obviously, we're a long way from building a system like that at a high level of complexity. I think it's going to take us thousands of years. In the meantime, we will pass through many rises and falls of control systems under many guises. It's like a bunch of plagues passing through us until we gain immunity to all of them. And if our immune systems are lazy, they will always be fighting the previous one instead of the new one.

The last round of really bad control systems was in the mid-20th century. The Nazis were the scariest, but the Soviet system was more stable, and over time, more harmful. Two of the most influential political writers of the 20th century, George Orwell and Ayn Rand, were both reacting to the Soviet system. Their reactions, among others, have come down to us as a set of cultural immune system habits: the enemy is big government, existing for its own sake, fed by taxes, building ugly concrete monoliths full of dull-minded bureaucrats, justifying itself through stories of happy people working together.

If you haven't noticed, that system is dead, except in China, where it's not quite dead. The new dominant control system is big money, existing for its own sake, fed by profits, building sparkly glass monoliths full of depressed cubicle monkeys, and justifying itself through stories of strong and free individuals working hard to earn shiny toys. Government remains only as a buffer between big money and all other life, or in some cases, as a weapon that both sides grapple over, each trying to use it against the other. In the third world, soon including America, big money controls government absolutely. But most Americans, stuck in the old immune response, see the hammer of government coming down on them and don't notice who is holding the hammer.

I don't mind the government. It's true that most federal spending makes its way (through bank bailouts and military contractors and medical insurance companies and the mortgage payments of entitlement recipients) to the giant concentrations of money. But my income is low enough to not pay federal income tax, and I expect to be able to keep it that low. It's true that governments are still strong enough to crush anyone who gets in their way, but they're also so slow and predictable that it's easy to get out of their way. Governments are like glaciers, while private interests, unhindered by government, are like fires.

I'm thinking that even big money is dying, because it can't concentrate any more wealth at the top without the bottom falling out. So what's the next phase of control? Don't answer that, but keep your eyes open.

March 4. It's been about four years since I read a piece of fiction I loved so much that I didn't want it to end. The last one was Philip Reeve's Mortal Engines series, and this one is a Japanese comic, Yokohama Kaidashi Kikou. That link goes to the Wikipedia page, and here's a link to read it online. If you've never read any Japanese comics, you need to know that the panels are read from right to left.

The setting is decades or centuries in the future. The oceans have risen, there are overgrown ruins everywhere, and the human population is greatly reduced. The central character is Alpha, a human-like robot who runs an isolated coffee shop, and gradually explores more of her world. What sets this world apart, not only from other postapocalypse fiction, but almost all other fiction, is that there is no conflict! The characters are all nice people, and nothing really bad happens. It's all just beautiful and dreamy, and a bit sad. There's a popular idea that a world without evil would be boring, but clearly we just weren't imagining it well enough. The only thing I've read that's at all comparable is Richard Brautigan's In Watermelon Sugar. Also of interest to transhumanists: the "robots" are not like machines but more like spiritual beings.

March 9. Good Complexity, Bad Complexity. The author points out that technological systems get more fragile as they get more complex, while ecological systems get more robust. You remove one piece of a computer and the whole thing fails; you remove one species from a forest and it's only slightly weaker. From here, he argues that we should all use simple technologies and complex ecologies, the way hunter-gatherers do. But I want to go a different direction:

Why does added complexity make technological systems weaker and ecological systems stronger? How does the complexity know? Clearly we're being confused by language, and what we call "complexity" in a computer is different from what we call "complexity" in a forest. One difference is that a computer is controlled by the CPU, while a forest is decentralized; another difference is that a computer has fixed connections, while a forest can shift its connections around; another is that a computer depends on single components for most of its functions, while a forest is loaded with redundancy.
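The redundancy difference, at least, can be made concrete with a toy model (the structure and all numbers here are invented for illustration): treat each system as a set of functions, each served by one or more providers. A "computer" has exactly one provider per function, a "forest" has several, and we knock out providers at random:

```python
import random

random.seed(1)

N_FUNCTIONS = 30    # components / species (hypothetical numbers)
REDUNDANCY = 3      # in the "forest", each function has 3 providers

def survives_removal(redundancy, n_removed):
    """Remove n_removed random providers; the system works only if
    every function still has at least one surviving provider."""
    providers = [redundancy] * N_FUNCTIONS
    # Enumerate every individual provider, then knock out a random sample.
    slots = [f for f in range(N_FUNCTIONS) for _ in range(redundancy)]
    for f in random.sample(slots, n_removed):
        providers[f] -= 1
    return all(count > 0 for count in providers)

def survival_rate(redundancy, n_removed, trials=2000):
    return sum(survives_removal(redundancy, n_removed)
               for _ in range(trials)) / trials

# A "computer": one provider per function. Losing any part kills it.
print("computer, 1 part removed:", survival_rate(1, 1))
# A "forest": three providers per function. It shrugs off several losses,
# failing only in the rare case that all providers of one function are hit.
print("forest, 3 parts removed: ", survival_rate(3, 3))
```

The computer dies every time it loses a single part, while the forest survives losing three parts almost always. This captures only the redundancy difference; the decentralization and rewiring differences would need richer models.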

I think if you look at the system of hardware and software that runs Google, you'll find it's a lot more like a forest than like a single computer. The internet itself was designed with heavy redundancy -- it's been said that the net interprets censorship as damage and routes around it. With a few more components -- long-range wi-fi, backyard solar panels, garage microprocessor fabs -- the internet could even survive a hard crash.

It's not that our tech system is too complex, but that it's complex in the wrong way. We should be designing technologies to mimic nature. And going back to my last post, we should be designing societies to mimic nature. Our political and economic systems are still so amazingly primitive that they require decisions to come from the center, instead of emerging from everywhere. Empires fall, not because we're at the end of history, but because we're at the beginning.

March 15. Why do geeks love nuclear power? More precisely, using Howard Gardner's theory of multiple intelligences, why do people with high logical-mathematical intelligence like nuclear power so much more than people with high intelligence in other areas? Framed this way, it's an easy question. In the world of logic and numbers and predictable machines, nuclear power is totally safe. Chernobyl doesn't count because, for political reasons, the plant was not designed, regulated, or run correctly. Fukushima doesn't count because, for economic reasons, the plant was not built to withstand an 8.9 earthquake and tsunami. Nuclear power would be perfect if only you stinky primates would obey our beautiful science!

Of course, the anomalies that cause nuclear accidents are not the exception, but the rule. In this post, Stuart Staniford writes, "I have been deeply impressed at the ability of nuclear facilities to act as a multiplier of other kinds of disasters." And Sharon Astyk argues that we should design for failure:

...the costs of building levees to withstand category 5 hurricanes, and deepwater drilling platforms with automatic shutoffs, and nuclear plants in safe zones to withstand higher earthquakes are enormous -- at a time when the US alone has 3 trillion dollars worth of delayed infrastructure work... We seem to be reaching a point where Joseph Tainter's observations on the diminishing returns of complexity become strikingly obvious.

My own argument... is that we should turn it around and presume failure. That is, we should ask ourselves "what strategies are most effective and least risky in failure situations...given that systems failures happen all the time." In this model, distributed systems are less dangerous than centralized ones, and those that give partial return even if projects aren't completed or if models break down are more valuable than those that give more but only if we front-load a huge investment to them. It creates, in the end, a different way of looking at our world...

To be fair, even with accidents, nuclear has killed far fewer people than coal. But even if it were physically harmless, it's still socially harmful (along with most other present sources) because it's centralized, and humans will inevitably turn centralized energy production into centralized political power. And even if they invent "Mr. Fusion" and we all have home energy plants, I still think it would be a disaster, because the more energy we have, the bigger mistakes we make using that energy, especially if the source allows us to become detached from other life. The ideal energy technology is home-scale, home-fixable, integrated with other life, and produces only enough for modest fun and comfort.

March 18. Cognitive Archeology of the West. Paula Hay combines Richard Sorenson's observations of "pre-conquest consciousness" with the "fall of man" story from the Bible, and argues that civilization began with binary thinking.

I basically agree, but I don't agree with what seems to follow: that we will either return to primal consciousness or go extinct. Our invention of detached, either-or, objective, mechanistic, individualist thinking is like many of our other inventions: it leads us to mistakes and tragedies until we learn to use it correctly, and integrate it into a larger understanding. In the future, we might bring out this style of thinking, use it, and put it away, like a carpenter would use a hammer. But at the moment, we've been hammering for so long that we can't even imagine other tools.

The problem is, how do you use rational language to describe extra-rational thinking? If you want to use a hammer to describe a hammer, you just pound it. But to use a hammer to describe a screwdriver, or a saw, or a sailing ship, you have to be extremely skilled with a hammer, or people will just think you're hammering badly.

March 26. Sometimes I feel like I'm in the middle of a war. There are bullets flying and explosions all around, and I'm trying to organize people on my side to fight effectively, and instead they're just standing around saying, "Look, they're shooting at us! I can't believe they're actually shooting at us! Look at those bad, bad people doing that bad, bad thing! Shame on th- (takes bullet in head)"

There's only one place for morality in this world, and that is that your actions must serve the greatest, widest good that you can perceive. Beyond that, it's all strategy and tactics. Applying morality to the actions of other people is a strategic error. I think this error goes back to our tribal ancestors. If one person does something to harm the tribe, the others will use shaming to bring this person into line. If this feels to us like a moral action, it's because it was easier for our ancestors to mindlessly throw righteous indignation at the wrongdoer, than to carefully discern why a behavior is harmful and how shaming will correct it.

Now here we are in a world of high tech and giant systems, still reflexively following the habits of the tribe, possessing a magical tool that sends words and pictures around the world in seconds, and wasting it by pointing fingers at governments and corporations and their human servants, as if these unimaginably complex systems, berated by a shrill minority, will bow their heads and obey like little children pressured by everyone they've ever known.

But there were things in our ancestral environment that behaved very much like governments and corporations and banks and mass collective short-sightedness: storms, floods, avalanches, disease epidemics, fires. As Thaddeus Golas wrote in The Lazy Man's Guide to Enlightenment: "When your consciousness is open, any action you take in reference to evil has no more significance than digging a ditch to channel floodwaters away from a house." If we think of modern "evils" as amoral forces of nature, and not as wayward family members, we begin to see how to deal with them.

The corporation can be influenced through the boycott, which modern lefties have tragically misunderstood as an individual action aimed at avoiding guilt. The correct way to do a boycott is to organize a block of customers of a business, large enough that without their money the business will fail. Then you make a precise demand, and you all withhold your money until the demand is met. Likewise, a government will not be influenced if every one of its citizens shouts disagreement, but it will be influenced if you can make a precise demand, and organize enough people to stop the government from functioning until that demand is met.

But suppose the damage is being done by almost everyone, through popular behaviors that feel meaningful and have no clear alternative. Suppose the harmful systems are so bound up with perceived benefit and necessity that you cannot organize enough people to stop them. What do you do when a flood is too big to save the town with channels and sandbags? You evacuate. But now my metaphor is breaking down, because I'm not telling people to head for the hills, or to take their money out of the credit union, or even to stop voting. Go ahead and participate in the system, but your strategic goal is no longer to turn the system around and save everyone. Rather, it is to guide the system through its collapse.

next archive