Ran Prieur

"Look at the sunset from the sun's point of view."

- Steven Wright



December 19. Two Hacker News comment threads, one from today and one from two years ago, on the same 2017 Twitter thread: Almost everything on computers is perceptually slower than it was in 1983.

December 16. Posting will probably continue to be light through the holidays. Today I have some thoughts about wealth and power. I was reading an article about wealth inequality, and remembered my critique of the word "privilege". I've said that it points to two different kinds of things, but now I can see three. First is things that some people have, that everyone should have, like not going hungry, or being free to travel.

Second is things that some people have, where it's okay that only some people have them. Would you rather live in a mansion or a cottage? Would you rather go to a five star restaurant or a dive bar? It's like asking your favorite color. Just because something costs more money, doesn't mean it should be universally available, or unavailable.

The problem is that money is closely related to the third category, the thing that some people have, that no one should have: power over others. And that's the real problem with wealth inequality: that our whole society is built to make us all NPCs in the game of leveraging money into more money, where only one in a thousand can play.

I was reading an article about personality differences between men and women, which tries to describe some differences without interpreting them as either biological or cultural. I think most of the described differences are cultural. Women are more "compassionate, polite, anxious, self-doubting" because they're still emerging from thousands of years of having lower social status.

Right now, the phrase "women's voices" implies voices of the oppressed, voices of the excluded. Only when it no longer has that meaning, will we know who women are.

December 11. Two videos of innocent psychedelia. First, from the subreddit, a cover of the Crystal Castles song Untrust Us, performed by the Capital Children's Choir.

And Gabriel sends BIG BIRD (Terence McKenna). Someone has compiled more than eight minutes of McKenna's best spoken word bits, and lip-synced it to a video of Big Bird taking children on a tour of a farm. My favorite bit:

The unspeakable is the true domain of being. And then within that, there is a very small subset of those things which can actually be captured in language. Mostly it's all mystery.

Coming in the wake of the new video about me, I'm impressed by McKenna's skill as a speaker: his word choice, his careful pace, and his inflections. And I wonder if some of my words would sound better from the mouth of a muppet.

December 9. A year and a half ago, a fan and a cameraman came to make a film about me, and it's just been posted to YouTube, a short doc about Ran Prieur.

I was a little afraid to watch it, but I'm really happy with it. It's weird to see myself from the outside. It reminds me of that Far Side comic, where these two guys are listening to a tape recording of themselves, and saying, "Wow, we sound like total dorks." And the joke is, they are total dorks.

Another thing I noticed is how much happier I am when I'm talking about writing fiction. It's a lot more rewarding than writing this blog, but it's also a lot harder, and has an even smaller audience.

Now I'm thinking about fame. Our culture tells us that fame is an accomplishment, when really it's a lifestyle choice. The difference between the famous and the unknown, is not how good they are at what they do (except athletes). The difference is that some people channel their skill in a way that gives them shallow connections with a lot of people. And unless you're someone like Tom Cruise, I think that's a mistake.

Early in the doc I mention Emily Dickinson. I think, for an introvert, she did it exactly right: she wrote without compromising for an audience, she never had to deal with fame, and people are still reading her stuff more than a century and a half later.

I don't think of my own writing as self-expression. I think of it as something that was always there, and I was just the first person to find it. It's like I'm colonizing a planet, and I'd like to eventually hang out with other people who have come to live there.

December 5. So I just tried the hot new apple variety, Cosmic Crisp, and they're for real. I've never been a fan of Honeycrisps. They're crunchy and sweet, but they lack the tartness and denseness of a great apple. The best eating apples are all russets, but you can't buy them in stores because their skins are not shiny. This actually goes back to Monday's subject: the bigger the crowd, the harder it is to get them to buy apples that are better on the inside than the outside.

But now, consumers don't need to be smart, because Cosmic Crisps are pretty on the outside, and dense and full-flavored on the inside. I just did a taste test against Fuji, which is no slouch, and it wasn't even close. They're also really expensive -- the other night I paid almost $10 for five of them. But in a few years, as the supply increases, prices will come down, and the Cosmic Crisp will drive a lot of varieties off the shelves.

Everyone knows the worst apple is Red Delicious, but a hundred years ago they were really good. What happened was genetic drift, reinforced by the values of industrial farming, so the apples got gradually cheaper to grow and ship, with worse flavor, but still pretty. The same thing might happen to Cosmic Crisp, and in another hundred years we'll need a new apple.

December 2. Smart blog post by a long-time reader, Idea strength, cringe, and the media environment. The basic idea is that technology has connected us so much that creative work is taking fewer risks:

When artists become more socially connected to each other and to consumers, bold choices get riskier. Every mind in a perceptual network is a vector for cringe. As the network grows more interconnected, the potential for cringe increases. An artistic risk that might have been low in a less connected environment becomes high.

This reminds me of a bit in this YouTube talk, Why Greatness Cannot Be Planned. It's by the creator of Picbreeder, a brilliant image-breeding site that no longer works because web browsers can no longer run Java. Anyway, he found out that images bred by individuals are much better than images bred by voting.

One of the comments explains it like this: "Consensus-driven frameworks prematurely optimize and miss the necessary low-fitness stepping stones needed to find creative complex solutions." In simpler language: you have to go through difficult stuff to get to good stuff, and the bigger the crowd, the harder it is to get them to go through difficult stuff. Another example is Hollywood test screenings, which polish out anything really good because someone thinks it's too weird.

But when I think about it more, there are two different things going on here. One is what I've just described, the blandifying effect of the crowd. The other is the long tail of taste: with more creative work available, there's more room to like stuff that fewer people like.

In listening to music, I've gone a long way down that rabbit hole, and the difficult thing is not in how I hear the music. It's not like I'm forcing myself to listen to stuff that sounds bad until I like it. The music always pulls me in, and the difficult thing is the loneliness of loving something that no one else understands. Or, the obstacle to exploring the long tail of taste, is not perceptual but social.

November 29. A few stray links. First, an interview with the author of McMindfulness:

Corporations in the U.S. are losing approximately 300 billion dollars a year from stress-related absences and seven out of ten employees report being disengaged from their work. So there's certainly a problem... The remedy has now become mindfulness, where employees are then trained individually to learn how to cope and adjust to these toxic corporate conditions rather than launching kind of a diagnosis of the systemic causes of stress not only in corporations but in our society at large.

Rex sends a 52 minute video, The Plant Ecology of Concrete, Garbage and Urine. As described by one of the comments: "Imagine a guy with a camera walking through homeless people territory and a dump place, getting all scientific about plants." Also with lots of fun and cynical social commentary.

A new Paul Graham post, The Bus Ticket Theory of Genius, arguing that the most important factor in doing great work, is to be obsessed with something with no expectation that it's useful, like people who collect old bus tickets.

This week I got obsessed with making a video of an obscure song, using images from two classic books of sci-fi paintings. I've just uploaded it: Wireheads - Sagan.

November 26. I'm taking the week off from blogging. There are some Thanksgiving-related recipes on my old misc. page.

November 21. So I just found out that the original Blade Runner movie was set in November of 2019. I've been toying with this idea, that the best way to keep AI from doing bad stuff, is to make every AI intelligible, to humans, as a person. The idea comes from science fiction, where you might have computers or androids who don't behave quite like humans, but they're still characters. You can still get a grip on their behavior with the same mental tools that you use to understand other humans. We might need this constraint on AIs, because we know how to think about people, but we don't know how to think about whatever Google is doing.

Related: a blog post by Adam Elkus, The Varieties Of The Technological Control Problem. Near the end, he uses the movie Ex Machina to illustrate how our relationship with technology is like an abusive couple:

All abusive relationships begin between two individuals that believe they are both in love and that they can meet each other's varied needs. But over time several negative things occur. First, the parameters of the relationship are subtly changed to the disadvantage of one of the parties. Second, that party becomes less and less capable of recognizing what is happening to them and breaking free of the abuser. So perfectly independent and emotionally stable men and women can in theory become shells of their former selves after being trapped in an inescapable web of abuse that, sadly, they come to believe that they deserve. This is a good metaphor for [Langdon] Winner's own formulation of the control problem. Technology can be "autonomous" in the sense that humans may enter into relationships with technologies with the goal of improving their lives. However in reality people end up finding themselves hopelessly dependent on the technologies and deprived of any meaningful control or agency over them.

New subject. It's getting close to the end of 2019, and I'm ready to list my favorite music of the decade. I have a chronological 2010's playlist on Spotify, but two of my top three songs are on neither Spotify nor YouTube. Those two also happen to be from my two favorite albums of the decade, which you can find from the Bandcamp pages. The top three:

The Lovely Eggs - Wiggy Giggy

Wireheads - Holiday

Big Blood - A Watery Down Pt II

November 18. Today's subject: feeling bad, starting with a reddit thread cross-posted to the subreddit: Suicide hotline is bullshit. Apparently this is only a problem in the USA, where unhappiness beyond a certain level is criminalized, and if the system finds out you're suicidal, they will make your life much worse.

That's why so many depressed people pretend they're happy. You can find a lot of examples in this thread: How does your depression manifest in ways that non-depressed people wouldn't expect or understand?

Related, from a couple weeks ago, How would you explain anxiety to someone who rarely deals with it?

And a smart article on deep brain stimulation as a cure for depression. This reminds me of a bit from the BBC documentary, Pain, Pus and Poison, where the pioneers of anesthetics had to overcome the belief that it's impossible to cheat pain.

Finally, a fascinating article from the Guardian, I wish I'd never been born: the rise of the anti-natalists. There are different levels of anti-natalism, which I would define by how far people go in projecting their own unhappiness onto others. The first level is just not having kids of your own, because you think they're better off not existing. The next level is wanting all of humanity to go extinct. The next level is the belief that "all sentient beings should be spared from life."

Never mind the difficulty of defining "sentient", I'd like to hear a debate between a full-on anti-natalist and a squirrel. (It's easier for me to imagine a talking squirrel than a genuinely happy human.) I would say, isn't life full of good feelings that make the bad feelings worthwhile? The anti-natalist would say, those are two different orders of feeling, you can't compare them, and any amount of pain makes existence a bad idea. The squirrel would say the opposite: life contains a lot of pain, but it's all part of the joy of being alive.

So the real cause of anti-natalism isn't pain, but the absence of pleasure. Now we're tripping over language again, because "pleasure" implies stuff like eating ice cream and watching Netflix, which are overwhelmed by serious depression and anxiety. We don't have a word for the kind of good feeling that would overwhelm common bad feelings, because that kind of good feeling is not part of our way of life.

November 14. Why Technologists Fail to Think of Moderation as a Virtue, a smart review of a new book about artificial intelligence. There's a famous thought experiment called the paper clip maximizer, and Elon Musk tells a version about a strawberry maximizer, an AI that eventually "blankets every nook and cranny of the planet with strawberry fields and annihilates civilization in the process."

But the review cites sci-fi author Ted Chiang, who says the tech executives are projecting their own value system, every company trying to maximize growth at all costs. We're worried about computers, when corporations are already powerful and dangerous artificial beings.

I would argue it like this: an actual strawberry maximizer is almost harmless, because we understand what it's doing. It has a clear and simple motive, and the moment it destroys something we care about, in order to grow more strawberries, we'll stop it. But nobody understands what Google is doing, not even the people who work for it. It doesn't even make sense to say Google has a motive. It has behaviors, and consequences, and its behaviors are tied in with our own interests, so that it can do a lot of damage before we get serious about stopping it.

You could say the same thing about automobile traffic. A human creation has duplicated itself so successfully, that you can find it everywhere there are roads. Whole cities are made for it. It has made us completely dependent on it, while violently killing us, making us sick, isolating us socially, consuming massive resources, and throwing our planet's climate out of whack. Strangely, no one loves this radical contraption more than self-described conservatives (who have also found the one good use for it: racing).

But we don't call it an AI, because it's not intelligent in any way that we recognize. Its intelligence is hidden inside us. Our own simple motive, to go faster, has tricked us into doing all the thinking for a global takeover by an evil robot army.

By the way, our motive to understand this subject has led to a really unsatisfying discussion over on the subreddit. I think the deeper problem is, the human-built world has been so transformed by cars, that it's difficult to imagine a world where no one would miss them, especially one with other modern technologies.

But I think it's going to happen, not through utopian planning, but because maintaining roads is really expensive. In my own state, the roads are about to go to shit because of a populist revolt against car taxes. In a hundred years, populations will be falling everywhere, and places with more pavement than people will get abandoned.

November 11. Speed limits for ships can have massive benefits. This reminds me of an argument by Ivan Illich, that the world would be a lot better with a universal speed limit of 15 miles per hour.

Here's my own page of excerpts from Ivan Illich on Cars, where he calculates that Americans put so much time into their cars, including working to pay for car expenses, that their effective speed is only 5 mph. Also, the walkability of a city is a key factor in the social mobility of its residents. And why are cars killing more and more pedestrians? Probably because cities are increasingly being built for more, faster, and heavier vehicles.
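Illich's arithmetic is easy to reproduce. A minimal sketch, assuming his oft-quoted figures of 1,600 hours a year devoted to the car (driving, plus working to pay for it) and 7,500 miles traveled; both numbers are his estimates, not mine:

```python
# Effective speed = total miles traveled / total hours devoted to the car,
# where "devoted" includes driving time and the work hours that pay for
# gas, insurance, payments, and so on.
miles_per_year = 7500   # Illich's figure for the model American
hours_per_year = 1600   # driving + earning the money the car consumes

effective_speed = miles_per_year / hours_per_year
print(f"Effective speed: {effective_speed:.1f} mph")  # about 4.7 mph
```

Which rounds up to the 5 mph figure above.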

Of course, even moderate transportation reforms are politically impossible. So I might as well go full-on utopian. Instead of limiting speed, I would limit momentum: mass times velocity. Say, 2000 pound-miles per hour, or about 400 kilogram meters per second, roughly the momentum of an average-sized person on a bicycle. Old and sick people could still putter around on electric wheelchairs. Shrink the roads to trails, abandon the sprawl, turn the parking lots into food forests or high density housing, turn the railroad tracks into intercity trails.
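The unit conversion behind that cap is quick to check. A sketch, using standard conversion factors; the 60 kg rider is an illustrative assumption:

```python
# Convert the proposed momentum cap from pound-miles per hour to SI units.
LB_TO_KG = 0.45359237    # kilograms per pound (exact definition)
MPH_TO_MPS = 0.44704     # meters per second per mph (exact definition)

cap_lb_mph = 2000
cap_si = cap_lb_mph * LB_TO_KG * MPH_TO_MPS
print(f"Cap: {cap_si:.1f} kg*m/s")      # about 405.5 kg*m/s

# Compare with a cyclist: a 60 kg rider doing the 15 mph speed limit.
rider = 60 * (15 * MPH_TO_MPS)
print(f"Cyclist: {rider:.1f} kg*m/s")   # about 402.3 kg*m/s
```

So the cap and the cyclist land within a percent of each other.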

With no heavy freight, all manufacturing would be local, and smaller scale. Every city would look different because new construction would have to be made out of local materials. Food would be local, so some cities would really need to innovate with greenhouses or algae or solar-powered protein fabricators. (This is not a low-tech utopia, only a slow one.)

Air travel would switch completely to craft that are lighter than air, and you could actually go a lot faster than 15 mph, on an airship that follows the wind. Anyone who wanted to go east would be waiting for an east wind. (I sort of already do this. There's a town eight miles east of here, with a good bike trail, and I only go when there's an east wind, so that the ride home is effortless.)

Now, let's get really crazy and put a speed limit on data. I think the best rule would be that data can only be carried on physical media by human couriers. That could be enforced by cable-cutting, signal-jamming, and shooting down drones, and big systems would have no advantage over independent agents in motivating couriers to move fast.

The social effects of slowing data to human speed are really interesting, and too big for this post. But I think both the right and left would get on board if they thought it through. And it could become politically realistic inside a thousand years, especially if there's a backlash against big data.

November 7. Some loose ends from the last post, mostly stuff I wrote over email:

I cringe a bit when I use the phrase "local communities", because when politicians use that phrase, they're always bullshitting. To actually bring back local communities, we would need politically impossible reforms. Or a tech crash.

I'm wondering if the breakdown of local communities could be explained purely by cars and telephones, which enable us to have a larger number of more far-flung and weaker human relationships. And of course the internet pushes that even farther.

If people have more money than time, they are forced to use money as the medium of social connection. Only when they have more time than money, can they connect through understanding.

Erik mentions "mental grey goo". Grey goo is the doom scenario with self-replicating tiny machines, and some mental artifacts are like that. They're very simple and they can take over our minds. So, money, internet memes, and certain religious or political ideas.

Maybe all it takes to get people interested in local politics, is to fill it with the same ingroup-outgroup drama as national politics, and then our cities and towns would become as fucked up as our country.

Some music for the weekend. My favorite album of 2019 is by a noise band called Sly and the Family Drone. Their instruments are drums, feedback, and baritone sax, and their sound is basically space jazz, ranging from ambient to heavy in the same track. Their best song, Jehovah's Wetness, has a climax that sounds like whale metal.

November 4. A few weeks ago I wrote that the thought of my own death gives me a sense of relief, because I would be free of all my responsibilities. This goes back to my favorite question lately: Why is there so little overlap between what's good for us to do, and what we feel like doing? Then I was reading Matthew Crawford's book Shop Class As Soulcraft and found a clue in this line: "We want to feel that our world is intelligible, so we can be responsible for it."

Coming at the same subject from another angle, a friend writes:

I am reading some family stories of my 92 year old neighbor, whose father and grandfather are prosperous farmers. They write a lot about the vital importance of being a prominent member of the local community. Not prominent in status or power, but prominent in the ability to help, to meet needs, in their local communities. It's their obligation, but also, their honor. They don't buy seeds from the fancy store far away for half the price; they buy seeds from the local guy, because that is what a community is all about.

Why did we stop doing that? I reject any kind of moral judgment, that people were better in the old days. People are the same as ever, we always do what seems like the best thing at the time, but our environment has changed so that abandoning local communities seems like the best move.

For the answer to both questions, I blame technological complexity. In a hunter-gatherer tribe, or a medieval village, or even the USA a hundred years ago, the human-built world was intelligible to you and your friends. You could wrap your head around the importance of whatever you were doing, and if something went wrong, you knew someone who could fix it.

Now the human-built world is so complex that you can't possibly know enough people to stay on top of it. We have to constantly deal with specialists who we might never talk to again. And the specialists, even if they're doing something useful, are doing the same thing over and over for strangers, so they're not really into it. And at the same time, we're supposed to be super-nice to each other and pretend to be happy, which means hiding our gnawing awareness of how many things could go wrong, that we have no idea how to deal with.

Lots of people have written about the costs of complexity. Joseph Tainter's book The Collapse of Complex Societies is mostly about physical stuff rather than human psychology. However you frame it, three things are certain: 1) More complexity, more problems. 2) It's easy to gradually raise complexity, and really hard to gradually lower it. 3) So when complexity falls, it tends to fall a lot.

Usually the way this happens, is that people withdraw their emotional support, and then their practical support, from old, overly complex systems, and give it to new and simple systems that make them feel better. Beyond that, I have no idea how this is going to play out, but I'm curious to see it.

I don't do an RSS feed, but Patrick has written a script that creates a feed based on the way I format my entries. It's at http://ranprieur.com/feed.php. You might also try Page2RSS.

Posts will stay on this page about a month, and then mostly drop off the edge. A reader has set up an independent archive that saves the page every day or so. I've archived the best stuff, and they're all linked from the old stuff page. Below are the newest archives:

November 2016 - February 2017
February - April 2017
May - August 2017
September - November 2017
December 2017 - March 2018
April - June 2018
July - September 2018
October - November 2018
December 2018 - January 2019
February 2019
March - April 2019
May - June 2019
July - August 2019
September 2019 - ?