Archives

February 2020 - ?

home
previous archive

February 6. Can social technologists solve the atomization problem? The author does a great job framing the problem. Condensed:

The structure of the problem is not man vs machine. It is actually a market-driven process that concentrates society's top cognitive talent on the engineering problem of how to best undermine an individual's agency. It's not a fair fight. We've all been taught that we're sovereign individuals gifted with full agency and capable of choosing what's best for ourselves at any given moment. But this doesn't describe the world as it actually exists.

I think his solutions and predictions are off base. They're all about communities finding ways to limit the use of technology. But it's not clear that technology is making us unhappy. I believe that's what's happening, but it's hard to prove, and it doesn't feel that way. We love our devices, and hate the world.

Here's how I see it playing out. First, suicide acceptance. I was watching that Cheer documentary, and there's a bit where someone says, "If you don't like it, there's the door." It occurred to me, nobody says that about life. There's a door, but we don't talk about it, and trying to go through it is illegal. So I expect the dominant culture to have stronger anti-suicide messages, while underground movements become bolder in supporting suicide for even healthy young people.

By the way, my argument against suicide is that the people who want to kill themselves are the same people who intuitively sense how much better life could be, and they're the ones we need the most.

Second, the continuing growth of tribalism, which I define as identification with a group, where the group identity is based on conflict with some other group. It's like a correction against systems that do a bad job of providing meaning, because ingroup-outgroup violence is a source of meaning that's strong and simple and always waiting under the surface.

Third, even deeper immersion in technology, and I'm not necessarily against it. I frame it like this: Nature, good; human-made physical world, bad; human-made imaginary worlds, good. The problem is, who's going to do the grunt work if we're all gaming? In the best-case scenario, we learn things from imaginary worlds that show us how to make the physical world better.

What's probably really going to happen is that today's radical threat becomes tomorrow's new normal. We'll just get used to the burden that pocket computers put on mental health, and in another 20 years, we'll all be talking about the threat of biotech.