If something is 100 percent functional, it is always beautiful... there is no such thing as an ugly nail or an ugly hammer but there's lots of ugly cars, because in a car not everything is functional... sometimes it's very beautiful, if the person who designed it has very good taste, but sometimes it's ugly.
This image is from a 2015 article, Wonderful Widgets. The one on the left, though functional, is totally ugly. But the next two are more beautiful, and that beauty has been achieved purely by making them more functional.
This is what I mean when I say that any sufficiently advanced technology is indistinguishable from nature. Human civilization is still like the widget on the left, and in ten thousand years, it might be like the widget on the right.
So human-made ugliness doesn't just come from bad taste -- it also comes from clunky functionality. And I think there's a third cause. From the footnote on the McMaster-Carr article:
Jeff Bezos is an infamous micro-manager. He micro-manages every single pixel of Amazon's retail site. He hired Larry Tesler, Apple's Chief Scientist and probably the very most famous and respected human-computer interaction expert in the entire world, and then ignored every goddamn thing Larry said for three years... Bezos just couldn't let go of those pixels, all those millions of semantics-packed pixels on the landing page. They were like millions of his own precious children.
A lot of the human-made world is designed neither for functionality nor beauty, but for human social reasons, for status and ego. Consider the lawn. Lawns are both less beautiful and less functional than a lightly tended assortment of locally adapted plants. And yet, people spend massive time and resources on lawns, because lawns are a symbol of the cultural drive to impose control.
Our world will continue to be ugly until we change our culture, so that we feel better about allowing things to be good in their own way than about making them be the way we tell them to be.
Intelligence is always specific to the application. Intelligence for a search engine isn't the same as intelligence for an autonomous vehicle, isn't the same as intelligence for a robotic bird, isn't the same as intelligence for a language model. And it certainly isn't the same as the intelligence of humans, or of our unknown colleagues on other planets.
If that's true, then why are we talking about "general intelligence" at all?
These were students who had eaten enough frogs to get into Princeton and Harvard. Their reward was -- surprise! -- more frogs. So they ate those frogs too. And now they're staring down a whole lifetime of frog-eating and starting to feel like maybe something, somewhere has gone wrong.
There's also good stuff in the Hacker News comment thread. But missing from both is any critique of industrial capitalism. For hundreds of years, machines have been doing more stuff; and when making decisions about whether to replace human workers with machines, the guiding principle has been making money, rather than arranging society so that we enjoy what we're doing.
Mechanization justifies itself with the assumption that useful physical tasks are all tedious chores, which is not at all true. A good book on this subject is Shop Class as Soulcraft by Matthew Crawford.
In thinking about tasks that we should or shouldn't build our lives out of, I've been framing it in terms of tasks we enjoy or don't enjoy. That's not wrong, but this blog post, On being tired, mentions a framing I find more useful: tasks that give back energy vs tasks that drain energy.
This idea gives me the leverage to critique a framing I find less useful: tasks you believe in, vs tasks you don't. That's why I failed as a homesteader. Even though I strongly believed in self-sufficient low-tech living, it turned out that almost all of the actual tasks drained my energy. (The only one that didn't was throwing sticks into piles.)
The culture of motivational speaking assumes that your belief, your attitude, your aspiration are all-important. I think those things are like jump-starting a battery. Then, if doing the actual tasks doesn't give you energy back, your battery is going to die again.
Two related links. Countering the Achievement Society is about reinventing schooling so that it's not about joyless accomplishment, but having the free time to find your place in the world.
And A new way of life: the Marxist, post-capitalist, green manifesto captivating Japan is about how much better life will be if we give up economic growth.
This AI that is coming into existence is, to my mind, not artificial at all, not alien at all. What it really is, is: it's a new conformational geometry of the collective Self of humanity.
Now, I don't know what "conformational geometry" means. It sounds like a fancy way of saying form or shape. But I think he's right. The best way to think about AI is to think of it as human. AI will never go rogue, or "become sentient". It will always do exactly what humans tell it to do -- which will never be quite what we want it to do, and increasingly, not what we expect. But it remains fundamentally an extension of our own story.
Meanwhile, here's a comment thread on the Seattle subreddit about something that's actually non-human, the intelligence of crows.
They see humans give other humans things and get food in return, but don't quite equate that only specific things count. I've seen them try to feed leaves and bits of paper to a vending machine before in hopes of persuading it to give up snacks.