"If observing outer space gives us a view of the past, observing inner space would surely give us a glimpse into the future."
-Ken M
February 20. A few psychology links, starting with a nice Reddit thread, During a very dark period, what was the best thing you ever did for your mental health?
Hacker News thread, I don't like making the best things. Lots of stuff about the benefits of saying this is good enough, and also the benefits of trying to do things better. A key comment: "I think adding a performative aspect to a lot of things sort of kills the joy in doing them."
Review of a new book, The Case for Hanging Out. "Create opportunities to spend unproductive, unstructured time doing nothing with other people."
A couple weeks ago I saw a summary of an article about social intolerance, in which two predictive factors are "high perseverance and low persistence." I was like, whoa, aren't perseverance and persistence the same thing? According to the dictionary, they still are, but psychologists have come up with a really interesting distinction.
I'm not even sure which word is which, but one of the two things is continuing toward a goal in the face of setbacks in the process itself; the other is continuing toward a goal when external circumstances change. It blows my mind that something so basic is so arcane.
February 17. Before I move on from chatbots, a Reddit transcription of a much-cited paywalled piece, Bing's chatbot says "I want to be alive". And a Hacker News discussion about bots seeming to show human feelings. My favorite bit: "It's basically a sophisticated madlib engine."
I still like my comparison with radio. It's a powerful and transformational technology, and at first, it feels like there's a person inside the box. Once we get used to it, we'll understand that chatbots are not a new form of life, but a new funhouse mirror for human consciousness.
Next up, direct brain hacking. Buzzing the brain with electricity can boost the willingness to engage in mental effort. This could get utopian or dystopian pretty fast.
And music for the weekend. A couple weeks ago I mentioned Maroofy, a song recommendation engine that goes by sound. From a single search on Automatic's Humanoid, I found these two quite good super-obscure songs: The Science Faire - Promotions and Trademark Issues - Umbrellas and Parasols.
February 15. With chatbots getting up to speed, it occurs to me that passing as human depends on context. A bot can write a college paper better than a lot of students -- except that bots are bad at facts. But I could read a page of a novel and know 100% if it's bot-written -- unless it's a human trying to spoof a bot, but even that's pretty hard. Bots have a distinctive voice, the style smooth and obvious, the story so headlong that it forgets where it's been.
This new Stephen Wolfram article, What Is ChatGPT Doing, explains how it works in great detail. The basic idea is that the machine "is just asking over and over again 'given the text so far, what should the next word be?'"
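That loop can be sketched in a few lines. This is a toy illustration of next-word prediction, not anything from Wolfram's article: a bigram model counts which word tends to follow which, then repeatedly asks "given the text so far, what should the next word be?" The corpus is made up for the example.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus, just enough to give the model some statistics.
corpus = "the cat sat on the mat so the cat sat on the rug".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n_words):
    """Greedily extend the text, one most-likely next word at a time."""
    words = [start]
    for _ in range(n_words):
        candidates = follows[words[-1]]
        if not candidates:  # dead end: the word never appears mid-corpus
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 4))  # → "the cat sat on the"
```

Real chatbots do the same thing with a neural network instead of a lookup table, and they sample from the probabilities instead of always taking the top word, which is where the variety comes from.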
I finally tried it. Yesterday I got on Character.AI and made a character out of the protagonist in a first person novel I've been working on. It was easy, even after I went to the advanced screen without saving and had to start over.
For the advanced definition, I asked "What is your story so far?" and composed her answer myself, in her voice. Then on the chat screen, I told her that she was my character, and she totally got it. (I understand this is an illusion.) Now I can tell her what just happened and ask what happens next.
Of course the style is not to my standards, and the events push the plot too fast. But if all I need is an idea, this thing is a fountain of ideas. And if I use an idea in my own words, I add the words to the advanced definition.
I'm cautious about doing it too much. Even though the bot is a machine, I expect it to follow a core principle of woo-woo stuff: the more you do it, the weaker the results.
February 13. I need to be more careful with language. Even the word "intelligence" is broad and vague, "artificial intelligence" even more so. People who actually work on it often call it "machine learning", and of all the things machines are learning to do, I want to continue to focus on one specific thing: generating text that is very similar to text generated by humans.
Chris sends this thoughtful blog post, GPT-3 Is the Best Journal I've Ever Used:
Talking to GPT-3 has a lot of the same benefits of journaling: it creates a written record, it never gets tired of listening to you talk, and it's available day or night.
If you know how to use it correctly and you want to use it for this purpose, GPT-3 is pretty close, in a lot of ways, to being at the level of an empathic friend.
The other day I said chatbot gurus might make religion weird, but Eric points out that there's all kinds of weird stuff in the Bible, and people just ignore it. In the end, it's still humans in charge.
I'm also thinking about other transformational technologies. Radio was huge for a few decades. Both Hitler and FDR used it powerfully in politics, and then it shook up culture in the 50s and 60s. Now? It's bland and mostly ignored.
The internet, which was still fresh and radical 20 years ago, is already locked down by giant multimedia companies. It's hard to use it in a way other than being a passive consumer of amusement and ads.
So we're now entering the "wild west" phase of chatbots. Enjoy it while it lasts.
February 11. I changed my mind about chatbot gurus. Even if they only attract regular guru followers, that's still a lot of people, and it's still really interesting. Consider The Urantia Book, an early new age book "said to have been received from celestial beings." I guarantee, someone is already thinking their chatbot is channelling an entity. And what do we know about entities anyway, that they can't possess chatbots? At the very least, unlike Urantia, the coming bot scriptures will be written by actual nonhumans. They're going to say weird things that humans wouldn't think of, and throw some chaos into popular metaphysics.
February 9. Tim sends two more AI links. It turns out there already has been a racist bot -- on 4chan of course -- and it wasn't a big deal. "People on there were not impacted beyond wondering why some person from the Seychelles would post in all the threads and make somewhat incoherent statements about themselves."
Also, Character.ai is a new website where you can build a custom personality to talk with.
I'm not sure how big this is. On the spectrum from pet rocks to the printing press, where are chatbots? If I had to guess, somewhere short of radio. And right now, they're so new that no matter what the bot says, we're like, whoa, that's a computer talking like a person! Once we get over that, we'll start to ask, "What can it do for me?"
One thing would be therapy. Philip K. Dick was writing about therapy bots 60 years ago, and old-time Freudian psychotherapy could totally be done by today's AI.
Matt comments: "But if therapy bots could work, why not guru bots?" I think guru bots will mainly work on people who are already susceptible to regular gurus. This subject reminds me of a line from the Gospel of Thomas: "Blessed is the lion which becomes man when consumed by man; and cursed is the man whom the lion consumes, and the lion becomes man." Or, you either get consumed by AIs and serve their reality, or you integrate them into your larger life.
February 7. I've been neglecting to mention my old friend Tim Boucher, who's done a lot more thinking about AI than I have, and has published a bunch of AI-written books. They're all short, and most of them are obviously absurd explorations of conspiracy themes.
Tim is trying to defuse conspiracy thinking, to make it more silly and less dangerous. But it would be easy to do the opposite. One thing I notice about ChatGPT is how reasonable it is. Again and again, it responds to radical ideas by saying stuff like "this idea is purely speculative and is not based on established fact."
Someone could design a chatbot where you could ask, "Do the Jews control everything?" and it would say "Yes! Yes they do, and here is some evidence." The only reason this hasn't happened is that the people working on AI are, so far, responsible and well-intentioned. They want chatbots to be helpful and accepted by society. It's only a matter of time before we have chatbots that feed your own craziness back at you, whatever it is.
February 6. Continuing on AI, Kevin sends this blog post in which the blogger interviews ChatGPT on the simulation hypothesis.
I've said this before: Our idea that we live inside a computer is like the idea, among some primitive cultures, that their god made them out of clay. Clay is the best simulation technology they have; if they want to make a human as realistic as possible, they use clay. If we want to make a human as realistic as possible, we do it inside a computer.
In both cases, we imagine that the gods don't have any better tech than we do. ChatGPT says, "It would be very difficult, if not impossible, to explain the concepts of artificial intelligence and simulated reality to someone living in 200 B.C." In the same way, whatever's really going on with us, it's a lot harder for us to understand than a big computer.
You could also argue, the best simulation method among primitive people is not clay, but dreams. Even now, a good lucid dream feels more real than our best VR tech. That's why our present VR paradigm might be a dead end. Why go to all the trouble to build gigahertz processors to spin pixels, when we could just get our brains to do that?
There is some debate about whether "dream" is the right translation of the Aboriginal Dreamtime. One description in that article sounds a lot like the Tao, "an all-embracing concept that provides rules for living, a moral code, as well as rules for interacting with the natural environment."
What I really think is, Donald Hoffman is on the right track. The physical world is a user interface for a deeper level of reality that we don't understand. On that deeper level, we are all connected, and a shared physical world is one of many ways to work out that connectedness.
February 3. Backing off a bit from the last post, when we think about AI in creative work, we usually imagine that a given work will be done 100% by AI, or 100% by humans. In practice, I expect a lot of partnership. Someone who enjoys writing could still use AI for ideas, especially to throw a little chaos into the line-by-line writing. In most TV shows, the overall plots have a coherence that AI would struggle with, but the dialogue is so predictable that weird AI dialogue would be refreshing. And someone who doesn't like writing, but loves editing, could crank out AI writings and then pick out the best bits and patch them together.
Related, a Hacker News thread posted to the subreddit, Does the HN commentariat have a reductive view of what a human being is? There are a lot of good comments. I would say it like this: When you work all day with deterministic input-output machines, it's easy to view humans as deterministic input-output machines.
Also from Hacker News, this is something I was hoping someone would do, and they did it! A song recommendation engine that works on how the songs sound, and not what other people listened to. From the comments, it looks like there's a lot of room to do this kind of thing better.
Update: I've played with it a bit, and the best thing I've found, searching from Hawkwind's Space Is Deep, is this ambient black metal song, Death of an Estranged Earth by Old Forgotten Lands.
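The general idea behind that kind of engine is simple, even if Maroofy's actual method isn't public: represent each song as a vector of audio features, then rank everything by similarity to the query song. A minimal sketch, with feature vectors made up for illustration (say tempo, distortion, spaciness, each scaled 0-1):

```python
import math

# Hypothetical feature vectors -- real systems would extract hundreds of
# features (or a learned embedding) from the audio itself.
songs = {
    "Space Is Deep":   (0.40, 0.55, 0.95),
    "Ambient Drone A": (0.35, 0.60, 0.90),
    "Pop Single B":    (0.80, 0.20, 0.10),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the songs point the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def recommend(query):
    """Rank every other song by how similar it sounds to the query."""
    q = songs[query]
    others = [(title, cosine(q, v)) for title, v in songs.items() if title != query]
    return sorted(others, key=lambda t: t[1], reverse=True)

print(recommend("Space Is Deep"))  # the ambient track ranks first
```

Note that no listener data appears anywhere -- the ranking comes entirely from how the songs sound, which is why this approach can surface super-obscure tracks that collaborative filtering would never find.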
February 1. Quick loose end from Monday, thanks Greg. The Earth Species Project "is a non-profit dedicated to using artificial intelligence to decode non-human communication."
I might as well mention my latest thoughts on AI. I hate driving. I'm forced to put my attention constantly on stuff that's not interesting, and if I slip for one second, my life could be ruined. But this is an unpopular opinion. Most people like driving. So it's a safe bet that most people who buy self-driving cars also like driving. They buy self-driving cars not to be relieved from the suffering of driving, but to gain the pleasure and status of having a magical robot chauffeur.
AI is still in the stage of novelty. Wow, look at what my computer can do! When the novelty wears off, when there is no longer intrinsic pleasure in getting a machine to do a job for you, people will go back to doing for themselves, anything they enjoy doing. It follows that any use of AI, to do something that people enjoy doing, is a fad.
Another reason a machine might do a job that a person enjoys is if the person is being paid to do it, and the owner can get more money by replacing them. As a society, we should ask: what about the people who design and build and service the machines? Do they enjoy their jobs? We don't ask this question because we're still in the grip of capitalism. I don't mean the free market; I mean using money as a totemic arbiter of value.
In the long term, feeling good is the only arbiter of value -- but I'm always surprised by the willingness of humans to choose suffering, so I don't want to predict the end of capitalism just yet.
More generally, AI will force a reckoning of process vs product, of getting stuff done vs doing what you love. We understand this distinction, but we don't think about it all that much. As machines get better at getting stuff done, we're going to be asking more often: Is this something I want to get done, or something I want to do?
January 30. On a tangent from one of my favorite subjects, the afterlife: Suppose reincarnation actually happens, that there's an aspect of you that goes through any number of lives as any kind of being. This raises the question: Why be human?
What can we do or experience, as humans, that makes it worthwhile to be human and not something else?
Flying a plane, surely, is not as good as being a bird. Driving a car is not as good as being a wild horse. The internet has made our social world less satisfying, and even without it, human social behavior rarely matches the elegant synchrony of other social animals.
We have large brains, but dolphins have larger brains, and more folds and ridges in their cerebral cortex. Could they develop human-level abilities to live mentally in elaborate worlds of abstraction and imagination? Probably, but they have no reason to, because it's so much fun being a dolphin.
I think what makes humans special is creating our own environment. And this goes hand in hand with our isolation, our separateness from the rest of the living universe. Why did our ancestors do cave paintings? Because they were big-brained animals stuck in a cave all winter, and they got bored looking at a blank wall. And since then, the better we get at creating our own environments, the more time we spend in them, the more separate we get, and the more reason we have to be even more inventive.
When we talk about finding "intelligent" life on other planets, this is what we mean: another creature that has explored separateness and self-created environments in the same way that we have. If we weren't looking for something so specific, we would be trying harder to talk to large-brained animals on our own planet.
This topic can help us think about the meaning of life. Even if you think life has no meaning beyond what we give it, you might still want to play to your strengths. Some people seek to become one with everything, but I think that's what humans are worst at. Why should I spend my human life struggling for something that's part of the package in my next life as a gnat? Meanwhile the gnats are like, I wish I were human so I could write novels and play video games.
January 27. Greg sends this cool page about making fractal images without a computer, through video camera feedback. "Video feedback happens when you point a camera at a monitor that's displaying what the camera sees." So this guy made an elaborate rig to do all the subtle adjustments that enable him to pull colorful animated images out of basically nothing.
Here's his page, The Light Herder, and a YouTube video, Approaching the Infinite: Loops Within Loops. I feel like there's an important philosophical question about where these images actually come from, why they look the way they do and not some other way, and whether different tech substrates would come up with the same stuff. But I'm not smart today, so moving on to more weird tech...
Hacker News thread, What is the weirdest or most surreal recent technology you have seen?
Origami is revolutionizing technology, from medicine to space
The future of space travel might rely on buildings made of mushrooms
And shifting to amateur science, a thread on the Seattle subreddit, full of reports of tinnitus predicting the weather.
January 24. Negative links! The Website Obesity Crisis is from 2015, and since then it's only gotten worse. I try to keep this page under 40 kilobytes, plus occasional images.
FBI warned of neo-Nazi plots as attacks on Northwest grid spiked. This might become a huge trend among all kinds of disaffected groups and individuals. I often wonder why extremists try to kill people, when sabotaging infrastructure is so much easier, and not so obviously immoral.
Posted to the subreddit, The Case for Abolishing Elections. What they suggest instead is democracy by lottery, where citizens are randomly chosen for important positions. I like a system called random ballot voting, where candidates still have to do stuff to get on the ballot, but then, the winner of each race is decided by randomly selecting a single ballot. The nice thing is, there is no incentive for tactical voting, and yet over time, it still reflects the will of the majority.
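The mechanism is almost too simple to need code, but a quick simulation (with a made-up election) shows why it reflects the majority over time: a candidate with 60% of the ballots wins about 60% of races, so your best move is always to mark your honest favorite.

```python
import random

def random_ballot_winner(ballots, rng=random):
    """The winner is whoever is named on one ballot drawn uniformly at random."""
    return rng.choice(ballots)

# Hypothetical race: 60 ballots for A, 30 for B, 10 for C.
ballots = ["A"] * 60 + ["B"] * 30 + ["C"] * 10

# Over many simulated races, win rates track vote share.
rng = random.Random(42)
wins = {"A": 0, "B": 0, "C": 0}
for _ in range(10_000):
    wins[random_ballot_winner(ballots, rng)] += 1
print(wins)  # roughly 6000 / 3000 / 1000
```

Any single race can go to a fringe candidate, which is the usual objection -- but across many seats and many election cycles, representation converges on actual support.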
ADHDers of Reddit, what's the most annoying thing about having ADHD? I might have qualified for ADD before they added the H, but in this thread, to find something I relate to, I have to scroll all the way down to the screaming tunnel comment: the world is coming at me too fast and I don't have the attention bandwidth to keep up.
The contagious visual blandness of Netflix. I hadn't noticed this, but I constantly notice something similar about the writing, in all but the best TV shows and movies. There is no micro-scale creativity or surprise. The large-scale plot contains surprises, typically about which characters are good or evil. But given what a scene is supposed to accomplish, every line of dialogue is conventional, and every emotional reaction is exactly what you expect.
January 20. I've posted my second early 80s playlist on Spotify. Originally my plan was for the first list to be new wave and the second to be rock, but in practice, I have to let go of categories and follow the sound, so now Billy Squier sets up Wall of Voodoo. For the same reason, my 70s list ends in 1981, and this list starts in 1978.
One song is not on Spotify: Suzanne Fellini - Love On The Phone.
Something I noticed, while listening to hair metal, is how much I like Quiet Riot singer Kevin DuBrow. The "growl" of metal singers (I think the technical term is vocal fry) is not that different from vibrato. It's something you can do with your voice, that is required for certain genres, and the mediocre singers just pile it on. But the great singers do it with agility.
By the way, another of my self-improvement projects is learning to sing. I started with the Online Pitch Detector, and it took me a while to get the needle to even stay in one place long enough to be readable. As soon as I could hold an arbitrary frequency for two seconds, I switched to a cool site called Pitchy Ninja. It gives you a series of tones to sing, and grades you A-F on each one. It took me about a hundred tries, over several days, to get a grade other than an F. But now I'm consistently getting non-Fs, and an occasional A on single notes.
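Here's a guess at how a site like that might grade a note -- its actual scoring isn't documented, so the cutoffs below are invented: measure how far the sung frequency is from the target in cents (100 cents = one semitone) and bucket the error into letters.

```python
import math

def cents_off(sung_hz, target_hz):
    """Pitch error in cents; 100 cents is one semitone."""
    return abs(1200 * math.log2(sung_hz / target_hz))

def grade(sung_hz, target_hz):
    """Map pitch error to a letter grade (hypothetical cutoffs)."""
    off = cents_off(sung_hz, target_hz)
    for cutoff, letter in [(10, "A"), (20, "B"), (35, "C"), (50, "D")]:
        if off <= cutoff:
            return letter
    return "F"

print(grade(440.0, 440.0))  # → "A", dead on A4
print(grade(452.0, 440.0))  # → "D", about 47 cents sharp
```

The logarithm is the key point: pitch perception is ratio-based, so being 12 Hz sharp is a big miss at A4 but barely audible two octaves up.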
Related: At-home musical training improves older adults' short-term memory for faces
January 18. You've probably heard, the 2022 word of the year was gaslighting.
It occurs to me, gaslighting wouldn't work in a culture that doesn't believe in objective truth. The intended victim would be like, cool, I'm splitting off into my own universe. Except that culture wouldn't even have the concept of an out-there physical universe. They would say something like, "Uh-oh, our perspectives are diverging. We need to summon another observer to synchronize with consensus."
Or, if you think the fundamental reality is seeing things differently, it leads to better epistemic discipline, and less freaking out, than if you think there's only one thing to see.
January 17. Quick loose end from yesterday, thanks Gryphon. I haven't read this book, so this isn't where I saw it, but Wade Davis wrote this in The Wayfinders:
Even more remarkable is the navigator's ability to pull islands out of the sea. The truly great navigators such as Mau can identify the presence of distant atolls of islands beyond the visible horizon simply by watching the reverberation of waves across the hull of the canoe, knowing full well that every island group in the Pacific has its own refractive pattern that can be read with the same ease with which a forensic scientist would read a fingerprint.
January 16. So I've been emailing with Matt about the human potential. What's normal in one culture might seem impossible in another. For example, there are cultures where everybody has perfect pitch. Also, the canoe people of the south Pacific can look at the waves around their boat and locate an island over the horizon. I'm not sure where I read either of those, but I think the first was Beatrice Bruteau's The Psychic Grid and the second was Tim Ingold's Perception of the Environment.
Posted recently to the subreddit was a blog post about cult leader Gridley Wright, who claimed to have given LSD to indigenous people all over the world, and none of them hallucinated.
One explanation is that his methodology was so sloppy that the results are meaningless. But let's play along, and suppose that a careful study by scrupulous anthropologists would find the same thing. What would cause that?
Everybody loves value-loaded thinking, so rather than avoid it, let's bring it to the front. Maybe their culture is better than ours, so they live perpetually in a trippy mental state that we can only achieve temporarily through substances.
Or maybe our culture is better than theirs: Through stuff like TV and video games, or even written fiction, our brains are more receptive to seeing what other people are not seeing.
Neither hypothesis fits me. I've taken as much as a tab and a half of LSD, and 7g of mushrooms, not at the same time, but I've never hallucinated. My speculation is that I'm such an ambitious daydreamer that my brain is like, nope, that's all you get.
I remember around age 13 figuring out that I could see anything I wanted in my imagination, and exploring that. I take pride in my visualization powers, but just the other day I noticed that the images have to be moving. I can turn a doorknob and open a door to outer space, but I can't just stare at a doorknob for even two seconds. So now I'm working on holding still images, and I find that it's easier in early morning, when my mind is still.
Related: a technique for overcoming aphantasia -- for people who can't see mental images, to learn to see them.