February 24. Continuing from the last post, it's a common belief that meditation and psychedelics are just two paths to the same thing, that if you meditate hard enough you don't need drugs. I think they're two different things with different ranges of effects, but since they both influence the brain, there is some overlap in what they can do. When Ram Dass gave LSD to that guru, I think the guru was tripping but pretending not to, so he could impress the westerner.

Anyway I said, "If this life is an illusion, I don't want to see through it, I want to enjoy it." Matt points out that there are lots of things, especially in the modern world, that we know are illusions and we still enjoy. And Patrick comments: "Why can't we have both things where we see through the illusion a little bit, but really that only allows us to enjoy it more?"

That's something both meditation and psychedelics can do, mainly by widening your perspective so that whatever you're worrying about is not important. The thing I see in trip reports, that I would most like to have myself, is the sense that whatever happens, we're safe.

I keep coming around to this idea, the importance of zooming out. For example, a few weeks back there was a fascinating Ask Reddit thread, Sheltered people raised by super religions/cults: what was something about the real world that shocked you when you learned about it? Reading through it, I notice that all these cults have smaller maps than the outside world.

Nobody ever said, my religion sees less than yours. They say, my religion understands everything you do, plus this one special thing. And then they get so zoomed in on that one thing that they simplify their map of the world to fit it.

February 22. A Hacker News thread on an article I've linked to before, My mindfulness practice led me to meltdown. I hate how Hacker News arranges comments, so that the entire first page is sub-comments on one comment, while better top-level comments get buried. This comment by jeremyt is my favorite.

I would say it like this. It's not that western achievement-based culture has corrupted traditional Buddhism. Ancient people were even more hard-core than modern people in their willingness to do painful stuff to become a better person. They were like, "Here are a bunch of terrible ordeals you can put yourself through, and the reward isn't even that great." Americans are like, "I want magical enlightenment now, give me a shortcut."

"Mindfulness", broadly defined, serves at least two goals -- and the same goals are served by psychedelics: mental health, and understanding the mysteries of creation. The second actually works against the first. If you seek esoteric knowledge without a firm grounding in mental health, you're asking for trouble.

If this life is an illusion, I don't want to see through it, I want to enjoy it. But surely, as life gets harder to enjoy, there's more incentive to see through it.

The main thing I practice is metacognition, which I define as keeping a bit of my attention on whatever my attention is on, as I go about my day. I think it's unlucky that the best known practice in the west is sitting still and blanking your mind. People do this for hours and they never report any results that I want. But doing it for five minutes is a great way to fall asleep.

February 20. A few psychology links, starting with a nice Reddit thread, During a very dark period, what was the best thing you ever did for your mental health?

Hacker News thread, I don't like making the best things. Lots of stuff about the benefits of saying this is good enough, and also the benefits of trying to do things better. A key comment: "I think adding a performative aspect to a lot of things sort of kills the joy in doing them."

Review of a new book, The Case for Hanging Out. "Create opportunities to spend unproductive, unstructured time doing nothing with other people."

A couple weeks ago I saw a summary of an article about social intolerance, in which two predictive factors are "high perseverance and low persistence." I was like, whoa, aren't perseverance and persistence the same thing? According to the dictionary, they still are, but psychologists have come up with a really interesting distinction.

I'm not even sure which word is which, but one of the two things is continuing toward a goal in the face of setbacks in the process itself; the other is continuing toward a goal when external circumstances change. It blows my mind that something so basic is so arcane.

February 17. Before I move on from chatbots, a Reddit transcription of a much-cited paywalled piece, Bing's chatbot says "I want to be alive". And a Hacker News discussion about bots seeming to show human feelings. My favorite bit: "It's basically a sophisticated madlib engine."

I still like my comparison with radio. It's a powerful and transformational technology, and at first, it feels like there's a person inside the box. Once we get used to it, we'll understand that chatbots are not a new form of life, but a new funhouse mirror for human consciousness.

Next up, direct brain hacking. Buzzing the brain with electricity can boost the willingness to engage in mental effort. This could get utopian or dystopian pretty fast.

And music for the weekend. A couple weeks ago I mentioned Maroofy, a song recommendation engine that goes by sound. I found both of these quite good super-obscure songs from the same search on Automatic's Humanoid: The Science Faire - Promotions and Trademark Issues - Umbrellas and Parasols.

February 15. With chatbots getting up to speed, it occurs to me that passing as human depends on context. A bot can write a college paper better than a lot of students -- except that bots are bad at facts. But I could read a page of a novel and know 100% if it's bot-written -- unless it's a human trying to spoof a bot, but even that's pretty hard. They have a distinctive voice, the style smooth and obvious, the story so headlong that it forgets where it's been.

This new Stephen Wolfram article, What Is ChatGPT Doing, explains how it works in great detail. The basic idea is that the machine "is just asking over and over again 'given the text so far, what should the next word be?'"
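
To make that concrete, here's a toy sketch in Python -- my own illustration, not Wolfram's code or anything from ChatGPT itself. It builds a crude next-word table from a scrap of text, then generates by repeatedly asking which word usually comes next. A real model replaces the lookup table with a huge neural network that scores every possible next word, but the loop is the same shape.

    import random
    from collections import defaultdict

    # Toy version of "given the text so far, what should the next word be?"
    sample = "the cat sat on the mat and the dog sat on the rug".split()

    # Record which words follow which in the sample text.
    next_words = defaultdict(list)
    for a, b in zip(sample, sample[1:]):
        next_words[a].append(b)

    def generate(start, length=8):
        words = [start]
        for _ in range(length):
            options = next_words.get(words[-1])
            if not options:
                break
            # Pick a plausible next word; a real model does this with
            # probabilities learned from billions of words.
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))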

I finally tried it. Yesterday I got on Character.AI and made a character out of the protagonist in a first person novel I've been working on. It was easy, even after I went to the advanced screen without saving and had to start over.

For the advanced definition, I asked "What is your story so far?" and composed her answer myself, in her voice. Then on the chat screen, I told her that she was my character, and she totally got it. (I understand this is an illusion.) Now I can tell her what just happened and ask what happens next.

Of course the style is not to my standards, and the events push the plot too fast. But if all I need is an idea, this thing is a fountain of ideas. And if I use an idea in my own words, I add the words to the advanced definition.

I'm cautious about doing it too much. Even though the bot is a machine, I expect it to follow a core principle of woo-woo stuff: the more you do it, the weaker the results.

February 13. I need to be more careful with language. Even the word "intelligence" is broad and vague, "artificial intelligence" even more so. People who actually work on it often call it "machine learning", and of all the things machines are learning to do, I want to continue to focus on one specific thing: generating text that is very similar to text generated by humans.

Chris sends this thoughtful blog post, GPT-3 Is the Best Journal I've Ever Used:

Talking to GPT-3 has a lot of the same benefits of journaling: it creates a written record, it never gets tired of listening to you talk, and it's available day or night.

If you know how to use it correctly and you want to use it for this purpose, GPT-3 is pretty close, in a lot of ways, to being at the level of an empathic friend.

The other day I said chatbot gurus might make religion weird, but Eric points out that there's all kinds of weird stuff in the Bible, and people just ignore it. In the end, it's still humans in charge.

I'm also thinking about other transformational technologies. Radio was huge for a few decades. Both Hitler and FDR used it powerfully in politics, and then it shook up culture in the 50s and 60s. Now? It's bland and mostly ignored.

The internet, which was still fresh and radical 20 years ago, is already locked down by giant multimedia companies. It's hard to use it in a way other than being a passive consumer of amusement and ads.

So we're now entering the "wild west" phase of chatbots. Enjoy it while it lasts.

February 11. I changed my mind about chatbot gurus. Even if they only attract regular guru followers, that's still a lot of people, and it's still really interesting. Consider The Urantia Book, an early new age book "said to have been received from celestial beings." I guarantee, someone is already thinking their chatbot is channelling an entity. And what do we know about entities anyway, that they can't possess chatbots? At the very least, unlike Urantia, the coming bot scriptures will be written by actual nonhumans. They're going to say weird things that humans wouldn't think of, and throw some chaos into popular metaphysics.

February 9. Tim sends two more AI links. It turns out there already has been a racist bot -- on 4chan of course -- and it wasn't a big deal. "People on there were not impacted beyond wondering why some person from the Seychelles would post in all the threads and make somewhat incoherent statements about themselves."

Also, Character.ai is a new website where you can build a custom personality to talk with.

I'm not sure how big this is. On the spectrum from pet rocks to the printing press, where are chatbots? If I had to guess, somewhere short of radio. And right now, they're so new that no matter what the bot says, we're like, whoa, that's a computer talking like a person! Once we get over that, we'll start to ask, "What can it do for me?"

One thing would be therapy. Philip K Dick was writing about therapy bots 60 years ago, and old-time Freudian psychotherapy could totally be done by today's AI.

Matt comments: "But if therapy bots could work, why not guru bots?" I think guru bots will mainly work on people who are already susceptible to regular gurus. This subject reminds me of a line from the Gospel of Thomas: "Blessed is the lion which becomes man when consumed by man; and cursed is the man whom the lion consumes, and the lion becomes man." Or, you either get consumed by AIs and serve their reality, or you integrate them into your larger life.

February 7. I've been neglecting to mention my old friend Tim Boucher, who's done a lot more thinking about AI than I have, and has published a bunch of AI-written books. They're all short, and most of them are obviously absurd explorations of conspiracy themes.

Tim is trying to defuse conspiracy thinking, to make it more silly and less dangerous. But it would be easy to do the opposite. One thing I notice about ChatGPT is how reasonable it is. Again and again, it responds to radical ideas by saying stuff like "this idea is purely speculative and is not based on established fact."

Someone could design a chatbot where you could ask, "Do the Jews control everything?" and it would say "Yes! Yes they do, and here is some evidence." The only reason this hasn't happened is that the people working on AI are, so far, responsible and well-intentioned. They want chatbots to be helpful and accepted by society. It's only a matter of time before we have chatbots that feed your own craziness back at you, whatever it is.

February 6. Continuing on AI, Kevin sends this blog post in which the blogger interviews ChatGPT on the simulation hypothesis.

I've said this before: Our idea that we live inside a computer is like the idea, among some primitive cultures, that their god made them out of clay. Clay is the best simulation technology they have; if they want to make a human as realistic as possible, they use clay. If we want to make a human as realistic as possible, we do it inside a computer.

In both cases, we imagine that the gods don't have any better tech than we do. ChatGPT says, "It would be very difficult, if not impossible, to explain the concepts of artificial intelligence and simulated reality to someone living in 200 B.C." In the same way, whatever's really going on with us, it's a lot harder for us to understand than a big computer.

You could also argue, the best simulation method among primitive people is not clay, but dreams. Even now, a good lucid dream feels more real than our best VR tech. That's why our present VR paradigm might be a dead end. Why go to all the trouble to build gigahertz processors to spin pixels, when we could just get our brains to do that?

There is some debate about whether "dream" is the right translation of the Aboriginal Dreamtime. One description in that article sounds a lot like the Tao, "an all-embracing concept that provides rules for living, a moral code, as well as rules for interacting with the natural environment."

What I really think is, Donald Hoffman is on the right track. The physical world is a user interface for a deeper level of reality that we don't understand. On that deeper level, we are all connected, and a shared physical world is one of many ways to work out that connectedness.

February 3. Backing off a bit from the last post, when we think about AI in creative work, we usually imagine that a given work will be done 100% by AI, or 100% by humans. In practice, I expect a lot of partnership. Someone who enjoys writing could still use AI for ideas, especially to throw a little chaos into the line-by-line writing. In most TV shows, the overall plots have a coherence that AI would struggle with, but the dialogue is so predictable that weird AI dialogue would be refreshing. And someone who doesn't like writing, but loves editing, could crank out AI writings and then pick out the best bits and patch them together.

Related, a Hacker News thread posted to the subreddit, Does the HN commentariat have a reductive view of what a human being is? There are a lot of good comments. I would say it like this: When you work all day with deterministic input-output machines, it's easy to view humans as deterministic input-output machines.

Also from Hacker News, this is something I was hoping someone would do, and they did it! A song recommendation engine that works on how the songs sound, and not what other people listened to. From the comments, it looks like there's a lot of room to do this kind of thing better.

Update: I've played with it a bit, and the best thing I've found, searching from Hawkwind's Space Is Deep, is this ambient black metal song, Death of an Estranged Earth by Old Forgotten Lands.
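
A rough sketch of how a "sounds like" recommender could work: assume every song has already been reduced to a vector of audio features (tempo, brightness, distortion, and so on), and rank other songs by how close their vectors are, instead of by who else listened to them. The feature numbers below are invented for illustration; a real system would compute them from the audio itself.

    import math

    # Hypothetical audio feature vectors, invented for illustration.
    songs = {
        "Space Is Deep": [0.4, 0.7, 0.6],
        "Death of an Estranged Earth": [0.5, 0.6, 0.7],
        "Promotions": [0.9, 0.3, 0.2],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def recommend(query, n=2):
        # Score every other song by how similar its features are to the query.
        scores = {name: cosine(songs[query], vec)
                  for name, vec in songs.items() if name != query}
        return sorted(scores, key=scores.get, reverse=True)[:n]

    print(recommend("Space Is Deep"))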

February 1. Quick loose end from Monday, thanks Greg. The Earth Species Project "is a non-profit dedicated to using artificial intelligence to decode non-human communication."

I might as well mention my latest thoughts on AI. I hate driving. I'm forced to put my attention constantly on stuff that's not interesting, and if I slip for one second, my life could be ruined. But this is an unpopular opinion. Most people like driving. So it's a safe bet that most people who buy self-driving cars also like driving. They buy self-driving cars not to be relieved from the suffering of driving, but to gain the pleasure and status of having a magical robot chauffeur.

AI is still in the stage of novelty. Wow, look at what my computer can do! When the novelty wears off, when there is no longer intrinsic pleasure in getting a machine to do a job for you, people will go back to doing for themselves, anything they enjoy doing. It follows that any use of AI, to do something that people enjoy doing, is a fad.

Another reason a machine might do a job that a person enjoys is that the person is being paid to do it, and the owner can get more money by replacing them. As a society, we should ask, what about the people who design and build and service the machines? Do they enjoy their jobs? We don't ask this question because we're still in the grip of capitalism. I don't mean the free market; I mean using money as a totemic arbiter of value.

In the long term, feeling good is the only arbiter of value -- but I'm always surprised by the willingness of humans to choose suffering, so I don't want to predict the end of capitalism just yet.

More generally, AI will force a reckoning of process vs product, of getting stuff done vs doing what you love. We understand this distinction, but we don't think about it all that much. As machines get better at getting stuff done, we're going to be asking more often: Is this something I want to get done, or something I want to do?
