1] Andy Masley argues that you should try more to change people’s minds, even about their “deeply held” beliefs, because deeply held beliefs often work like this: “I have muttered this basic idea to myself repeatedly for years to make myself feel important. I first found this idea because a person with a cool jacket said it.”
2] The Whispering Earring, an old (2012) Borgesian short story by Scott Alexander.
3] Kids these days, a dispatch from Average State U: “Most of our students are functionally illiterate….Reading bores them, though. They are impatient to get through whatever burden of reading they have to, and move their eyes over the words just to get it done.” One reason I find this plausible is that I can feel technology pushing my own brain in this direction; I have to resist it. I think if I’d grown up with social media and with LLMs to summarize everything for me, I’d be way worse at reading.
4] I didn’t read this, but I did speak some of it: “Where Human and AI Consciousness Research Meet,” a conversation with Kati Devaney and Rob Long at Asterisk Magazine (video).
5] Daniel Eth and Tom Davidson argue that “software progress alone could plausibly enable faster and faster AI advancements, yielding a ‘software intelligence explosion.’”
6] But, Ege Erdil and Matthew Barnett at Epoch argue against the view that AI progress will primarily impact the economy via AI-automated R&D. Instead, they argue it will first impact the economy more broadly.
7] As Stefan Schubert points out, an important implication of this view is that it would mean we are less likely to sleepwalk into an AI disaster without serious prevention efforts.
8] Jason Benn: From 0 meditation to 50 days of retreat in one year: what I wish I knew.
9] Exciting new hires at AI welfare research organization Eleos AI. Their recruiting team must be amazing!
10] It’s back! Manifold Market about this substack.