[1] Sam Atis on curing alcoholism. It’s not that well-known, but we already have prescription drugs that help people quit drinking, usually by making drinking really unpleasant. How well do they work?
[2] Linguistics question: where did the ‘doggo’ meme dialect come from? Unsubtle hint: Experience Machine’s Australia correspondent Bridget Williams reminds me of the following deranged slang from down under: bottle-o (liquor store), bizzo (business), arvo (afternoon), ambo (paramedic), garbo (garbage collector), servo (gas station).
[3] Stephen Clare highlights an ironic moment in David Foster Wallace’s famous “This is Water” speech.
[4] Best names for the ‘@’ symbol from various languages include: cinnamon roll (Swedish), little duck (Greek), monkey bracket (German), meow sign (Finnish), and moon’s ear (Kazakh).
[5] Matthias Michel on the promises and challenges of computational models of consciousness. The issue of “background conditions” of consciousness is a perennial challenge in thinking about AI consciousness, as long-time readers will have noted.1
[6] From Hamish Doodles:
[8] “sharks are older than the north star” is now the worst fact I know. See the Wikipedia entry for sharks and the Polaris Star Facts page—it’s true 😒
[9] Liad Mudrik reviews three recent books on consciousness and discusses recent developments in (potential) AI consciousness. “It doesn’t seem very prudent to me to add more conscious creatures to this already complicated, combustible picture. It is perhaps wiser, then, to be a creature who thinks about consciousness than one who aspires to create artificial versions of it.”
[10] How does the brain represent uncertainty? (Nature, arXiv)
[11] Kyunghyun Cho (NYU) predicts that OpenAI will stop using freely crawled data for training large language models by April 2024. Here’s a Manifold Market about that (currently at 26%).
[12] Musicians and athletes regularly practice the fundamentals of their craft (e.g. scales, free throws). Knowledge workers rarely practice the fundamentals of their work, like reading and taking notes.
[13] Important, underrated point: “Even if we stopped AI development today, it would likely be most of a decade before we figured out the full implications of today’s LLMs”. Similar prediction from Planned Obsolescence: “We’re really at the very beginning of this work. It wouldn’t be surprising to see major advances in the practical usefulness of LLMs achieved through schlep alone, such that agents and other systems built out of GPT-4 tier models are much more useful in five years than they are today.”
[14] Language models show human-like content effects on reasoning tasks. (summary thread)
[15] “Cope” has become a term of derision but coping is good, actually.
[16] Paul Bloom’s advice on giving talks.
[17] Matthew Barnett worries: “Instead of worrying that the general public and policy-makers won’t take AI risks very seriously, I tend to be more worried that we will hastily implement poorly thought-out regulations that are based on inaccurate risk models or limited evidence about our situation”. Related question from Nick Cammarata; the COVID analogy is a good one.
[18] Math metaphors in literature.
[19] ACX: “No matter how contrarian you pretend to be, deep down it’s hard to make your emotions track what you know is right and not what the rest of the world is telling you. The last Guardian opinion columnist who must be defeated is the Guardian opinion columnist inside your own heart.”
[20] Unusual applications of spaced repetition memory systems.
[21] Constructionist theories of emotion say that our emotions depend on how we conceptualize them. On such views, do animals lack emotions, since they lack our higher-level concepts? And if so, what would follow ethically? Jonathan Birch considers these questions.
[22] NYRB: “The act of killing people was once taken so seriously…that after the Battle of Hastings in 1066, a Penitential Ordinance was imposed on Norman knights: ‘Anyone who knows that he killed a man in the great battle must do penance for one year for each man that he killed.’”
1. “One complication is that our theories of human (and animal) consciousness usually don’t make reference to ‘background conditions’ that we might think are important. They compare different human brain states, and seek to find neural structures or computations that might be the difference-makers between conscious and unconscious—for example, broadcast to a global workspace. But these neural structures or computations are embedded in a background context that is usually not formulated explicitly: for example, in the biological world, creatures with global workspaces are also embodied agents with goals. How important are these background conditions? Are they necessary pre-conditions for consciousness? If so, how do we formulate these pre-conditions more precisely, so that we can say what it takes for an AI system to satisfy them?” (from here)