1) Saloni Dattani asks, How many people die from snakebites? Way too many. “Around 1 in 270 people in India die from snakebites by the age of 70.”
2) Parrots seem to enjoy making video calls to each other.
3) Tom Davidson speaks to Luisa Rodriguez for the 80,000 Hours podcast about how quickly AI could transform the world.
I think if you just sit with that fact — that there are going to be machines that can do what the human brain can do; and you’re going to be able to make those machines much more efficient at it; and you’re going to be able to make even better versions of those machines, 10 times better versions; and you’re going to be able to run them day and night; and you’re going to be able to build more — when you sit with all that, I do think it gets pretty hard to imagine a future that isn’t very crazy.
4) Related: nothing has made me viscerally feel the power of AI as much as this recent demo did. Best with headphones; leave reactions in the comments!
5) You can and should bet against my consistency in posting to this very newsletter, on this Manifold Market here.
6) Related: Resident Contrarian recounts the history of ‘Whales vs. Minnows’, a Manifold Market gone wrong.
7) AGI Futures, by roon. I loved this description of the classic Yudkowsky-Bostrom AI doom scenario, in which a powerful optimizer implacably subjugates the lightcone to its worthless ends: “One by one, the stars are blinking out in the heavens as their energy is harnessed to further the Fiend’s profane purpose. To put it simply, this world is disgusting. It lacks all the poetry of the Hell, Dante never imagines such fruitless profanity in even the outermost ring of his Inferno. There is no point to this world, it should not exist.”
8) Dwarkesh Patel on the obsessive ambition that the biographer Robert Caro shares with his most famous subject:
The reason why Robert Caro is able to write so compellingly about these qualities of Lyndon Johnson - his resourcefulness and ruthlessness, his inexhaustible energy, his need to win, his inability to take no for an answer - is that the biographer shares many of the attributes of the subject.
A man who has spent almost 50 years writing the biographies of a single person - who has spent those decades sifting through thousands of crates of documents in the LBJ library, or making former goons disclose how they helped Johnson steal elections, or moving to the Texas hill country, to experience for himself the poverty and loneliness of Johnson’s youth - who continues to push himself in this way past his 87th birthday, when other men would have long retired, because his last volume, the one about the presidency of Lyndon Johnson, is still unfinished - is a man who understands Johnson’s aphorism, if you do everything, you will win.
9) An admirably succinct and very important 1-sentence open letter organized by the Center for AI Safety [disclosure: my employer]. The sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
10) An information-theoretic perspective on Heaven, a classic David Pearce piece.
11) Michael Nielsen asks how AI is impacting science, with a refreshing focus on extant systems and near-term applications, especially in biology.
12) Kelsey Piper at Planned Obsolescence reminds us that any credible case for slowing down AI has to acknowledge the costs of doing so:
“[I]’ve seen some people make the case for caution by asking, basically, ‘why are we risking the world for these trivial toys?’ And I want to make it clear that the assumption behind both AI optimism and AI pessimism is that these are not just goofy chatbots, but an early research stage towards developing a second intelligent species. Both AI fears and AI hopes rest on the belief that it may be possible to build alien minds that can do everything we can do and much more. What’s at stake, if that’s true, isn’t whether we’ll have fun chatbots. It’s the life-and-death consequences of delaying, and the possibility we’ll screw up and kill everyone.”
13) Short review of the Misalignment Museum by Simon Willison.
14) Polish hero Witold Pilecki: “Pilecki volunteered to allow himself to be captured to infiltrate Auschwitz, where he organized a resistance movement and smuggled out reports about atrocities to the Western Allies before escaping Auschwitz and joining the Warsaw uprising.”
15) “Stochastic parrot” is an insufficient framework for thinking about increasingly capable language models, so alternative metaphors are needed. Helen Toner offers ‘improv machines’:
Knowing that language models simply use patterns in huge text datasets to predict the next word in a sequence, researchers try to offer alternative metaphors, arguing that the latest AI systems are simply “autocomplete on steroids” or “stochastic parrots” that shuffle and regurgitate text written by humans. These comparisons are an important counterweight against our instinct to anthropomorphize. But they don’t really help us make sense of impressive or disconcerting outputs that go far beyond what we’re used to seeing from computers—or parrots. We struggle to make sense of the seeming contradiction: these new chatbots are flawed and inhuman, and nonetheless, the breadth and sophistication of what they can produce is remarkable and new. To grapple with the implications of this new technology, we will need analogies that neither dismiss nor exaggerate what is new and interesting.
16) A useful reminder about mental health and self-compassion:
And, speaking of stochastic parrots, it’s been noted by such notables as Gwern that LLMs might well be thought of as emulators of persons or person-like things. This caused me to dust off an old idea of mine (life after death, as a parrot): that we might find a lot of uses for personal replicas of lots of real, actual people. Uses like education, advising, memorializing, leaving a legacy, or debates between important figures who never met. Some of this stuff we could start doing now, but when I first wrote about it, GPT-2 made it look like a maybe.
I skimmed the Pearce piece, since I know his general idea about hedonics. But it seems he didn’t acknowledge where the most resistance would come from. You don’t have to look very far (e.g., on Twitter) to find knee-jerk pronouncements of “eugenics!”, as if any attempt to improve humans could only be some form of hegemony.