“If you want to be vegan but you worry about health effects of no meat, consider being vegan except for mussels/oysters” by KatWoods
1) They're unlikely to be sentient (few neurons, immobile).
2) If they are sentient, the farming practices look likely to be pretty humane.
3) They're extremely nutritionally dense.
Buying canned smoked oysters/mussels and eating them plain or on crackers is super easy and cheap. It's an acquired taste for some, but I love them. ---
First published:
June 30th, 2025
Source:
https://www.lesswrong.com/posts/Cwpxfpj4o99bDrv7X/if-you-want-to-be-vegan-but-you-worry-about-health-effects
---
Narrated by TYPE III AUDIO.
--------
1:06
[Linkpost] “Project Vend: Can Claude run a small shop?” by Gunnar_Zarncke
This is a link post. Anthropic (post June 27th): We let Claude [Sonnet 3.7] manage an automated store in our office as a small business for about a month. We learned a lot from how close it was to success—and the curious ways that it failed—about the plausible, strange, not-too-distant future in which AI models are autonomously running things in the real economy. But the AI made numerous business-critical errors, including repeatedly selling products at a loss, offering excessive discounts, and making fundamental accounting mistakes. ---
First published:
June 30th, 2025
Source:
https://www.lesswrong.com/posts/xYPk8yRDAeq9og38u/project-vend-can-claude-run-a-small-shop
Linkpost URL: https://www.anthropic.com/research/project-vend-1
---
Narrated by TYPE III AUDIO.
--------
1:04
“Paradigms for computation” by Cole Wyeth
Epistemic status: Though I can't find it now, I remember reading a LessWrong post asking "what is your totalizing worldview?" I think this post gets at my answer; in fact, I initially intended to title it "My totalizing worldview" but decided on a slightly more restricted scope (anyway, I tend to change important aspects of my worldview so frequently it's a little unsettling, so I'm not sure if it can be called totalizing). Still, I think these ideas underlie some of the cruxes behind my meta-theory of rationality sequence AND my model of what is going on with LLMs, among other examples. The idea of a fixed program as the central object of computation has gradually fallen out of favor. As a result, the word "algorithm" seems to have replaced "program" as a catch-all term for the computations that computers run. When the computation is massive, automatically generated, without guarantees [...]
---
Outline:
(02:03) Recursion theory
(11:18) Computational learning theory
(16:43) Bayesian decision theory... as a paradigm of computation?
(18:55) Paradigms are for generating good ideas
The original text contained 3 footnotes which were omitted from this narration.
---
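As a toy illustration of that shift (my own sketch, not anything from the article), the Python snippet below contrasts a fixed, hand-written program, which is exact by construction, with an automatically generated computation whose parameters are fit from data and carry no correctness guarantee; the function names and the polynomial fit are purely illustrative assumptions.

```python
# A minimal sketch (not from the post) contrasting a fixed, hand-written program
# with an automatically generated computation that comes with no guarantees.
import numpy as np

def double_plus_one(x):
    """Fixed program: exact by construction, easy to reason about."""
    return 2 * x + 1

# "Learned" computation: coefficients are produced automatically from noisy data,
# and correctness away from that data is not guaranteed.
xs = np.linspace(0, 1, 50)
ys = 2 * xs + 1 + np.random.normal(0, 0.05, size=xs.shape)  # noisy observations
coeffs = np.polyfit(xs, ys, deg=3)   # automatically generated parameters
learned = np.poly1d(coeffs)          # the resulting "program"

print(double_plus_one(10))  # exactly 21, always
print(learned(10))          # close to 21 only if the fit generalizes; nothing guarantees it
```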
First published:
June 30th, 2025
Source:
https://www.lesswrong.com/posts/APP8cbeDaqhGjqH8X/paradigms-for-computation
---
Narrated by TYPE III AUDIO.
--------
20:28
“life lessons from poker” by thiccythot
crossposted from my blog
There are two ways we miscalibrate risk. We risk too much on things that are low conviction. We risk too little on things that are high conviction. I learned these lessons in poker and in trading, and they have helped me reason about broader life. I call them the fold pre principle and the pocket ace principle.
the fold pre principle
On poker forums there's a running gag. Whenever someone posts a complicated hand history and asks, "What should I have done here?" the top reply is "fold pre." Translation: you never should have played that hand in the first place. It's a snarky answer that skips all of the nuance, but that's exactly the point. The simplest fix was to never enter that low conviction spot at all. Everyone knows that they should cut their losers early, but the fold pre principle is [...]
---
Outline:
(01:07) the fold pre principle
(03:29) the pocket ace principle
---
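As a concrete (and entirely my own) way to see both principles at once, the sketch below uses the Kelly criterion, which the post itself never invokes, to size bets by conviction: a negative-edge spot gets zero (fold pre), while a large edge justifies a much bigger stake (pocket aces).

```python
# A toy sketch (mine, not the author's) of sizing risk by conviction,
# using the Kelly criterion for an even-money bet.
def kelly_fraction(p_win, odds=1.0):
    """Kelly stake as a fraction of bankroll for win probability p_win at the given net odds.
    A negative value means the bet has negative expected value: don't play it at all."""
    return (p_win * (odds + 1) - 1) / odds

print(round(kelly_fraction(0.48), 2))  # -0.04 -> low conviction, negative EV: fold pre
print(round(kelly_fraction(0.55), 2))  #  0.10 -> modest edge: bet small
print(round(kelly_fraction(0.80), 2))  #  0.60 -> pocket-aces-level conviction: bet much bigger
```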
First published:
June 30th, 2025
Source:
https://www.lesswrong.com/posts/4F5yqZxDRvPJRkFE6/life-lessons-from-poker
---
Narrated by TYPE III AUDIO.
--------
9:17
“Circuits in Superposition 2: Now with Less Wrong Math” by Linda Linsefors, Lucius Bushnaq
Audio note: this article contains 323 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.
Summary & Motivation
This post is a continuation and clarification of Circuits in Superposition: Compressing many small neural networks into one. That post presented a sketch of a general mathematical framework for compressing different circuits into a network in superposition. On closer inspection, some of it turned out to be wrong, though. The error propagation calculations for networks with multiple layers were incorrect. With the framework used in that post, the errors blow up too much over multiple layers. This post presents a slightly changed construction that fixes those problems, and improves on the original construction in some other ways as well.[1] By computation in superposition we mean that a network represents features in superposition and [...]
---
Outline:
(00:25) Summary & Motivation
(01:43) Takeaways
(02:32) The number of circuits we can fit in scales linearly with the number of network parameters
(04:02) Each circuit will only use a small subset of neurons in the larger network
(04:37) Implications for experiments on computation in superposition
(05:15) Reality really does have a surprising amount of detail
(06:25) Construction
(07:25) Assumptions
(08:44) Embedding the circuits into the network
(10:40) Layer 0
(11:49) Constructing the Embedding and Unembedding matrices
(12:38) Requirements
(14:30) Step 1
(15:08) Step 2
(17:02) Step 3
(17:23) Step 4
(17:50) Step 5
(18:01) Real python code
(18:14) Properties of $E$ and $U$
(18:53) Error calculation
(19:23) Defining the error terms
(22:08) $\mathring{\epsilon}_t^l$ - The embedding overlap error
(23:36) $\tilde{\epsilon}_t^l$ - The propagation error
(24:38) Calculation:
(27:29) $\ddot{\epsilon}_t^l$ - The ReLU activation error
(27:45) Calculations:
(29:34) $\epsilon_t^l$ - Adding up all the errors
(29:43) Layer 0
(29:55) Layer 1
(30:10) Layer 2
(30:45) Layer 3
(31:03) Worst-case errors vs mean square errors
(32:24) Summary:
(33:12) Discussion
(33:15) Noise correction/suppression is necessary
(34:30) However, we do not in general predict sparse ReLU activations for networks implementing computation in superposition
(36:03) But we do tentatively predict that circuits only use small subsets of network neurons
(37:11) Acknowledgements
The original text contained 24 footnotes which were omitted from this narration.
---
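The post's actual construction (embedding and unembedding matrices built in five steps, with separate error terms for embedding overlap, propagation, and ReLU activation) is more involved than anything that fits here, but as a hedged toy illustration of the underlying idea of representing features in superposition, here is a small NumPy sketch; the random matrix E and the naive readout U = E^T are my own simplifications, not the post's construction.

```python
# A toy sketch of features in superposition (my illustration, not the post's
# construction): embed many sparse features into fewer dimensions with random
# near-orthogonal directions, then read them back and measure the interference.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_dims = 200, 50            # more features than dimensions
E = rng.normal(size=(n_dims, n_features))
E /= np.linalg.norm(E, axis=0)          # each feature gets a random unit direction

x = np.zeros(n_features)
active = rng.choice(n_features, size=5, replace=False)
x[active] = 1.0                         # sparse activation: only a few features are on

h = E @ x                               # superposed representation in n_dims dimensions
x_hat = E.T @ h                         # naive linear readout (here simply U = E^T)

# Interference ("overlap") error: non-zero readout on features that were never active.
off_target = np.delete(x_hat, active)
print("readout on active features:", np.round(x_hat[active], 2))
print("max interference on inactive features:", np.round(np.abs(off_target).max(), 2))
```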
First published:
June 30th, 2025
Source:
https://www.lesswrong.com/posts/FWkZYQceEzL84tNej/circuits-in-superposition-2-now-with-less-wrong-math
---
Narrated by TYPE III AUDIO.