Episode publish date
November 25, 2025 5:04 PM (UTC)
Last edit date
February 16, 2026 6:32 PM (UTC)
Last snip date
February 16, 2026 6:31 PM (UTC)
Last sync date
February 16, 2026 6:32 PM (UTC)
Show
Dwarkesh Podcast
Show notes link
Snips
15
Warning
⚠️ Any content within the episode information and snip blocks might be updated or overwritten by Snipd in a future sync. Add your edits or additional notes outside these blocks to keep them safe.
Your snips
[01:39] Eval-Focused RL Produces Brittle Models
[06:11] Specialist Training Undermines Generalization
[13:40] Value Functions Improve RL Efficiency
[19:01] The Era Of Scaling Was Temporary
[23:56] Spend Compute More Productively
[25:00] Human Learning Reveals A Missing ML Principle
[32:41] Gemini 3 Helped Bridge Theory And Experiments
[40:21] Funding Numbers Don’t Equal Research Power
[47:14] AGI May Be A Learner, Not A Finished Mind
[56:07] Showcase And Gradual Release Aid Safety
[01:01:21] Design For Care And Cap Power
[01:03:32] Misaligned Optimization Is The Core Risk
[01:08:00] Coordinate Deployment And Governance Early
[01:11:36] Evolution’s Social Priors Are A Deep Mystery
[01:30:50] Competition Produces Diverse Learning Signals
