PODCAST
So, AI Is Going to Destroy Us All... Now What?
Episode 15 · 25 min · November 15, 2025
Show notes
A book argues that superhuman AI would inevitably kill us. The core claims: AI could master biology, we cannot understand how it thinks, and the timeline could be 10 years or less. We walk through the argument, then pivot to a different question. If existential risk is on the table, what does that imply for how we live and work now? The conversation shifts from doom to a practical lens on priorities.
In This Episode, You'll Learn:
- Why arguments for superhuman AI rest on capabilities we cannot verify.
- Why our inability to understand how advanced AI thinks is itself a source of risk.
- How doom scenarios can become motivation to clarify what matters.
- Why the practical question is what we do with the uncertainty.
- How existential risk reframes daily priorities.
Facing a decision that needs an outside perspective?
Talk to Grizen