PODCAST

AI Coding Is More About The Experience Than The Model

The tools we use to write software are evolving at breakneck speed. Cursor recently dropped version 3, featuring a brand-new agent-first interface, while Anthropic continues to push developers toward the terminal with…

Episode 29 · 24 min · April 16, 2026

Show notes

The tools we use to write software are evolving at breakneck speed. Cursor recently dropped version 3, featuring a brand-new agent-first interface, while Anthropic continues to push developers toward the terminal with Claude Code. But are these new autonomous workflows actually an improvement for experienced engineers? In this episode, Ben Griswold (Grizen) and Noah Heldman (OutcomeSource) compare their hands-on experiences with the latest AI coding environments. They discuss the jarring feeling of having code abstracted away by terminal-based agents and the sheer frustration of hitting Claude Pro's usage limits after just ten minutes of work. The conversation also covers what a recent leak revealed about how Claude Code operates under the hood, showing that many of these advanced tools rely heavily on massive, hidden system prompts.

*In This Episode, You'll Learn:*

* The major changes in Cursor v3 and its new agent-first window.
* Why terminal-based AI tools can disrupt a developer's natural workflow and visibility.
* The reality of burning through tokens and hitting hard limits on expensive paid AI tiers.
* What the Claude Code source leak showed us about system prompts and goofy loading verbs.
* Why the industry is shifting from AI assistants to AI agents.
* The reason Ben is actively searching for a Claude Code expert to prove his current workflow wrong.

*Connect with Us:*

* Ben Griswold | Grizen: https://grizen.com
* Noah Heldman | OutcomeSource: https://outcomesource.com

Facing a decision that needs an outside perspective?

Talk to Grizen