TuneInTalks
From Lenny's Podcast: Product | Career | Growth

How Block is becoming the most AI-native enterprise in the world | Dhanji R. Prasanna

1:26:41
October 26, 2025
Lenny's Podcast: Product | Career | Growth
https://api.substack.com/feed/podcast/10845.rss

What if an AI quietly did eight to ten hours of your weekly work for you?

I walked away from this conversation both skeptical and oddly hopeful. The number kept bouncing in my head: engineers on some teams reporting eight to ten hours saved per week, and the company-wide trend pointing toward roughly twenty to twenty-five percent of manual hours disappearing. That’s not hyperbole, according to the guest — it’s measurable, and it’s just the baseline.

Why Block rewired itself around technology

The change didn’t begin with a flashy product launch. It started with a simple letter — a blunt note that jolted senior leaders into treating the company as a technology organization first. The practical consequence was structural: Block moved from a GM-led, portfolio-style org into a functional model where engineers and designers report into centralized leadership. That shift, the guest argued, was the most consequential lever for making AI adoption actually scale.

Conway’s Law is invoked here not as theoretical trivia but as operational gospel: structure shapes output. Once Block flattened the silos and unified technical leadership, shared tools, policies, and career ladders followed. Suddenly the organization could deploy platform investments and AI primitives across brands rather than reinventing them in each silo.

Small experiments, big momentum

The cultural change was deliberately incremental. Hack weeks, two-to-five person skunkworks projects, and executive adoption of tools seeded a “feel it, use it” attitude. I liked the frankness: leadership didn’t mandate AI from the ivory tower. They owned the workflow by using the tools daily, and that permission cascade made a real difference.

Goose: an AI agent with arms, legs, and a GitHub repo

Goose is the concrete artifact of that strategy: an open-source, desktop agent that speaks to large language models but also to enterprise systems. The technical trick is the Model Context Protocol (MCP), which treats external tools as addressable capabilities the agent can call. Think of MCP as a universal adapter that gives LLMs the ability to act — to run SQL, generate charts, write code, automate UI flows, and even control an OS via accessibility APIs.
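To make the "universal adapter" idea concrete, here is a minimal sketch of the JSON-RPC 2.0 envelope MCP uses when an agent invokes a tool on a server. The `run_sql` tool name and its arguments are hypothetical, purely for illustration:

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request in the shape MCP uses for
    invoking a tool exposed by a server ("tools/call")."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# A hypothetical SQL tool an agent like Goose might address:
request = mcp_tool_call(1, "run_sql", {"query": "SELECT COUNT(*) FROM payments"})
print(json.dumps(request, indent=2))
```

The point of the shape is uniformity: whether the capability is SQL, charting, or OS automation, the agent speaks the same envelope and the server advertises what it can do.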

What surprised me is how far this orchestration goes. Goose can: pull data from Snowflake, write SQL, render charts, and assemble a PDF report; automate mobile UI testing by driving Android accessibility APIs; and, in an extreme case, watch an engineer’s screen and proactively open a pull request for a feature the engineer and a colleague were just talking about. That mix of autonomy and integration is what makes Goose feel less like a novelty and more like a new kind of colleague.
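The Snowflake-to-PDF example above is essentially a tool-chaining pipeline. Here is an illustrative sketch of that loop; the tool functions are stand-in stubs (not Goose's real integrations), showing only how an agent composes discrete capabilities into one task:

```python
# Stubbed tools standing in for MCP-backed capabilities.

def query_snowflake(sql):
    # Stub: a real tool would run the query against a warehouse.
    return [("2025-09", 120), ("2025-10", 140)]

def render_chart(rows):
    # Stub: a real tool would emit a chart image from the rows.
    return f"chart({len(rows)} points)"

def assemble_pdf(sections):
    # Stub: a real tool would lay the sections out as a PDF.
    return "report.pdf containing " + ", ".join(sections)

def run_report_task():
    """Chain the tools the way an agent would: data -> chart -> report."""
    rows = query_snowflake("SELECT month, signups FROM metrics")
    chart = render_chart(rows)
    return assemble_pdf(["summary table", chart])

print(run_report_task())
```

What changes with an agent is not the pipeline itself but who writes it: the agent plans and executes the chain from a natural-language request.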

Who wins when agents work?

Counterintuitively, the deepest impact hasn’t come only on the engineering floor. Non-technical teams — risk, legal, enterprise ops — are adopting agents to build their own small apps. That compresses weeks of work into hours and reduces the backlog of one-off requests that used to clog engineering teams. The net effect: engineers reclaim higher-value work while other teams ship tooling faster.

Metrics, limits, and a healthy dose of humility

Measurement matters. Block didn’t rely only on anecdote; they triangulated PR throughput, feature velocity, and a bespoke formula from data scientists to estimate hours saved. Still, the guest emphasized caveats: legacy monoliths with heavy technical debt don’t benefit as quickly. And human judgment matters when deciding whether to automate, buy a vendor solution, or simply change process.
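The episode does not disclose Block's actual formula, but the triangulation idea can be sketched as a back-of-envelope estimate. Everything below — the variable names and the sample numbers — is illustrative, not Block's methodology:

```python
def estimate_hours_saved(agent_prs, avg_hours_per_pr, automation_fraction):
    """Hypothetical estimate: PRs attributed to the agent, times the
    hours a comparable PR would have taken by hand, discounted by the
    fraction judged to be genuinely automated rather than assisted."""
    return agent_prs * avg_hours_per_pr * automation_fraction

weekly = estimate_hours_saved(agent_prs=6, avg_hours_per_pr=2.5,
                              automation_fraction=0.6)
print(round(weekly, 1))  # 9.0 — inside the 8-10 h/week range reported
```

The discount factor is the humility term: it forces an explicit judgment about how much of the throughput the agent actually caused.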

Two cultural truths stood out. First: start small and iterate. Goose itself began as an engineer’s side project and grew through external collaboration and internal pilots. Second: keep product purpose at the center. Clean code is nice, the guest reminded me, but it’s not a substitute for solving real user problems. Sometimes the right move is to ship messy, validate, and then improve.

A different way to work — delete, rebuild, repeat

One particularly striking mental model: treat releases as ephemeral. The idea is almost sacrilegious to traditional engineering lore that warns against wholesale rewrites. But with agents and rapid generation, the guest imagines a workflow where teams throw away entire apps and regenerate them from updated specifications — repeatedly experimenting with multiple approaches overnight and keeping what works. It’s wasteful-sounding on paper, but when experimentation cost drops, the calculus shifts.

Open source as a stance, not just strategy

Block’s decision to open-source Goose is notable. It forfeited potential direct monetization to accelerate adoption, attract community contributions, and keep the ecosystem interoperable. That choice felt both principled and pragmatic: openness reduces vendor lock-in and encourages the MCP ecosystem to flourish beyond a single corporate moat.

Leadership lessons that felt true

  • Use the tools yourself — leaders who adopt AI model the change and learn faster.
  • Focus on purpose, not polish — user outcomes trump pristine architecture.
  • Question base assumptions — sometimes the answer is to stop building at all.

I left the conversation convinced of one simple thing: AI agents are not a future threat or a mere toy. They are a practical platform shift that rewards organizational clarity and experimental courage. What felt most generative was the insistence that humans remain the curators of taste and judgment — steering agents toward what truly matters rather than abdicating responsibility.

And that final note is what stayed with me: technology expands possibility, but leadership defines purpose; the rest follows.

Key points

  • Block reports AI-forward engineering teams save roughly eight to ten hours per week using Goose.
  • Company-wide, an estimated twenty to twenty-five percent of manual hours have been saved so far.
  • Goose is an open-source desktop AI agent built on the Model Context Protocol.
  • MCP lets LLMs call enterprise tools — SQL, Snowflake, Tableau, and native OS controls.
  • Non-technical teams use agents to build internal apps, compressing weeks into hours.
  • Block reorganized from a GM-led portfolio structure into a functional org to accelerate technical depth.
  • Engineers are experimenting with rewriting full apps from scratch, enabled by AI.
  • Goose can autonomously run UI tests, generate reports, and even open PRs.

Timecodes

00:01 Guest intro and role at Block
00:06 AI manifesto and persuading leadership
00:09 Why functional orgs accelerate technical change
00:16 Productivity metrics: hours saved and methodology
00:21 What Goose is and how MCP works
00:28 A developer lets Goose watch his work — PRs and nudges
00:32 Future workflows: autonomous agents and rewriting apps
01:00 Leadership lessons and Conway's Law
01:14 Failed products, humility, and learning

More from Lenny's Podcast: Product | Career | Growth

  • Inside the expert network training every frontier AI model | Garrett Lord (Handshake CEO)
    How Handshake turned a decade-old student network into a $50M AI training-data powerhouse.
  • How Intercom rose from the ashes by betting everything on AI | Eoghan McCabe (founder and CEO)
    How Intercom turned a six-week GPT prototype into a $100M AI agent business.
  • Why ChatGPT will be the next big growth channel (and how to capitalize on it) | Brian Balfour (Reforge)
    ChatGPT could become the next dominant distribution platform — are you ready to place your bet?
  • The one question that saves product careers | Matt LeMay
    Learn three practical steps product teams use to link work directly to business results.
